  • 1.
    Abbas, Gulfam
    et al.
    Blekinge Institute of Technology, School of Computing.
    Asif, Naveed
    Blekinge Institute of Technology, School of Computing.
Performance Tradeoffs in Software Transactional Memory (2010). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

Transactional memory (TM), a new programming paradigm, is one of the latest approaches to writing programs for next-generation multicore and multiprocessor systems. TM is an alternative to lock-based programming and a promising solution to a large and growing problem that programmers face in developing programs for Chip Multi-Processor (CMP) architectures: it simplifies synchronization of shared data structures in a way that is scalable and composable. Software Transactional Memory (STM), a pure-software approach to TM, can be defined as a non-blocking synchronization mechanism in which sequential objects are automatically converted into concurrent objects. In this thesis, we present a performance comparison of four different STM implementations: RSTM by V. J. Marathe et al., TL2 by D. Dice et al., TinySTM by P. Felber et al., and SwissTM by A. Dragojevic et al. The comparison gives a deeper understanding of the potential tradeoffs involved and helps in assessing which design choices and configuration parameters may lead to better and more efficient STMs. In particular, the suitability of each STM is analyzed against the others. A literature study was carried out to select STM implementations for experimentation, and an experiment was performed to measure the performance tradeoffs between them. The empirical evaluations conducted as part of this thesis show that SwissTM has significantly higher throughput than the other state-of-the-art STM implementations, namely RSTM, TL2, and TinySTM, as it consistently outperforms them on execution time and aborts-per-commit measurements on the STAMP benchmarks. Transaction retry rate measurements, however, show that TL2 performs better than RSTM, TinySTM, and SwissTM.
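    The abstract's definition of STM (sequential code automatically turned into concurrent code via optimistic, non-blocking synchronization with retry) can be illustrated with a toy sketch. The following minimal Python version of an optimistic transaction with version validation and retry is not taken from RSTM, TL2, TinySTM, or SwissTM; all names (`TVar`, `atomically`) are invented for this example.

    ```python
    import threading

    class TVar:
        """A transactional variable: a value guarded by a version counter."""
        def __init__(self, value):
            self.value, self.version = value, 0
            self.lock = threading.Lock()

    def atomically(tx, tvars):
        """Run tx on a snapshot of tvars; commit only if nothing changed, else retry."""
        while True:
            snapshot = {tv: (tv.value, tv.version) for tv in tvars}
            writes = tx({tv: val for tv, (val, _) in snapshot.items()})
            ordered = sorted(tvars, key=id)              # fixed lock order avoids deadlock
            for tv in ordered:
                tv.lock.acquire()
            try:
                if all(tv.version == snapshot[tv][1] for tv in tvars):
                    for tv, new_val in writes.items():   # validated: apply the writes
                        tv.value, tv.version = new_val, tv.version + 1
                    return
            finally:
                for tv in reversed(ordered):
                    tv.lock.release()

    # Example: an atomic transfer between two accounts; the transaction is retried
    # if another thread commits to either TVar between our snapshot and our commit.
    a, b = TVar(100), TVar(0)
    atomically(lambda vals: {a: vals[a] - 10, b: vals[b] + 10}, [a, b])
    print(a.value, b.value)  # 90 10
    ```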

  • 2.
    Abdeen, Waleed
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Reducing the Distance Between Requirements Engineering and Verification (2022). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

Background: Requirements engineering and verification (REV) processes play essential roles in software product development. There are physical and non-physical distances between entities (actors, artifacts, and activities) in these processes. Current practices that reduce the distances, such as automated testing and alignment of document structure and tracing, only partially close the above-mentioned gap. Objective: The aim of this thesis is to investigate solutions with respect to their ability to reduce the distances between requirements engineering and verification. The two techniques explored in this thesis are automated testing (model-based testing, MBT) and alignment of document structure and tracing (traceability). Method: The research methods used in this thesis are systematic mapping, software requirements mining, case study, literature survey, validation study, and design science. Results: MBT and traceability are effective in reducing the distance between requirements and verification. However, both techniques have shortcomings that need to be addressed when used for that purpose. Current MBT techniques in the context of software performance do not attain all the goals of MBT: 1) requirements validation, 2) checking the testability of requirements, and 3) the generation of an efficient test suite. These goals are essential to reduce the distance. We developed and assessed a performance requirements verification and test environment generation approach to tackle these shortcomings. Also, traceability between requirements and verification suffers from the low granularity of trace links and does not support the verification of all requirements. We propose the use of taxonomic trace links to trace and align the structure of requirements specifications and verification artifacts. The results from the validation study show that the solution is feasible in practice; however, this comes with challenges that need to be addressed. Conclusion: MBT and improved traceability reduce multiple distances between actors, artifacts, and activities in the requirements engineering and verification process. MBT is most effective in reducing the distances when the model used is built from the requirements. Traceability is essential in easing access to relevant information when needed and should not be seen as an overhead. When creating trace links, we need to consider the differences in abstraction, structure, and time between the linked artifacts.

  • 3.
    Abdeen, Waleed
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Taxonomic Trace Links Recommender: Context Aware Hierarchical Classification (2023). In: CEUR Workshop Proceedings / [ed] Ferrari A., Penzenstadler B., Hadar I., Oyedeji S., Abualhaija S., Vogelsang A., Deshpande G., Rachmann A., Gulden J., Wohlgemuth A., Hess A., Fricker S., Guizzardi R., Horkoff J., Perini A., Susi A., Karras O., Dalpiaz F., Moreira A., Amyot D., Spoletini P., CEUR-WS, 2023, Vol. 3378. Conference paper (Refereed).
    Abstract [en]

In the taxonomic trace links concept, source and target artifacts are connected through a knowledge organization structure (e.g., a taxonomy). In this paper, we introduce a recommender system that recommends labels from a domain-specific taxonomy to requirements artifacts in order to establish taxonomic trace links. The tool exploits the hierarchical nature of taxonomies and uses requirements text and context information as input to the recommender. © 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
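    As an illustration of the kind of context-aware hierarchical classification the abstract describes, the sketch below walks a taxonomy top-down and, at each level, keeps the child whose description best matches the requirement text plus its context. This is a generic technique, not the authors' tool; the taxonomy, the node descriptions, and the TF-IDF scoring are all assumptions made for the example.

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical miniature taxonomy and per-node descriptions (invented here).
    TAXONOMY = {
        "Infrastructure": ["Road", "Railway"],
        "Road": ["Pavement", "Drainage"],
        "Railway": ["Track", "Signalling"],
    }
    DESCRIPTIONS = {
        "Road": "road carriageway traffic lanes asphalt",
        "Railway": "railway rail train line",
        "Pavement": "pavement asphalt surface layers",
        "Drainage": "drainage water runoff culvert",
        "Track": "track rails sleepers ballast",
        "Signalling": "signalling signals interlocking train control",
    }

    def recommend(requirement_text, context_text, root="Infrastructure"):
        """Descend the taxonomy, choosing the best-matching child at every level."""
        query = requirement_text + " " + context_text
        node, path = root, []
        while node in TAXONOMY:
            children = TAXONOMY[node]
            docs = [DESCRIPTIONS[c] for c in children]
            vec = TfidfVectorizer().fit(docs + [query])
            sims = cosine_similarity(vec.transform([query]), vec.transform(docs))[0]
            node = children[int(sims.argmax())]   # greedy: keep the best child
            path.append(node)
        return path                               # taxonomy labels to recommend

    print(recommend("the culvert shall drain surface water", "road design section"))
    # e.g. ['Road', 'Drainage']
    ```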

  • 4.
    Abdeen, Waleed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chen, Xingru
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Model-Based Testing for Performance Requirements: A Systematic Mapping Study and A Sample Study (2019). Independent thesis Advanced level (degree of Master (Two Years)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

Model-based testing (MBT) is a method that supports automated test design by using a model. Although it has been adopted in industry, it is still an open area for performance requirements. We aim to examine MBT for performance requirements and identify a framework that can model them. We conducted a systematic mapping study, followed by a sample study on software requirements specifications; we then introduced the Performance Requirements Verification and Validation (PRVV) model and, finally, completed another sample study to see how the model works in practice. We found that there are many models that can be used for performance requirements, but their maturity is not yet sufficient. MBT can be implemented in the context of performance and has been gaining momentum in recent years. The PRVV model we developed can verify performance requirements and help generate test cases.

  • 5.
    Abdeen, Waleed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chen, Xingru
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
An approach for performance requirements verification and test environments generation (2023). In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, Vol. 28, no 1, p. 117-144. Article in journal (Refereed).
    Abstract [en]

Model-based testing (MBT) is a method that supports the design and execution of test cases by models that specify the intended behaviors of a system under test. While systematic literature reviews on MBT in general exist, the state of the art on modeling and testing performance requirements has seen much less attention. Therefore, we conducted a systematic mapping study on model-based performance testing. Then, we studied natural language software requirements specifications in order to understand which performance requirements are typically specified and how. Since none of the identified MBT techniques supported a major benefit of modeling, namely identifying faults in requirements specifications, we developed the Performance Requirements verificatiOn and Test EnvironmentS generaTion approach (PRO-TEST). Finally, we evaluated PRO-TEST on 149 requirements specifications. We found and analyzed 57 primary studies from the systematic mapping study and extracted 50 performance requirements models. However, those models do not achieve the goals of MBT, which are validating requirements, ensuring their testability, and generating the minimum required test cases. We analyzed 77 Software Requirements Specification (SRS) documents, extracted 149 performance requirements from those SRS, and illustrate that with PRO-TEST we can model performance requirements, find issues in those requirements, and detect missing ones. We detected three non-quantifiable requirements, 43 non-quantified requirements, and 180 underspecified parameters in the 149 modeled performance requirements. Furthermore, we generated 96 test environments from those models. By modeling performance requirements with PRO-TEST, we can identify issues in the requirements related to their ambiguity, measurability, and completeness. Additionally, it allows us to generate parameters for test environments.
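    To make the kinds of defects PRO-TEST reports more concrete (non-quantified requirements, underspecified parameters), here is a small, hypothetical sketch of quantification checking on natural-language performance requirements. It is not the published PRO-TEST approach, only an illustration of the idea; the keyword list and the regular expression are assumptions.

    ```python
    import re

    # Hypothetical metric keywords and a number-with-unit pattern (illustrative only).
    METRIC_WORDS = ("response time", "throughput", "latency", "capacity", "load")
    QUANTITY = re.compile(r"\d+(?:\.\d+)?\s*(ms|milliseconds?|s|seconds?|tps|users|%)", re.I)

    def check_requirement(text):
        """Classify one requirement: quantified, not quantified, or no metric found."""
        mentions_metric = any(w in text.lower() for w in METRIC_WORDS)
        quantified = QUANTITY.search(text) is not None
        if mentions_metric and quantified:
            return "quantified performance requirement"
        if mentions_metric:
            return "not quantified: metric named but no number/unit given"
        return "no performance metric detected"

    reqs = [
        "The system shall have a response time below 200 ms for searches.",
        "The system shall have a fast response time.",           # underspecified
        "The user interface shall use the corporate color scheme.",
    ]
    for r in reqs:
        print(check_requirement(r), "<-", r)
    ```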

  • 6.
    Abdeen, Waleed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chirtoglou, Alexandros
    HOCHTIEF ViCon GmbH, DEU.
    Paul Schimanski, Christoph
    HOCHTIEF ViCon GmbH, DEU.
    Goli, Heja
    HOCHTIEF ViCon GmbH, DEU.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Taxonomic Trace Links - Rethinking Traceability and its Benefits. Manuscript (preprint) (Other academic).
    Abstract [en]

Background: Traceability is an important quality of artifacts that are used in knowledge-intensive tasks. When project budgets and time pressure are a reality, this often leads to a down-prioritization of creating trace links. Objective: We propose a new idea that uses knowledge organization structures, such as taxonomies, ontologies, and thesauri, as an auxiliary artifact to establish trace links. In order to investigate the novelty and feasibility of this idea, we study traceability in the area of requirements engineering. Method: First, we conduct a literature survey to investigate to what extent and how auxiliary artifacts have been used in the past for requirements traceability. Then, we conduct a validation study in industry, testing the idea of taxonomic trace links with realistic artifacts. Results: We have reviewed 126 studies that investigate requirements traceability; ninety-one of them use auxiliary artifacts in the traceability process. In the validation study, while we encountered six challenges when classifying requirements with a domain-specific taxonomy, we found that designers and engineers are able to classify design objects comprehensively and reliably. Conclusions: The idea of taxonomic trace links is novel and feasible in practice. However, the identified challenges need to be addressed to allow for adoption in practice and enable a transfer to software-intensive contexts.

  • 7.
    Abdeen, Waleed
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wnuk, Krzysztof
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Chirtoglou, Alexandros
    HOCHTIEF ViCon GmbH, Essen, DEU.
Challenges of Requirements Communication and Digital Assets Verification in Infrastructure Projects. Manuscript (preprint) (Other academic).
    Abstract [en]

Context: In infrastructure projects with design-build contracts, the supplier delivers digital assets (e.g., 2D or 3D models) as part of the design deliverable. These digital assets should align with the customer requirements. Poor requirements communication between the customer and the supplier is one of the reasons for project overruns. To the best of our knowledge, no study has yet investigated the challenges of requirements communication in the customer-supplier interface. Objective: In this article, we investigate the processes of requirements validation, requirements communication, and digital assets verification, and explore the challenges associated with these processes. Methods: We conducted two exploratory case studies. We interviewed ten experts working with digital assets from three companies working on two infrastructure projects (road and railway). Results: We illustrate the activities, stakeholders, and artifacts involved in requirements communication, requirements validation, and digital asset verification. Furthermore, we identified 14 challenges (in four clusters: requirements quality, trace links, common requirements engineering (RE), and project management) and their causes and consequences in those processes. Conclusion: Communication between the client and supplier in sub-contracted work in infrastructure projects is often indirect. This puts pressure on the quality of the tender documents (mainly requirements documents), which provide the means for communication and control the design verification processes. Hence, it is crucial to ensure the quality of the requirements documents by implementing quality assurance techniques.

  • 8.
    Abheeshta, Putta
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Comparative Analysis of Software Development Practices across Software Organisations: India and Sweden (2016). Independent thesis Advanced level (degree of Master (One Year)), 20 credits / 30 HE credits. Student thesis.
    Abstract [en]

Context. System Development Methodologies (SDMs) have been an area of intensive research in the field of software engineering. Different software organisations adopt different development methodologies and use different development practices. The frequency of usage of development practices and the acceptance factors for adopting a development methodology are crucial for software organisations, and both differ across geographical locations. Many challenges have been reported in the literature with respect to mismatches of development practices across organisations collaborating in distributed development, yet little research has examined the differences in development practices and in acceptance factors for adopting a particular development methodology. Objectives. The primary objectives of the research are to find out (a) differences in (i) practice usage and (ii) acceptance factors such as organisational, social, and cultural factors, and (b) to explore the reasons for the differences and investigate the consequences of such differences when collaborating across organisations located in India and Sweden. Methods. A literature review was conducted by searching scientific databases to identify common agile and plan-driven development practices and acceptance theories for development methodologies. A survey was conducted across organisations located in India and Sweden to find out the usage frequency of development practices and acceptance factors. Ten interviews were conducted with software practitioners from organisations located in India and Sweden to investigate the reasons for, and consequences of, the differences; literature evidence was used to support the results collected from the interviews. Results. From the survey, organisations in India adopted plan-driven practices with higher frequency than those in Sweden, while agile practices were adopted with higher frequency in Sweden than in India. The number of organisations adopting "pure agile" methodologies was significantly higher in Sweden. Significant differences were found in acceptance factors such as cultural, organisational, image, and career factors between India and Sweden. Cultural, social, human, business, and organisational factors are responsible for these differences across development practices and acceptance factors. Challenges related to communication, coordination, and control were found to arise from the differences when collaborating between Indian and Swedish sites. Conclusions. The study signifies the importance of identifying the frequency of development practices and the acceptance factors responsible for the adoption of development methodologies in software organisations. A mismatch between these practices can lead to various challenges. The study draws insights into various non-technical factors, such as cultural, human, organisational, business, and social factors, in collaboration between organisations; variations across these factors can lead to many coordination, communication, and control issues. Keywords: Development Practices, Agile Development, Plan Driven Development, Acceptance Factors, Global Software Development.

  • 9.
    Abualhaija, Sallam
    et al.
    University of Luxembourg, LUX.
    Fucci, Davide
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Dalpiaz, Fabiano
    Utrecht University, NLD.
    Franch, Xavier
    Universitat Politècnica de Catalunya, ESP.
3rd workshop on natural language processing for requirements engineering (NLP4RE'20) (2020). In: CEUR Workshop Proceedings / [ed] Sabetzadeh M., Vogelsang A., Abualhaija S., Borg M., Dalpiaz F., Daneva M., Fernandez N.C., Franch X., Fucci D., Gervasi V., Groen E., Guizzardi R., Herrmann A., Horkoff J., Mich L., Perini A., Susi A., CEUR-WS, 2020, Vol. 2584. Conference paper (Refereed).
  • 10.
    Abu-Sheikh, Khalil
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
Reviewing and Evaluating Techniques for Modeling and Analyzing Security Requirements (2007). Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

The software engineering community has recognized the importance of addressing security requirements together with other functional requirements from the beginning of the software development life cycle, and several techniques have been developed to achieve this goal. We therefore conducted a theoretical study that reviews and evaluates some of the techniques used to model and analyze security requirements. The Abuse Cases, Misuse Cases, Data Sensitivity and Threat Analyses, Strategic Modeling, and Attack Trees techniques are investigated in detail to understand and highlight the similarities and differences between them. We found that using these techniques generally helps requirements engineers specify more detailed security requirements. All of these techniques cover security concepts, but at different levels, and the existence of different techniques provides a variety of levels for modeling and analyzing security requirements. This helps requirements engineers decide which technique to use in order to address security issues for the system under investigation. Finally, we found that using only one of these techniques is not sufficient to satisfy the security requirements of the system under investigation. Consequently, we consider it beneficial to combine the Abuse Cases or Misuse Cases techniques with the Attack Trees technique, or to combine the Strategic Modeling and Attack Trees techniques, in order to model and analyze security requirements. The focus on the Attack Trees technique is due to the reusability of the produced attack trees; the technique also helps cover a wide range of attacks, and thereby security concepts as well as security requirements, in a proper way.
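    Attack trees, one of the techniques the abstract compares, decompose an attacker's goal into sub-goals combined with AND/OR gates. The sketch below is a generic, minimal rendering of that structure in Python; the node names and the `feasible` check are invented for illustration and are not taken from the thesis.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class AttackNode:
        goal: str
        gate: str = "OR"                       # "AND": all children; "OR": any child
        children: list = field(default_factory=list)

    def feasible(node, capabilities):
        """A leaf is feasible if the attacker has that capability; an inner node
        combines its children according to its AND/OR gate."""
        if not node.children:
            return node.goal in capabilities
        results = [feasible(child, capabilities) for child in node.children]
        return all(results) if node.gate == "AND" else any(results)

    # Hypothetical tree: two alternative paths to the root goal.
    root = AttackNode("read user passwords", "OR", [
        AttackNode("steal password database", "AND", [
            AttackNode("gain server access"),
            AttackNode("dump database tables"),
        ]),
        AttackNode("phish an administrator"),
    ])
    print(feasible(root, {"gain server access", "dump database tables"}))  # True
    ```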

  • 11.
    Acharya, Mod Nath
    et al.
    Blekinge Institute of Technology, School of Computing.
    Aslam, Nazam
    Blekinge Institute of Technology, School of Computing.
Coordination in Global Software Development: Challenges, associated threats, and mitigating practices (2012). Independent thesis Advanced level (degree of Master (Two Years)). Student thesis.
    Abstract [en]

Global Software Development (GSD) is an emerging trend in today's software world, in which teams are geographically dispersed, either in close proximity or globally. GSD provides certain advantages to development companies, such as low development cost and access to inexpensive, skilled labour. This type of development is considered riskier and more challenging than projects developed by teams under the same roof. GSD projects are inherently cooperative: many software developers work on a common project, share information, and coordinate activities. Coordination is a fundamental part of software development. GSD comprises different types of development arrangements, i.e., insourcing, outsourcing, nearshoring, or farshoring; whatever arrangement a company selects, challenges to coordination exist. Therefore, knowledge of the potential challenges, the associated threats to coordination, and the practices that mitigate them plays a vital role in running a successful global project.

  • 12.
    Adolfsen, Linus
    Blekinge Institute of Technology, School of Engineering.
Parameterstyrd tillverkning av rör för marina fartyg [Parameter-controlled manufacturing of pipes for marine vessels] (2012). Student thesis.
    Abstract [sv]

The content of this report is the result of a project module in the Development Engineer in Mechanical Engineering programme. The work was carried out in collaboration between Linus Adolfsen, Kockums AB, and Blekinge Institute of Technology. The report broadly covers two parts, one practical and one theoretical. The first, practical part consisted of finding a method to bridge the step from model to reality in an efficient way. This resulted in in-house developed software that can read the output file from Tribon (CAD software) and translate it into a program file for a Herber CNC 90 bending machine. The second part is theoretical and analyzes the business from the perspective of enabling prefabrication. The result was an analysis of the affected operations, with proposals for how to address the problems and obstacles that exist today. It also generated many suggestions for further studies.

  • 13.
    Adolfsson, Victor
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
The State of the Art in Distributed Mobile Robotics (2001). Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

Distributed Mobile Robotics (DMR) is a multidisciplinary research area with many open research questions. This is a survey of the state of the art in DMR research. DMR is sometimes referred to as cooperative robotics or multi-robotic systems; it concerns how multiple robots can cooperate to achieve goals and complete tasks better than single-robot systems. The survey covers architectures, communication, learning, exploration, and many other areas presented in this master's thesis.

  • 14.
    Aftarczuk, Kamila
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
Evaluation of selected data mining algorithms implemented in Medical Decision Support Systems (2007). Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

The goal of this master's thesis is to identify and evaluate data mining algorithms that are commonly implemented in modern Medical Decision Support Systems (MDSS). These systems are used in various healthcare units all over the world, and the institutions that run them store large amounts of medical data; this data may contain relevant medical information hidden in patterns buried among the records. In this research, several popular MDSSs are analyzed in order to determine the data mining algorithms they most commonly use. Three algorithms have been identified: Naïve Bayes, Multilayer Perceptron, and C4.5. Prior to the analyses, the algorithms are calibrated: several configurations are tested in order to determine the best settings. Afterwards, a final comparison orders the algorithms with respect to their performance, based on a set of performance metrics. The analyses are conducted in WEKA on five UCI medical datasets: breast cancer, hepatitis, heart disease, dermatology, and diabetes. The analyses have shown that it is very difficult to name a single data mining algorithm as the most suitable for medical data: the results obtained for the algorithms were very similar. However, the final evaluation of the outcomes allowed singling out Naïve Bayes as the best classifier for the given domain, followed by the Multilayer Perceptron and C4.5.
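    WEKA is a Java tool, so as a rough, hypothetical Python equivalent of the comparison described above, the sketch below evaluates scikit-learn stand-ins for the three named learners on the UCI breast cancer data (one of the five domains mentioned). `GaussianNB` approximates Naïve Bayes, `MLPClassifier` the Multilayer Perceptron, and `DecisionTreeClassifier` (CART) is only a loose analogue of C4.5.

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)   # UCI Wisconsin breast cancer data
    models = {
        "Naive Bayes": GaussianNB(),
        "Multilayer Perceptron": make_pipeline(
            StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
        "Decision tree (C4.5 analogue)": DecisionTreeClassifier(random_state=0),
    }
    for name, model in models.items():
        # 10-fold cross-validated AUC, one common performance metric for classifiers
        auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc").mean()
        print(f"{name}: mean AUC = {auc:.3f}")
    ```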

  • 15. Afzal, Wasif
Lessons from applying experimentation in software engineering prediction systems (2008). Conference paper (Refereed).
    Abstract [en]

Within software engineering prediction systems, experiments are undertaken primarily to investigate relationships and to measure and compare the accuracy of models. This paper discusses our experience and presents useful lessons and guidelines for experimenting with software engineering prediction systems. For this purpose, we use a typical software engineering experimentation process as a baseline. We found that this typical experimentation process is supportive in developing prediction systems, and we highlight issues more central to the domain of software engineering prediction systems.

  • 16.
    Afzal, Wasif
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
Metrics in Software Test Planning and Test Design Processes (2007). Independent thesis Advanced level (degree of Master (One Year)). Student thesis.
    Abstract [en]

Software metrics play an important role in measuring attributes that are critical to the success of a software project. Measurement of these attributes helps to make the characteristics of, and relationships between, the attributes clearer, which in turn supports informed decision making. The field of software engineering is affected by infrequent, incomplete, and inconsistent measurements. Software testing is an integral part of software development and provides opportunities for measuring process attributes; such measurement gives management better insight into the software testing process. The aim of this thesis is to investigate the metric support for software test planning and test design processes. The study comprises an extensive literature review and follows a methodical approach consisting of two steps. The first step analyzes the key phases in the software testing life cycle, the inputs required for starting the software test planning and design processes, and the metrics indicating the end of these processes. After establishing a basic understanding of the related concepts, the second step identifies the attributes of software test planning and test design processes, including metric support for each of the identified attributes. The results of the literature survey show that there are a number of measurable attributes for software test planning and test design processes. The study partitions these attributes into multiple categories and, for each attribute, examines the existing measurements. A consolidation of these measurements is presented in this thesis, intended to give management an opportunity to consider improvements in these processes.

  • 17. Afzal, Wasif
Search-based approaches to software fault prediction and software testing (2009). Licentiate thesis, comprehensive summary (Other academic).
    Abstract [en]

Software verification and validation activities are essential for software quality but also constitute a large part of software development costs. Therefore, efficient and cost-effective software verification and validation activities are both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions related to software quality, when to stop testing, testing schedules, and testing resource allocation need to be as accurate as possible. This thesis investigates the application of search-based techniques within two activities of software verification and validation: software fault prediction and software testing for non-functional system properties. Software fault prediction modeling can provide support for making important decisions as outlined above. In this thesis, we empirically evaluate symbolic regression using genetic programming (a search-based technique) as a potential method for software fault prediction. Using data sets from both industrial and open-source software, the strengths and weaknesses of applying symbolic regression in genetic programming are evaluated against competitive techniques. In addition to software fault prediction, this thesis also consolidates available research into predictive modeling of other attributes by applying symbolic regression in genetic programming, thus presenting a broader perspective. As an extension to the application of search-based techniques within software verification and validation, this thesis further investigates the extent to which search-based techniques are applied for testing non-functional system properties. Based on the research findings in this thesis, it can be concluded that applying symbolic regression in genetic programming may be a viable technique for software fault prediction. We additionally seek literature evidence where other search-based techniques are applied for testing non-functional system properties, hence contributing to the growing application of search-based techniques in diverse activities within software verification and validation.

  • 18. Afzal, Wasif
Search-Based Prediction of Software Quality: Evaluations and Comparisons (2011). Doctoral thesis, comprehensive summary (Other academic).
    Abstract [en]

Software verification and validation (V&V) activities are critical for achieving software quality; however, these activities also constitute a large part of the costs when developing software. Therefore, efficient and effective software V&V activities are both a priority and a necessity, considering the pressure to decrease time-to-market and the intense competition faced by many, if not all, companies today. It is then perhaps not unexpected that decisions that affect software quality, e.g., how to allocate testing resources, develop testing schedules, and decide when to stop testing, need to be as stable and accurate as possible. The objective of this thesis is to investigate how search-based techniques can support decision-making and help control variation in software V&V activities, thereby indirectly improving software quality. Several themes in providing this support are investigated: predicting the reliability of future software versions based on fault history; fault prediction to improve test phase efficiency; assignment of resources to fixing faults; and distinguishing fault-prone software modules from non-faulty ones. A common element in these investigations is the use of search-based techniques, often also called metaheuristic techniques, for supporting the V&V decision-making processes. Search-based techniques are promising since, as with many problems in the real world, software V&V can be formulated as optimization problems where near-optimal solutions are often good enough. Moreover, these techniques are general optimization solutions that can potentially be applied across a larger variety of decision-making situations than other existing alternatives. Apart from presenting the current state of the art, in the form of a systematic literature review, and doing comparative evaluations of a variety of metaheuristic techniques on large-scale projects (both industrial and open-source), this thesis also presents methodological investigations using search-based techniques that are relevant to the task of software quality measurement and prediction. The results of applying search-based techniques in large-scale projects, while investigating a variety of research themes, show that they consistently give competitive results in comparison with existing techniques. Based on the research findings, we conclude that search-based techniques are viable techniques to use in supporting the decision-making processes within software V&V activities. The accuracy and consistency of these techniques make them important tools when developing future decision-support for effective management of software V&V activities.

  • 19.
    Afzal, Wasif
    Blekinge Institute of Technology.
Using faults-slip-through metric as a predictor of fault-proneness (2010). In: Proceedings - Asia-Pacific Software Engineering Conference, APSEC, IEEE, 2010. Conference paper (Refereed).
    Abstract [en]

The majority of software faults are present in a small number of modules; therefore, accurate prediction of fault-prone modules helps improve software quality by focusing testing efforts on a subset of modules. This paper evaluates the use of the faults-slip-through (FST) metric as a potential predictor of fault-prone modules. Rather than predicting the fault-prone modules for the complete test phase, the prediction is done at the specific test levels of integration and system test. We applied eight classification techniques to the task of identifying fault-prone modules, representing a variety of approaches: a standard statistical technique for classification (logistic regression), tree-structured classifiers (C4.5 and random forests), a Bayesian technique (Naïve Bayes), machine-learning techniques (support vector machines and back-propagation artificial neural networks), and search-based techniques (genetic programming and artificial immune recognition systems), on FST data collected from two large industrial projects from the telecommunication domain. Results: Using the area under the receiver operating characteristic (ROC) curve and the location of (PF, PD) pairs in the ROC space, GP showed impressive results in comparison with the other techniques for predicting fault-prone modules at both integration and system test levels. The use of the faults-slip-through metric in general provided good prediction results at the two test levels. (i) The accuracy of GP is statistically significant in comparison with the majority of the techniques for predicting fault-prone modules at integration and system test levels. (ii) The faults-slip-through metric has the potential to be a generally useful predictor of fault-proneness at integration and system test levels.
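    For readers unfamiliar with the evaluation the abstract relies on: PD (probability of detection) is the true-positive rate and PF (probability of false alarm) the false-positive rate, so a good classifier sits toward the top-left corner (PF = 0, PD = 1) of the ROC space. Here is a minimal scikit-learn sketch with made-up labels and scores (the FST data itself is not public here):

    ```python
    from sklearn.metrics import confusion_matrix, roc_auc_score

    def pf_pd(y_true, y_pred):
        """PF = false-positive rate; PD = probability of detection (true-positive rate)."""
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        return fp / (fp + tn), tp / (tp + fn)

    # Toy stand-in data: 1 = fault-prone module, 0 = not fault-prone.
    y_true = [0, 0, 1, 1, 0, 1, 0, 1]
    y_score = [0.1, 0.4, 0.8, 0.7, 0.2, 0.9, 0.6, 0.3]       # classifier scores
    y_pred = [1 if s >= 0.5 else 0 for s in y_score]          # threshold at 0.5
    pf, pd_ = pf_pd(y_true, y_pred)
    print(f"(PF, PD) = ({pf:.2f}, {pd_:.2f})")                # ideal corner is (0, 1)
    print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
    ```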

  • 20. Afzal, Wasif
    et al.
    Ghazi, Ahmad Nauman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Itkonen, Juha
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Andrews, Anneliese
    Bhatti, Khurram
An experiment on the effectiveness and efficiency of exploratory testing (2015). In: Empirical Software Engineering, ISSN 1382-3256, Vol. 20, no 3, p. 844-878. Article in journal (Refereed).
    Abstract [en]

    The exploratory testing (ET) approach is commonly applied in industry, but lacks scientific research. The scientific community needs quantitative results on the performance of ET taken from realistic experimental settings. The objective of this paper is to quantify the effectiveness and efficiency of ET vs. testing with documented test cases (test case based testing, TCT). We performed four controlled experiments where a total of 24 practitioners and 46 students performed manual functional testing using ET and TCT. We measured the number of identified defects in the 90-minute testing sessions, the detection difficulty, severity and types of the detected defects, and the number of false defect reports. The results show that ET found a significantly greater number of defects. ET also found significantly more defects of varying levels of difficulty, types and severity levels. However, the two testing approaches did not differ significantly in terms of the number of false defect reports submitted. We conclude that ET was more efficient than TCT in our experiment. ET was also more effective than TCT when detection difficulty, type of defects and severity levels are considered. The two approaches are comparable when it comes to the number of false defect reports submitted.

  • 21. Afzal, Wasif
    et al.
    Torkar, Richard
A Comparative Evaluation of Using Genetic Programming for Predicting Fault Count Data (2008). Conference paper (Refereed).
    Abstract [en]

A number of software reliability growth models (SRGMs) have been proposed in the literature. For several reasons, such as violation of the models' assumptions and the complexity of the models, practitioners face difficulties in knowing which models to apply in practice. This paper presents a comparative evaluation of traditional models and the use of genetic programming (GP) for modeling software reliability growth, based on weekly fault count data from three different industrial projects. The motivation for using a GP approach is its ability to evolve a model based entirely on prior data, without the need to make underlying assumptions. The results show the strengths of using GP for predicting fault count data.

  • 22.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Torkar, Richard
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
Incorporating Metrics in an Organizational Test Strategy (2008). Conference paper (Refereed).
    Abstract [en]

An organizational-level test strategy needs to incorporate metrics to make the testing activities visible and available for process improvement. The majority of testing measurements are based on faults found in the test execution phase. In contrast, this paper investigates metrics to support software test planning and test design processes. We have assembled metrics for these two process types to support management in carrying out evidence-based test process improvement and to incorporate suitable metrics as part of an organization-level test strategy. The study is composed of two steps. The first step creates a relevant context by analyzing key phases in the software testing lifecycle, while the second step identifies the attributes of software test planning and test design processes along with metric support for each of the identified attributes.

  • 23. Afzal, Wasif
    et al.
    Torkar, Richard
On the application of genetic programming for software engineering predictive modeling: A systematic review (2011). In: Expert Systems with Applications, ISSN 0957-4174, Vol. 38, no 9, p. 11984-11997. Article, review/survey (Refereed).
    Abstract [en]

The objective of this paper is to investigate the evidence for symbolic regression using genetic programming (GP) being an effective method for prediction and estimation in software engineering, when compared with regression/machine learning models and other comparison groups (including comparisons with different improvements over the standard GP algorithm). We performed a systematic review of literature that compared genetic programming models with comparative techniques based on different independent project variables. A total of 23 primary studies were obtained after searching different information sources in the time span 1995-2008. The results of the review show that symbolic regression using genetic programming has been applied in three domains within software engineering predictive modeling: (i) software quality classification (eight primary studies); (ii) software cost/effort/size estimation (seven primary studies); (iii) software fault prediction/software reliability growth modeling (eight primary studies). While there is evidence in support of using genetic programming for software quality classification, software fault prediction, and software reliability growth modeling, the results are inconclusive for software cost/effort/size estimation.

  • 24. Afzal, Wasif
    et al.
    Torkar, Richard
Suitability of Genetic Programming for Software Reliability Growth Modeling (2008). Conference paper (Refereed).
    Abstract [en]

    Genetic programming (GP) has been found to be effective in finding a model that fits the given data points without making any assumptions about the model structure. This makes GP a reasonable choice for software reliability growth modeling. This paper discusses the suitability of using GP for software reliability growth modeling and highlights the mechanisms that enable GP to progressively search for fitter solutions.

  • 25. Afzal, Wasif
    et al.
    Torkar, Richard
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Towards benchmarking feature subset selection methods for software fault prediction (2016). In: Studies in Computational Intelligence, Springer, 2016, Vol. 617, p. 33-58. Chapter in book (Refereed).
    Abstract [en]

Despite the general acceptance that software engineering datasets often contain noisy, irrelevant, or redundant variables, very few benchmark studies of feature subset selection (FSS) methods on real-life data from software projects have been conducted. This paper provides an empirical comparison of state-of-the-art FSS methods: information gain attribute ranking (IG); Relief (RLF); principal component analysis (PCA); correlation-based feature selection (CFS); consistency-based subset evaluation (CNS); wrapper subset evaluation (WRP); and an evolutionary computation method, genetic programming (GP), on five fault prediction datasets from the PROMISE data repository. For each FSS method-dataset combination, the area under the receiver operating characteristic curve (the AUC value averaged over 10-fold cross-validation runs) was calculated before and after FSS. Two diverse learning algorithms, C4.5 and Naïve Bayes (NB), are used to test the attribute sets given by each FSS method. The results show that, although there are no statistically significant differences between the AUC values for the different FSS methods for either C4.5 or NB, a smaller set of FSS methods (IG, RLF, GP) consistently selects fewer attributes without degrading classification accuracy. We conclude that, in general, FSS is beneficial as it helps improve the classification accuracy of NB and C4.5. There is no single best FSS method for all datasets, but IG, RLF, and GP consistently select fewer attributes without degrading classification accuracy within statistically significant boundaries. © Springer International Publishing Switzerland 2016.
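    As a hedged illustration of the before/after-FSS comparison described above (not the chapter's actual setup, which uses WEKA-style IG and PROMISE data), here is a scikit-learn sketch where mutual information stands in for information gain ranking and the UCI breast cancer set stands in for a PROMISE dataset:

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    X, y = load_breast_cancer(return_X_y=True)   # stand-in for a PROMISE fault dataset

    # AUC averaged over 10-fold cross-validation, before feature subset selection
    before = cross_val_score(GaussianNB(), X, y, cv=10, scoring="roc_auc").mean()

    # Keep the 10 highest-ranked attributes (mutual information ~ information gain)
    with_fss = make_pipeline(SelectKBest(mutual_info_classif, k=10), GaussianNB())
    after = cross_val_score(with_fss, X, y, cv=10, scoring="roc_auc").mean()

    print(f"NB AUC, all 30 features: {before:.3f}; top 10 features: {after:.3f}")
    ```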

  • 26. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
A Systematic Mapping Study on Non-Functional Search-Based Software Testing (2008). Conference paper (Refereed).
  • 27. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
A systematic review of search-based testing for non-functional system properties (2009). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 51, no 6, p. 957-976. Article in journal (Refereed).
    Abstract [en]

    Search-based software testing is the application of metaheuristic search techniques to generate software tests. The test adequacy criterion is transformed into a fitness function and a set of solutions in the search space are evaluated with respect to the fitness function using a metaheuristic search technique. The application of metaheuristic search techniques for testing is promising due to the fact that exhaustive testing is infeasible considering the size and complexity of software under test. Search-based software testing has been applied across the spectrum of test case design methods; this includes white-box (structural), black-box (functional) and grey-box (combination of structural and functional) testing. In addition, metaheuristic search techniques have also been applied to test non-functional properties. The overall objective of undertaking this systematic review is to examine existing work into non-functional search-based software testing (NFSBST). We are interested in types of non-functional testing targeted using metaheuristic search techniques, different fitness functions used in different types of search-based non-functional testing and challenges in the application of these techniques. The systematic review is based on a comprehensive set of 35 articles obtained after a multi-stage selection process and have been published in the time span 1996-2007. The results of the review show that metaheuristic search techniques have been applied for non-functional testing of execution time, quality of service, security, usability and safety. A variety of metaheuristic search techniques are found to be applicable for non-functional testing including simulated annealing, tabu search, genetic algorithms, ant colony methods, grammatical evolution, genetic programming (and its variants including linear genetic programming) and swarm intelligence methods. The review reports on different fitness functions used to guide the search for each of the categories of execution time, safety, usability, quality of service and security; along with a discussion of possible challenges in the application of metaheuristic search techniques.
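    The abstract's core idea (turn a test adequacy criterion into a fitness function, then let a metaheuristic search for inputs) is easy to show for the execution-time case it mentions. Below is a minimal hill-climbing sketch in Python; the system under test and the neighbourhood operator are invented for the example, and real work in this area typically uses genetic algorithms or simulated annealing rather than this simple climber.

    ```python
    import random
    import time

    def system_under_test(n):
        """Toy program whose running time depends on its input (invented for the demo)."""
        total = 0
        for i in range(n % 5000):
            total += i * i
        return total

    def fitness(candidate):
        """Non-functional fitness: the measured execution time of the SUT."""
        start = time.perf_counter()
        system_under_test(candidate)
        return time.perf_counter() - start

    def hill_climb(iterations=300, seed=1):
        """Search for an input that maximises execution time (worst-case search)."""
        rng = random.Random(seed)
        best = rng.randrange(10_000)
        best_fit = fitness(best)
        for _ in range(iterations):
            neighbour = (best + rng.randint(-500, 500)) % 10_000
            f = fitness(neighbour)
            if f > best_fit:                      # keep the slower-running input
                best, best_fit = neighbour, f
        return best, best_fit

    worst_input, seconds = hill_climb()
    print(f"input {worst_input} took {seconds * 1e6:.0f} microseconds")
    ```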

  • 28. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
Prediction of fault count data using genetic programming (2008). Conference paper (Refereed).
    Abstract [en]

Software reliability growth modeling helps in deciding project release time and managing project resources. A large number of such models have been presented in the past. Due to the existence of many models, their inherent complexity, and their accompanying assumptions, the selection of suitable models becomes a challenging task. This paper presents empirical results of using genetic programming (GP) for modeling software reliability growth, based on weekly fault count data from three different industrial projects. The goodness of fit (adaptability) and predictive accuracy of the evolved model are measured using five different measures, in an attempt to present a fair evaluation. The results show that the GP-evolved model has statistically significant goodness of fit and predictive accuracy.

  • 29.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology, School of Computing.
    Torkar, Richard
    Blekinge Institute of Technology, School of Computing.
    Feldt, Robert
    Blekinge Institute of Technology, School of Computing.
Resampling Methods in Software Quality Classification (2012). In: International Journal of Software Engineering and Knowledge Engineering, ISSN 0218-1940, Vol. 22, no 2, p. 203-223. Article in journal (Refereed).
    Abstract [en]

In the presence of a number of algorithms for classification and prediction in software engineering, there is a need for a systematic way of assessing their performance. The performance assessment is typically done by some form of partitioning or resampling of the original data to alleviate biased estimation. For predictive and classification studies in software engineering, there is a lack of definitive advice on the most appropriate resampling method to use. This is seen as one of the contributing factors for not being able to draw general conclusions on which modeling technique or set of predictor variables is the most appropriate. Furthermore, the use of a variety of resampling methods makes it impossible to perform any formal meta-analysis of the primary study results. Therefore, it is desirable to examine the influence of various resampling methods and to quantify possible differences. Objective and method: This study empirically compares five common resampling methods (hold-out validation, repeated random sub-sampling, 10-fold cross-validation, leave-one-out cross-validation, and non-parametric bootstrapping) using 8 publicly available data sets, with genetic programming (GP) and multiple linear regression (MLR) as software quality classification approaches. The location of (PF, PD) pairs in the ROC (receiver operating characteristics) space and the area under an ROC curve (AUC) are used as accuracy indicators. Results: In terms of the location of (PF, PD) pairs in the ROC space, bootstrapping results are in the preferred region for 3 of the 8 data sets for GP and for 4 of the 8 data sets for MLR. Based on the AUC measure, there are no significant differences between the different resampling methods using GP and MLR. Conclusion: Certain data set properties may be responsible for the insignificant differences between the resampling methods based on AUC, including imbalanced data sets, insignificant predictor variables, and high-dimensional data sets. With the current selection of data sets and classification techniques, bootstrapping is a preferred method based on the location of (PF, PD) pair data in the ROC space. Hold-out validation is not a good choice for comparatively smaller data sets, where leave-one-out cross-validation (LOOCV) performs better. For comparatively larger data sets, 10-fold cross-validation performs better than LOOCV.
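    For concreteness, here is a small scikit-learn sketch of the five resampling schemes the paper compares, applied to a public dataset with logistic regression standing in for the paper's GP and MLR models (so the numbers are illustrative only). Note that with leave-one-out each test fold holds a single instance, so the AUC is computed once over the pooled predictions:

    ```python
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import (KFold, LeaveOneOut, ShuffleSplit,
                                         cross_val_predict, cross_val_score,
                                         train_test_split)
    from sklearn.utils import resample

    X, y = load_breast_cancer(return_X_y=True)
    clf = LogisticRegression(max_iter=5000)

    # 1) Hold-out validation: a single train/test split
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    print("hold-out:", roc_auc_score(y_te, clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]))

    # 2) Repeated random sub-sampling and 3) 10-fold cross-validation
    ss = ShuffleSplit(n_splits=10, test_size=0.3, random_state=0)
    print("sub-sampling:", cross_val_score(clf, X, y, cv=ss, scoring="roc_auc").mean())
    kf = KFold(n_splits=10, shuffle=True, random_state=0)
    print("10-fold CV:", cross_val_score(clf, X, y, cv=kf, scoring="roc_auc").mean())

    # 4) Leave-one-out: pool the one-instance predictions, then score once
    proba = cross_val_predict(clf, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
    print("LOOCV:", roc_auc_score(y, proba))

    # 5) Non-parametric bootstrap: train on a resample, test on the out-of-bag rows
    idx = resample(np.arange(len(y)), random_state=0)
    oob = np.setdiff1d(np.arange(len(y)), idx)
    clf.fit(X[idx], y[idx])
    print("bootstrap:", roc_auc_score(y[oob], clf.predict_proba(X[oob])[:, 1]))
    ```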

  • 30. Afzal, Wasif
    et al.
    Torkar, Richard
    Feldt, Robert
Search-based prediction of fault count data (2009). Conference paper (Refereed).
    Abstract [en]

Symbolic regression, an application domain of genetic programming (GP), aims to find a function whose output has some desired property, such as matching the target values of a particular data set. While typical regression involves finding the coefficients of a pre-defined function, symbolic regression finds a general function, with coefficients, fitting the given set of data points. The concepts of symbolic regression using genetic programming can be used to evolve a model for fault count predictions. Such a model has the advantage that the evolution does not depend on a particular model structure and is also independent of any assumptions, which are common in traditional time-domain parametric software reliability growth models. This research applies experiments targeting fault prediction using genetic programming and compares the results with traditional approaches to assess efficiency gains.
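    A minimal sketch of the idea, under stated simplifications: the GP below evolves expression trees over the terminal x (a week index) to fit toy weekly fault-count data, using truncation selection and subtree mutation only (real systems also use crossover and richer function sets). The data and all parameter values are invented for illustration.

    ```python
    import operator
    import random

    # Function set with protected division; terminal set is {x, random constants}.
    OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul,
           "/": lambda a, b: a / b if abs(b) > 1e-9 else 1.0}

    def random_tree(depth=3):
        if depth == 0 or random.random() < 0.3:
            return random.choice(["x", round(random.uniform(-5, 5), 2)])
        return (random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1))

    def evaluate(tree, x):
        if tree == "x":
            return x
        if isinstance(tree, tuple):
            op, left, right = tree
            return OPS[op](evaluate(left, x), evaluate(right, x))
        return tree                                     # a numeric constant

    def fitness(tree, data):
        """Sum of squared errors; lower is better. NaN/overflow maps to worst."""
        err = 0.0
        for x, y in data:
            try:
                err += (evaluate(tree, x) - y) ** 2
            except OverflowError:
                return float("inf")
        return err if err == err else float("inf")      # NaN check

    def mutate(tree):
        if not isinstance(tree, tuple) or random.random() < 0.3:
            return random_tree()                        # replace a whole subtree
        op, left, right = tree
        return (op, mutate(left), right) if random.random() < 0.5 \
            else (op, left, mutate(right))

    def evolve(data, pop_size=200, generations=60):
        pop = [random_tree() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda t: fitness(t, data))
            parents = pop[: pop_size // 4]              # truncation selection
            pop = parents + [mutate(random.choice(parents))
                             for _ in range(pop_size - len(parents))]
        return min(pop, key=lambda t: fitness(t, data))

    weeks = [(w, 3 * w + 5) for w in range(1, 11)]      # toy cumulative fault counts
    print(evolve(weeks))                                # e.g. ('+', ('*', 'x', 3.0), 5.0)
    ```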

  • 31.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology.
    Torkar, Richard
    Blekinge Institute of Technology.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Genetic programming for cross-release fault count predictions in large and complex software projects (2010). In: Evolutionary Computation and Optimization Algorithms in Software Engineering: Applications and Techniques / [ed] Chis, Monica, IGI Global, Hershey, USA, 2010. Chapter in book (Refereed).
    Abstract [en]

Software fault prediction can play an important role in ensuring software quality through efficient resource allocation. This could, in turn, reduce the potentially high consequential costs of faults. Predicting faults may be even more important with the emergence of short-timed and multiple software releases aimed at quick delivery of functionality. Previous research in software fault prediction has indicated a need (i) to improve the validity of results through comparisons across a number of data sets from a variety of software, (ii) to use appropriate model evaluation measures, and (iii) to use statistical testing procedures. Moreover, cross-release prediction of faults has not yet received sufficient attention in the literature. In an attempt to address these concerns, this paper compares the quantitative and qualitative attributes of 7 traditional and machine-learning techniques for modeling the cross-release prediction of fault count data. The comparison is done using extensive data sets gathered from a total of 7 multi-release open-source and industrial software projects. These software projects together span several years of development and come from diverse application areas, ranging from a web browser to robotic controller software. Our quantitative analysis suggests that genetic programming (GP) tends to have better consistency in terms of goodness of fit and accuracy across the majority of data sets, as well as comparatively less model bias. Qualitatively, ease of configuration and complexity are weaker points for GP, even though it shows generality and gives transparent models. Artificial neural networks did not perform as well as expected, while linear regression gave average predictions in terms of goodness of fit and accuracy. Support vector machine regression and traditional software reliability growth models performed below average on most of the quantitative evaluation criteria, while remaining average on most of the qualitative measures.

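    To make the cross-release setup concrete, here is a hedged sketch of training on one release and testing on the next. The weekly fault counts are invented, and a least-squares line stands in for the models compared in the chapter.

        import numpy as np

        releases = {  # hypothetical cumulative fault counts per week
            'R1': np.array([3, 7, 12, 18, 22, 25, 27]),
            'R2': np.array([2, 6, 13, 20, 26, 30, 31]),
            'R3': np.array([4, 9, 15, 23, 29, 33, 36]),
        }

        def fit_linear(y):
            # Least-squares line through (week index, cumulative faults).
            slope, intercept = np.polyfit(np.arange(len(y)), y, 1)
            return lambda weeks: slope * weeks + intercept

        names = list(releases)
        for train, test in zip(names, names[1:]):
            model = fit_linear(releases[train])
            weeks = np.arange(len(releases[test]))
            error = np.abs(model(weeks) - releases[test]) / releases[test]
            print(f'{train} -> {test}: mean absolute relative error = {error.mean():.2f}')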
  • 32. Afzal, Wasif
    et al.
    Torkar, Richard
    Blekinge Institute of Technology, School of Computing.
    Feldt, Robert
    Blekinge Institute of Technology, School of Computing.
    Gorschek, Tony
    Blekinge Institute of Technology, School of Computing.
    Prediction of faults-slip-through in large software projects: an empirical evaluation2014In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 22, no 1, p. 51-86Article in journal (Refereed)
    Abstract [en]

    A large percentage of the cost of rework can be avoided by finding more faults earlier in a software test process. Therefore, determining which software test phases to focus improvement work on has considerable industrial interest. We evaluate a number of prediction techniques for predicting the number of faults slipping through to the unit, function, integration, and system test phases of a large industrial project. The objective is to quantify the improvement potential in different test phases by striving toward finding the faults in the right phase. The results show that a range of techniques are useful in predicting the number of faults slipping through to the four test phases; however, the group of search-based techniques (genetic programming, gene expression programming, artificial immune recognition system, and particle swarm optimization-based artificial neural network) consistently gives better predictions, being represented at all of the test phases. Human predictions are consistently better at two of the four test phases. We conclude that human predictions regarding the number of faults slipping through to various test phases can be well supported by the use of search-based techniques; a combination of a human and an automated search mechanism (such as any of the search-based techniques) has the potential to provide improved prediction results (a toy faults-slip-through calculation is sketched after this record).

    Download full text (pdf)
    FULLTEXT01
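    The improvement-potential idea can be illustrated with a small faults-slip-through calculation: count the faults found in a later phase than the one where they belonged, and weight them by an assumed per-phase rework cost. The matrix and cost figures below are made up for illustration.

        cost = {'unit': 1, 'function': 4, 'integration': 10, 'system': 25}

        # slip[belonging][found]: faults belonging to one phase but found in another.
        slip = {
            'unit':        {'unit': 40, 'function': 12, 'integration': 6, 'system': 3},
            'function':    {'function': 30, 'integration': 8, 'system': 4},
            'integration': {'integration': 20, 'system': 5},
            'system':      {'system': 15},
        }

        for belong, found_counts in slip.items():
            slipped = {f: n for f, n in found_counts.items() if f != belong}
            extra = sum(n * (cost[f] - cost[belong]) for f, n in slipped.items())
            print(f'{belong}: {sum(slipped.values())} faults slipped, '
                  f'avoidable extra cost ~ {extra} units')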
  • 33.
    Afzal, Wasif
    et al.
    Blekinge Institute of Technology.
    Torkar, Richard
    Blekinge Institute of Technology.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wikstrand, Greger
    KnowIT YAHM Sweden AB, SWE.
    Search-based prediction of fault-slip-through in large software projects2010In: Proceedings - 2nd International Symposium on Search Based Software Engineering, SSBSE 2010, IEEE , 2010, p. 79-88Conference paper (Refereed)
    Abstract [en]

    A large percentage of the cost of rework can be avoided by finding more faults earlier in a software testing process. Therefore, determining which software testing phases to focus improvement work on has considerable industrial interest. This paper evaluates the use of five different techniques, namely particle swarm optimization based artificial neural networks (PSO-ANN), artificial immune recognition systems (AIRS), gene expression programming (GEP), genetic programming (GP) and multiple regression (MR), for predicting the number of faults slipping through the unit, function, integration and system testing phases. The objective is to quantify the improvement potential in different testing phases by striving towards finding the right faults in the right phase. We have conducted an empirical study of two large projects from a telecommunication company developing mobile platforms and wireless semiconductors. The results are compared using simple residuals, goodness of fit and absolute relative error measures (sketched after this record). They indicate that the four search-based techniques (PSO-ANN, AIRS, GEP, GP) perform better than multiple regression for predicting the fault-slip-through in each of the four testing phases. At the unit and function testing phases, AIRS and PSO-ANN performed better, while GP performed better at the integration and system testing phases. The study concludes that a variety of search-based techniques are applicable for predicting the improvement potential in different testing phases, with GP showing more consistent performance across two of the four test phases.

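    The three comparison measures named above are straightforward to compute; the sketch below uses invented predictions, and a two-sample Kolmogorov-Smirnov test stands in for the goodness-of-fit measure (the paper's exact choice may differ).

        import numpy as np
        from scipy import stats

        actual = np.array([14, 9, 17, 11, 6, 13])      # faults slipping through
        predicted = np.array([12, 10, 15, 14, 7, 11])  # one technique's output

        residuals = actual - predicted                        # simple residuals
        mare = np.mean(np.abs(residuals) / actual)            # mean absolute relative error
        ks_stat, p_value = stats.ks_2samp(actual, predicted)  # goodness of fit

        print('residuals:', residuals)
        print(f'mean absolute relative error: {mare:.2f}')
        print(f'KS statistic: {ks_stat:.2f} (p = {p_value:.2f})')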
  • 34.
    Ahl, Viggo
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    An experimental comparison of five prioritization methods: Investigating ease of use, accuracy and scalability2005Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    Requirements prioritization is an important part of developing the right product at the right time. There are different ideas about which method is best to use when prioritizing requirements. This thesis takes a closer look at five different methods and puts them into a controlled experiment in order to find out which of the methods is the best to use. The experiment was designed to find out which method yields the most accurate results, how well each method scales to many more requirements, how long prioritization takes with each method, and finally how easy each method is to use. Combined, these four criteria indicate which method is most suitable, i.e. the best method, for prioritizing requirements. The chosen methods are the well-known analytic hierarchy process, the binary search tree algorithm, and the planning game, which stems from the ideas of extreme programming. The fourth method is an old but well-used one, the 100 points method. The fifth is a new method that combines the planning game with the analytic hierarchy process. Analysis of the data from the experiment indicates that the planning game combined with the analytic hierarchy process could be a good candidate. However, the results from the experiment clearly indicate that the binary search tree yields accurate results, is able to scale up, and was the easiest method to use. For these three reasons, the binary search tree is clearly the better method to use for prioritizing requirements (a minimal version of the tree-based approach is sketched after this record).

    Download full text (pdf)
    FULLTEXT01
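    The tree-based method is easy to state in code: each new requirement is compared pairwise against the nodes of a binary search tree, and an in-order traversal yields the ranking. In the sketch below a hypothetical numeric importance stands in for the human judgment made during a real session.

        class Node:
            def __init__(self, req):
                self.req, self.left, self.right = req, None, None

        def insert(root, req, more_important):
            if root is None:
                return Node(req)
            if more_important(req, root.req):   # asked of a stakeholder in practice
                root.right = insert(root.right, req, more_important)
            else:
                root.left = insert(root.left, req, more_important)
            return root

        def ranked(root):
            # In-order traversal: least important requirement first.
            if root is None:
                return []
            return ranked(root.left) + [root.req] + ranked(root.right)

        importance = {'login': 9, 'export': 4, 'search': 7, 'theming': 2, 'audit': 6}
        root = None
        for req in importance:
            root = insert(root, req, lambda a, b: importance[a] > importance[b])
        print('lowest to highest priority:', ranked(root))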
  • 35.
    Ahlberg, Mårten
    et al.
    Blekinge Institute of Technology, School of Technoculture, Humanities and Planning.
    Liedstrand, Peter
    Blekinge Institute of Technology, School of Technoculture, Humanities and Planning.
    24-timmarsmyndighetens användbarhet2004Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    Communication with government agencies and administrations via the Internet has increased in recent years. We have therefore chosen to focus our bachelor's thesis on this area, and on citizens' need for usable web services. In this thesis we study a growing group of users: elderly citizens. During the study we analyzed the usability of the Swedish 24-hour public agency initiative (24-timmarsmyndigheten) through user tests. The combination of conversations and meetings with individuals, observations of interactions, and literature studies gave us the opportunity to explore the users' needs. These needs are central to how users perceive and interact with the 24-hour agency. The websites we used in our user tests are all connected to the 24-hour agency. By studying the information analytically, we arrived at five important design proposals and guidelines that we consider necessary when e-services within the 24-hour agency are developed.

    Download full text (pdf)
    FULLTEXT01
    Download full text (pdf)
    FULLTEXT02
  • 36.
    Ahlström, Catharina
    et al.
    Blekinge Institute of Technology, Department of Human Work Science and Media Technology.
    Fridensköld, Kristina
    Blekinge Institute of Technology, Department of Human Work Science and Media Technology.
    How to support and enhance communication: in a student software development project2002Independent thesis Basic level (degree of Bachelor)Student thesis
    Abstract [en]

    This report, in which we put an emphasis on the word communication, is based on a student software development project conducted during spring 2002. We describe how the use of design tools plays a key role in supporting communication in group activities, and to what extent communication can be supported and enhanced by tools such as mock-ups and metaphors in a group project. We also describe the design's progression from initial sketches to a final mock-up of a GUI for a postcard demo application.

    Download full text (pdf)
    FULLTEXT01
    Download full text (pdf)
    FULLTEXT02
    Download full text (pdf)
    FULLTEXT03
    Download full text (pdf)
    FULLTEXT04
  • 37.
    Ahlström, Frida
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Karlsson, Janni
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Utvecklarens förutsättningar för säkerställande av tillgänglig webb2022Independent thesis Basic level (degree of Bachelor), 10 credits / 15 HE creditsStudent thesis
    Abstract [en]

    Since 2019, all public-sector websites in Sweden have been legally required to meet a certain degree of digital accessibility. An additional EU directive is being transposed into national law at the time of publication of this thesis, which will impose corresponding requirements on parts of the private sector, such as banking services and e-commerce. This will likely cause increased demand that suppliers of web development services and, in turn, their developers must be able to meet. 

    The aims of this study are to create an increased awareness of digital accessibility as well as to clarify, from the developer’s perspective, how this degree of accessibility is achieved and what could make application of digital accessibility more efficient. 

    In order to achieve this, eight qualitative interviews were conducted, transcribed and thematized in the results section. An inductive thematic analysis has been carried out related to the research questions. It compares the results of previous studies with the outcomes from this study, and shows clear similarities but also differences and new discoveries. 

    The study shows that developers have access to evaluation tools and guidelines that provide good support in their work, but that the responsibility often lies with individual developers rather than with the business as a whole. This is one of the main challenges, together with the fact that inaccessible development is still being carried out in parallel, and that time pressure leads to deprioritization of accessibility. However, the respondents agree that it does not take any more time to develop accessible rather than inaccessible websites, provided that this is taken into account from the outset. Success factors for digital accessibility are to sell the idea to the customer, to work in a structured way with knowledge sharing and to document solutions in order to save time. In addition to this, it appears that the implementation of accessibility would benefit from the ownership being raised to a higher decision level and the competence being broadened in the supplier's organization, and that developers gain access to specialist competence and user tests to support their work. A basic knowledge of accessibility could be included in web development training to a greater extent, and an extension of the legal requirements could also create additional incentives for the customer. 

    Download full text (pdf)
    fulltext
  • 38. Ahmad, A
    et al.
    Shahzad, Aamir
    Padmanabhuni, Kumar
    Mansoor, Ali
    Joseph, Sushma
    Arshad, Zaki
    Requirements prioritization with respect to Geographically Distributed Stakeholders2011Conference paper (Refereed)
    Abstract [en]

    Requirements selection for software releases can play a vital role in the success of a software product. This selection is done with different requirements prioritization techniques. This paper discusses limitations of two such techniques, the 100-dollar method and the binary search tree, with respect to geographically distributed stakeholders. We conducted two experiments to analyze the variations among the results of these techniques: the first used the standard 100-dollar method and binary search tree technique, and the second used modified versions of both. This paper also discusses attributes that can affect requirements prioritization when dealing with geographically distributed stakeholders, and the results of both experiments are discussed. Finally, the paper provides a framework that can be used to identify those requirements that can play an important role in a product's success during distributed development (a minimal version of the cumulative-voting calculation is sketched after this record).

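    For reference, the 100-dollar (cumulative voting) method at the heart of the first experiment reduces to a short calculation: every stakeholder distributes 100 points over the requirements, and the sums give the priority order. The stakeholders and allocations below are invented.

        votes = {
            'stakeholder_a': {'R1': 50, 'R2': 30, 'R3': 20},
            'stakeholder_b': {'R1': 10, 'R2': 60, 'R3': 30},
            'stakeholder_c': {'R1': 25, 'R2': 25, 'R3': 50},
        }
        assert all(sum(v.values()) == 100 for v in votes.values())

        totals = {}
        for allocation in votes.values():
            for req, points in allocation.items():
                totals[req] = totals.get(req, 0) + points

        # Highest total first: the aggregated priority order.
        for req, total in sorted(totals.items(), key=lambda kv: -kv[1]):
            print(req, total)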
  • 39.
    Ahmad, Al Ghaith
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Abd ULRAHMAN, Ibrahim
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science.
    Matching ESCF Prescribed Cyber Security Skills with the Swedish Job Market: Evaluating the Effectiveness of a Language Model2023Independent thesis Basic level (degree of Bachelor), 12 credits / 18 HE creditsStudent thesis
    Abstract [en]

    Background: As the demand for cybersecurity professionals continues to rise, it is crucial to identify the key skills necessary to thrive in this field. This research project sheds light on the cybersecurity skills landscape by analyzing the recommendations provided by the European Cybersecurity Skills Framework (ECSF), examining the most required skills in the Swedish job market, and investigating the common skills identified through the findings. The project utilizes the large language model, ChatGPT, to classify common cybersecurity skills and evaluate its accuracy compared to human classification.

    Objective: The primary objective of this research is to examine the alignment between the European Cybersecurity Skills Framework (ECSF) and the specific skill demands of the Swedish cybersecurity job market. This study aims to identify common skills and evaluate the effectiveness of a Language Model (ChatGPT) in categorizing jobs based on ECSF profiles. Additionally, it seeks to provide valuable insights for educational institutions and policymakers aiming to enhance workforce development in the cybersecurity sector.

    Methods: The research begins with a review of the European Cybersecurity Skills Framework (ECSF) to understand its recommendations and its methodology for defining cybersecurity skills, and to delineate the cybersecurity profiles along with their corresponding key skills as outlined by the ECSF. Subsequently, a Python-based web crawler was implemented to gather data on cybersecurity job announcements from the Swedish Employment Agency's website. These data are analyzed to identify the cybersecurity skills most frequently sought by employers in Sweden. The language model (ChatGPT) is used to classify these positions according to ECSF profiles. Concurrently, two human agents manually categorize the jobs to serve as a benchmark for evaluating the accuracy of the language model, allowing a comprehensive assessment of its performance (a toy version of this comparison is sketched after this record).

    Results: The study thoroughly reviews and cites the skills recommended by the ECSF, offering a comprehensive European perspective on key cybersecurity skills (Tables 4 and 5). Additionally, it identifies the most in-demand skills in the Swedish job market, as illustrated in Figure 6. The research reveals the match between ECSF-prescribed skills in different profiles and those sought after in the Swedish cybersecurity market. The skills of the profiles 'Cybersecurity Implementer' and 'Cybersecurity Architect' emerge as particularly critical, representing over 58% of the market demand. The research further highlights skills shared across various profiles (Table 7).

    Conclusion: This study highlights the alignment between the European Cybersecurity Skills Framework (ECSF) recommendations and the evolving demands of the Swedish cybersecurity job market. Through a review of ECSF-prescribed skills and a thorough examination of the Swedish job landscape, this research identifies crucial areas of alignment. Significantly, the skills associated with the 'Cybersecurity Implementer' and 'Cybersecurity Architect' profiles emerge as central, collectively constituting over 58% of market demand. This emphasizes the urgent need for educational programs to adapt and harmonize with industry requisites. Moreover, the study advances our understanding of the language model's effectiveness in job categorization. The findings hold significant implications for workforce development strategies and educational policies within the cybersecurity domain, underscoring the pivotal role of informed skills development in meeting the evolving needs of the cybersecurity workforce.

    Download full text (pdf)
    Matching ESCF Prescribed Cyber Security Skills with the Swedish Job Market: Evaluating the Effectiveness of a Language Model
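    The benchmark step can be pictured with a toy comparison: the model's ECSF profile labels are scored against each human annotator, alongside the annotators' own agreement. All labels below are fabricated examples, not the thesis data.

        llm_labels = ['Implementer', 'Architect', 'Implementer', 'Auditor', 'Architect']
        human_one  = ['Implementer', 'Architect', 'Architect',   'Auditor', 'Architect']
        human_two  = ['Implementer', 'Architect', 'Implementer', 'Auditor', 'Educator']

        def agreement(a, b):
            # Fraction of job ads receiving the same profile label.
            return sum(x == y for x, y in zip(a, b)) / len(a)

        print('LLM vs annotator 1 :', agreement(llm_labels, human_one))
        print('LLM vs annotator 2 :', agreement(llm_labels, human_two))
        print('annotator agreement:', agreement(human_one, human_two))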
  • 40.
    Ahmad, Arshad
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Khan, Hashim
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    The Importance of Knowledge Management Practices in Overcoming the Global Software Engineering Challenges in Requirements Understanding2008Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    Going offshore has become a norm in current software organizations due to several benefits such as the availability of competent people, cost, proximity to market and customers, and time. Although Global Software Engineering (GSE) offers many benefits to software organizations, it has also created several challenges for practitioners and researchers, such as culture, communication, co-ordination and collaboration, and team building. Requirements Engineering (RE) is a human-intensive activity and one of the most challenging and important phases in software development. RE therefore becomes even more challenging in a GSE context because of culture, communication, coordination, collaboration and so on. Due to the aforementioned GSE factors, requirements understanding has become a challenge for software organizations involved in GSE. Furthermore, Knowledge Management (KM) is considered one of the most important assets of an organization because it not only enables organizations to efficiently share and create knowledge but also helps in resolving culture, communication and co-ordination issues, especially in GSE. The aim of this study is to present how KM practices help globally dispersed software organizations in requirements understanding. For this purpose a thorough literature study is performed, along with interviews at two companies, with the intent to identify useful KM practices and the challenges of requirements understanding in GSE. Based on an analysis of the challenges identified both in the literature review and in the industrial interviews, useful KM practices are shown and discussed to reduce the requirements understanding issues faced in GSE.

    Download full text (pdf)
    FULLTEXT01
  • 41. Ahmad, Azeem
    et al.
    Göransson, Magnus
    Shahzad, Aamir
    Limitations of the analytic hierarchy process technique with respect to geographically distributed stakeholders2010In: Proceedings of World Academy of Science, Engineering and Technology, ISSN 2010-376X, Vol. 70, no Sept., p. 111-116Article in journal (Refereed)
    Abstract [en]

    The selection of appropriate requirements for product releases can make a big difference in a product's success. Requirements selection is done with different requirements prioritization techniques, which are based on pre-defined, systematic steps to calculate the requirements' relative weights. Prioritization is complicated by new development settings that shift from traditional co-located development to geographically distributed development, where the stakeholders connected to a project are spread all over the world. This geographical distribution of stakeholders makes it hard to prioritize requirements, as each stakeholder has their own perception and expectations of the requirements in a software project. This paper discusses limitations of the analytic hierarchy process (AHP) with respect to geographically distributed stakeholders' (GDS) prioritization of requirements, and provides a solution, in the form of a modified AHP, for prioritizing requirements for GDS. We conduct two experiments and analyze the results in order to discuss the limitations of AHP with respect to GDS. The modified AHP variant is also validated in this paper (the core AHP weight calculation is sketched after this record).

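    The core AHP calculation the paper builds on can be sketched in a few lines: derive priority weights from a pairwise comparison matrix via its principal eigenvector, then check Saaty's consistency ratio. The 3x3 judgments below are illustrative only.

        import numpy as np

        # A[i, j]: how much more important requirement i is than j (Saaty's 1-9 scale).
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        weights = np.abs(eigvecs[:, k].real)
        weights /= weights.sum()                    # normalized priority vector

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)        # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # random index (Saaty)
        print('weights:', np.round(weights, 3))
        print('consistency ratio:', round(ci / ri, 3))  # below 0.1 is acceptable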
  • 42.
    Ahmad, Azeem
    et al.
    Blekinge Institute of Technology, School of Computing.
    Kolla, Sushma Joseph
    Blekinge Institute of Technology, School of Computing.
    Effective Distribution of Roles and Responsibilities in Global Software Development Teams2012Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context. Industry is moving from a co-located form of development to distributed development in order to achieve different benefits such as cost reduction, access to skilled labor and around-the-clock work. This transfer requires industry to face different challenges such as communication, coordination and monitoring problems. The risk of project failure increases if industry does not address these problems. This thesis is about providing solutions to these problems in terms of effective roles and responsibilities that may have a positive impact on GSD teams. Objectives. In this study we have developed a framework for suggesting roles and responsibilities for GSD teams. This framework consists of problems, and causal dependencies between them, which are related to a team's ineffectiveness; suggestions in terms of roles and responsibilities are then presented in order to have an effective team in GSD. The framework has further been validated in industry through a survey that determines which roles and responsibilities are effective in GSD. Methods. We have used two research methods in this study: 1) a systematic literature review and 2) a survey. The complete protocols for planning, conducting and reporting the review as well as the survey are described in their respective sections of this thesis. The systematic review is used to develop the framework, whereas the survey is used for framework validation; we have done a static validation of the framework. Results. Through the SLR, we have identified 30 problems and 33 chains of problems, and we have identified 4 different roles and 40 different responsibilities to address these chains of problems. During the validation of the framework, we have validated the links between the suggested roles and responsibilities and the chains of problems. In addition, through the survey, we have identified 20 suggestions that represent a strong positive impact on chains of problems in GSD in relation to a team's effectiveness. Conclusions. We conclude that the implementation of effective roles and responsibilities in a GSD team to avoid different problems requires considerable attention from researchers and practitioners, as it can guarantee a team's effectiveness. The implementation of proper roles and responsibilities has been mentioned in the literature as one of the successful strategies for increasing a team's effectiveness, but which particular roles and responsibilities should be implemented still needs to be addressed. We also conclude that there must be basic responsibilities associated with any particular role. Moreover, there is a need for further development and empirical validation of different frameworks for suggesting roles and responsibilities in full-scale industry trials.

    Download full text (pdf)
    FULLTEXT01
  • 43.
    Ahmad, Ehsan
    et al.
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Raza, Bilal
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Towards Optimization of Software V&V Activities in the Space Industry [Two Industrial Case Studies]2009Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Developing software for highly dependable space applications and systems is a formidable task. With new political and market pressures on the space industry to deliver more software at a lower cost, optimization of its methods and standards needs to be investigated. The industry has to follow standards that strictly set quality goals and prescribe engineering processes and methods to fulfill them. The overall goal of this study is to evaluate whether the current use of the ECSS standards is cost efficient, whether there are ways to make the process leaner while still maintaining quality, and whether V&V activities can be optimized. This paper presents results from two industrial case studies of companies in the European space industry that follow ECSS standards and have various V&V activities. The case studies focused on how the ECSS standards were used by the companies, how that affected their processes, and how their V&V activities can be optimized.

    Download full text (pdf)
    FULLTEXT01
  • 44. Ahmad, Ehsan
    et al.
    Raza, Bilal
    Feldt, Robert
    Assessment and support for software capstone projects at the undergraduate level: A survey and rubrics2011Conference paper (Refereed)
    Abstract [en]

    Software engineering and computer science students conduct a capstone project during the final year of their degree programs. These projects are essential in validating that students have gained the required knowledge and can synthesize and use that knowledge to solve real-world problems. However, the external requirements on educational programs often do not provide detailed guidelines for how to conduct or support these capstone projects, which may lead to variations among universities. This paper presents the results of a survey, conducted at 19 different Pakistani universities, of the current management practices and assessment criteria used for capstone project courses at the undergraduate level. Based upon the results of this survey and similar work on master's thesis capstone projects in Sweden, we present assessment rubrics for software-related undergraduate capstone projects. We also present recommendations for the continuous improvement of capstone projects.

  • 45.
    Ahmad, Saleem Zubair
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Analyzing Suitability of SysML for System Engineering Applications2007Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    During the last decade, UML has had to face several tricky challenges. For instance, as a single unified, general-purpose modeling language it should offer simple and explicit semantics applicable to a wide range of domains. With the significant shift of focus from software to systems, the software-centric attitude of UML has been exposed, so there is a need for a domain-specific language that addresses the problems of systems rather than software only; this is the motivation for SysML. In this thesis SysML is evaluated to analyze its suitability for system engineering applications. Evaluation criteria are established, through which the appropriateness of SysML is observed over the system development life cycle. The study is conducted using a real-life case example, an automobile product. The results of the research not only provide an opportunity to gain insight into the SysML architecture but also offer an idea of SysML's appropriateness for multidisciplinary product development.

    Download full text (pdf)
    FULLTEXT01
  • 46.
    Ahmed, Abdifatah
    et al.
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Lindhe, Magnus
    Blekinge Institute of Technology, Department of Software Engineering and Computer Science.
    Efficient And Maintainable Test Automation2002Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    More and more companies experience problems with the maintainability and time-consuming development of automated testing tools. The MPC department at Ericsson Software Technology AB uses methods and tools often developed under time pressure, which results in time-consuming testing and requires more effort and resources than planned. The tools are also of such a nature that they are hard to expand and maintain, and in some cases they have been thrown out between releases. For this reason, we could identify two major objectives that MPC wants to achieve: efficient and maintainable test automation. Efficient test automation is mainly about how to perform tests with less effort, or in a shorter time. Maintainable test automation aims to keep tests up to date with the software. In order to decide how to achieve these objectives, we investigated which tests to automate, what should be improved in the testing process, which techniques to use, and finally whether or not the use of automated testing can reduce the cost of testing. These issues are discussed in this paper.

    Download full text (pdf)
    FULLTEXT01
  • 47.
    Ahmed, Israr
    et al.
    Blekinge Institute of Technology, School of Computing.
    Nadeem, Shahid
    Blekinge Institute of Technology, School of Computing.
    Minimizing Defects Originating from Elicitation, Analysis and Negotiation (E and A&N) Phase in Bespoke Requirements Engineering2009Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Defect prevention (DP) in the early stages of the software development life cycle (SDLC) is much more cost effective than in later stages. The requirements elicitation and analysis & negotiation (E and A&N) phases in the requirements engineering (RE) process are critical and are a major source of requirements defects. A poor E and A&N process may lead to a software requirements specification (SRS) full of defects such as missing, ambiguous, inconsistent, misunderstood, and incomplete requirements. If these defects are identified and fixed only in later stages of the SDLC, they cause major rework with extra cost and effort. Organizations spend about half of their total project budget on avoidable rework, and the majority of defects originate from RE activities. This study is an attempt to prevent requirements-level defects from penetrating into later stages of the SDLC. For this purpose, empirical and literature studies are presented in this thesis. The empirical study is carried out with the help of six companies from Pakistan and Sweden by conducting interviews, and the literature study is done by using literature reviews. This study explores the most common requirements defect types, their causes, the severity level of defects (i.e. major or minor), the DP techniques (DPTs) and methods and defect identification techniques that have been used in the software development industry, and the problems with these DPTs. The study also describes possible major differences between Swedish and Pakistani software companies in terms of defect types and the rate of defects originating from the E and A&N phases. On the basis of the study results, some solutions are proposed to prevent requirements defects during the RE process, thereby minimizing the defects originating from the E and A&N phases of RE in bespoke requirements engineering (BESRE).

    Download full text (pdf)
    FULLTEXT01
  • 48.
    Ahmed, Rajib
    et al.
    Razzak, Mohammad Abdur
    Blekinge Institute of Technology, School of Computing.
    Knowledge Management in Distributed Agile Projects2013Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Knowledge management (KM) is essential for success in Global Software Development (GSD), Distributed Software Development (DSD), and Global Software Engineering (GSE). Software organizations are managing knowledge in innovative ways to increase productivity, and one of the major objectives of KM is to improve productivity through effective knowledge sharing and transfer. To maintain effective knowledge sharing in distributed agile projects, practitioners therefore need to adopt different types of knowledge sharing techniques and strategies. Distributed projects introduce new challenges to KM, so practices that are used in agile teams become difficult to put into action in distributed development. Although informal communication is the key enabler for knowledge sharing, when an agile project is distributed, informal communication and knowledge sharing are challenged by the low communication bandwidth between distributed team members, as well as by social and cultural distance. In the work presented in this thesis, we have made an overview of empirical studies of knowledge management in distributed agile projects. Based on the main theme of this study, we have categorized and reported our findings on major concepts that need empirical investigation. We have classified the main research theme in this thesis within two sub-themes:
    • RT1: Knowledge sharing activities in distributed agile projects.
    • RT2: Spatial knowledge sharing in a distributed agile project.
    The main contributions are:
    • C1: Empirical observations regarding knowledge sharing activities in distributed agile projects.
    • C2: Empirical observations regarding spatial knowledge sharing in a distributed agile project.
    • C3: Process improvement scope and guidelines for the studied project.

    Download full text (pdf)
    FULLTEXT01
  • 49.
    Ahmed, Qutub Uddin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Mujib, Saifullah Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Computer Science and Engineering.
    Context Aware Reminder System: Activity Recognition Using Smartphone Accelerometer and Gyroscope Sensors Supporting Context-Based Reminder Systems2014Independent thesis Advanced level (degree of Master (Two Years))Student thesis
    Abstract [en]

    Context. A reminder system offers flexibility in daily life activities and assists in being independent. A reminder system not only helps in remembering daily life activities but also serves, to a great extent, people who deal with health care issues, for example a health supervisor who monitors people with different health-related problems such as disabilities or mild dementia. Traditional reminders, which are based on a set of defined activities, are not enough to address the necessity in a wider context. To make a reminder more flexible, the user's current activities or contexts need to be considered. To recognize a user's current activity, different types of sensors can be used; such sensors are available in Smartphones, which can assist in building a more contextual reminder system. Objectives. To make a reminder context based, it is important to identify the context, and the user's activities need to be recognized at a particular moment. Keeping this notion in mind, this research aims to understand the relevant contexts and activities, to identify an effective way to recognize a user's three different activities (drinking, walking and jogging) using Smartphone sensors (accelerometer and gyroscope), and to propose a model that uses the properties of the recognized activities. Methods. This research combined a survey and interviews with an exploratory Smartphone sensor experiment to recognize user activity. An online survey was conducted with 29 participants, and interviews were held in cooperation with the Karlskrona Municipality; four elderly people participated in the interviews. For the experiment, data on three different user activities were collected using Smartphone sensors and analyzed to identify the pattern of each activity. Moreover, a model is proposed to exploit the properties of the activity patterns. The performance of the proposed model was evaluated using the machine learning tool WEKA. Results. The survey and interviews helped in understanding which important activities of daily living should be considered in designing the reminder system, and how and when it should be used. For instance, most of the participants in the survey already use some sort of reminder system, most of them use a Smartphone, and one of the most important tasks they forget is to take their medicine. These findings guided the experiment. From the experiment, different patterns were observed for the three activities: for walking and jogging the pattern is discrete, while for drinking the pattern is complex and can sometimes overlap with other activities or become noisy. Conclusions. The survey, interviews and background study provided a set of evidences that a reminder system based on users' activity is essential in daily life. The large number of Smartphone users motivated this research to use Smartphone sensors to identify users' activities, with the aim of developing an activity-based reminder system. The study identified the data patterns by applying simple mathematical calculations to the recorded accelerometer and gyroscope data (the flavor of such a calculation is sketched after this record). The approach was evaluated with 99% accuracy on the experimental data. The study concludes by proposing a model that uses the properties of the identified activities and by developing a prototype of a reminder system. Preliminary tests were performed on the model, but there is a need for further empirical validation and verification.

    Download full text (pdf)
    FULLTEXT01
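    The kind of simple calculation the thesis alludes to can be sketched as windowed magnitude statistics with threshold rules. The thresholds and the synthetic accelerometer stream below are illustrative assumptions, not the thesis's recorded data.

        import math, random

        def magnitude(sample):
            x, y, z = sample
            return math.sqrt(x * x + y * y + z * z)

        def classify_window(window):
            # Variance of the acceleration magnitude separates activity intensities.
            mags = [magnitude(s) for s in window]
            mean = sum(mags) / len(mags)
            var = sum((m - mean) ** 2 for m in mags) / len(mags)
            if var < 0.5:
                return 'low movement (drinking is the hard case)'
            return 'jogging' if var > 4.0 else 'walking'

        def fake_stream(noise, n=50):
            # Synthetic 3-axis accelerometer data: gravity ~9.8 on z plus noise.
            return [(random.gauss(0, noise), random.gauss(0, noise),
                     9.8 + random.gauss(0, noise)) for _ in range(n)]

        for label, noise in [('walking', 1.2), ('jogging', 2.5), ('drinking', 0.2)]:
            print(label, '->', classify_window(fake_stream(noise)))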
  • 50.
    Ahmed, Syed Rizwan
    Blekinge Institute of Technology, School of Engineering, Department of Systems and Software Engineering.
    Secure Software Development: Identification of Security Activities and Their Integration in Software Development Lifecycle2007Independent thesis Advanced level (degree of Master (One Year))Student thesis
    Abstract [en]

    Today's software is more vulnerable to attacks due to increases in complexity, connectivity and extensibility. Securing software is usually considered a post-development activity, and not much importance is given to it during development. However, the losses that organizations have incurred over the years due to security flaws in software have prompted researchers to find better ways of securing software. In the light of research done by many researchers, this thesis presents how software can be secured by considering security in different phases of the software development life cycle. A number of security activities have been identified that are needed to build secure software, and it is shown how these security activities relate to the software development activities of the software development lifecycle.

    Download full text (pdf)
    FULLTEXT01