Search results 1–50 of 117
  • 1.
    Ali, Nauman Bin
    et al.
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    A consolidated process for software process simulation: State of the Art and Industry Experience (2012). Conference paper (Refereed)
    Abstract [en]

    Software process simulation is a complex task, and in order to conduct a simulation project practitioners require support through a process for software process simulation modelling (SPSM), including what steps to take and what guidelines to follow in each step. This paper provides a literature-based consolidated process for SPSM, where the steps and guidelines for each step are identified through a review of the literature and are complemented by experience from using these recommendations in action research at a large telecommunication vendor. We found five simulation processes in the SPSM literature, resulting in a seven-step consolidated process. The consolidated process was successfully applied at the studied company, and the experiences of doing so are reported.

  • 2.
    Ali, Nauman bin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    FLOW-assisted value stream mapping in the early phases of large-scale software development (2016). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 111, p. 213-227. Article in journal (Refereed)
    Abstract [en]

    Value stream mapping (VSM) has been successfully applied in the context of software process improvement. However, its current adaptations from Lean manufacturing focus mostly on the flow of artifacts and take no account of the essential information flows in software development. A solution specifically targeted toward information flow elicitation and modeling is FLOW. This paper aims to propose and evaluate the combination of VSM and FLOW to identify and alleviate information- and communication-related challenges in large-scale software development. Using case study research, FLOW-assisted VSM was used for a large product at Ericsson AB, Sweden. Both the process and the outcome of FLOW-assisted VSM have been evaluated from the practitioners' perspective. It was noted that FLOW helped to systematically identify challenges and improvements related to information flow. Practitioners responded favorably to the use of VSM and FLOW, acknowledged the realistic nature and impact of the improvements on software quality, and found the overview of the entire process using the FLOW notation very useful. The combination of FLOW and VSM presented in this study was successful in systematically uncovering issues and characterizing their solutions, indicating its practical usefulness for waste removal with a focus on information-flow-related issues.

  • 3.
    Ali, Nauman Bin
    et al.
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Mäntylä, Mika
    Testing highly complex system of systems: An industrial case study (2012). Conference paper (Refereed)
    Abstract [en]

    Systems of systems (SoS) are highly complex and are integrated on multiple levels (unit, component, system, system of systems). Many of the characteristics of SoS (such as operational and managerial independence, integration of systems into a system of systems, and SoS being comprised of complex systems) make their development and testing challenging. Contribution: This paper provides an understanding of SoS testing in large-scale industry settings with respect to challenges and how to address them. Method: The research method used is case study research. As data collection methods we used interviews, documentation, and fault slippage data. Results: We identified challenges related to SoS with respect to fault slippage, test turn-around time, and test maintainability. We also classified the testing challenges into general testing challenges, challenges amplified by SoS, and challenges that are SoS specific. Interestingly, the interviewees agreed on the challenges, even though we sampled them with diversity in mind, which indicates that the number of interviews conducted was sufficient to answer our research questions. We also identified solution proposals to the challenges, categorized under four classes: developer quality assurance, function test, testing on all levels, and requirements engineering and communication. Conclusion: We conclude that although over half of the challenges we identified can be categorized as general testing challenges, SoS still have their own unique and amplified challenges stemming from SoS characteristics. Furthermore, the interviews and the fault slippage data pointed to different areas of the software process that should be improved, which indicates that using only one of these methods would have led to an incomplete picture of the challenges at the case company.

  • 4.
    Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Evaluating strategies for study selection in systematic literature studies (2014). In: ESEM '14 Proceedings of the 8th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ACM, 2014, Article 45. Conference paper (Refereed)
    Abstract [en]

    Context: The study selection process is critical to improve the reliability of secondary studies. Goal: To evaluate the selection strategies commonly employed in secondary studies in software engineering. Method: Building on these strategies, a study selection process was formulated and evaluated in a systematic review. Results: The selection process used a more inclusive strategy than the one typically used in secondary studies, which led to additional relevant articles. Conclusions: The results indicate that a good-enough sample could be obtained by following a less inclusive but more efficient strategy, if the articles identified as relevant for the study are a representative sample of the population, and there is homogeneity in the results and quality of the articles.

  • 5.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mattsson, Michael
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Characteristics that affect Preference of Decision Models for Asset Selection: An Industrial Questionnaire Survey (2020). In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 28, no 4, p. 1675-1707. Article in journal (Refereed)
    Abstract [en]

    Modern software development relies on a combination of development and re-use of technical assets, e.g. software components, libraries and APIs. In the past, re-use was mostly conducted with internal assets, but today external assets, such as open source software, commercial off-the-shelf (COTS) components and assets developed through outsourcing, are also common. This access to more asset alternatives presents new challenges regarding which assets to choose and how to make this decision. To support decision-makers, decision theory has been used to develop decision models for asset selection. However, very little industrial data has been presented in the literature about the usefulness, or even perceived usefulness, of these models. Additionally, only limited information has been presented about which model characteristics determine practitioner preference towards one model over another.

    Objective: The objective of this work is to evaluate which characteristics of decision models for asset selection determine industrial practitioners' preference for a model, when given the choice between a decision model of high precision and a model with high speed.

    Method: An industrial questionnaire survey is performed in which a total of 33 practitioners, of varying roles, from 18 companies are tasked to compare two decision models for asset selection. Textual analysis and formal and descriptive statistics are then applied to the survey responses to answer the study's research questions.

    Results: The study shows that the practitioners had a clear preference for the decision model that emphasised speed over the one that emphasised decision precision. This preference was attributed to the preferred model being perceived as faster, having lower complexity, being more flexible in use for different decisions, being more agile in how it could be used in operation, its emphasis on people, its emphasis on "good enough" precision, and its ability to fail fast if a decision was a failure. Hence, seven characteristics were identified that the practitioners considered important for their acceptance of the model.

    Conclusion: Industrial practitioner preference, which relates to acceptance, of decision models for asset selection depends on multiple characteristics that must be considered when developing a model for different types of decisions, such as operational day-to-day decisions as well as more critical tactical or strategic decisions. The main contribution of this work is the seven identified characteristics, which can serve as industrial requirements for future research on decision models for asset selection.

  • 6.
    Alégroth, Emil
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mattsson, Michael
    Characteristics that affect Preference of Decision Models for Asset Selection: An Industrial Questionnaire Survey - Appendix A: Questionnaire Introduction. Decision-making in Practice / Appendix B: Survey results (2019). Data set
  • 7.
    Baca, Dejan
    et al.
    Blekinge Institute of Technology, School of Computing.
    Carlsson, Bengt
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Lundberg, Lars
    Blekinge Institute of Technology, School of Computing.
    Improving software security with static automated code analysis in an industry setting (2013). In: Software, practice & experience, ISSN 0038-0644, E-ISSN 1097-024X, Vol. 43, no 3, p. 259-279. Article in journal (Refereed)
    Abstract [en]

    Software security can be improved by identifying and correcting vulnerabilities. In order to reduce the cost of rework, vulnerabilities should be detected as early and efficiently as possible. Static automated code analysis is an approach for early detection. So far, only few empirical studies have been conducted in an industrial context to evaluate static automated code analysis. A case study was conducted to evaluate static code analysis in industry, focusing on defect detection capability, deployment, and usage of static automated code analysis with a focus on software security. We identified that the tool was capable of detecting memory-related vulnerabilities, but few vulnerabilities of other types. The deployment of the tool played an important role in its success as an early vulnerability detector, as did the developers' perception of the tool's merit. Classifying the warnings from the tool was harder for the developers than correcting them. The correction of false positives in some cases created new vulnerabilities in previously safe code. With regard to defect detection ability, we conclude that static code analysis is able to identify vulnerabilities in different categories. In terms of deployment, we conclude that the tool should be integrated with bug reporting systems, and developers need to share the responsibility for classifying and reporting warnings. With regard to tool usage by developers, we propose to use multiple persons (at least two) in classifying a warning. The same goes for making the decision of how to act based on the warning.

  • 8.
    Baca, Dejan
    et al.
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Countermeasure graphs for software security risk assessment: An action research (2013). In: Journal of Systems and Software, ISSN 0164-1212, Vol. 86, no 9, p. 2411-2428. Article in journal (Refereed)
    Abstract [en]

    Software security risk analysis is an important part of improving software quality. In previous research we proposed countermeasure graphs (CGs), an approach to conduct risk analysis, combining the ideas of different risk analysis approaches. The approach was designed for reuse and easy evolvability to support agile software development. CGs have not been evaluated in industry practice in agile software development. In this research we evaluate the ability of CGs to support practitioners in identifying the most critical threats and countermeasures. The research method used is participatory action research where CGs were evaluated in a series of risk analyses on four different telecom products. With Peltier (used prior to the use of CGs at the company) the practitioners identified attacks with low to medium risk level. CGs allowed practitioners to identify more serious risks (in the first iteration 1 serious threat, 5 high risk threats, and 11 medium threats). The need for tool support was identified very early, tool support allowed the practitioners to play through scenarios of which countermeasures to implement, and supported reuse. The results indicate that CGs support practitioners in identifying high risk security threats, work well in an agile software development context, and are cost-effective.

  • 9. Baca, Dejan
    et al.
    Petersen, Kai
    Prioritizing Countermeasures through the Countermeasure Method for Software Security (CM-Sec) (2010). Conference paper (Refereed)
    Abstract [en]

    Software security is an important quality aspect of a software system. Therefore, it is important to integrate software security touch points throughout the development life-cycle. So far, the focus of touch points in the early phases has been on the identification of threats and attacks. In this paper we propose a novel method focusing on the end product by prioritizing countermeasures. The method provides an extension to attack trees and a process for identification and prioritization of countermeasures. The approach has been applied on an open-source application and showed that countermeasures could be identified. Furthermore, an analysis of the effectiveness and cost-efficiency of the countermeasures could be provided.

  • 10. Baca, Dejan
    et al.
    Petersen, Kai
    Carlsson, Bengt
    Lundberg, Lars
    Static Code Analysis to Detect Software Security Vulnerabilities: Does Experience Matter? (2009). Conference paper (Refereed)
    Abstract [en]

    Code reviews with static analysis tools are today recommended by several security development processes. Developers are expected to use the tools' output to detect the security threats they themselves have introduced in the source code. This approach assumes that all developers can correctly identify a warning from a static analysis tool (SAT) as a security threat that needs to be corrected. We have conducted an industry experiment with a state of the art static analysis tool and real vulnerabilities. We have found that average developers do not correctly identify the security warnings and only developers with specific experiences are better than chance in detecting the security vulnerabilities. Specific SAT experience more than doubled the number of correct answers and a combination of security experience and SAT experience almost tripled the number of correct security answers.

  • 11.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Software Component Decision-making: In-house, OSS, COTS or Outsourcing: A Systematic Literature Review (2016). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 121, p. 105-124. Article in journal (Refereed)
    Abstract [en]

    Component-based software systems require decisions on component origins for acquiring components. A component origin is an alternative for where to get a component from. Objective: To identify factors that could influence the decision to choose among different component origins, and solutions for decision-making (for example, optimization), in the literature. Method: A systematic review of peer-reviewed literature has been conducted. Results: In total we included 24 primary studies. The component origins compared were mainly focused on in-house vs. COTS and COTS vs. OSS. We identified 11 factors affecting or influencing the decision to select a component origin. When component origins were compared, there was little evidence on the relative (either positive or negative) effect of a component origin on the factors. Most of the solutions were proposed for in-house vs. COTS selection, and time, cost and reliability were the factors most considered in the solutions. Optimization models were the most commonly proposed technique used in the solutions. Conclusion: The topic of choosing component origins is a green field for research, and in great need of empirical comparisons between the component origins, as well as of how to decide between different combinations of them.

  • 12.
    Badampudi, Deepika
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Experiences from Using Snowballing and Database Searches in Systematic Literature Studies (2015). Conference paper (Refereed)
    Abstract [en]

    Background: Systematic literature studies are commonly used in software engineering. There are two main ways of conducting the searches for these types of studies: snowballing and database searches. In snowballing, the reference lists (backward snowballing - BSB) and citations (forward snowballing - FSB) of relevant papers are reviewed to identify new papers, whereas in a database search, different databases are searched using predefined search strings to identify new papers. Objective: Snowballing has not been used as extensively as database search. Hence it is important to evaluate its efficiency and reliability when being used as a search strategy in literature studies, and to compare it to database searches. Method: In this paper, we applied snowballing in a literature study and reflected on the outcome. We also compared database search with backward and forward snowballing. Database search and snowballing were conducted independently by different researchers. The searches of our literature study were compared with respect to the efficiency and reliability of the findings. Results: Out of the total number of papers found, snowballing identified 83% of the papers, in comparison to 46% for the database search. Snowballing failed to identify a few relevant papers, which potentially could have been addressed by identifying a more comprehensive start set. Conclusion: The efficiency of snowballing is comparable to database search. It can potentially be more reliable than a database search; however, the reliability is highly dependent on the creation of a suitable start set.

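The backward/forward snowballing procedure compared in the entry above can be pictured with a small, hypothetical sketch. The citation data, the start set and the is_relevant() check below are invented placeholders (in a real systematic study the inclusion decision is a manual reviewer judgment), so this illustrates only the iteration over reference lists and citations, not the authors' actual procedure.

```python
# Hypothetical sketch of backward (references) and forward (citations)
# snowballing over an in-memory citation graph. All data and the
# is_relevant() check are placeholders for manual reviewer decisions.

from typing import Dict, Set

# paper id -> ids it references (backward) / paper id -> ids citing it (forward)
references: Dict[str, Set[str]] = {"P1": {"P2", "P3"}, "P2": {"P4"}, "P3": set(), "P4": set()}
citations: Dict[str, Set[str]] = {"P1": set(), "P2": {"P1"}, "P3": {"P1"}, "P4": {"P2", "P5"}}

def is_relevant(paper_id: str) -> bool:
    """Placeholder for the manual inclusion/exclusion decision."""
    return True

def snowball(start_set: Set[str]) -> Set[str]:
    included, frontier = set(start_set), set(start_set)
    while frontier:  # iterate until no new relevant papers are found
        candidates = set()
        for pid in frontier:
            candidates |= references.get(pid, set())  # backward snowballing
            candidates |= citations.get(pid, set())   # forward snowballing
        new = {pid for pid in candidates - included if is_relevant(pid)}
        included |= new
        frontier = new
    return included

print(snowball({"P1"}))  # expands the invented start set to P1..P5
```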
  • 13.
    Barney, Sebastian
    et al.
    Blekinge Institute of Technology, School of Computing.
    Khurum, Mahvish
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Unterkalmsteiner, Michael
    Blekinge Institute of Technology, School of Computing.
    jabangwe, Ronald
    Blekinge Institute of Technology, School of Computing.
    Improving Students With Rubric-Based Self-Assessment and Oral Feedback (2012). In: IEEE Transactions on Education, ISSN 0018-9359, Vol. 55, no 3, p. 319-325. Article in journal (Refereed)
    Abstract [en]

    Rubrics and oral feedback are approaches to help students improve performance and meet learning outcomes. However, their effect on the actual improvement achieved is inconclusive. This paper evaluates the effect of rubrics and oral feedback on student learning outcomes. An experiment was conducted in a software engineering course on requirements engineering, using the two approaches in course assignments. Both approaches led to statistically significant improvements, though no material improvement (i.e., a change by more than one grade) was achieved. The rubrics led to a significant decrease in the number of complaints and questions regarding grades.

  • 14.
    Barney, Sebastian
    et al.
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Svahnberg, Mikael
    Blekinge Institute of Technology, School of Computing.
    Aurum, Aybueke
    Barney, Hamish
    Software quality trade-offs: A systematic map (2012). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 54, no 7, p. 651-662. Article, review/survey (Refereed)
    Abstract [en]

    Background: Software quality is complex, with over-investment, under-investment and the interplay between quality aspects often being overlooked, as many researchers aim to advance individual aspects of software quality. Aim: This paper aims to provide a consolidated overview of the literature that addresses trade-offs between aspects of software product quality. Method: A systematic literature map is employed to provide an overview of software quality trade-off literature in general. A specific analysis is also done of the empirical literature addressing the topic. Results: The results show a wide range of solution proposals being considered. However, there is insufficient empirical evidence to adequately evaluate and compare these proposals. Further, a very large vocabulary has been found to describe software quality. Conclusion: Greater empirical research is required to sufficiently evaluate and compare the wide range of solution proposals. This will allow researchers to focus on the proposals showing greater signs of success and better support industrial practitioners.

  • 15. Bayer, J.
    et al.
    Eisenbarth, M.
    Lehner, T.
    Petersen, Kai
    Service Engineering Methodology (2008). In: Semantic Service Provisioning / [ed] Kuropka, D.; Tröger, P.; Staab, S.; Weske, M., Berlin: Springer Verlag, 2008, p. 185-202. Chapter in book (Refereed)
  • 16.
    Bin Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Inst Technol, Karlskrona, Sweden..
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Blekinge Inst Technol, Karlskrona, Sweden..
    Nicolau de Franca, Breno Bernard
    Univ Fed Rio de Janeiro, ESE Grp, PESC COPPE, BR-68511 Rio De Janeiro, Brazil..
    Evaluation of simulation-assisted value stream mapping for software product development: Two industrial cases (2015). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 68, p. 45-61. Article in journal (Refereed)
    Abstract [en]

    Context: Value stream mapping (VSM) as a tool for lean development has led to significant improvements in different industries. In a few studies, it has been successfully applied in a software engineering context. However, some shortcomings have been observed, in particular a failure to capture the dynamic nature of the software process when evaluating improvements, i.e. such improvements and target values are based on idealistic situations. Objective: To overcome the shortcomings of VSM by combining it with software process simulation modeling, and to provide reflections on the process of conducting VSM with simulation. Method: Using case study research, VSM was used for two products at Ericsson AB, Sweden. Ten workshops were conducted in this regard. Simulation in this study was used as a tool to support discussions instead of as a prediction tool. The results have been evaluated from the perspective of the participating practitioners, an external observer, and reflections of the researchers conducting the simulation that were elicited by the external observer. Results: Significant constraints hindering the product development from reaching the stated improvement goals for shorter lead time were identified. The use of simulation was particularly helpful in having more insightful discussions and in challenging assumptions about the likely impact of improvements. However, simulation results alone were found insufficient to emphasize the importance of reducing waiting times and variations in the process. Conclusion: The framework to assist VSM with simulation presented in this study was successfully applied in two cases. The involvement of various stakeholders, consensus-building steps, emphasis on flow (through waiting time and variance analysis) and the use of simulation proposed in the framework led to realistic improvements with a high likelihood of implementation.

  • 17.
    bin Ali, Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A systematic literature review on the industrial use of software process simulation (2014). In: Journal of Systems and Software, ISSN 0164-1212, Vol. 97. Article in journal (Refereed)
    Abstract [en]

    Context: Software process simulation modelling (SPSM) captures the dynamic behaviour and uncertainty in the software process. Existing literature has conflicting claims about its practical usefulness: SPSM is useful and has an industrial impact; SPSM is useful and has no industrial impact yet; SPSM is not useful and has little potential for industry. Objective: To assess the conflicting standpoints on the usefulness of SPSM. Method: A systematic literature review was performed to identify, assess and aggregate empirical evidence on the usefulness of SPSM. Results: In the primary studies, to date, the persistent trend is that of proof-of-concept applications of software process simulation for various purposes (e.g. estimation, training, process improvement, etc.). They score poorly on the stated quality criteria. Also, only a few studies report some initial evaluation of the simulation models for the intended purposes. Conclusion: There is a lack of conclusive evidence to substantiate the claimed usefulness of SPSM for any of the intended purposes. A few studies that report the cost of applying simulation do not support the claim that it is an inexpensive method. Furthermore, there is a paramount need for improvement in conducting and reporting simulation studies, with an emphasis on evaluation against the intended purpose.

  • 18.
    Börstler, Jürgen
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. University of Applied Sciences, Germany.
    Double-counting in software engineering tertiary studies — An overlooked threat to validity (2023). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 158, article id 107174. Article, review/survey (Refereed)
    Abstract [en]

    Context: Double-counting in a literature review occurs when the same data, population, or evidence is erroneously counted multiple times during synthesis. Detecting and mitigating the threat of double-counting is particularly challenging in tertiary studies. Although this topic has received much attention in the health sciences, it seems to have been overlooked in software engineering. Objective: We describe issues with double-counting in tertiary studies, investigate the prevalence of the issue in software engineering, and propose ways to identify and address the issue. Method: We analyze 47 tertiary studies in software engineering to investigate in which ways they address double-counting and whether double-counting might be a threat to validity in them. Results: In 19 of the 47 tertiary studies, double-counting might bias their results. Of those 19 tertiary studies, only 5 consider double-counting a threat to their validity, and 7 suggest strategies to address the issue. Overall, only 9 of the 47 tertiary studies acknowledge double-counting as a potential general threat to validity for tertiary studies. Conclusions: Double-counting is an overlooked issue in tertiary studies in software engineering, and existing design and evaluation guidelines do not address it sufficiently. Therefore, we propose recommendations that may help to identify and mitigate double-counting in tertiary studies.

  • 19.
    Börstler, Jürgen
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Svensson, Martin
    Blekinge Institute of Technology, Faculty of Engineering, Department of Industrial Economics.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. University of Applied Sciences Flensburg, Germany.
    Investigating Acceptance Behavior in Software Engineering – Theoretical Perspectives (2023). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 198, article id 111592. Article in journal (Refereed)
    Abstract [en]

    Background: Software engineering research aims to establish software development practice on a scientific basis. However, the evidence of the efficacy of technology is insufficient to ensure its uptake in industry. In the absence of a theoretical frame of reference, we mainly rely on best practices and expert judgment from industry-academia collaboration and software process improvement research to improve the acceptance of the proposed technology. Objective: To identify acceptance models and theories and discuss their applicability in the research of acceptance behavior related to software development. Method: We analyzed literature reviews within an interdisciplinary team to identify models and theories relevant to software engineering research. We further discuss acceptance behavior from the human information processing perspective of automatic and affect-driven processes (“fast” system 1 thinking) and rational and rule-governed processes (“slow” system 2 thinking). Results: We identified 30 potentially relevant models and theories. Several of them have been used in researching acceptance behavior in contexts related to software development, but few have been validated in such contexts. They use constructs that capture aspects of (automatic) system 1 and (rational) system 2 oriented processes. However, their operationalizations focus on system 2-oriented processes, indicating a rational view of behavior, thus overlooking important psychological processes underpinning behavior. Conclusions: Software engineering research may use acceptance behavior models and theories more extensively to understand and predict practice adoption in the industry. Such theoretical foundations will help improve the impact of software engineering research. However, more consideration should be given to their validation, overlap, construct operationalization, and employed data collection mechanisms when using these models and theories.

  • 20.
    Carlson, Jan
    et al.
    Malardalen Univ, Vasteras, Sweden..
    Papatheocharous, Efi
    Swedish Inst Comp Sci, Stockholm, Sweden..
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A Context Model for Architectural Decision Support (2016). In: Proceedings of the 2016 1st International Workshop on Decision Making in Software Architecture, IEEE Computer Society, 2016, p. 9-15. Conference paper (Refereed)
    Abstract [en]

    Developing efficient and effective decision making support includes identifying means to reduce repeated manual work and providing possibilities to take advantage of the experience gained in previous decision situations. For this to be possible, there is a need to explicitly model the context of a decision case, for example to determine how much the evidence from one decision case can be trusted in another, similar context. In earlier work, context has been recognized as important when transferring and understanding outcomes between cases. The contribution of this paper is threefold. First, we describe different ways of utilizing context in an envisioned decision support system. Thereby, we distinguish between internal and external context usage, possibilities of context representation, and context inheritance. Second, we present a systematically developed context model comprising five types of context information, namely organization, product, stakeholder, development method & technology, and market & business. Third, we illustrate with examples the relation of the context information to architectural decision making, using existing literature.

  • 21.
    Demirsoy, Ali
    et al.
    Borsa Istanbul, TUR.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Semantic Knowledge Management System to Support Software Engineers: Implementation and Static Evaluation through Interviews at Ericsson (2018). In: e-Informatica Software Engineering Journal, ISSN 1897-7979, E-ISSN 2084-4840, Vol. 12, no 1, p. 237-263. Article in journal (Refereed)
    Abstract [en]

    Background: In large-scale corporations in the software engineering context, information overload problems occur as stakeholders continuously produce useful information on process life-cycle issues, matters related to specific products under development, etc. Information overload makes finding relevant information (e.g., how did the company apply the requirements process for product X?) challenging, which is the primary focus of this paper. Contribution: In this study the authors aimed at evaluating the ease of implementing a semantic knowledge management system at Ericsson, including the essential components of such systems (such as text processing, ontologies, semantic annotation and semantic search). Thereafter, feedback on the usefulness of the system was collected from practitioners. Method: A single case study was conducted at a development site of Ericsson AB in Sweden. Results: It was found that semantic knowledge management systems are challenging to implement; this refers in particular to the implementation and integration of ontologies. Specific ontologies for structuring and filtering are essential, such as domain ontologies and ontologies specific to the organization. Conclusion: To be readily adopted and transferable to practice, the desired ontologies need to be implemented and integrated into semantic knowledge management frameworks with ease, given that the desired ontologies are dependent on organizations and domains.

  • 22. Engström, Emelie
    et al.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Mapping software testing practice with software testing research: SERP-test taxonomy (2015). In: 2015 IEEE 8th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2015 - Proceedings, IEEE Computer Society, 2015, Article number 7107470. Conference paper (Refereed)
    Abstract [en]

    There is a gap between software testing research and practice. One reason is the discrepancy between how testing research is reported and how testing challenges are perceived in industry. We propose the SERP-test taxonomy to structure information on testing interventions and practical testing challenges from a common perspective and thus bridge the communication gap. To develop the taxonomy we follow a systematic incremental approach. The SERP-test taxonomy may be used by both researchers and practitioners to classify and search for testing challenges or interventions. The SERP-test taxonomy also supports comparison of testing interventions by providing an instrument for assessing the distance between them and thus identifying relevant points of comparison.

  • 23.
    Engström, Emelie
    et al.
    Lund University, SWE.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Bjarnason, Elizabeth
    Lund University, SWE.
    SERP-test: a taxonomy for supporting industry-academia communication (2017). In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 25, no 4, p. 1269-1305. Article in journal (Refereed)
    Abstract [en]

    This paper presents the construction and evaluation of SERP-test, a taxonomy aimed at improving communication between researchers and practitioners in the area of software testing. SERP-test can be utilized for direct communication in industry-academia collaborations. It may also facilitate indirect communication between practitioners adopting software engineering research and researchers who are striving for industry relevance. SERP-test was constructed through a systematic and goal-oriented approach which included literature reviews and interviews with practitioners and researchers. SERP-test was evaluated through an online survey and by utilizing it in an industry-academia collaboration project. SERP-test comprises four facets along which both research contributions and practical challenges may be classified: Intervention, Scope, Effect target and Context constraints. This paper explains the available categories for each of these facets (i.e., their definitions and rationales) and presents examples of categorized entities. Several tasks may benefit from SERP-test, such as formulating research goals from a problem perspective, describing practical challenges in a researchable fashion, analyzing primary studies in a literature review, or identifying relevant points of comparison and generalization of research.

  • 24. Feyh, Markus
    et al.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Lean Software Development Measures and Indicators: A Systematic Mapping Study (2013). Conference paper (Refereed)
    Abstract [en]

    Background: Lean Software Development (LSD) aims for improvement, yet this improvement requires measures to identify whether a difference has been achieved and to provide decision support for further improvement. Objective: This study identifies measures and indicators proposed in the literature on LSD, and then structures them according to ISO/IEC 15939, allowing for comparability due to the use of a standard. Method: Systematic mapping is the research methodology. Result: The published literature on LSD measures has significantly increased since 2010. The two predominant study types are evaluation research and experience reports. 22 base measures, 13 derived measures, and 14 indicators were identified. Conclusion: Gaps exist with respect to the LSD principles, in particular deferring commitment, respecting people and knowledge creation. The principle of delivering fast is well supported.
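As a rough illustration of the ISO/IEC 15939 structure referred to in the entry above (base measures combined into derived measures, which are interpreted by indicators), the sketch below uses invented example measures; the 22 base measures, 13 derived measures and 14 indicators identified in the mapping study are defined in the paper itself, not here.

```python
# Illustrative (hypothetical) ISO/IEC 15939-style measurement structure:
# base measures are raw values, a derived measure combines them, and an
# indicator interprets the derived measure against a decision criterion.

from dataclasses import dataclass

@dataclass
class BaseMeasure:
    name: str
    value: float

def derived_flow_efficiency(touch_time: BaseMeasure, lead_time: BaseMeasure) -> float:
    """Derived measure: ratio of value-adding time to total lead time."""
    return touch_time.value / lead_time.value

def indicator_flow_ok(flow_efficiency: float, threshold: float = 0.4) -> str:
    """Indicator: interpret the derived measure against a decision criterion."""
    return "within target" if flow_efficiency >= threshold else "improvement needed"

touch = BaseMeasure("touch time (days)", 6)   # example base measure
lead = BaseMeasure("lead time (days)", 20)    # example base measure
eff = derived_flow_efficiency(touch, lead)
print(f"flow efficiency = {eff:.2f} -> {indicator_flow_ok(eff)}")
```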

  • 25. Garousi, Vahid
    et al.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ozkan, Baris
    Challenges and best practices in industry-academia collaborations in software engineering: A systematic literature review (2016). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 79, p. 106-127. Article in journal (Refereed)
    Abstract [en]

    Context: The global software industry and the software engineering (SE) academia are two large communities. However, unfortunately, the level of joint industry-academia collaboration in SE is still relatively low, compared to the amount of activity in each of the two communities. It seems that the two ’camps’ show only limited interest/motivation to collaborate with one another. Many researchers and practitioners have written about the challenges, success patterns (what to do, i.e., how to collaborate) and anti-patterns (what not to do) for industry-academia collaborations. Objective: To identify (a) the challenges, so that risks to the collaboration can be avoided by being aware of them, and (b) the best practices, to provide an inventory of practices (patterns) allowing for an informed choice of practices to use when planning and conducting collaborative projects. Method: A systematic review has been conducted. Synthesis has been done using grounded-theory based coding procedures. Results: Through thematic analysis we identified 10 challenge themes and 17 best practice themes. A key outcome was the inventory of best practices; the ones most commonly recommended in different contexts were to hold regular workshops and seminars with industry, to assure continuous learning on the industry and academic sides, to ensure management engagement, to have a champion, to base research on real-world problems, to show explicit benefits to the industry partner, to be agile during the collaboration, and to co-locate the researcher on the industry side. Conclusion: Given the importance of industry-academia collaboration for conducting research of high practical relevance, we provide a synthesis of challenges and best practices, which can be used by researchers and practitioners to make informed decisions on how to structure their collaborations.

  • 26.
    Gencel, Cigdem
    et al.
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Mughal, Aftab Ahmad
    Iqbal, Muhammad Imran
    Blekinge Institute of Technology, School of Computing.
    A Decision Support Framework for Metric Selection in Goal-Based Measurement Programs: GQM-DSFMS (2013). In: Journal of Systems and Software, ISSN 0164-1212, Vol. 86, no 12, p. 3091-3108. Article in journal (Refereed)
    Abstract [en]

    Software organizations face challenges in managing and sustaining their measurement programs over time. The complexity of measurement programs increases with an exploding number of goals and metrics to collect. At the same time, organizations usually have a limited budget and limited resources for metrics collection. It has been recognized for quite a while that there is a need to prioritize goals, which then ought to drive the selection of metrics. On the other hand, the dynamic nature of organizations requires measurement programs to adapt to changes in the stakeholders, their goals, information needs and priorities. Therefore, it is crucial for organizations to use structured approaches that provide transparency, traceability and guidance in choosing an optimum set of metrics that addresses the highest-priority information needs under limited resources. This paper proposes a decision support framework for metrics selection (DSFMS) which is built upon the widely used Goal Question Metric (GQM) approach. The core of the framework includes an iterative goal-based metrics selection process incorporating decision-making mechanisms in metrics selection, a pre-defined Attributes/Metrics Repository, and a Traceability Model among GQM elements. We also discuss alternative prioritization and optimization techniques for organizations to tailor the framework according to their needs. The evaluation of the GQM-DSFMS framework was done through a case study in a CMMI Level 3 software company.

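The core idea in the entry above, selecting a metric set that addresses the highest-priority information needs under a limited collection budget, can be sketched as a simple greedy selection. The priorities, costs and the priority-per-cost rule below are illustrative assumptions only, not the prioritization or optimization techniques discussed for the GQM-DSFMS framework.

```python
# Hypothetical greedy sketch of goal-based metric selection under a budget:
# pick metrics with the best priority-per-cost ratio until the collection
# budget is exhausted. Priorities and costs are invented example values.

metrics = [
    # (metric, priority of the information need it serves, collection cost)
    ("defect inflow per week", 9, 2),
    ("requirements volatility", 7, 3),
    ("code churn per component", 5, 1),
    ("test coverage", 6, 4),
]

def select_metrics(candidates, budget):
    chosen, remaining = [], budget
    # highest priority per unit cost first (a simple greedy heuristic)
    for name, priority, cost in sorted(candidates, key=lambda m: m[1] / m[2], reverse=True):
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen

print(select_metrics(metrics, budget=6))
```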
  • 27.
    Ghazi, Ahmad Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Andersson, Jesper
    Torkar, Richard
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Information Sources and their Importance to Prioritize Test Cases in the Heterogeneous Systems Context (2014). Conference paper (Refereed)
    Abstract [en]

    Context: Testing techniques proposed in the literature rely on various sources of information for test case selection (e.g., requirements, source code, system structure, etc.). The challenge of test selection is amplified in the context of heterogeneous systems, where it is unknown which information/data sources are most important. Contribution: (1) Achieve in-depth understanding of test processes in heterogeneous systems; (2) elicit information sources for test selection in the context of heterogeneous systems; (3) capture the relative importance of the identified information sources. Method: Case study research is used for the elicitation and understanding of which information sources are relevant for test case prioritization, followed by an exploratory survey capturing the relative importance of information sources for testing heterogeneous systems. Results: We classified different information sources that play a vital role in the test selection process, and found that their importance differs largely between the different test levels observed in heterogeneous testing. However, overall all sources were considered essential in test selection for heterogeneous systems. Conclusion: Heterogeneous system testing requires solutions that take all information sources into account when suggesting test cases for selection. Such approaches need to be developed and compared with existing solutions.

  • 28.
    Ghazi, Ahmad Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Garigapati, Ratna Pranathi
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Checklists to Support Test Charter Design in Exploratory Testing (2017). In: Agile Processes in Software Engineering and Extreme Programming / [ed] Baumeister H., Lichter H., Riebisch M., Springer, 2017, Vol. 283, p. 251-258. Conference paper (Refereed)
    Abstract [en]

    During exploratory testing sessions the tester simultaneously learns, designs and executes tests. The activity is iterative and utilizes the skills of the tester and provides flexibility and creativity. Test charters are used as a vehicle to support the testers during the testing. The aim of this study is to support practitioners in the design of test charters through checklists. We aimed to identify factors allowing practitioners to critically reflect on their designs and contents of test charters to support practitioners in making informed decisions of what to include in test charters. The factors and contents have been elicited through interviews. Overall, 30 factors and 35 content elements have been elicited.

  • 29.
    Ghazi, Ahmad Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Bjarnason, Elizabeth
    Lund University, SWE.
    Runeson, Per
    Lund University, SWE.
    Levels of Exploration in Exploratory Testing: From Freestyle to Fully Scripted (2018). In: IEEE Access, E-ISSN 2169-3536, Vol. 6, p. 26416-26423. Article in journal (Refereed)
    Abstract [en]

    Exploratory testing (ET) is a powerful and efficient way of testing software by integrating design, execution, and analysis of tests during a testing session. ET is often contrasted with scripted testing and seen as a binary choice: either testing is exploratory or it is not. In contrast, we posit that testing can involve varying degrees of exploration, from fully exploratory to fully scripted. In line with this, we propose a scale for the degree of exploration and define five levels. In our classification, these levels of exploration correspond to the way test charters are defined. We have evaluated this classification through focus groups at four companies and identified factors that influence the choice of exploration level. The results show that the proposed levels of exploration are influenced by different factors such as ease of reproducing defects, better learning, verification of requirements, etc., and that the levels can be used as a guide to structure test charters. Our study also indicates that applying a combination of exploration levels can be beneficial in achieving effective testing.

  • 30.
    Ghazi, Ahmad Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Heterogeneous Systems Testing Techniques: An Exploratory Survey (2015). Conference paper (Refereed)
    Abstract [en]

    Heterogeneous systems comprising sets of inherent subsystems are challenging to integrate. In particular, testing for interoperability and conformance is a challenge. Furthermore, the complexities of such systems amplify traditional testing challenges. We explore (1) which techniques frequently discussed in the literature in the context of heterogeneous system testing are used by practitioners to test their heterogeneous systems, and (2) the practitioners' perception of the usefulness of these techniques with respect to a defined set of outcome variables. For that, we conducted an exploratory survey. A total of 27 complete survey answers were received. Search-based testing has been used by 14 out of 27 respondents, indicating the practical relevance of the approach for testing heterogeneous systems, which itself is relatively new and has only recently been studied extensively. The most frequently used technique is exploratory manual testing, followed by combinatorial testing. With respect to the perceived performance of the testing techniques, the practitioners were undecided regarding many of the studied variables. Manual exploratory testing received very positive ratings across outcome variables.

  • 31.
    Ghazi, Ahmad Nauman
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Reddy, Sri Sai Vijay Raj
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nekkanti, Harini
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Survey Research in Software Engineering: Problems and Mitigation Strategies (2019). In: IEEE Access, E-ISSN 2169-3536, Vol. 7, p. 24703-24718. Article in journal (Refereed)
    Abstract [en]

    Background: The need for empirical investigations in software engineering is growing. Many researchers nowadays conduct and validate their solutions using empirical research. The survey is an empirical method that enables researchers to collect data from a large population, with the main aim of generalizing the findings.

    Aims: In this study, we aim to identify the problems researchers face during survey design, and the strategies to mitigate them.

    Method: A literature review and semi-structured interviews with nine software engineering researchers were conducted to elicit their views on problems and mitigation strategies. The interviewed researchers all focus on empirical software engineering.

    Results: We identified 24 problems and 65 strategies, structured according to the survey research process. The most commonly discussed problem was sampling, in particular the ability to obtain a sufficiently large sample. To improve survey instrument design, evaluation and execution, recommendations for question formulation and survey pre-testing were given. The importance of involving multiple researchers in the analysis of survey results was stressed.

    Conclusions: The elicited problems and strategies may serve researchers during the design of their studies. However, it was observed that some strategies were conflicting. This shows that it is important to conduct a trade-off analysis between strategies.

  • 32. Ickin, Selim
    et al.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gonzalez-Huerta, Javier
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Why do users install and delete apps?: A survey study (2017). In: Lecture Notes in Business Information Processing, Springer Verlag, 2017, Vol. 304, p. 186-191. Conference paper (Refereed)
    Abstract [en]

    Practitioners in the area of mobile application development usually rely on a set of app-related success factors, the majority of which are directly related to their economic/business profit (e.g., the number of downloads, or the in-app purchase revenue). However, also gathering the user-related success factors that explain why users choose, download, and install apps, as well as the user-related failure factors that explain why users delete apps, might help practitioners understand how to improve the market impact of their apps. The objectives were to identify (i) the reasons why users choose and install mobile apps from app stores, and (ii) the reasons why users uninstall apps. A questionnaire-based survey involving 121 users from 26 different countries was conducted.

  • 33.
    Irshad, Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Ericsson AB, Sweden.
    Britto, Ricardo
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Adapting Behavior Driven Development (BDD) for large-scale software systems2021In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 177, article id 110944Article in journal (Refereed)
    Abstract [en]

    Context: Large-scale software projects require interaction between many stakeholders. Behavior-driven development (BDD) facilitates collaboration between stakeholders, and an adapted BDD process can help improve cooperation in a large-scale project. Objective: The objective of this study is to propose and empirically evaluate a BDD-based process adapted for large-scale projects. Method: A technology transfer model was used to propose a BDD-based process for large-scale projects. We conducted six workshop sessions to understand the challenges and benefits of BDD. Later, an industrial evaluation of the process was performed with the help of practitioners. Results: Our investigations identified the following benefits of BDD: a better understanding of the business aspects of requirements, improved requirements quality, guidance for system-level use-cases, reuse of artifacts, and support for the test organization. Practitioners identified the following challenges: specification and ownership of behaviors, adoption of new tools, the scale of the software projects, and versioning of behaviors. We proposed a process to address these challenges and evaluated it with the help of practitioners. Conclusion: The evaluation showed that BDD can be adapted and used to facilitate interaction in large-scale software projects in the software industry. The feedback from the practitioners helped in improving the proposed process. © 2021 The Author(s)

    Download full text (pdf)
    fulltext
  • 34.
    Irshad, Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. Ericsson AB, Sweden.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Supporting Refactoring of BDD Specifications - An Empirical Study2022In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 141, article id 106717Article in journal (Refereed)
    Abstract [en]

    Context: Behavior-driven development (BDD) is a variant of test-driven development where specifications are described in a structured domain-specific natural language. Although refactoring is a crucial activity of BDD, little research is available on the topic.

    Objective: To support practitioners in refactoring BDD specifications by (1) proposing semi-automated approaches to identify refactoring candidates; (2) defining refactoring techniques for BDD specifications; and (3) evaluating the proposed identification approaches in an industry context.

    Method: Using Action Research, we have developed an approach for identifying refactoring candidates in BDD specifications based on two measures of similarity and applied the approach in two projects of a large software organization. The accuracy of the measures for identifying refactoring candidates was then evaluated against an approach based on machine learning and a manual approach based on practitioner perception.

    Results: We proposed two measures of similarity to support the identification of refactoring candidates in a BDD specification base: (1) normalized compression similarity (NCS) and (2) similarity ratio (SR). A semi-automated approach based on NCS and SR was developed and applied to two industrial cases to identify refactoring candidates. Our results show that our approach can identify candidates for refactoring 60 times faster than a manual approach. Our results furthermore showed that our measures accurately identified refactoring candidates compared with manual identification by software practitioners and outperformed an ML-based text classification approach. We also described four types of refactoring techniques applicable to BDD specifications: merging candidates, restructuring candidates, deleting duplicates, and renaming specification titles.

    Conclusion: Our results show that NCS and SR can help practitioners in accurately identifying BDD specifications that are suitable candidates for refactoring, which also decreases the time for identifying refactoring candidates.
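    The NCS and SR measures named above could be sketched as follows; this is an illustration only, not the exact definitions used in the paper. Compression-based similarity is computed here in the spirit of the well-known normalized compression distance, the similarity ratio uses Python's difflib, and the scenario texts and threshold-based flagging are hypothetical.

```python
# Illustrative similarity measures for spotting near-duplicate BDD scenarios;
# the paper's exact NCS/SR definitions may differ from this sketch.
import zlib
from difflib import SequenceMatcher

def compression_similarity(a: str, b: str) -> float:
    """Higher when compressing the concatenation gains a lot from shared content."""
    ca = len(zlib.compress(a.encode()))
    cb = len(zlib.compress(b.encode()))
    cab = len(zlib.compress((a + b).encode()))
    ncd = (cab - min(ca, cb)) / max(ca, cb)   # near 0 for near-duplicates
    return 1.0 - ncd                           # turn the distance into a similarity

def similarity_ratio(a: str, b: str) -> float:
    """Token-level similarity in [0, 1] via difflib's SequenceMatcher."""
    return SequenceMatcher(None, a.split(), b.split()).ratio()

scenario_1 = "Given a registered user When the user logs in Then the dashboard is shown"
scenario_2 = "Given a registered user When the user logs in with a token Then the dashboard is shown"

print(round(compression_similarity(scenario_1, scenario_2), 2))
print(round(similarity_ratio(scenario_1, scenario_2), 2))
# Pairs above a chosen threshold would be flagged as refactoring candidates,
# e.g., for merging or deduplication.
```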

    Download full text (pdf)
    fulltext
  • 35.
    Irshad, Mohsin
    et al.
    Ericsson Sweden AB, Sweden.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A systematic reuse process for automated acceptance tests: Construction and elementary evaluation2021In: e-Informatica Software Engineering Journal, ISSN 1897-7979, E-ISSN 2084-4840, Vol. 15, no 1, p. 133-162Article in journal (Refereed)
    Abstract [en]

    Context: Automated acceptance testing validates a product's functionality from the customer's perspective. Text-based automated acceptance tests (AATs) have gained popularity because they link requirements and testing.

    Objective: To propose and evaluate a cost-effective systematic reuse process for automated acceptance tests.

    Method: A systematic approach, method engineering, is used to construct a systematic reuse process for automated acceptance tests. The techniques to support searching, assessing, and adapting the reusable tests are proposed and evaluated. The constructed process is evaluated using (i) qualitative feedback from software practitioners and (ii) a demonstration of the process in an industry setting. The process was evaluated for three constructs: performance expectancy, effort expectancy, and facilitating conditions.

    Results: The process consists of eleven activities that support development for reuse, development with reuse, and assessment of the costs and benefits of reuse. During the evaluation, practitioners found the process to be a useful method to support reuse. In the industrial demonstration, it was noted that the activities in the solution, i.e., the searching, assessment, and adaptation parts, helped in developing an automated acceptance test with reuse faster than creating the test from scratch.

    Conclusion: The process is found to be useful and relevant to the industry during the preliminary investigation. 

    Download full text (pdf)
    fulltext
  • 36.
    Irshad, Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Poulding, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A systematic literature review of software requirements reuse approaches2018In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 93, no Jan, p. 223-245Article, review/survey (Refereed)
    Abstract [en]

    Context: Early software reuse is considered the most beneficial form of software reuse. Hence, previous research has focused on supporting the reuse of software requirements. Objective: This study aims to identify and investigate the current state of the art with respect to (a) what requirement reuse approaches have been proposed, (b) the methods used to evaluate the approaches, (c) the characteristics of the approaches, and (d) the quality of empirical studies on requirements reuse with respect to rigor and relevance. Method: We conducted a systematic review, using a combination of snowball sampling and database search to identify the studies. The rigor and relevance scoring rubric was used to assess the quality of the empirical studies. Multiple researchers were involved in each step to increase the reliability of the study. Results: Sixty-nine studies were identified that describe requirements reuse approaches. The majority of the approaches used structuring and matching of requirements as a method to support requirements reuse, and text-based artefacts were commonly used as input to these approaches. Further evaluation of the studies revealed that the majority of the approaches have not been validated in industry. The subset of empirical studies (22 in total) was analyzed for rigor and relevance, and two studies achieved the maximum score for rigor and relevance based on the rubric. It was found that mostly text-based requirements reuse approaches were validated in industry. Conclusion: From the review, it was found that a number of approaches already exist in the literature, but many approaches have not been validated in industry. The evaluation of the rigor and relevance of the empirical studies shows that these studies do not contain details of context, validity threats, and the industrial settings, thus highlighting the need for industrial evaluation of the approaches. © 2017 Elsevier B.V.

  • 37.
    Irshad, Mohsin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Torkar, Richard
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Afzal, Wasif
    Capturing cost avoidance through reuse: Systematic literature review and industrial evaluation2016In: ACM International Conference Proceeding Series, ACM Press, 2016, Vol. 01-03-June-2016Conference paper (Refereed)
    Abstract [en]

    Background: Cost avoidance through reuse shows the benefits gained by software organisations when reusing an artefact. Cost avoidance captures benefits that are not captured by cost savings, e.g., spending that would have increased in the absence of the cost avoidance activity. This type of benefit can be combined with quality aspects of the product, e.g., costs avoided because of defect prevention. Cost avoidance is a key driver for software reuse. Objectives: The main objectives of this study are: (1) to assess the status of capturing cost avoidance through reuse in academia; (2) based on the first objective, to propose improvements in the capturing of reuse cost avoidance, integrate these into an instrument, and evaluate the instrument in the software industry. Method: The study starts with a systematic literature review (SLR) on the capturing of cost avoidance through reuse. Later, a solution is proposed and evaluated in industry to address the shortcomings identified during the systematic literature review. Results: The results of the systematic literature review describe three previous studies on reuse cost avoidance and show that no solution to capture reuse cost avoidance was validated in industry. Afterwards, an instrument and a data collection form are proposed that can be used to capture the cost avoided by reusing any type of reuse artefact. The instrument and data collection form (describing guidelines) were demonstrated to a focus group as part of a static evaluation. Based on the feedback, the instrument was updated and evaluated in industry at 6 development sites, in 3 different countries, covering 24 projects in total. Conclusion: The proposed solution performed well in the industrial evaluation. With this solution, practitioners were able to calculate reuse cost avoidance and use the results as decision support for identifying potential artefacts to reuse.
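    As a deliberately simplified illustration of the underlying idea, and not the instrument or data collection form proposed in the study, reuse cost avoidance can be thought of as the cost of developing from scratch minus the cost of finding and adapting the reused artefact, plus any avoided rework; all field names and figures below are hypothetical.

```python
# A simplified, hypothetical sketch of reuse cost avoidance bookkeeping.
from dataclasses import dataclass

@dataclass
class ReuseRecord:
    artefact: str
    cost_to_develop_new: float   # estimated person-hours to build from scratch
    cost_to_find: float          # effort to locate the reusable artefact
    cost_to_adapt: float         # effort to adapt and integrate it
    avoided_defect_cost: float   # estimated rework avoided (quality aspect)

    def cost_avoided(self) -> float:
        reuse_cost = self.cost_to_find + self.cost_to_adapt
        return (self.cost_to_develop_new - reuse_cost) + self.avoided_defect_cost

records = [
    ReuseRecord("login component", 120, 4, 16, 10),
    ReuseRecord("logging library", 80, 2, 6, 5),
]
for r in records:
    print(f"{r.artefact}: {r.cost_avoided():.0f} person-hours avoided")
print("total:", sum(r.cost_avoided() for r in records), "person-hours")
```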

  • 38.
    Jabangwe, Ronald
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Handover of managerial responsibilities in global software development: a case study of source code evolution and quality2015In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 23, no 4, p. 539-566Article in journal (Refereed)
    Abstract [en]

    Studies report on the negative effect on quality in global software development (GSD) due to communication and coordination-related challenges. However, empirical studies reporting on the magnitude of the effect are scarce. This paper presents findings from an embedded explanatory case study on the change in quality over time, across multiple releases, for products that were developed in a GSD setting. The GSD setting involved periods of distributed development between geographically dispersed sites as well as a handover of project management responsibilities between the involved sites. Investigations were performed on two medium-sized products from a company that is part of a large multinational corporation. Quality is investigated quantitatively using defect data and measures that quantify two source code properties, size and complexity. Observations were triangulated with subjective views from company representatives. There were no observable indications that the distribution of work or the handover of project management responsibilities had an impact on quality for either product. Among the product-, process- and people-related success factors, we identified well-designed product architectures, early handover planning, support from the sending site to the receiving site after the handover, and skilled employees at the involved sites. Overall, these results can be useful input for decision-makers who are considering distributing development work between globally dispersed sites or handing over project management responsibilities from one site to another. Moreover, our study shows that analyzing the evolution of size and complexity properties of a product’s source code can provide valuable information to support decision-making during similar projects. Finally, the strategy used by the company to relocate responsibilities can also be considered an alternative to software transfers, which have been linked with a decline in efficiency, productivity and quality.
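    As a rough, hypothetical illustration of the kind of analysis mentioned above, and not the measures or tooling used in the study, the evolution of size and a crude complexity proxy could be tracked per release as follows; the release data is made up.

```python
# Tracking size and a crude complexity proxy across releases -- an illustrative
# sketch only; the study's actual measurement approach is not reproduced here.
import ast

def loc_and_branches(source: str) -> tuple[int, int]:
    """Non-blank lines of code and a rough complexity proxy: decision points."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.For, ast.While, ast.Try,
                                      ast.BoolOp)) for node in ast.walk(tree))
    loc = len([line for line in source.splitlines() if line.strip()])
    return loc, decisions

releases = {  # release -> a representative module's source (made-up)
    "R1": "def f(x):\n    return x + 1\n",
    "R2": "def f(x):\n    if x > 0:\n        return x + 1\n    return 0\n",
}
for release, src in releases.items():
    loc, dec = loc_and_branches(src)
    print(f"{release}: LOC={loc}, decision points={dec}")
```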

    Download full text (pdf)
    fulltext
  • 39.
    Jabangwe, Ronald
    et al.
    Blekinge Institute of Technology, School of Computing.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Šmite, Darja
    Blekinge Institute of Technology, School of Computing.
    Visualization of Defect Inflow and Resolution Cycles: Before, During and After Transfer2013In: 2013 20TH ASIA-PACIFIC SOFTWARE ENGINEERING CONFERENCE (APSEC 2013), VOL 1 / [ed] Muenchaisri, P; Rothermel, G, IEEE Computer Society Press , 2013, Vol. 1, p. 289-298Conference paper (Refereed)
    Abstract [en]

    The link between maintenance and product quality, as well as the high cost of software maintenance, highlights the importance of efficient maintenance processes. Sustaining maintenance work efficiency in a global software development setting that involves a transfer is a challenging endeavor. Studies report on the negative effect of transfers on efficiency. However, empirical evidence on the magnitude of the change in efficiency is scarce. In this study, we used a lean indicator to visualize variances in defect resolution cycles for two large products during evolution: before, during, and after a transfer. Focus group meetings were also held for each product. Study results show that, during and immediately after the transfer, the defect inflow is higher, bottlenecks are more visible, and defect resolution cycles are longer, as compared to before the transfer. Furthermore, we highlight the factors that influenced the change in defect resolution cycles before, during, and after the transfer.
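    As a purely hypothetical sketch of how defect resolution cycles could be derived from an issue tracker export, and not the lean indicator or data used in the study, one could compute per-phase inflow and median resolution cycle times as follows.

```python
# Illustrative sketch: per-phase defect inflow and resolution cycle times.
# The dates and phases below are made up and do not come from the study.
from datetime import date
from statistics import median

# (opened, resolved, phase relative to the transfer)
defects = [
    (date(2012, 1, 10), date(2012, 1, 20), "before"),
    (date(2012, 6, 1),  date(2012, 7, 15), "during"),
    (date(2012, 6, 20), date(2012, 8, 1),  "during"),
    (date(2013, 2, 5),  date(2013, 2, 18), "after"),
]

def cycle_days(opened: date, resolved: date) -> int:
    """Resolution cycle time in calendar days."""
    return (resolved - opened).days

for phase in ("before", "during", "after"):
    cycles = [cycle_days(o, r) for o, r, p in defects if p == phase]
    print(f"{phase}: inflow={len(cycles)}, "
          f"median resolution cycle={median(cycles)} days")
```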

    Download full text (pdf)
    fulltext
  • 40.
    Jabangwe, Ronald
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Wohlin, Claes
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Šmite, Darja
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    A method for investigating the quality of evolving object-oriented software using defects in global software development projects2016In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 28, no 8, p. 622-641Article in journal (Refereed)
    Abstract [en]

    Context: Global software development (GSD) projects can have distributed teams that work independently in different locations or team members that are dispersed. The various development settings in GSD can influence quality during product evolution. When evaluating quality using defects as a proxy, the development settings have to be taken into consideration. Objective: The aim is to provide a systematic method for supporting investigations of the implication of GSD contexts on defect data as a proxy for quality. Method: A method engineering approach was used to incrementally develop the proposed method. This was done by applying the method in multiple industrial contexts and then using lessons learned to refine and improve the method after application. Results: A measurement instrument and visualization were proposed, incorporating an understanding of the release history and of GSD contexts. Conclusion: The method can help with making accurate inferences about development settings because it includes details on collecting and aggregating data at a level that matches the development setting in a GSD context and involves practitioners at various phases of the investigation. Finally, the information that is produced from following the method can help practitioners make informed decisions when planning to develop software in comparable circumstances. Copyright © 2016 John Wiley & Sons, Ltd.

  • 41.
    Jabbari, Ramtin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Tanveer, Binish
    Fraunhofer Institute for Experimental Software Engineering IESE, DEU.
    Towards a benefits dependency network for DevOps based on a systematic literature review2018In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 30, no 11, article id e1957Article in journal (Refereed)
    Abstract [en]

    DevOps, as a new way of thinking about software development and operations, has received much attention in industry but has not yet been thoroughly investigated in academia. The objective of this study is to characterize DevOps by exploring its central components in terms of principles, practices and their relations to the principles, challenges of DevOps adoption, and benefits reported in the peer-reviewed literature. As a key objective, we also aim to map the relations between DevOps practices and benefits in a systematic manner. A systematic literature review was conducted. Also, we used the concept of a benefits dependency network to synthesize the findings, in particular to specify dependencies between DevOps practices and to link the practices to benefits. We found that in many cases, DevOps characteristics, i.e., principles, practices, benefits, and challenges, were not sufficiently defined in detail in the peer-reviewed literature. In addition, only a few empirical studies are available, which can be attributed to the nascency of DevOps research. Also, an initial version of the DevOps benefits dependency network has been derived. The definition of DevOps principles and practices should be emphasized given the novelty of the concept. Further empirical studies are needed to improve the benefits dependency network presented in this study. © 2018 John Wiley & Sons, Ltd.

  • 42.
    Jabbari, Ramtin
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Ali, Nauman Bin
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Tanveer, Binish
    What is DevOps?: A Systematic Mapping Study on Definitions and Practices2016Conference paper (Refereed)
  • 43. Kasoju, Abhinaya
    et al.
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Mäntylä, Mika V.
    Analyzing an automotive testing process with evidence-based software engineering2013In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 55, no 7, p. 1237-1259Article in journal (Refereed)
    Abstract [en]

    Evidence-based software engineering (EBSE) provides a process for solving practical problems based on a rigorous research approach. The primary focus so far has been on mapping and aggregating evidence through systematic reviews. Objectives: We extend existing work on evidence-based software engineering by using the EBSE process in an industrial case to help an organization improve its automotive testing process. With this, we contribute by (1) providing experiences of using evidence-based processes to analyze a real-world automotive test process and (2) providing evidence of challenges and related solutions for automotive software testing processes. Methods: In this study, we perform an in-depth investigation of an automotive test process using an extended EBSE process including case study research (to gain an understanding of practical questions to define a research scope), systematic literature review (to identify solutions through the literature), and value stream mapping (to map out an improved automotive test process based on the current situation and the improvement suggestions identified). These are followed by reflections on the EBSE process used. Results: In the first step of the EBSE process, we identified 10 challenge areas with a total of 26 individual challenges. For 15 of those 26 challenges, our domain-specific systematic literature review identified solutions. Based on the input from the challenges and the solutions, we created a value stream map of the current and future process. Conclusions: Overall, we found that the evidence-based process as presented in this study helps in the technology transfer of research results to industry, but at the same time some challenges lie ahead (e.g., scoping systematic reviews to focus more on concrete industry problems, and understanding strategies of conducting EBSE with respect to effort and quality of the evidence).

  • 44. Khurum, Mahvish
    et al.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Gorschek, Tony
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Extending value stream mapping through waste definition beyond customer perspective2014In: Journal of Software: Evolution and Process, ISSN 2047-7481, Vol. 26, no 12, p. 1074-1105Article in journal (Refereed)
    Abstract [en]

    Value Stream Mapping is one of several Lean practices that has recently attracted interest in the software engineering community. In other contexts (such as military, health, and production), Value Stream Mapping has achieved considerable improvements in processes and products. The goal is to leverage these benefits also in the software-intensive product development context. The primary contribution is that we extend the definition of waste to fit the software-intensive product development context. Since, traditionally in Value Stream Mapping, everything that is not considered valuable is waste, we do this practically by looking at value beyond the customer perspective and by using the Software Value Map. A detailed illustration, via application in an industrial case at Ericsson AB, demonstrates the usability and usefulness of the proposed extension. The case study results consist of two parts. First, the instantiation and the motivations for selecting certain strategies are provided. Second, the outcome of the value stream map is described in detail. Overall, the conclusion is that this case study indicates that Value Stream Mapping and its integration with the Software Value Map are useful in a software-intensive product development context. In a retrospective, the value stream approach was perceived positively by the practitioners with respect to both process and outcome.

    Download full text (pdf)
    FULLTEXT01
  • 45.
    Kurapati, Narendra
    et al.
    Blekinge Institute of Technology, School of Computing.
    Manyam, Venkata Sarath Chandra
    Petersen, Kai
    Blekinge Institute of Technology, School of Computing.
    Agile software development practice adoption survey2012In: Lecture Notes in Business Information Processing, Malmö: Springer , 2012, Vol. 111, p. 16-30Conference paper (Refereed)
    Abstract [en]

    Agile methodologies are often not used "out of the box" by practitioners; instead, they select the practices that best fit their needs. However, little is known about which agile practices practitioners choose. This study investigates agile practice adoption by asking practitioners which practices they are using at the project and organizational level. We investigated how commonly individual agile practices are used, which combinations of practices are used and how frequently, the degree of compliance with agile methodologies (Scrum and XP), and how successful practitioners perceive the adoption to be. The research method used is a survey. The survey was sent to over 600 respondents and was posted on LinkedIn, Yahoo, and Google groups. In total, 109 answers were received. Practitioners can use the knowledge of the commonality of individual practices and combinations of practices as support in focusing future research efforts, and as decision support in selecting agile practices.

  • 46.
    Marculescu, Bogdan
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Poulding, Simon
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Feldt, Robert
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Torkar, Richard
    Chalmers, Gothenburg, Sweden; University of Gothenburg, Gothenburg, Sweden.
    Tester interactivity makes a difference in search-based software testing: A controlled experiment2016In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 78, p. 66-82Article in journal (Refereed)
    Abstract [en]

    Context: Search-based software testing promises to provide users with the ability to generate high-quality test cases, and hence increase product quality, with a minimal increase in the time and effort required. The development of the Interactive Search-Based Software Testing (ISBST) system was motivated by a previous study to investigate the application of search-based software testing (SBST) in an industrial setting. ISBST allows users to interact with the underlying SBST system, guiding the search and assessing the results. An industrial evaluation indicated that the ISBST system could find test cases that are not created by testers employing manual techniques. The validity of the evaluation was threatened, however, by the low number of participants. Objective: This paper presents a follow-up study to provide a more rigorous evaluation of the ISBST system. Method: To assess the ISBST system, a two-way crossover controlled experiment was conducted with 58 students taking a Verification and Validation course. The NASA Task Load Index (NASA-TLX) is used to assess the workload experienced by the participants in the experiment. Results: The experimental results validated the hypothesis that the ISBST system generates test cases that are not found by the same participants employing manual testing techniques. A follow-up laboratory experiment also investigates the importance of interaction in obtaining the results. In addition to this main result, the subjective workload was assessed for each participant by means of the NASA-TLX tool. The evaluation showed that, while the ISBST system required more effort from the participants, they achieved the same performance. Conclusions: The paper provides evidence that the ISBST system develops test cases that are not found by manual techniques, and that interaction plays an important role in achieving that result. © 2016 Elsevier B.V. All rights reserved.

  • 47.
    Minhas, Nasir Mehmood
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Checklists to support decision-making in regression testing2023In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 202, article id 111697Article in journal (Refereed)
    Abstract [en]

    Context: Practitioners working in large-scale software development face many challenges in regression testing activities. One of the reasons is the lack of a structured regression testing process. In this regard, checklists can help practitioners keep track of essential regression testing activities and add structure to the regression testing process to a certain extent. Objective: This study aims to introduce regression testing checklists so test managers/teams can use them: (1) to assess whether test teams/members are ready to begin regression testing, and (2) to keep track of essential regression testing activities while planning and executing regression tests. Method: We used interviews, workshops, and questionnaires to design, evolve, and evaluate regression testing checklists. In total, 25 practitioners from 12 companies participated in creating the checklists. Twenty-three of them participated in the checklists' evolution and evaluation. Results: We identified activities practitioners consider significant while planning, performing, and analyzing regression testing. We designed regression testing checklists based on these activities to help practitioners make informed decisions during regression testing. With the help of practitioners, we evolved these checklists over two iterations. Finally, the practitioners provided feedback on the proposed checklists. All respondents think the proposed checklists are useful and customizable for their environments, and 80% think the checklists cover aspects essential for regression testing. Conclusion: The proposed regression testing checklists can be useful for test managers to assess their team/team members' readiness and decide when to start and stop regression testing. The checklists can be used to record the steps required while planning and executing regression testing. Further, these checklists can provide a basis for structuring the regression testing process in varying contexts. © 2023 The Author(s)

    Download full text (pdf)
    fulltext
  • 48.
    Minhas, Nasir Mehmood
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Irshad, Mohsin
    Ericsson Sweden AB.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Lessons learned from replicating a study on information-retrieval based test case prioritization2023In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 31, no 4, p. 1527-1559Article in journal (Refereed)
    Abstract [en]

    Replication studies help solidify and extend knowledge by evaluating previous studies’ findings. The software engineering literature shows that too few replications are conducted, particularly replications focusing on software artifacts without the involvement of humans. This study aims to replicate an artifact-based study on software testing to address this gap. In this investigation, we focus on (i) providing a step-by-step guide of the replication, reflecting on challenges when replicating artifact-based testing research, and (ii) evaluating the replicated study concerning the validity and robustness of the findings. We replicate a test case prioritization technique proposed by Kwon et al. We replicated the original study using six software programs, four from the original study and two additional software programs. We automated the steps of the original study using a Jupyter notebook to support future replications. Various general factors facilitating replications are identified, such as (1) the importance of documentation; (2) the need for assistance from the original authors; (3) issues in the maintenance of open-source repositories (e.g., concerning needed software dependencies, versioning); and (4) the availability of scripts. We also noted observations specific to the study and its context, such as insights from using different mutation tools and strategies for mutant generation. We conclude that the study by Kwon et al. is partially replicable for small software programs and could be automated to facilitate software practitioners, given the availability of the required information. However, it is hard to implement the technique for large software programs with the current guidelines. Based on the lessons learned, we suggest that the authors of original studies publish their data and experimental setup to support external replications. © 2023, The Author(s).
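    For illustration only, a generic information-retrieval flavoured prioritization (ranking test cases by textual similarity to the changed code) is sketched below; it is not the exact technique by Kwon et al. that the study replicates, and all names and data are made up.

```python
# Generic IR-style test case prioritization sketch: rank tests whose textual
# content is most similar to the changed code. Hypothetical example data.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    terms = set(a) | set(b)
    dot = sum(a[t] * b[t] for t in terms)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

changed_code = "parse config file read option validate option"
test_cases = {
    "test_parse_config": "parse config file assert option values",
    "test_network_retry": "open socket retry on timeout",
    "test_validate_option": "validate option reject unknown keys",
}

query = Counter(changed_code.split())
ranked = sorted(test_cases.items(),
                key=lambda kv: cosine(query, Counter(kv[1].split())),
                reverse=True)
for name, _ in ranked:
    print(name)   # tests most related to the change would run first
```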

    Download full text (pdf)
    fulltext
  • 49.
    Minhas, Nasir Mehmood
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Koppula, Thejendar Reddy
    Blekinge Institute of Technology. student.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Börstler, Jürgen
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Using goal-question-metric to Compare Research and Practice Perspectives on Regression Testing2023In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 35, no 2, article id e2506Article in journal (Refereed)
    Abstract [en]

    Regression testing is challenging because of its complexity and the amount of effort and time it requires, especially in large-scale environments with continuous integration and delivery. Regression test selection and prioritization techniques have been proposed in the literature to address the regression testing challenges, but adoption rates of these techniques in industry are not encouraging. One of the possible reasons could be the disparity in the regression testing goals in industry and literature. 

    This work compares the research perspective to industry practice on regression testing goals, corresponding information needs, and metrics required to evaluate these goals. We have conducted a literature review of 44 research papers and a survey with 56 testing practitioners. The survey comprises 11 interviews and 45 responses to an online questionnaire. 

    We identified that industry and research accentuate different regression testing goals. For instance, the literature emphasizes increasing the fault detection rates of test suites and early identification of critical faults. In contrast, the practitioners' focus is on test suite maintenance, controlled fault slippage, and awareness of changes. Similarly, the literature suggests maintaining information needs from test case execution histories to evaluate regression testing techniques based on various metrics, whereas, at large, the practitioners do not use the metrics suggested in the literature. 

    To bridge the research and practice gap, based on the literature and survey findings, we have created a goal-question-metric (GQM) model that maps the regression testing goals, associated information needs, and metrics from both perspectives. The GQM model can guide researchers in proposing new techniques closer to industry contexts. Practitioners can benefit from information needs and metrics presented in the literature and can use GQM as a tool to follow their regression testing goals. 
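    As a minimal, hypothetical encoding of a goal-question-metric tree, and not the GQM model derived in the paper, goals, questions, and metrics could be represented as follows; the example goal reflects the practitioner focus on controlled fault slippage mentioned above, and all question and metric names are illustrative.

```python
# A tiny, hypothetical GQM tree: one goal, its questions, and their metrics.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    unit: str

@dataclass
class Question:
    text: str
    metrics: list[Metric] = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: list[Question] = field(default_factory=list)

goal = Goal(
    purpose="Control fault slippage to customers after regression testing",
    questions=[
        Question("How many faults slip through regression testing?",
                 [Metric("faults found after release", "count / release")]),
        Question("How well does the regression suite cover changed code?",
                 [Metric("coverage of changed files", "percent")]),
    ],
)

for q in goal.questions:
    print(q.text, "->", [m.name for m in q.metrics])
```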

    Download full text (pdf)
    fulltext
  • 50.
    Minhas, Nasir Mehmood
    et al.
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Masood, Sohaib
    UIIT PMAS Arid Agriculture University, PAK.
    Petersen, Kai
    Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
    Nadeem, Aamer
    Capital University of Science and Technology, PAK.
    A Systematic Mapping of Test Case Generation Techniques Using UML Interaction Diagrams2020In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 32, no 6, article id e2235Article, review/survey (Refereed)
    Abstract [en]

    Testing plays a vital role in assuring software quality. Among the activities performed during the testing process, test case generation is a challenging and labor-intensive task. Test case generation techniques based on UML models are getting the attention of researchers and practitioners. This study provides a systematic mapping of test case generation techniques based on interaction diagrams. The study compares the test case generation techniques regarding their capabilities and limitations, and it also assesses the reporting quality of the primary studies. It has been revealed that techniques based on UML interaction diagrams are mainly used for integration testing. The majority of the techniques use sequence diagrams as input models, while some use collaboration diagrams. A notable number of techniques use an interaction diagram along with some other UML diagram for test case generation. These techniques mainly focus on interaction, scenario, operational, concurrency, synchronization, and deadlock-related faults.

    From the results of this study, we can conclude that the studies presenting test case generation techniques using UML interaction diagrams failed to illustrate the use of a rigorous methodology, and these techniques did not demonstrate empirical evaluation in an industrial context. Our study revealed the need for tool support to facilitate the transfer of solutions to industry.

    Download full text (pdf)
    fulltext