Publications (10 of 121)
Petersen, K. & Gerken, J. M. (2025). On the road to interactive LLM-based systematic mapping studies. Information and Software Technology, 178, Article ID 107611.
2025 (English) In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 178, article id 107611. Article in journal (Refereed) Published
Abstract [en]

Context: The volume of research is continuously increasing. Manually analyzing large topic scopes and continuously updating literature studies with the newest research results are effort-intensive and, therefore, difficult to achieve.

Objective: To discuss possibilities and next steps for using LLMs (e.g., GPT-4) in the mapping study process.

Method: The research can be classified as a solution proposal. The solution was iteratively designed and discussed among the authors based on their experience with LLMs and literature reviews.

Results: We propose strategies for the mapping process, outlining the use of agents and prompting strategies for each step.

Conclusion: Given the potential of LLMs in literature studies, we should work on a holistic solution for LLM-supported mapping studies.

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
GPT, Large language models, Systematic mapping studies, Mapping, Classifieds, Language model, Large language model, Literature reviews, Literature studies, Manual analysis, Mapping studies, Research results
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-27106 (URN); 10.1016/j.infsof.2024.107611 (DOI); 001351330500001 (); 2-s2.0-85208101590 (Scopus ID)
Available from: 2024-11-18 Created: 2024-11-18 Last updated: 2024-11-25. Bibliographically approved
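
The record above proposes LLM support across the systematic mapping process, including agents and prompting strategies for each step. As a rough, hypothetical sketch of one such step, screening a paper against inclusion criteria, the snippet below assumes the OpenAI Python client (openai>=1.0); the criteria, paper metadata, and prompt wording are invented and are not the strategies from the paper.

```python
# Hypothetical sketch of an LLM-assisted screening step in a mapping study.
# Assumes the OpenAI Python client (openai>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Invented inclusion criteria for illustration only.
criteria = (
    "Include the paper only if it (1) concerns software engineering and "
    "(2) reports an empirical study. Answer INCLUDE or EXCLUDE, followed by "
    "a one-sentence justification."
)

# Invented paper metadata for illustration only.
paper = {
    "title": "A survey on regression test selection in industry",
    "abstract": "We survey practitioners on how regression tests are selected ...",
}

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are screening papers for a systematic mapping study."},
        {"role": "user", "content": f"{criteria}\n\nTitle: {paper['title']}\nAbstract: {paper['abstract']}"},
    ],
)

# The model's INCLUDE/EXCLUDE decision and its justification, which can be logged for traceability.
print(response.choices[0].message.content)
```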
Börstler, J., Ali, N. b., Petersen, K. & Engström, E. (2024). Acceptance behavior theories and models in software engineering — A mapping study. Information and Software Technology, 172, Article ID 107469.
2024 (English) In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 172, article id 107469. Article in journal (Refereed) Published
Abstract [en]

Context: The adoption or acceptance of new technologies or ways of working in software development activities is a recurrent topic in the software engineering literature. The topic has, therefore, been empirically investigated extensively. It is, however, unclear which theoretical frames of reference are used in this research to explain acceptance behaviors. Objective: In this study, we explore how major theories and models of acceptance behavior have been used in the software engineering literature to empirically investigate acceptance behavior. Method: We conduct a systematic mapping study of empirical studies using acceptance behavior theories in software engineering. Results: We identified 47 primary studies covering 56 theory uses. The theories were categorized into six groups. Technology acceptance models (TAM and its extensions) were used in 29 of the 47 primary studies, innovation theories in 10, and the theories of planned behavior/reasoned action (TPB/TRA) in six. All other theories were used in at most two of the primary studies. The usage and operationalization of the theories were, in many cases, inconsistent with the underlying theories. Furthermore, we identified 77 constructs used by these studies, many of which lack clear definitions. Conclusions: Our results show that software engineering researchers are aware of some of the leading theories and models of acceptance behavior, which indicates an effort to build on stronger theoretical foundations. However, we identified issues related to theory usage that make it difficult to aggregate and synthesize results across studies. We propose mitigation actions that encourage the consistent use of theories and emphasize the measurement of key constructs.

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Acceptance behavior, Technology adoption, Theory use in software engineering, TAM, TPB, TRA, Fitness, Innovation diffusion
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-26143 (URN); 10.1016/j.infsof.2024.107469 (DOI); 001233663200001 (); 2-s2.0-85190986067 (Scopus ID)
Projects
ELLIIT
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Knowledge Foundation, 20220235
Available from: 2024-04-24 Created: 2024-04-24 Last updated: 2024-06-18. Bibliographically approved
Petersen, K. (2024). Case study identification with GPT-4 and implications for mapping studies. Information and Software Technology, 171, Article ID 107452.
2024 (English) In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 171, article id 107452. Article in journal (Refereed) Published
Abstract [en]

Context: Rainer and Wohlin showed that case studies are not well understood by reviewers and authors, and thus authors may claim that a given study is a case study when it is not. Objective: Rainer and Wohlin proposed a smell indicator (inspired by code smells) to identify case studies based on the frequency of occurrences of words, which performed better than human classifiers. With the emergence of ChatGPT, we evaluate ChatGPT's performance in accurately identifying case studies. We also reflect on the results’ implications for mapping studies, specifically data extraction. Method: We used ChatGPT with the model GPT-4 to identify case studies and compared the results with the smell indicator in terms of precision, recall, and accuracy. Results: GPT-4 and the smell indicator perform similarly, with GPT-4 performing slightly better in some instances and the smell indicator (SI) in others. The advantage of GPT-4 is that it is based on the definition of case studies and provides traceability on how it reaches its conclusions. Conclusion: As GPT-4 performed well on the task and provides traceability, we should use it for, and thereby evaluate it on, data extraction tasks, supporting us as authors. © 2024 The Author(s)

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Case study, Data extraction, GPT-4, Systematic mapping studies, Data mining, Extraction, Case-studies, Code smell, Mapping studies, Performance, Mapping
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-26102 (URN); 10.1016/j.infsof.2024.107452 (DOI); 001205391500001 (); 2-s2.0-85189468248 (Scopus ID)
Available from: 2024-04-12 Created: 2024-04-12 Last updated: 2024-05-07. Bibliographically approved
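
The study above compares two binary classifiers (GPT-4 and the smell indicator) against a manually established ground truth using precision, recall, and accuracy. The sketch below shows how such a comparison is typically computed; it is not taken from the paper, and all labels are hypothetical.

```python
# Hypothetical sketch: comparing two binary classifiers against a gold standard
# using precision, recall, and accuracy (all labels below are invented).
from typing import List, Tuple

def confusion_counts(gold: List[bool], predicted: List[bool]) -> Tuple[int, int, int, int]:
    """Return (TP, FP, FN, TN) for a binary classification."""
    tp = sum(g and p for g, p in zip(gold, predicted))
    fp = sum((not g) and p for g, p in zip(gold, predicted))
    fn = sum(g and (not p) for g, p in zip(gold, predicted))
    tn = sum((not g) and (not p) for g, p in zip(gold, predicted))
    return tp, fp, fn, tn

def report(name: str, gold: List[bool], predicted: List[bool]) -> None:
    tp, fp, fn, tn = confusion_counts(gold, predicted)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    accuracy = (tp + tn) / len(gold)
    print(f"{name}: precision={precision:.2f} recall={recall:.2f} accuracy={accuracy:.2f}")

# True means "the paper reports a case study"; all values are invented.
gold         = [True, True, False, False, True, False]
llm_labels   = [True, True, False, True,  True, False]   # e.g., an LLM-based classifier
smell_labels = [True, False, False, False, True, False]  # e.g., a word-frequency smell indicator

report("LLM classifier", gold, llm_labels)
report("Smell indicator", gold, smell_labels)
```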
Petersen, K., Börstler, J., Ali, N. b. & Engström, E. (2024). Revisiting the construct and assessment of industrial relevance in software engineering research. In: Proceedings - 2024 IEEE/ACM International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2024. Paper presented at 1st International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2024, Lisbon, April 16, 2024 (pp. 17-20). Association for Computing Machinery (ACM)
2024 (English) In: Proceedings - 2024 IEEE/ACM International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2024, Association for Computing Machinery (ACM), 2024, p. 17-20. Conference paper, Published paper (Refereed)
Abstract [en]

Industrial relevance is essential for an applied research area like software engineering. However, it is unclear how to achieve industrial relevance and how to communicate and assess it. We propose a reasoning framework to support the design, reporting, and assessment of research for industrial relevance. © 2024 Copyright is held by the owner/author(s). Publication rights licensed to ACM.

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
Engineering research, Applied research, Reasoning framework, Research areas, Software engineering research, Industrial research
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-26912 (URN); 10.1145/3643664.3648205 (DOI); 001293147200004 (); 2-s2.0-85203105590 (Scopus ID); 9798400705670 (ISBN)
Conference
1st International Workshop on Methodological Issues with Empirical Studies in Software Engineering, WSESE 2024, Lisbon, April 16, 2024
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Knowledge Foundation, 20220235
Available from: 2024-09-16 Created: 2024-09-16 Last updated: 2024-10-03. Bibliographically approved
Minhas, N. M., Börstler, J. & Petersen, K. (2023). Checklists to support decision-making in regression testing. Journal of Systems and Software, 202, Article ID 111697.
2023 (English) In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 202, article id 111697. Article in journal (Refereed) Published
Abstract [en]

Context: Practitioners working in large-scale software development face many challenges in regression testing activities. One of the reasons is the lack of a structured regression testing process. In this regard, checklists can help practitioners keep track of essential regression testing activities and add structure to the regression testing process to a certain extent. Objective: This study aims to introduce regression testing checklists so test managers/teams can use them: (1) to assess whether test teams/members are ready to begin regression testing, and (2) to keep track of essential regression testing activities while planning and executing regression tests. Method: We used interviews, workshops, and questionnaires to design, evolve, and evaluate regression testing checklists. In total, 25 practitioners from 12 companies participated in creating the checklists. Twenty-three of them participated in the checklists' evolution and evaluation. Results: We identified activities practitioners consider significant while planning, performing, and analyzing regression testing. We designed regression testing checklists based on these activities to help practitioners make informed decisions during regression testing. With the help of practitioners, we evolved these checklists over two iterations. Finally, the practitioners provided feedback on the proposed checklists. All respondents think the proposed checklists are useful and customizable for their environments, and 80% think the checklists cover aspects essential for regression testing. Conclusion: The proposed regression testing checklists can be useful for test managers to assess their team/team members’ readiness and decide when to start and stop regression testing. The checklists can be used to record the steps required while planning and executing regression testing. Further, these checklists can provide a basis for structuring the regression testing process in varying contexts. © 2023 The Author(s)

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Regression testing, Checklists, Test manager, Team readiness, Process improvement
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-23675 (URN); 10.1016/j.jss.2023.111697 (DOI); 000989289800001 (); 2-s2.0-85153245617 (Scopus ID)
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Available from: 2022-09-18 Created: 2022-09-18 Last updated: 2023-06-12. Bibliographically approved
Molléri, J. S., Mendes, E., Petersen, K. & Felderer, M. (2023). Determining a core view of research quality in empirical software engineering. Computer Standards & Interfaces, 84, Article ID 103688.
2023 (English) In: Computer Standards & Interfaces, ISSN 0920-5489, E-ISSN 1872-7018, Vol. 84, article id 103688. Article in journal (Refereed) Published
Abstract [en]

Context: Research quality is intended to appraise the design and reporting of studies. It comprises a set of standards such as methodological rigor, practical relevance, and conformance to ethical standards. Depending on the perspective, different degrees of importance are given to the standards for research quality. Objective: To investigate the suitability of a conceptual model of research quality to Software Engineering (SE), from the perspective of researchers engaged in Empirical Software Engineering (ESE) research, in order to understand the core value of research quality. Method: We adopted a mixed-methods approach with two distinct group perspectives: (i) a research group; and (ii) the empirical SE research community. Our data collection approach comprised a questionnaire survey and a complementary focus group. We carried out a hierarchical voting prioritization to collect relative values for the importance of standards for research quality. Results: In the context of this research, ‘internally valid’, ‘relevant research idea’, and ‘applicable results’ are perceived as the core standards for research quality in empirical SE. The alignment at the research group level was higher compared to that at the community level. Conclusion: The conceptual model was seen to fairly express the standards for research quality in the SE context. It presented limitations regarding its structure and the description of its components, which resulted in an updated model. © 2022

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Alignment, Conceptual model, Research quality, Standards, Surveys, Core values, Data collection, Empirical Software Engineering, Ethical standards, Mixed method, Research communities, Research groups, Software engineering research, Software engineering
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-23706 (URN); 10.1016/j.csi.2022.103688 (DOI); 000870181900002 (); 2-s2.0-85137713683 (Scopus ID)
Available from: 2022-10-03 Created: 2022-10-03 Last updated: 2023-12-04. Bibliographically approved
Börstler, J., Ali, N. b. & Petersen, K. (2023). Double-counting in software engineering tertiary studies — An overlooked threat to validity. Information and Software Technology, 158, Article ID 107174.
2023 (English) In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 158, article id 107174. Article, review/survey (Refereed) Published
Abstract [en]

Context: Double-counting in a literature review occurs when the same data, population, or evidence is erroneously counted multiple times during synthesis. Detecting and mitigating the threat of double-counting is particularly challenging in tertiary studies. Although this topic has received much attention in the health sciences, it seems to have been overlooked in software engineering. Objective: We describe issues with double-counting in tertiary studies, investigate the prevalence of the issue in software engineering, and propose ways to identify and address the issue. Method: We analyze 47 tertiary studies in software engineering to investigate in which ways they address double-counting and whether double-counting might be a threat to validity in them. Results: In 19 of the 47 tertiary studies, double-counting might bias the results. Of those 19 tertiary studies, only 5 consider double-counting a threat to their validity, and 7 suggest strategies to address the issue. Overall, only 9 of the 47 tertiary studies acknowledge double-counting as a potential general threat to validity for tertiary studies. Conclusions: Double-counting is an overlooked issue in tertiary studies in software engineering, and existing design and evaluation guidelines do not address it sufficiently. Therefore, we propose recommendations that may help to identify and mitigate double-counting in tertiary studies. © 2023 The Author(s)

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Population statistics, Bias, Double counting, Empirical, Guideline, Meta-review, Overview of review, Recommendation, Research method, Review of review, Tertiary review, Tertiary study, Umbrella review, Software engineering, Double-counting, Guidelines, Overview of reviews, Recommendations, Research methods, Review of reviews
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-24419 (URN); 10.1016/j.infsof.2023.107174 (DOI); 001005614800001 (); 2-s2.0-85150795598 (Scopus ID)
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Available from: 2023-04-07 Created: 2023-04-07 Last updated: 2023-06-30. Bibliographically approved
Börstler, J., Ali, N. b., Svensson, M. & Petersen, K. (2023). Investigating Acceptance Behavior in Software Engineering – Theoretical Perspectives. Journal of Systems and Software, 198, Article ID 111592.
2023 (English) In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 198, article id 111592. Article in journal (Refereed) Published
Abstract [en]

Background: Software engineering research aims to establish software development practice on a scientific basis. However, the evidence of the efficacy of technology is insufficient to ensure its uptake in industry. In the absence of a theoretical frame of reference, we mainly rely on best practices and expert judgment from industry-academia collaboration and software process improvement research to improve the acceptance of the proposed technology. Objective: To identify acceptance models and theories and discuss their applicability in the research of acceptance behavior related to software development. Method: We analyzed literature reviews within an interdisciplinary team to identify models and theories relevant to software engineering research. We further discuss acceptance behavior from the human information processing perspective of automatic and affect-driven processes (“fast” system 1 thinking) and rational and rule-governed processes (“slow” system 2 thinking). Results: We identified 30 potentially relevant models and theories. Several of them have been used in researching acceptance behavior in contexts related to software development, but few have been validated in such contexts. They use constructs that capture aspects of (automatic) system 1 and (rational) system 2-oriented processes. However, their operationalizations focus on system 2-oriented processes, indicating a rational view of behavior and thus overlooking important psychological processes underpinning behavior. Conclusions: Software engineering research may use acceptance behavior models and theories more extensively to understand and predict practice adoption in the industry. Such theoretical foundations will help improve the impact of software engineering research. However, more consideration should be given to their validation, overlap, construct operationalization, and employed data collection mechanisms when using these models and theories.

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Acceptance behavior, dual process theory, technology acceptance, theory, TAM, UTAUT, TPB
National Category
Software Engineering; Psychology
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-24132 (URN); 10.1016/j.jss.2022.111592 (DOI); 000915632900001 (); 2-s2.0-85146227386 (Scopus ID)
Projects
ELLIIT
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Note

open access

Available from: 2022-12-23 Created: 2022-12-23 Last updated: 2023-03-02. Bibliographically approved
Minhas, N. M., Irshad, M., Petersen, K. & Börstler, J. (2023). Lessons learned from replicating a study on information-retrieval based test case prioritization. Software quality journal, 31(4), 1527-1559
2023 (English) In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 31, no 4, p. 1527-1559. Article in journal (Refereed) Published
Abstract [en]

Replication studies help solidify and extend knowledge by evaluating previous studies’ findings. The software engineering literature shows that too few replications are conducted, particularly replications focusing on software artifacts without the involvement of humans. This study aims to replicate an artifact-based study on software testing to address this replication gap. In this investigation, we focus on (i) providing a step-by-step guide of the replication, reflecting on challenges when replicating artifact-based testing research, and (ii) evaluating the replicated study concerning the validity and robustness of the findings. We replicate a test case prioritization technique proposed by Kwon et al. We replicated the original study using six software programs, four from the original study and two additional ones. We automated the steps of the original study using a Jupyter notebook to support future replications. Various general factors facilitating replications are identified, such as (1) the importance of documentation; (2) the need for assistance from the original authors; (3) issues in the maintenance of open-source repositories (e.g., concerning needed software dependencies, versioning); and (4) availability of scripts. We also noted observations specific to the study and its context, such as insights from using different mutation tools and strategies for mutant generation. We conclude that the study by Kwon et al. is partially replicable for small software programs and could be automated to facilitate software practitioners, given the availability of required information. However, it is hard to implement the technique for large software programs with the current guidelines. Based on lessons learned, we suggest that the authors of original studies need to publish their data and experimental setup to support external replications. © 2023, The Author(s).

Place, publisher, year, edition, pages
Springer, 2023
Keywords
Replication, Regression testing, Technique, Test case prioritization, Information retrieval, SIR
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-23631 (URN); 10.1007/s11219-023-09650-4 (DOI); 001084224100001 (); 2-s2.0-85174265778 (Scopus ID)
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Available from: 2022-09-13 Created: 2022-09-13 Last updated: 2023-12-05. Bibliographically approved
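
The replicated technique in the record above is information-retrieval based test case prioritization. The sketch below illustrates only the general idea behind such techniques, ranking test cases by their textual similarity to the changed code; it is not the authors' replication package or Kwon et al.'s exact technique, scikit-learn is assumed to be available, and all names and strings are invented.

```python
# Hypothetical sketch of information-retrieval based test case prioritization:
# rank test cases by TF-IDF cosine similarity to the text of a code change.
# Assumes scikit-learn; all documents below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented "documents": identifiers/comments extracted from each test case.
test_docs = {
    "testLoginRejectsBadPassword": "login password authenticate reject error",
    "testCartTotalWithDiscount": "cart total price discount checkout",
    "testSessionTimeout": "session timeout login expire token",
}

# Invented change document: identifiers touched by the latest commit.
change_doc = "login token session authenticate expire"

vectorizer = TfidfVectorizer()
test_matrix = vectorizer.fit_transform(test_docs.values())  # one row per test case
change_vec = vectorizer.transform([change_doc])             # query vector for the change

scores = cosine_similarity(change_vec, test_matrix)[0]
ranking = sorted(zip(test_docs, scores), key=lambda pair: pair[1], reverse=True)

# Higher-scoring test cases would be executed first.
for name, score in ranking:
    print(f"{score:.3f}  {name}")
```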
Minhas, N. M., Koppula, T. R., Petersen, K. & Börstler, J. (2023). Using goal-question-metric to Compare Research and Practice Perspectives on Regression Testing. Journal of Software: Evolution and Process, 35(2), Article ID e2506.
2023 (English) In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 35, no 2, article id e2506. Article in journal (Refereed) Published
Abstract [en]

Regression testing is challenging because of its complexity and the amount of effort and time it requires, especially in large-scale environments with continuous integration and delivery. Regression test selection and prioritization techniques have been proposed in the literature to address these challenges, but adoption rates of these techniques in industry are not encouraging. One of the possible reasons could be the disparity between regression testing goals in industry and in the literature.

This work compares the research perspective to industry practice on regression testing goals, corresponding information needs, and metrics required to evaluate these goals. We have conducted a literature review of 44 research papers and a survey with 56 testing practitioners. The survey comprises 11 interviews and 45 responses to an online questionnaire. 

We identified that industry and research accentuate different regression testing goals. For instance, the literature emphasizes increasing the fault detection rates of test suites and early identification of critical faults. In contrast, the practitioners' focus is on test suite maintenance, controlled fault slippage, and awareness of changes. Similarly, the literature suggests maintaining information needs from test case execution histories to evaluate regression testing techniques based on various metrics, whereas practitioners, by and large, do not use the metrics suggested in the literature.

To bridge the research and practice gap, based on the literature and survey findings, we have created a goal-question-metric (GQM) model that maps the regression testing goals, associated information needs, and metrics from both perspectives. The GQM model can guide researchers in proposing new techniques closer to industry contexts. Practitioners can benefit from information needs and metrics presented in the literature and can use GQM as a tool to follow their regression testing goals. 

Place, publisher, year, edition, pages
John Wiley & Sons, 2023
Keywords
Regression testing, Goals, Objectives, Measures, Metrics, GQM
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-23630 (URN); 10.1002/smr.2506 (DOI); 000852963100001 (); 2-s2.0-85137875656 (Scopus ID)
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Available from: 2022-09-13 Created: 2022-09-13 Last updated: 2023-06-19. Bibliographically approved
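
The GQM model in the record above links regression testing goals to questions and to the metrics that answer them. As a minimal, purely illustrative sketch of how such a mapping can be kept explicit (not the model from the paper; all goal, question, and metric texts are invented):

```python
# Hypothetical goal-question-metric (GQM) mapping for regression testing.
# All entries are invented examples, not the model from the paper.
gqm_model = {
    "goal": "Gain confidence that recent changes did not break existing behavior",
    "questions": [
        {
            "question": "How effective is the regression suite at catching faults?",
            "metrics": ["fault detection rate", "faults slipping to later stages"],
        },
        {
            "question": "How costly is a regression testing cycle?",
            "metrics": ["test execution time", "test maintenance effort (person-hours)"],
        },
    ],
}

# Print the mapping so each metric stays traceable to its question and goal.
print(f"Goal: {gqm_model['goal']}")
for q in gqm_model["questions"]:
    print(f"  Q: {q['question']}")
    for metric in q["metrics"]:
        print(f"     metric: {metric}")
```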
Identifiers
ORCID iD: orcid.org/0000-0002-1532-8223