Lessons learned from replicating a study on information-retrieval based test case prioritization
Minhas, Nasir Mehmood (Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering; ORCID iD: 0000-0001-8177-4355)
Irshad, Mohsin (Ericsson Sweden AB)
Petersen, Kai (Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering; ORCID iD: 0000-0002-1532-8223)
Börstler, Jürgen (Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering; ORCID iD: 0000-0003-0639-4234)
2023 (English). In: Software quality journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 31, no. 4, p. 1527-1559. Article in journal (Refereed). Published.
Abstract [en]

Replication studies help solidify and extend knowledge by evaluating previous studies' findings. The software engineering literature shows that too few replications are conducted, in particular replications that focus on software artifacts without the involvement of humans. This study replicates an artifact-based study on software testing to address this gap. In this investigation, we focus on (i) providing a step-by-step guide to the replication, reflecting on the challenges of replicating artifact-based testing research, and (ii) evaluating the replicated study with respect to the validity and robustness of its findings. We replicated a test case prioritization technique proposed by Kwon et al. using six software programs: four from the original study and two additional ones. We automated the steps of the original study in a Jupyter notebook to support future replications. We identified several general factors that facilitate replications: (1) the importance of documentation; (2) the need for assistance from the original authors; (3) issues in the maintenance of open-source repositories (e.g., concerning needed software dependencies and versioning); and (4) the availability of scripts. We also noted observations specific to the study and its context, such as insights from using different mutation tools and strategies for mutant generation. We conclude that the study by Kwon et al. is partially replicable for small software programs and, given the availability of the required information, could be automated to support software practitioners. However, with the current guidelines, it is hard to implement the technique for large software programs. Based on the lessons learned, we suggest that the authors of original studies publish their data and experimental setup to support external replications. © 2023, The Author(s).
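
To make the replicated idea concrete: the original study's steps were automated in a Jupyter notebook, and as a rough illustration of the underlying technique, the Python sketch below ranks test cases by TF-IDF cosine similarity between each test's source text and the textual diff of a code change. This is a minimal sketch under our own assumptions (scikit-learn available, toy test bodies as documents), not Kwon et al.'s actual implementation, corpus construction, or data.

    # Minimal sketch of IR-based test case prioritization (illustrative
    # only; not Kwon et al.'s implementation). Assumes scikit-learn.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def prioritize(test_sources, change_diff):
        """Order test names by decreasing TF-IDF similarity to the change."""
        names = list(test_sources)
        corpus = [test_sources[n] for n in names] + [change_diff]
        # Tokenize on identifiers so code terms (function names, variables)
        # are matched between test bodies and the diff text.
        tfidf = TfidfVectorizer(token_pattern=r"[A-Za-z_]\w*").fit_transform(corpus)
        scores = cosine_similarity(tfidf[-1], tfidf[:-1]).ravel()
        return [name for _, name in sorted(zip(scores, names), reverse=True)]

    # Hypothetical usage: toy test bodies and a one-line diff hunk.
    tests = {
        "test_parse": "def test_parse(): assert parse('a,b') == ['a', 'b']",
        "test_render": "def test_render(): assert render('x') == '<p>x</p>'",
    }
    print(prioritize(tests, "+ def parse(s): return s.split(',')"))
    # test_parse ranks first because it shares terms with the change.

In the replication itself, such a ranking would be computed over the subject programs' real test cases and changes, and its effectiveness would be assessed with generated mutants, as the abstract describes.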

Place, publisher, year, edition, pages
Springer, 2023. Vol. 31, no. 4, p. 1527-1559.
Keywords [en]
Replication, Regression testing, Technique, Test case prioritization, Information retrieval, SIR
National Category
Software Engineering
Identifiers
URN: urn:nbn:se:bth-23631
DOI: 10.1007/s11219-023-09650-4
ISI: 001084224100001
Scopus ID: 2-s2.0-85174265778
OAI: oai:DiVA.org:bth-23631
DiVA id: diva2:1695144
Funder
ELLIIT - The Linköping-Lund Initiative on IT and Mobile Communications
Available from: 2022-09-13. Created: 2022-09-13. Last updated: 2023-12-05. Bibliographically approved.
In thesis
1. Understanding and improving regression testing practice
2022 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Background

Regression testing is a complex and challenging activity that consumes a significant portion of software maintenance costs. Researchers have proposed various techniques to deal with its cost and complexity, yet practitioners face various challenges when planning and executing regression testing. One of the main reasons is the disparity between research and practice perspectives on the goals and challenges of regression testing. In addition, it is difficult for practitioners to find techniques relevant to their context, needs, and goals, because most proposed techniques lack contextual information.

Objective

This work aims to understand the challenges to regression testing practice and find ways to improve it. To fulfil this aim, we have the following objectives:

1) understanding the current state of regression testing practice, goals, and challenges,

2) finding ways to utilize regression testing research in practice, and

3) providing support in structuring and improving regression testing practice. 

Method

We have utilized various research methods, including literature reviews, workshops, focus groups, case studies, surveys, and experiments, to conduct the studies for this thesis.

Results

Research and practice stress different goals, and each follows its own priorities. Researchers propose new regression testing techniques to increase the test suite's fault detection rate and maximise coverage. Practitioners consider test suite maintenance, controlled fault slippage, and confidence to be their priority goals. They rely on expert judgment instead of a well-defined regression testing process and face various challenges in regression testing, such as time to test, test suite maintenance, lack of communication, lack of strategy, lack of assessment, and issues in test case selection and prioritization.

We have proposed a GQM (Goal-Question-Metric) model representing the research and practice perspectives on regression testing goals. The proposed model can help reduce the disparity between the two perspectives and cope with the lack of assessment.
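
For illustration, a GQM model refines a measurement goal into questions and each question into metrics. The sketch below shows one hypothetical goal-question-metric chain for regression testing; the goal, questions, and metrics are our own examples, not entries from the thesis' actual model.

    # A hypothetical goal-question-metric chain (illustrative only; not
    # taken from the thesis' model).
    gqm_example = {
        "goal": "Increase confidence that recent changes did not break existing behaviour",
        "questions": {
            "How many faults slip through regression testing?": [
                "fault-slippage rate per release",
            ],
            "How quickly does the suite reveal regressions?": [
                "time to first failing test",
                "feedback time per test cycle",
            ],
        },
    }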

We have created regression testing taxonomies to guide practitioners in finding techniques suitable to their product context, goals, and needs. Further, based on our experience of replicating a regression testing technique, we have provided guidelines for future replications and for the adoption of regression testing techniques.

Finally, we have designed regression testing checklists to support practitioners in decision-making while planning and performing regression testing. Practitioners who evaluated the checklists reported that the checklists covered essential aspects of regression testing and were useful and customizable to their context.

Conclusions

The thesis points out the gap between research and practice perspectives on regression testing. The challenges identified in this thesis are evidence that either research does not address these challenges or practitioners are unaware of how to transfer regression testing research to their context. The GQM model presented in this thesis is a step toward reducing the research and practice gap in regression testing. Furthermore, the taxonomies and the replication experiment provide a way forward for adopting regression testing research. Finally, the checklists proposed in this thesis could help improve communication and regression test strategy, and they provide a basis for structuring and improving regression testing practice.

Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2022. p. 297
Series
Blekinge Institute of Technology Doctoral Dissertation Series, ISSN 1653-2090 ; 7
Keywords
Regression testing, Goals, GQM, Replication, Checklists
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
URN: urn:nbn:se:bth-23634
ISBN: 978-91-7295-444-1
Public defence
2022-10-31, C413A, Campus Grasvik, Karlskrona, 13:00 (English)
Available from: 2022-09-20. Created: 2022-09-18. Last updated: 2022-10-10. Bibliographically approved.

Open Access in DiVA

fulltext (FULLTEXT03.pdf, 2047 kB, application/pdf)

Other links

Publisher's full text
Scopus

Authority records

Minhas, Nasir Mehmood; Irshad, Mohsin; Petersen, Kai; Börstler, Jürgen
