Understanding Test-Artifact Quality in Software Engineering
Tran, Huynh Khanh Vi. Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. ORCID iD: 0000-0003-0066-1792
2022 (English). Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Context: Test artifacts, i.e., test cases, test suites, test scripts, test code, test specifications, and natural-language tests, are at the core of software testing. Hence, the quality of test artifacts can negatively or positively impact the reliability of the software testing process. Several empirical studies and secondary studies have investigated test artifact quality. Nevertheless, little is known about how practitioners themselves perceive test artifact quality, and the evidence on test artifact quality in the literature has not been synthesized in one place.
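To make the notion of a test artifact concrete, the sketch below shows one of the artifact types listed above, test code: a small Python test suite containing two test cases. The system under test and all names (ShoppingCart, add_item, etc.) are invented for illustration and do not come from the thesis; the point is that properties such as descriptive test names and a single, verifiable assertion per test are the kind of attributes a test-artifact quality model aims to capture.

```python
# Hypothetical example of one kind of test artifact (test code): a test suite
# with two test cases. All names are illustrative, not taken from the thesis.
import unittest


class ShoppingCart:
    """Tiny system under test, included only to make the example runnable."""

    def __init__(self):
        self.items = []

    def add_item(self, name, price):
        if price < 0:
            raise ValueError("price must be non-negative")
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


class TestShoppingCart(unittest.TestCase):
    """A small test suite; each method is a test case (a test artifact)."""

    def test_total_sums_prices_of_added_items(self):
        cart = ShoppingCart()
        cart.add_item("book", 10.0)
        cart.add_item("pen", 2.5)
        self.assertEqual(cart.total(), 12.5)

    def test_add_item_rejects_negative_price(self):
        cart = ShoppingCart()
        with self.assertRaises(ValueError):
            cart.add_item("book", -1.0)


if __name__ == "__main__":
    unittest.main()
```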

Objective: This thesis aims to identify and synthesize knowledge on test artifact quality from both academia and industry. Hence, our objectives are: (1) to understand practitioners' perspectives on test artifact quality, (2) to investigate how test artifact quality has been characterized in the literature, and (3) to increase the reliability of the research method for conducting systematic literature reviews (SLRs) in software engineering.

Method: In this thesis, we conducted an interview-based exploratory study and a tertiary study to achieve the first two objectives. To achieve the last objective, we used the tertiary study as a case and drew on related observations from other researchers.

Results: We provide two quality models based on the findings of the interview-based study and the tertiary study. The two models were synthesized and combined to provide a broader view of test artifact quality. We also aggregated, based on these studies' findings, the context information that can be used to characterize the environment in which test artifact quality is investigated. Based on our experience in constructing and validating automated search results using a quasi-gold standard (QGS) in the tertiary study, we provide recommendations for QGS construction and propose an extension to the current search validation approach.

Conclusions: The context information and the combined quality model provide a comprehensive view of test artifact quality. Researchers can use the quality model to develop guidelines, templates for designing new test artifacts, or assessment tools for evaluating existing test artifacts. The model can also serve as a guideline for practitioners when searching for test-artifact quality information, i.e., definitions of the quality attributes and measurements. For future work, we aim to investigate how to improve relevant test artifact quality attributes that are challenging to deal with.

Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2022, p. 156.
Series
Blekinge Institute of Technology Licentiate Dissertation Series, ISSN 1650-2140 ; 2022:03
Keywords [en]
Software testing, Test case quality, Test suite quality, Test artifact quality, Quality assurance
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
URN: urn:nbn:se:bth-22819
ISBN: 978-91-7295-438-0 (print)
OAI: oai:DiVA.org:bth-22819
DiVA, id: diva2:1650943
Presentation
2022-06-01, J1630 + Zoom, Karlskrona, 09:00 (English)
Supervisors
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Knowledge Foundation, 20180127
Available from: 2022-04-11. Created: 2022-04-08. Last updated: 2022-05-12. Bibliographically approved.
List of papers
1. Test-Case Quality: Understanding Practitioners’ Perspectives
2019 (English). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Franch X., Mannisto T., Martinez-Fernandez S., Springer, 2019, Vol. 11915, p. 37-52. Conference paper, Published paper (Refereed).
Abstract [en]

Background: Test-case quality has always been one of the major concerns in software testing. To improve test-case quality, it is important to better understand how practitioners perceive the quality of test cases. Objective: Motivated by that need, we investigated how practitioners define test-case quality and which aspects of test cases are important for quality assessment. Method: We conducted semi-structured interviews with professional developers, testers, and test architects from a multinational software company in Sweden. Before the interviews, we asked participants for actual test cases (written in natural language) that they perceived as good, normal, and bad, respectively, together with rationales for their assessment. We also compared their opinions on shared test cases and contrasted their views with the relevant literature. Results: We present a quality model which consists of 11 test-case quality attributes. We also identify a misalignment in defining test-case quality among practitioners and between academia and industry, along with suggestions for improving test-case quality in industry. Conclusion: The results show that practitioners' backgrounds, including roles and working experience, are critical dimensions of how test-case quality is defined and assessed. © Springer Nature Switzerland AG 2019.
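Since the study concerns test cases written in natural language, the following sketch illustrates, under assumed field names, how such a test case and a practitioner's good/normal/bad rating with its rationale might be represented when collecting material of this kind. The structure and example content are hypothetical and are not taken from the paper.

```python
# Hypothetical representation of a natural-language test case together with a
# practitioner's quality rating and rationale. Field names and the example
# content are invented for illustration only.
from dataclasses import dataclass
from typing import List


@dataclass
class NaturalLanguageTestCase:
    title: str
    preconditions: List[str]
    steps: List[str]
    expected_results: List[str]
    perceived_quality: str  # "good", "normal", or "bad", as rated by a practitioner
    rationale: str          # why the practitioner rated it that way


example = NaturalLanguageTestCase(
    title="User can log in with valid credentials",
    preconditions=["A registered user account exists"],
    steps=["Open the login page", "Enter a valid username and password", "Press Log in"],
    expected_results=["The user is redirected to the dashboard"],
    perceived_quality="good",
    rationale="Single clear purpose, unambiguous steps, verifiable expected result",
)

if __name__ == "__main__":
    print(f"{example.perceived_quality}: {example.title}")
```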

Place, publisher, year, edition, pages
Springer, 2019
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349
Keywords
Natural-language test case, Software testing, Test-case quality, Process engineering, Quality management, Testing, Critical dimension, Natural languages, Quality assessment, Quality attributes, Quality modeling, Semi structured interviews, Software company, Test case
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-19063 (URN); 10.1007/978-3-030-35333-9_3 (DOI); 000611527400003 (); 2-s2.0-85076540061 (Scopus ID); 9783030353322 (ISBN)
Conference
20th International Conference on Product-Focused Software Process Improvement, PROFES 2019; Barcelona, Spain; 27-29 November 2019
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Knowledge Foundation, 20180127
Available from: 2019-12-27. Created: 2019-12-27. Last updated: 2022-05-06. Bibliographically approved.
2. Assessing test artifact quality-A tertiary study
2021 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 139, article id 106620. Article in journal (Refereed). Published.
Abstract [en]

Context: Modern software development increasingly relies on software testing for an ever more frequent delivery of high-quality software. This puts high demands on the quality of the central artifacts in software testing: test suites and test cases. Objective: We aim to develop a comprehensive model for capturing the dimensions of test case/suite quality that are relevant for a variety of perspectives. Methods: We have carried out a systematic literature review to identify and analyze existing secondary studies on quality aspects of software testing artifacts. Results: We identified 49 relevant secondary studies. Of these 49 studies, fewer than half performed some form of quality appraisal of the included primary studies, and only three took the quality of the primary studies into account when synthesizing the results. We present an aggregation of the context dimensions and factors that can be used to characterize the environment in which test case/suite quality is investigated. We also provide a comprehensive model of test case/suite quality with definitions for the quality attributes and measurements based on findings in the literature and ISO/IEC 25010:2011. Conclusion: The test artifact quality model presented in the paper can be used to support test artifact quality assessment and improvement initiatives in practice. Furthermore, the model can also be used as a framework for documenting context characteristics to make research results more accessible for research and practice.
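As a rough illustration of how a quality model with attribute definitions and measurements could be encoded for use in an assessment tool, the sketch below uses placeholder attributes and measurements; the actual attributes, definitions, and ISO/IEC 25010:2011-based mappings are those published in the tertiary study, not the ones shown here.

```python
# Minimal sketch of encoding a test-artifact quality model for tooling.
# The attribute, its definition, and the measurements are illustrative
# placeholders, not the model from the tertiary study.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Measurement:
    name: str
    description: str


@dataclass
class QualityAttribute:
    name: str
    definition: str
    measurements: List[Measurement] = field(default_factory=list)


# Hypothetical entry; a real entry would carry the published definition and
# its ISO/IEC 25010-based measurements.
maintainability = QualityAttribute(
    name="Maintainability",
    definition="Degree to which a test case can be modified effectively and "
               "efficiently when the system under test changes.",
    measurements=[
        Measurement("duplication", "Share of duplicated steps across test cases"),
        Measurement("size", "Number of steps or lines per test case"),
    ],
)

if __name__ == "__main__":
    print(maintainability.name, "->", [m.name for m in maintainability.measurements])
```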

Place, publisher, year, edition, pages
Elsevier, 2021
Keywords
Software testing, Test case quality, Test suite quality, Test artifact quality, Quality assurance
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-22174 (URN); 10.1016/j.infsof.2021.106620 (DOI); 000697678300012 ()
Note

open access

Available from: 2021-10-01. Created: 2021-10-01. Last updated: 2022-04-08. Bibliographically approved.
3. How good are my search strings? Reflections on using an existing review as a quasi-gold standard
2022 (English). In: e-Informatica Software Engineering Journal, ISSN 1897-7979, E-ISSN 2084-4840, Vol. 16, no 1, p. 69-89, article id 220103. Article, review/survey (Refereed). Published.
Abstract [en]

Background: Systematic literature studies (SLS) have become a core research methodology in Evidence-based Software Engineering (EBSE). Search completeness, i.e., finding all relevant papers on the topic of interest, has been recognized as one of the most commonly discussed validity issues of SLSs. Aim: This study aims to raise awareness of issues related to search string construction and to search validation using a quasi-gold standard (QGS). Furthermore, we aim to provide guidelines for search string validation. Method: We use a recently completed tertiary study as a case and complement our findings with observations from other researchers studying and advancing EBSE. Results: We found that the issue of assessing QGS quality has received little attention in the literature and that the validation of automated searches in SLSs could be improved. Hence, we propose extending the current search validation approach with an additional step for analyzing the automated search validation results, and we provide recommendations for QGS construction. Conclusion: In this paper, we report on new issues that could affect search completeness in SLSs. Furthermore, the proposed guidelines and recommendations could help researchers implement a more reliable search strategy in their SLSs.
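The quantitative check that typically underlies QGS-based validation of an automated search is sensitivity (recall): the share of QGS papers that the search string actually retrieves. The sketch below is a minimal illustration with made-up paper identifiers; the 80% target mentioned in the comments is a commonly cited rule of thumb in the SLS literature, not a threshold prescribed by this paper.

```python
# Minimal sketch of QGS-based search validation: compute how many papers in
# the quasi-gold standard the automated search retrieves (sensitivity/recall).
# Paper identifiers are placeholders; the 80% target is a common rule of
# thumb, not a value taken from this paper.

def search_sensitivity(retrieved: set, qgs: set) -> float:
    """Fraction of QGS papers found by the automated search."""
    return len(retrieved & qgs) / len(qgs) if qgs else 0.0


def search_precision(retrieved: set, relevant: set) -> float:
    """Fraction of retrieved papers that are relevant (requires screening all hits)."""
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0


if __name__ == "__main__":
    qgs = {"paper_A", "paper_B", "paper_C", "paper_D"}        # manually established QGS
    retrieved = {"paper_A", "paper_B", "paper_C", "paper_X"}  # hits of the search string
    sens = search_sensitivity(retrieved, qgs)
    print(f"sensitivity = {sens:.0%}")
    # 75% here; below e.g. an 80% target, which would suggest revising the
    # search string before proceeding with the review.
```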

Place, publisher, year, edition, pages
Wroclaw University of Technology, 2022
Keywords
search string construction, automated search validation, quasi-gold standard, systematic literature review, systematic mapping study, systematic literature reviews, reliability
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-22539 (URN); 10.37190/e-Inf220103 (DOI); 000733746700001 (); 2-s2.0-85123458181 (Scopus ID)
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Knowledge Foundation, 20180127
Note

open access

Available from: 2022-01-11. Created: 2022-01-11. Last updated: 2022-08-08. Bibliographically approved.

Open Access in DiVA

fulltext (6088 kB)
File information
File name: FULLTEXT02.pdf
File size: 6088 kB
Checksum (SHA-512): 347e365525a11556dfd4f9551c4e49fe4189d114a4ccbe3d625700e1630a4dafc0fa8da48feff72d5e98f4e578677f25dbd18fcf1715b6c1764a2c89d09964a3
Type: fulltext
Mimetype: application/pdf
