Test-Case Quality: Understanding Practitioners’ Perspectives
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. ORCID iD: 0000-0001-7266-5632
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. ORCID iD: 0000-0003-0639-4234
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering.
2019 (English). In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Franch X., Mannisto T., Martinez-Fernandez S., Springer, 2019, Vol. 11915, p. 37-52. Conference paper, Published paper (Refereed)
Abstract [en]

Background: Test-case quality has always been one of the major concerns in software testing. To improve test-case quality, it is important to better understand how practitioners perceive the quality of test cases. Objective: Motivated by that need, we investigated how practitioners define test-case quality and which aspects of test cases are important for quality assessment. Method: We conducted semi-structured interviews with professional developers, testers and test architects from a multinational software company in Sweden. Before the interviews, we asked participants for actual test cases (written in natural language) that they perceived as good, normal, and bad, respectively, together with rationales for their assessments. We also compared their opinions on shared test cases and contrasted their views with the relevant literature. Results: We present a quality model which consists of 11 test-case quality attributes. We also identify a misalignment in defining test-case quality among practitioners and between academia and industry, along with suggestions for improving test-case quality in industry. Conclusion: The results show that practitioners’ backgrounds, including their roles and working experience, are critical dimensions of how test-case quality is defined and assessed. © Springer Nature Switzerland AG 2019.

Place, publisher, year, edition, pages
Springer, 2019. Vol. 11915, p. 37-52
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349
Keywords [en]
Natural-language test case, Software testing, Test-case quality, Process engineering, Quality management, Testing, Critical dimension, Natural languages, Quality assessment, Quality attributes, Quality modeling, Semi structured interviews, Software company, Test case
National Category
Software Engineering
Identifiers
URN: urn:nbn:se:bth-19063
DOI: 10.1007/978-3-030-35333-9_3
ISI: 000611527400003
Scopus ID: 2-s2.0-85076540061
ISBN: 9783030353322 (print)
OAI: oai:DiVA.org:bth-19063
DiVA, id: diva2:1381715
Conference
20th International Conference on Product-Focused Software Process Improvement, PROFES 2019; Barcelona; Spain; 27 November 2019 through 29 November 2019
Part of project
VITS - Visualisation of test data for decision support, Knowledge Foundation
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Knowledge Foundation, 20180127
Available from: 2019-12-27 Created: 2019-12-27 Last updated: 2025-04-03 Bibliographically approved
In thesis
1. Understanding Test-Artifact Quality in Software Engineering
2022 (English)Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Context: The core of software testing is test artifacts, i.e., test cases, test suites, test scripts, test code, test specifications, and natural language tests. Hence, the quality of test artifacts can negatively or positively impact the reliability of the software testing process. Several empirical studies and secondary studies have investigated test artifact quality. Nevertheless, little is known about how practitioners themselves perceive test artifact quality, and the evidence on test artifact quality in the literature has not been synthesized in one place.

Objective: This thesis aims to identify and synthesize the knowledge in test artifact quality from both academia and industry. Hence, our objectives are: (1) To understand practitioners’ perspectives on test artifact quality, (2) To investigate how test artifact quality has been characterized in the literature, (3) To increase the reliability of the research method for conducting systematic literature reviews (SLR) in software engineering.

Method: In this thesis, we conducted an interview-based exploratory study and a tertiary study to achieve the first two objectives. We used the tertiary study as a case and referred to related observations from other researchers to achieve the last objective.

Results: We provided two quality models based on the findings of the interview-based and tertiary studies. The two models were synthesized and combined to provide a broader view of test artifact quality. Also, the context information that can be used to characterize the environment in which test artifact quality is investigated was aggregated based on these studies’ findings. Based on our experience in constructing and validating automated search results using a quality gold standard (QGS) in the tertiary study, we provided recommendations for the QGS construction and proposed an extension to the current search validation approach.

Conclusions: The context information and the combined quality model provide a comprehensive view of test artifact quality. Researchers can use the quality model to develop guidelines, templates for designing new test artifacts, or assessment tools for evaluating existing test artifacts. The model can also serve as a guideline for practitioners to search for test-artifact quality information, i.e., definitions for the quality attributes and measurements. For future work, we aim to investigate how to improve relevant test artifact quality attributes that are challenging to deal with.

Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2022. p. 156
Series
Blekinge Institute of Technology Licentiate Dissertation Series, ISSN 1650-2140 ; 2022:03
Keywords
Software testing, Test case quality, Test suite quality, Test artifact quality, Quality assurance
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-22819 (URN)
978-91-7295-438-0 (ISBN)
Presentation
2022-06-01, J1630 + Zoom, Karlskrona, 09:00 (English)
Supervisors
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Knowledge Foundation, 20180127
Available from: 2022-04-11 Created: 2022-04-08 Last updated: 2022-05-12 Bibliographically approved
2. Characterizing and Assessing Test Case and Test Suite Quality
2025 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Context: Test cases and test suites (TCS) are central to software testing. High-quality TCS are essential for boosting practitioners’ confidence in testing. However, the quality of a test suite (a collection of test cases) is not merely the sum of the quality of individual test cases, as suite-level factors must also be considered. Achieving high-quality TCS requires defining relevant quality attributes, establishing appropriate measures for their assessment, and determining their importance within different testing contexts.

Objective: This thesis aims to (1) provide a consolidated view of TCS quality in terms of quality attributes, quality measures, and context information, (2) determine the relative importance of the quality attributes in practice, and (3) develop a reliable approach for assessing a highly prioritized quality attribute identified by practitioners.

Method: We conducted an exploratory study and a tertiary literature review for the first objective, a personal opinion survey for the second, and a comparative experiment with a small-scale evaluation study for the third.

Results: We developed a comprehensive TCS quality model grounded in practitioner insights and existing literature. Based on the survey, maintainability emerged as a critical quality attribute where practitioners need further support. A well-known indicator of poor test design that can negatively impact test-case maintainability is the Eager Test smell, which is defined as “when a test method checks several methods of the object to be tested” or “when a test verifies too much functionality.” The results of existing detection tools for eager tests are found to be inconsistent and unreliable. To better support practitioners in assessing test case maintainability, we proposed a novel, unambiguous definition of the Eager Test smell, developed a heuristic to operationalize it, and implemented a detection tool to automate its identification in practice. Our systematic approach in the tertiary review also yielded valuable insights into constructing and validating automated search results using a quasi-gold standard. We generalized these insights into recommendations for enhancing the current search validation approach.
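For illustration only (not part of the thesis record): a minimal JUnit 4 sketch of the Eager Test smell as defined above, using a hypothetical Calculator class. The first test method checks several methods of the object under test, while the focused variant below it verifies a single behavior.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test, introduced only for this example.
class Calculator {
    int add(int a, int b)      { return a + b; }
    int subtract(int a, int b) { return a - b; }
    int multiply(int a, int b) { return a * b; }
}

public class CalculatorTest {

    // Eager Test smell: one test method exercises several methods of the
    // object under test, so a failure does not point to a single behavior.
    @Test
    public void testCalculator() {
        Calculator c = new Calculator();
        assertEquals(5, c.add(2, 3));
        assertEquals(-1, c.subtract(2, 3));
        assertEquals(6, c.multiply(2, 3));
    }

    // Focused alternative: each test verifies exactly one behavior.
    @Test
    public void addReturnsSumOfOperands() {
        assertEquals(5, new Calculator().add(2, 3));
    }
}
```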

Conclusions: The thesis makes three main contributions: (1) at the abstract level, a comprehensive quality model to help practitioners and researchers develop guidelines, templates, or tools for designing new test cases and test suites and assessing existing ones; (2) at the strategic level, identification of contextually important quality attributes; and (3), at the operational level, a refined definition of Eager Test smell, a detection heuristic and a tool prototype implementing the heuristic, advancing maintainability assessment in software testing.

 

Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2025. p. 245
Series
Blekinge Institute of Technology Doctoral Dissertation Series, ISSN 1653-2090 ; 2025:05
Keywords
Software testing, Test case quality, Test suite quality, Test smell, Eager Test
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-27676 (URN)
978-91-7295-501-1 (ISBN)
Public defence
2025-05-27, C413A, Karlskrona, 13:15 (English)
Opponent
Supervisors
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Available from: 2025-04-04 Created: 2025-04-03 Last updated: 2025-04-30 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Tran, Huynh Khanh Vi; Ali, Nauman bin; Börstler, Jürgen; Unterkalmsteiner, Michael
