Tran, Huynh Khanh Vi (ORCID iD: orcid.org/0000-0003-0066-1792)
Publications (8 of 8)
Tran, H. K., Ali, N. b., Unterkalmsteiner, M. & Börstler, J. (2025). A proposal and assessment of an improved heuristic for the Eager Test smell detection. Journal of Systems and Software, 226, Article ID 112438.
A proposal and assessment of an improved heuristic for the Eager Test smell detection
2025 (English) In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 226, article id 112438. Article in journal (Refereed). Published
Abstract [en]

Context: The evidence for the prevalence of test smells at the unit testing level has relied on the accuracy of detection tools, which have seen intense research in the last two decades. The Eager Test smell, one of the most prevalent, is often identified using simplified detection rules that practitioners find inadequate.

Objective: We aim to improve the rules for detecting the Eager Test smell.

Method: We reviewed the literature on test smells to analyze the definitions and detection rules of the Eager Test smell. We proposed a novel, unambiguous definition of the test smell and a heuristic to address the limitations of the existing rules. We evaluated our heuristic against existing detection rules by manually applying it to 300 unit test cases in Java.

Results: Our review identified 56 relevant studies. We found that inadequate interpretations of original definitions of the Eager Test smell led to imprecise detection rules, resulting in a high level of disagreement in detection outcomes. Also, our heuristic detected patterns of eager and non-eager tests that existing rules missed.

Conclusion: Our heuristic captures the essence of the Eager Test smell more precisely; hence, it may address practitioners’ concerns regarding the adequacy of existing detection rules.

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
Software testing, Test case quality, Test suite quality, Quality assurance, Test smells, Unit testing, Eager Test, Java, JUnit
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-27675 (URN), 10.1016/j.jss.2025.112438 (DOI), 001464187400001 (), 2-s2.0-105001808870 (Scopus ID)
Available from: 2025-03-31. Created: 2025-03-31. Last updated: 2025-09-30. Bibliographically approved.
Tran, H. K. (2025). Characterizing and Assessing Test Case and Test Suite Quality. (Doctoral dissertation). Karlskrona: Blekinge Tekniska Högskola
Characterizing and Assessing Test Case and Test Suite Quality
2025 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Context: Test cases and test suites (TCS) are central to software testing. High-quality TCS are essential for boosting practitioners’ confidence in testing. However, the quality of a test suite (a collection of test cases) is not merely the sum of the quality of individual test cases, as suite-level factors must also be considered. Achieving high-quality TCS requires defining relevant quality attributes, establishing appropriate measures for their assessment, and determining their importance within different testing contexts.

Objective: This thesis aims to (1) provide a consolidated view of TCS quality in terms of quality attributes, quality measures, and context information, (2) determine the relative importance of the quality attributes in practice, and (3) develop a reliable approach for assessing a highly prioritized quality attribute identified by practitioners.

Method: We conducted an exploratory study and a tertiary literature review for the first objective, a personal opinion survey for the second, and a comparative experiment with a small-scale evaluation study for the third.

Results: We developed a comprehensive TCS quality model grounded in practitioner insights and existing literature. Based on the survey, maintainability emerged as a critical quality attribute where practitioners need further support. A well-known indicator of poor test design that can negatively impact test-case maintainability is the Eager Test smell, which is defined as “when a test method checks several methods of the object to be tested” or “when a test verifies too much functionality.” The results of existing detection tools for eager tests are found to be inconsistent and unreliable. To better support practitioners in assessing test case maintainability, we proposed a novel, unambiguous definition of the Eager Test smell, developed a heuristic to operationalize it, and implemented a detection tool to automate its identification in practice. Our systematic approach in the tertiary review also yielded valuable insights into constructing and validating automated search results using a quasi-gold standard. We generalized these insights into recommendations for enhancing the current search validation approach.
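To make the quoted definition concrete, the sketch below is a deliberately naive, text-based counting rule of the kind such simplified detectors use; it is not the heuristic proposed in the thesis, and the `cart` test body, method names, and threshold are all invented for illustration (real detection tools operate on parsed ASTs, not raw source text):

```python
import re

def distinct_calls_on(receiver: str, test_body: str) -> set:
    """Collect the distinct method names invoked on `receiver` in a test body."""
    return set(re.findall(rf"\b{re.escape(receiver)}\.(\w+)\(", test_body))

def looks_eager(receiver: str, test_body: str, threshold: int = 1) -> bool:
    """Flag a test exercising more than `threshold` distinct methods of the
    object under test -- a crude reading of 'checks several methods'."""
    return len(distinct_calls_on(receiver, test_body)) > threshold

# A hypothetical JUnit-style test body that touches four different methods.
body = '''
cart.addItem("book", 10.0);
assertEquals(10.0, cart.totalPrice());
cart.clear();
assertTrue(cart.isEmpty());
'''
print(sorted(distinct_calls_on("cart", body)))  # ['addItem', 'clear', 'isEmpty', 'totalPrice']
print(looks_eager("cart", body))                # True
```

Counting rules this coarse illustrate why simplified detectors misclassify: setup calls, assertions on getters, and helper invocations all inflate the count, which is one source of the disagreement in detection outcomes reported above.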

Conclusions: The thesis makes three main contributions: (1) at the abstract level, a comprehensive quality model to help practitioners and researchers develop guidelines, templates, or tools for designing new test cases and test suites and assessing existing ones; (2) at the strategic level, identification of contextually important quality attributes; and (3) at the operational level, a refined definition of the Eager Test smell, a detection heuristic, and a tool prototype implementing the heuristic, advancing maintainability assessment in software testing.

 

Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2025. p. 245
Series
Blekinge Institute of Technology Doctoral Dissertation Series, ISSN 1653-2090 ; 2025:05
Keywords
Software testing, Test case quality, Test suite quality, Test smell, Eager Test
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-27676 (URN), 978-91-7295-501-1 (ISBN)
Public defence
2025-05-27, C413A, Karlskrona, 13:15 (English)
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Available from: 2025-04-04. Created: 2025-04-03. Last updated: 2025-09-30. Bibliographically approved.
Tran, H. K., Ali, N. b., Unterkalmsteiner, M., Börstler, J. & Chatzipetrou, P. (2025). Quality attributes of test cases and test suites - importance & challenges from practitioners' perspectives. Software Quality Journal, 33(1), Article ID 9.
Quality attributes of test cases and test suites - importance & challenges from practitioners' perspectives
2025 (English) In: Software Quality Journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 33, no 1, article id 9. Article in journal (Refereed). Published
Abstract [en]

The quality of the test suites and the constituent test cases significantly impacts confidence in software testing. While research has identified several quality attributes of test cases and test suites, there is a need for a better understanding of their relative importance in practice. We investigate practitioners' perceptions regarding the relative importance of quality attributes of test cases and test suites and the challenges that they face in ensuring the perceived important quality attributes. To capture the practitioners' perceptions, we conducted an industrial survey using a questionnaire based on the quality attributes identified in an extensive literature review. We used a sampling strategy that leverages LinkedIn to draw a large and heterogeneous sample of professionals with experience in software testing. We collected 354 responses from practitioners with a wide range of experience (from less than one year to 42 years of experience). We found that the majority of practitioners rated Fault Detection, Usability, Maintainability, Reliability, and Coverage to be the most important quality attributes. Resource Efficiency, Reusability, and Simplicity received the most divergent opinions, which, according to our analysis, depend on the software-testing contexts. Also, we identified common challenges that apply to the important attributes, namely inadequate definition, lack of useful metrics, lack of an established review process, and lack of external support. The findings point out where practitioners actually need further support with respect to achieving high-quality test cases and test suites under different software testing contexts. Hence, the findings can serve as a guideline for academic researchers when looking for research directions on the topic. Furthermore, the findings can be used to encourage companies to provide more support to practitioners to achieve high-quality test cases and test suites.

Place, publisher, year, edition, pages
Springer, 2025
Keywords
Software testing, Test case quality, Test suite quality, Quality assurance
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-27395 (URN), 10.1007/s11219-024-09698-w (DOI), 001396622900001 (), 2-s2.0-85217646661 (Scopus ID)
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Knowledge Foundation, 20220235; Knowledge Foundation, 20180010
Available from: 2025-01-24. Created: 2025-01-24. Last updated: 2025-09-30. Bibliographically approved.
Tran, H. K. (2025). Towards Reliable Eager Test Detection: Practitioner Validation and a Tool Prototype. In: Proceedings - 2025 IEEE International Conference on Software Analysis, Evolution and Reengineering - Companion, SANER-C 2025. Paper presented at 2025 IEEE International Conference on Software Analysis, Evolution and Reengineering - Companion, SANER-C 2025, Montreal, March 4-7, 2025 (pp. 190-197). Institute of Electrical and Electronics Engineers (IEEE)
Towards Reliable Eager Test Detection: Practitioner Validation and a Tool Prototype
2025 (English) In: Proceedings - 2025 IEEE International Conference on Software Analysis, Evolution and Reengineering - Companion, SANER-C 2025, Institute of Electrical and Electronics Engineers (IEEE), 2025, p. 190-197. Conference paper, Published paper (Refereed)
Abstract [en]

Context: Existing tools for detecting eager tests produce many false positives, rendering them unreliable for practitioners. To address this, our previous work introduced a novel definition of the Eager Test smell and a heuristic for more effective identification. Comparing the heuristic’s results with existing detection rules revealed eight test patterns where the rules misclassified the presence or absence of eager tests.

Objective: We aim to gather practitioners’ feedback on our heuristic’s assessment of these eight test patterns and operationalize the heuristic in a tool we named EagerID.

Method: We conducted a survey to collect practitioners’ feedback on the eight identified test patterns and developed EagerID to detect eager tests in Java unit test cases using JUnit. We also preliminarily evaluated EagerID on 300 test cases, which were manually analyzed in our previous study.

Results: Our survey received 23 responses from practitioners with a wide range of experience. We found that most practitioners agreed with the assessment of our heuristic. Furthermore, the preliminary evaluation of EagerID returned high precision (100%), recall (91.76%), and F-Score (95.70%).

Conclusion: Our survey findings highlight the practical relevance of the heuristic. The preliminary evaluation of the EagerID tool confirmed the heuristic’s potential for automation. These findings suggest that the heuristic provides a solid foundation for both manual and automated detection.
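As a quick arithmetic check (not part of the paper, just the standard F-score formula applied to the numbers reported above), the stated F-Score follows directly from the stated precision and recall:

```python
# F-score is the harmonic mean of precision and recall.
# Reported in the abstract: precision 100%, recall 91.76%.
precision, recall = 1.0, 0.9176

f_score = 2 * precision * recall / (precision + recall)
print(f"{f_score:.2%}")  # 95.70%, matching the reported F-Score
```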

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers (IEEE), 2025
Keywords
Software testing, Test case quality, Test suite quality, Quality assurance, Test smells, Unit testing, Eager Test, Detection tool, Java, JUnit
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-27674 (URN), 10.1109/SANER-C66551.2025.00035 (DOI), 2-s2.0-105030543220 (Scopus ID), 9798331537494 (ISBN)
Conference
2025 IEEE International Conference on Software Analysis, Evolution and Reengineering - Companion, SANER-C 2025, Montreal, March 4-7, 2025
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Knowledge Foundation, 20220235; Knowledge Foundation, 20180010
Available from: 2025-03-31. Created: 2025-03-31. Last updated: 2026-03-24. Bibliographically approved.
Tran, H. K., Börstler, J., Ali, N. b. & Unterkalmsteiner, M. (2022). How good are my search strings? Reflections on using an existing review as a quasi-gold standard. e-Informatica Software Engineering Journal, 16(1), 69-89, Article ID 220103.
How good are my search strings? Reflections on using an existing review as a quasi-gold standard
2022 (English) In: e-Informatica Software Engineering Journal, ISSN 1897-7979, E-ISSN 2084-4840, Vol. 16, no 1, p. 69-89, article id 220103. Article, review/survey (Refereed). Published
Abstract [en]

Background: Systematic literature studies (SLS) have become a core research methodology in Evidence-based Software Engineering (EBSE). Search completeness, i.e., finding all relevant papers on the topic of interest, has been recognized as one of the most commonly discussed validity issues of SLSs. Aim: This study aims to raise awareness of the issues related to search string construction and to search validation using a quasi-gold standard (QGS). Furthermore, we aim to provide guidelines for search string validation. Method: We use a recently completed tertiary study as a case and complement our findings with observations from other researchers studying and advancing EBSE. Results: We found that the issue of assessing QGS quality has not seen much attention in the literature and that the validation of automated searches in SLSs could be improved. Hence, we propose to extend the current search validation approach with an additional step that analyzes the automated search validation results, and we provide recommendations for QGS construction. Conclusion: In this paper, we report on new issues that could affect search completeness in SLSs. Furthermore, the proposed guideline and recommendations could help researchers implement a more reliable search strategy in their SLSs.

Place, publisher, year, edition, pages
Wroclaw University of Technology, 2022
Keywords
search string construction, automated search validation, quasi-gold standard, systematic literature review, systematic mapping study, systematic literature reviews, reliability
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-22539 (URN), 10.37190/e-Inf220103 (DOI), 000733746700001 (), 2-s2.0-85123458181 (Scopus ID)
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Knowledge Foundation, 20180127
Note: open access
Available from: 2022-01-11. Created: 2022-01-11. Last updated: 2025-09-30. Bibliographically approved.
Tran, H. K. (2022). Understanding Test-Artifact Quality in Software Engineering. (Licentiate dissertation). Karlskrona: Blekinge Tekniska Högskola
Understanding Test-Artifact Quality in Software Engineering
2022 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Context: The core of software testing is test artifacts, i.e., test cases, test suites, test scripts, test code, test specifications, and natural language tests. Hence, the quality of test artifacts can negatively or positively impact the reliability of the software testing process. Several empirical and secondary studies have investigated test artifact quality. Nevertheless, little is known about how practitioners themselves perceive test artifact quality, and the evidence on test artifact quality in the literature has not been synthesized in one place.

Objective: This thesis aims to identify and synthesize the knowledge in test artifact quality from both academia and industry. Hence, our objectives are: (1) To understand practitioners’ perspectives on test artifact quality, (2) To investigate how test artifact quality has been characterized in the literature, (3) To increase the reliability of the research method for conducting systematic literature reviews (SLR) in software engineering.

Method: In this thesis, we conducted an interview-based exploratory study and a tertiary study to achieve the first two objectives. We used the tertiary study as a case and referred to related observations from other researchers to achieve the last objective.

Results: We provided two quality models based on the findings of the interview-based and tertiary studies. The two models were synthesized and combined to provide a broader view of test artifact quality. Also, the context information that can be used to characterize the environment in which test artifact quality is investigated was aggregated based on these studies' findings. Based on our experience in constructing and validating automated search results using a quasi-gold standard (QGS) in the tertiary study, we provided recommendations for QGS construction and proposed an extension to the current search validation approach.

Conclusions: The context information and the combined quality model provide a comprehensive view of test artifact quality. Researchers can use the quality model to develop guidelines, templates for designing new test artifacts, or assessment tools for evaluating existing test artifacts. The model can also serve as a guideline for practitioners searching for test-artifact quality information, i.e., definitions of the quality attributes and measurements. For future work, we aim to investigate how to improve relevant test artifact quality attributes that are challenging to deal with.

Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2022. p. 156
Series
Blekinge Institute of Technology Licentiate Dissertation Series, ISSN 1650-2140 ; 2022:03
Keywords
Software testing, Test case quality, Test suite quality, Test artifact quality, Quality assurance
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-22819 (URN), 978-91-7295-438-0 (ISBN)
Presentation
2022-06-01, J1630 + Zoom, Karlskrona, 09:00 (English)
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Knowledge Foundation, 20180127
Available from: 2022-04-11. Created: 2022-04-08. Last updated: 2025-09-30. Bibliographically approved.
Tran, H. K., Unterkalmsteiner, M., Börstler, J. & Ali, N. b. (2021). Assessing test artifact quality - A tertiary study. Information and Software Technology, 139, Article ID 106620.
Assessing test artifact quality - A tertiary study
2021 (English) In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 139, article id 106620. Article in journal (Refereed). Published
Abstract [en]

Context: Modern software development increasingly relies on software testing for an ever more frequent delivery of high quality software. This puts high demands on the quality of the central artifacts in software testing, test suites and test cases. Objective: We aim to develop a comprehensive model for capturing the dimensions of test case/suite quality, which are relevant for a variety of perspectives. Methods: We have carried out a systematic literature review to identify and analyze existing secondary studies on quality aspects of software testing artifacts. Results: We identified 49 relevant secondary studies. Of these 49 studies, less than half did some form of quality appraisal of the included primary studies and only 3 took into account the quality of the primary study when synthesizing the results. We present an aggregation of the context dimensions and factors that can be used to characterize the environment in which the test case/suite quality is investigated. We also provide a comprehensive model of test case/suite quality with definitions for the quality attributes and measurements based on findings in the literature and ISO/IEC 25010:2011. Conclusion: The test artifact quality model presented in the paper can be used to support test artifact quality assessment and improvement initiatives in practice. Furthermore, the model can also be used as a framework for documenting context characteristics to make research results more accessible for research and practice.

Place, publisher, year, edition, pages
Elsevier, 2021
Keywords
Software testing, Test case quality, Test suite quality, Test artifact quality, Quality assurance
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-22174 (URN), 10.1016/j.infsof.2021.106620 (DOI), 000697678300012 ()
Note: open access
Available from: 2021-10-01. Created: 2021-10-01. Last updated: 2025-09-30. Bibliographically approved.
Tran, H. K., Ali, N. b., Börstler, J. & Unterkalmsteiner, M. (2019). Test-Case Quality: Understanding Practitioners' Perspectives. In: Franch, X., Mannisto, T., Martinez-Fernandez, S. (Eds.), Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Paper presented at the 20th International Conference on Product-Focused Software Process Improvement, PROFES 2019, Barcelona, Spain, 27-29 November 2019 (pp. 37-52). Springer, 11915
Test-Case Quality: Understanding Practitioners' Perspectives
2019 (English) In: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) / [ed] Franch, X., Mannisto, T., Martinez-Fernandez, S., Springer, 2019, Vol. 11915, p. 37-52. Conference paper, Published paper (Refereed)
Abstract [en]

Background: Test-case quality has always been one of the major concerns in software testing. To improve test-case quality, it is important to better understand how practitioners perceive the quality of test cases. Objective: Motivated by that need, we investigated how practitioners define test-case quality and which aspects of test cases are important for quality assessment. Method: We conducted semi-structured interviews with professional developers, testers, and test architects from a multinational software company in Sweden. Before the interviews, we asked participants for actual test cases (written in natural language) that they perceived as good, normal, and bad, respectively, together with rationales for their assessment. We also compared their opinions on shared test cases and contrasted their views with the relevant literature. Results: We present a quality model which consists of 11 test-case quality attributes. We also identify a misalignment in defining test-case quality among practitioners and between academia and industry, along with suggestions for improving test-case quality in industry. Conclusion: The results show that practitioners' backgrounds, including roles and working experience, are critical dimensions of how test-case quality is defined and assessed. © Springer Nature Switzerland AG 2019.

Place, publisher, year, edition, pages
Springer, 2019
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349
Keywords
Natural-language test case, Software testing, Test-case quality, Process engineering, Quality management, Testing, Critical dimension, Natural languages, Quality assessment, Quality attributes, Quality modeling, Semi structured interviews, Software company, Test case
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-19063 (URN), 10.1007/978-3-030-35333-9_3 (DOI), 000611527400003 (), 2-s2.0-85076540061 (Scopus ID), 9783030353322 (ISBN)
Conference
20th International Conference on Product-Focused Software Process Improvement, PROFES 2019, Barcelona, Spain, 27-29 November 2019
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Knowledge Foundation, 20180127
Available from: 2019-12-27. Created: 2019-12-27. Last updated: 2025-09-30. Bibliographically approved.