How good are my search strings? Reflections on using an existing review as a quasi-gold standard
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. ORCID iD: 0000-0003-0066-1792
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. ORCID iD: 0000-0003-0639-4234
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. ORCID iD: 0000-0001-7266-5632
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. ORCID iD: 0000-0003-4118-0952
2022 (English). In: e-Informatica Software Engineering Journal, ISSN 1897-7979, E-ISSN 2084-4840, Vol. 16, no 1, p. 69-89, article id 220103. Article, review/survey (Refereed). Published.
Abstract [en]

Background: Systematic literature studies (SLS) have become a core research methodology in Evidence-based Software Engineering (EBSE). Search completeness, i.e., finding all relevant papers on the topic of interest, has been recognized as one of the most commonly discussed validity issues of SLSs. Aim: This study aims to raise awareness of the issues related to search string construction and to search validation using a quasi-gold standard (QGS). Furthermore, we aim to provide guidelines for search string validation. Method: We use a recently completed tertiary study as a case and complement our findings with observations from other researchers studying and advancing EBSE. Results: We found that the issue of assessing QGS quality has received little attention in the literature and that the validation of automated searches in SLSs could be improved. Hence, we propose extending the current search validation approach with an additional step for analyzing the automated search validation results, and we provide recommendations for QGS construction. Conclusion: In this paper, we report new issues that could affect search completeness in SLSs. Furthermore, the proposed guideline and recommendations could help researchers implement a more reliable search strategy in their SLSs.
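
The validation approach described in the abstract is commonly operationalized by measuring how many papers from a quasi-gold standard (QGS), i.e., a set of papers already known to be relevant, the automated search string actually retrieves (often called quasi-sensitivity or recall). The following is a minimal illustrative sketch of that check, not the authors' tooling; the DOI-based matching, helper names, and example values are assumptions made for the example.

```python
# Minimal sketch: validate an automated database search against a
# quasi-gold standard (QGS) by computing quasi-sensitivity (recall).
# Paper identifiers, helper names, and example values are illustrative.

def normalize(identifier: str) -> str:
    """Normalize identifiers (e.g., DOIs) so trivial differences do not hide matches."""
    return identifier.strip().lower()


def quasi_sensitivity(qgs: set[str], retrieved: set[str]) -> float:
    """Fraction of QGS papers that the automated search retrieved."""
    qgs_norm = {normalize(p) for p in qgs}
    retrieved_norm = {normalize(p) for p in retrieved}
    if not qgs_norm:
        raise ValueError("QGS is empty; the search cannot be validated against it")
    return len(qgs_norm & retrieved_norm) / len(qgs_norm)


if __name__ == "__main__":
    qgs = {"10.1000/a", "10.1000/b", "10.1000/c", "10.1000/d", "10.1000/e"}
    retrieved = {"10.1000/a", "10.1000/c", "10.1000/d", "10.1000/x"}

    print(f"quasi-sensitivity: {quasi_sensitivity(qgs, retrieved):.2f}")  # 0.60

    # The missed QGS papers are the starting point for the kind of additional
    # analysis step the paper argues for: inspect their titles, keywords, and
    # venues to understand why the search string (or the QGS itself) misses them.
    missed = sorted(p for p in qgs if normalize(p) not in {normalize(r) for r in retrieved})
    print("missed QGS papers:", missed)
```

In practice, researchers compare the resulting recall against a predefined threshold and, in line with the paper's recommendation, also examine the characteristics of the missed QGS papers rather than relying on the number alone.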

Place, publisher, year, edition, pages
Wroclaw University of Technology, 2022. Vol. 16, no 1, p. 69-89, article id 220103
Keywords [en]
search string construction, automated search validation, quasi-gold standard, systematic literature review, systematic mapping study, systematic literature reviews, reliability
National Category
Software Engineering
Identifiers
URN: urn:nbn:se:bth-22539
DOI: 10.37190/e-Inf220103
ISI: 000733746700001
Scopus ID: 2-s2.0-85123458181
OAI: oai:DiVA.org:bth-22539
DiVA id: diva2:1626396
Part of project
VITS - Visualisation of test data for decision support, Knowledge Foundation
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Knowledge Foundation, 20180127
Note

open access

Available from: 2022-01-11. Created: 2022-01-11. Last updated: 2022-08-08. Bibliographically approved.
In thesis
1. Understanding Test-Artifact Quality in Software Engineering
2022 (English). Licentiate thesis, comprehensive summary (Other academic).
Abstract [en]

Context: The core of software testing is test artifacts, i.e., test cases, test suites, test scripts, test code, test specifications, and natural language tests. Hence, the quality of test artifacts can negatively or positively impact the reliability of the software testing process. Several empirical and secondary studies have investigated test artifact quality. Nevertheless, little is known about how practitioners themselves perceive test artifact quality, and the evidence on test artifact quality in the literature has not been synthesized in one place.

Objective: This thesis aims to identify and synthesize the knowledge on test artifact quality from both academia and industry. Hence, our objectives are: (1) to understand practitioners’ perspectives on test artifact quality, (2) to investigate how test artifact quality has been characterized in the literature, and (3) to increase the reliability of the research method for conducting systematic literature reviews (SLRs) in software engineering.

Method: In this thesis, we conducted an interview-based exploratory study and a tertiary study to achieve the first two objectives. We used the tertiary study as a case and referred to related observations from other researchers to achieve the last objective.

Results: We provided two quality models based on the findings of the interview-based and tertiary studies. The two models were synthesized and combined to provide a broader view of test artifact quality. We also aggregated, from these studies’ findings, the context information that can be used to characterize the environment in which test artifact quality is investigated. Based on our experience in constructing and validating automated search results using a quasi-gold standard (QGS) in the tertiary study, we provided recommendations for QGS construction and proposed an extension to the current search validation approach.

Conclusions: The context information and the combined quality model provide a comprehensive view of test artifact quality. Researchers can use the quality model to develop guidelines, templates for designing new test artifacts, or assessment tools for evaluating existing test artifacts. The model can also serve as a guideline for practitioners when searching for test artifact quality information, i.e., definitions of the quality attributes and measurements. For future work, we aim to investigate how to improve relevant test artifact quality attributes that are challenging to deal with.

Place, publisher, year, edition, pages
Karlskrona: Blekinge Tekniska Högskola, 2022. p. 156
Series
Blekinge Institute of Technology Licentiate Dissertation Series, ISSN 1650-2140 ; 2022:03
Keywords
Software testing, Test case quality, Test suite quality, Test artifact quality, Quality assurance
National Category
Software Engineering
Research subject
Software Engineering
Identifiers
urn:nbn:se:bth-22819 (URN)
978-91-7295-438-0 (ISBN)
Presentation
2022-06-01, J1630 + Zoom, Karlskrona, 09:00 (English)
Supervisors
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications; Knowledge Foundation, 20180127
Available from: 2022-04-11. Created: 2022-04-08. Last updated: 2022-05-12. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Tran, Huynh Khanh Vi; Börstler, Jürgen; Ali, Nauman bin; Unterkalmsteiner, Michael
