Case study identification: A trivial indicator outperforms human classifiers
Queen's University Belfast, United Kingdom.
Blekinge Institute of Technology, Faculty of Computing, Department of Software Engineering. ORCID iD: 0000-0003-0460-5253
2023 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 161, article id 107252. Article in journal (Refereed). Published.
Abstract [en]

Context: The definition and term “case study” are not being applied consistently by software engineering researchers. We previously developed a trivial “smell indicator” to help detect the misclassification of primary studies as case studies.

Objective: To evaluate the performance of the indicator.

Methods: We compare the performance of the indicator against human classifiers for three datasets: two datasets comprising classifications by both authors of systematic literature studies and authors of primary studies, and one dataset comprising only primary-study author classifications.

Results: The indicator outperforms the human classifiers for all datasets.

Conclusions: The indicator is successful because human classifiers “fail” to properly classify their own, and others’, primary studies. Consequently, reviewers of primary studies and authors of systematic literature studies could use the classifier as a “sanity” check for primary studies. Moreover, authors might use the indicator to double-check how they classified a study, as part of their analysis and prior to submitting their manuscript for publication. We challenge the research community both to beat the indicator and to improve its ability to identify true case studies. © 2023 The Author(s)
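The record does not reproduce the indicator itself. As a purely illustrative sketch of what a “trivial” textual smell check could look like (the function name, keyword lists, and flagging rule below are invented for illustration and are not the published indicator), one might flag a paper that labels itself a case study without reporting any typical case-study design elements:

```python
# Hypothetical sketch only -- NOT the indicator published in the article.
# The keyword lists and the rule are assumptions made for illustration.

METHOD_HINTS = ("case study", "case-study")
DESIGN_HINTS = ("unit of analysis", "case selection", "triangulation")

def smells_misclassified(title: str, abstract: str) -> bool:
    """Flag a study that calls itself a case study but whose title/abstract
    mention none of the design terms typically reported by case studies."""
    text = (title + " " + abstract).lower()
    claims_case_study = any(hint in text for hint in METHOD_HINTS)
    reports_design = any(hint in text for hint in DESIGN_HINTS)
    return claims_case_study and not reports_design
```

Under these assumptions, a paper titled “A case study of X” whose abstract only describes a survey would be flagged, while one reporting a unit of analysis would pass; the actual indicator evaluated in the article may work quite differently.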

Place, publisher, year, edition, pages
Elsevier, 2023. Vol. 161, article id 107252
Keywords [en]
Case study, Evaluation, Primary study, Smell indicator, Systematic review, Software engineering, Case studies, Classifiers, Literature studies, Misclassifications, Performance, Sanity check, Classification (of information)
National Category
Software Engineering
Identifiers
URN: urn:nbn:se:bth-24692
DOI: 10.1016/j.infsof.2023.107252
ISI: 001052976000001
Scopus ID: 2-s2.0-85159783823
OAI: oai:DiVA.org:bth-24692
DiVA, id: diva2:1761963
Available from: 2023-06-02. Created: 2023-06-02. Last updated: 2023-09-08. Bibliographically approved.

Open Access in DiVA

fulltext (441 kB), 92 downloads
File information
File name: FULLTEXT01.pdf
File size: 441 kB
Checksum (SHA-512):
56f7fc6eb0910c500e435ac0bd12a1e5a596bcf6fb9d2adac2c4874f68053b7857a478093362ce6be45ca20dc3c8223ac142d8a979e3147538770d8faae3f596
Type: fulltext
Mimetype: application/pdf

Other links

Publisher's full text
Scopus

Authority records

Wohlin, Claes

Total: 92 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.
