Publications (10 of 184)
Papatheocharous, E., Wohlin, C., Badampudi, D., Carlson, J. & Wnuk, K. (2024). Context factors perceived important when looking for similar experiences in decision-making for software components: An interview study. Journal of Software: Evolution and Process, 36(9), Article ID e2668.
2024 (English). In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 36, no. 9, article id e2668. Article in journal (Refereed). Published.
Abstract [en]

During software evolution, decisions about a component's origin or source significantly impact the quality properties of the product and development metrics such as cost, time to market, ease of maintenance, and further evolution. Such decisions should therefore ideally be supported by evidence, i.e., previous experiences and information from different sources, including one's own previous experiences. A hindering factor for such reuse is that these decisions are highly context-dependent, and it is difficult to identify when previous experiences come from sufficiently similar contexts to be useful in the current setting. Conversely, when documenting a decision (as a decision experience), it is difficult to know which context factors will be most beneficial when the experience is reused in the future. An interview study was performed to identify the context factors that practitioners perceive as most important when using experiences to support decision-making for component sourcing, using a specific scenario with alternative sources of experiences. We observed that the further away (from a company or an interviewee) the experience evidence is, as is the case for online experiences, the more context factors practitioners perceive as important for making use of the experience. Furthermore, we discuss and identify further research to make this type of decision-making more evidence-based. The study's main contribution is a list of context factors perceived as important when reusing experiences from prior decision-making cases of selecting among software component options.

Place, publisher, year, edition, pages
John Wiley & Sons, 2024
Keywords
components off-the-shelf, context factors, decision experience, decision-making, experience source, in-house, open-source software
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-26145 (URN), 10.1002/smr.2668 (DOI), 001199811300001 (ISI), 2-s2.0-85190424140 (Scopus ID)
Projects
Orion
Funder
Knowledge Foundation, 20140218
Available from: 2024-04-25. Created: 2024-04-25. Last updated: 2024-11-22. Bibliographically approved.
Wohlin, C. (2024). Prof. Günther Ruhe's research contributions. Information and Software Technology, 172, Article ID 107487.
2024 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 172, article id 107487. Article in journal, Editorial material (Refereed). Published.
Place, publisher, year, edition, pages
Elsevier, 2024
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-26216 (URN), 10.1016/j.infsof.2024.107487 (DOI), 001240006600001 (ISI), 2-s2.0-85192230522 (Scopus ID)
Available from: 2024-05-20. Created: 2024-05-20. Last updated: 2024-06-19. Bibliographically approved.
Rainer, A. & Wohlin, C. (2024). Reporting case studies in systematic literature studies—An evidential problem. Information and Software Technology, 174, Article ID 107501.
2024 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 174, article id 107501. Article in journal (Refereed). Published.
Abstract [en]

Context: The term and label "case study" is not used consistently by authors of primary studies in software engineering research. It is not clear whether this problem also occurs in systematic literature studies (SLSs).

Objective: To investigate the extent to which SLSs correctly or incorrectly use the term and label "case study" when classifying primary studies.

Methods: We systematically collect two sub-samples (2010–2021 and 2022) comprising a total of eleven SLSs and 79 primary studies. We examine the designs of these SLSs and then analyse whether the SLS authors and the primary-study authors correctly label the respective primary study as a "case study".

Results: 76% of the 79 primary studies are misclassified by SLSs (with the two sub-samples having 60% and 81% misclassification, respectively). For 39% of the 79 studies, the SLSs propagate a mislabelling by the original authors, whilst for 37% the SLSs introduce a new mislabel, thus making the problem worse. SLSs rarely present explicit definitions of "case study", and when they do, the definition is not consistent with established definitions.

Conclusions: SLSs are both propagating and exacerbating the problem of mislabelling primary studies as "case studies", rather than – as we should expect of SLSs – correcting the labelling of primary studies and thus improving the body of credible evidence. Propagating and exacerbating mislabelling undermines the credibility of evidence in terms of its quantity, quality and relevance to both practice and research.
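
As a quick sanity check of the reported proportions, a minimal sketch in Python; the counts below are derived from the stated percentages of the 79 primary studies, not quoted from the paper:

# Back-of-the-envelope check of the reported proportions (derived, not quoted).
total = 79                             # primary studies across the eleven SLSs
misclassified = round(0.76 * total)    # ~60 studies mislabelled as "case study"
propagated = round(0.39 * total)       # ~31 mislabels inherited from the original authors
introduced = round(0.37 * total)       # ~29 mislabels newly introduced by the SLSs
print(misclassified, propagated, introduced)     # 60 31 29
assert propagated + introduced == misclassified  # 39% + 37% = 76% overall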

Place, publisher, year, edition, pages
Elsevier, 2024
Keywords
Case study, Credible evidence, Systematic literature review, Systematic mapping study, Systematic review, Case-studies, Labelings, Literature studies, Misclassifications, Software engineering research, Sub-samples, Systematic mapping studies, Software engineering
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-26541 (URN), 10.1016/j.infsof.2024.107501 (DOI), 001252262000001 (ISI), 2-s2.0-85195473840 (Scopus ID)
Available from: 2024-06-24. Created: 2024-06-24. Last updated: 2024-08-05. Bibliographically approved.
Usman, M., Ali, N. b. & Wohlin, C. (2023). A Quality Assessment Instrument for Systematic Literature Reviews in Software Engineering. e-Informatica Software Engineering Journal, 17(1), Article ID 230105.
2023 (English). In: e-Informatica Software Engineering Journal, ISSN 1897-7979, E-ISSN 2084-4840, Vol. 17, no. 1, article id 230105. Article in journal (Refereed). Published.
Abstract [en]

Background: Systematic literature reviews (SLRs) have become a standard practice in software engineering (SE) research, although their quality varies. To build on the reviews, both for future research and industry practice, they need to be of high quality. Aim: To assess the quality of SLRs in SE, we put forward an appraisal instrument for SLRs. Method: A well-established appraisal instrument from research in healthcare was used as a starting point for developing the instrument. It was adapted to SE using guidelines, checklists, and experiences from SE. The first version was reviewed by four external experts on SLRs in SE and updated based on their feedback. To demonstrate its use, the updated version was also used by the authors to assess a sample of six selected systematic literature studies. Results: The outcome of the research is an appraisal instrument for the quality assessment of SLRs in SE. The instrument includes 15 items with different options to capture quality. The instrument also supports consolidating the items into groups, which are then used to assess the overall quality of an SLR. Conclusion: The presented instrument may be a helpful support for an appraiser in assessing the quality of SLRs in SE.
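
The abstract does not detail the instrument's scoring mechanics, but the consolidation idea can be illustrated with a small sketch. Everything below is a hypothetical illustration assuming AMSTAR 2-style per-item ratings; the item texts, groups and decision rules are invented placeholders, not the published instrument.

# Hypothetical sketch: rated items consolidated into groups, then into an
# overall SLR quality rating. Items, groups and rules are placeholders.
from dataclasses import dataclass

@dataclass
class Item:
    text: str       # item wording (placeholder)
    group: str      # e.g. "search", "selection", "analysis", "reporting"
    rating: str     # "yes", "partial" or "no"

ORDER = {"yes": 2, "partial": 1, "no": 0}

def consolidate(items, group):
    # The weakest rating in a group determines the group's rating.
    return min((i.rating for i in items if i.group == group), key=ORDER.get)

def overall_quality(items):
    summaries = [consolidate(items, g) for g in {i.group for i in items}]
    if all(s == "yes" for s in summaries):
        return "high"
    if any(s == "no" for s in summaries):
        return "low"
    return "moderate"

items = [Item("Adequate search strategy?", "search", "yes"),
         Item("Risk of bias assessed?", "analysis", "partial")]
print(overall_quality(items))   # moderate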

Place, publisher, year, edition, pages
Wroclaw University of Technology, 2023
Keywords
Systematic reviews, quality assessment, critical appraisal, AMSTAR 2, systematic literature review, tertiary study
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-24415 (URN), 10.37190/e-Inf230105 (DOI), 000944209900001 (ISI), 2-s2.0-85152967598 (Scopus ID)
Funder
Knowledge Foundation, 20180127; Knowledge Foundation, 20190081; ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Available from: 2023-04-05. Created: 2023-04-05. Last updated: 2023-04-28. Bibliographically approved.
Rainer, A. & Wohlin, C. (2023). Case study identification: A trivial indicator outperforms human classifiers. Information and Software Technology, 161, Article ID 107252.
2023 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 161, article id 107252. Article in journal (Refereed). Published.
Abstract [en]

Context: The definition and term "case study" are not applied consistently by software engineering researchers. We previously developed a trivial "smell indicator" to help detect the misclassification of primary studies as case studies. Objective: To evaluate the performance of the indicator. Methods: We compare the performance of the indicator against human classifiers for three datasets: two comprising classifications by both authors of systematic literature studies and authors of primary studies, and one comprising only primary-study author classifications. Results: The indicator outperforms the human classifiers for all datasets. Conclusions: The indicator is successful because human classifiers "fail" to properly classify their own, and others', primary studies. Consequently, reviewers of primary studies and authors of systematic literature studies could use the classifier as a "sanity" check for primary studies. Moreover, authors might use the indicator to double-check how they classified a study, as part of their analysis and prior to submitting their manuscript for publication. We challenge the research community both to beat the indicator and to improve its ability to identify true case studies.
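
The abstract describes the indicator only as "trivial"; as a purely illustrative sketch, a text-based indicator of this kind could be as simple as the following. The phrase list and decision rule are assumptions for illustration, not the published indicator.

# Hypothetical "smell indicator": flags a self-labelled "case study" whose
# abstract reads like a solution proposal rather than a study of a real-life case.
# Phrase list and rule are illustrative assumptions, not the published indicator.
SMELL_PHRASES = ("we propose", "our novel approach", "we evaluate our tool")

def smells_misclassified(abstract: str) -> bool:
    text = abstract.lower()
    return any(phrase in text for phrase in SMELL_PHRASES)

print(smells_misclassified("In this case study, we propose a novel approach ..."))  # True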

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Case study, Evaluation, Primary study, Smell indicator, Systematic review, Software engineering, Case-studies, Classifieds, Literature studies, Misclassifications, Performance, Sanity check, Classification (of information)
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-24692 (URN), 10.1016/j.infsof.2023.107252 (DOI), 001052976000001 (ISI), 2-s2.0-85159783823 (Scopus ID)
Available from: 2023-06-02. Created: 2023-06-02. Last updated: 2023-09-08. Bibliographically approved.
Wohlin, C. & Rainer, A. (2022). Is it a case study?—A critical analysis and guidance. Journal of Systems and Software, 192, Article ID 111395.
2022 (English). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 192, article id 111395. Article in journal (Refereed). Published.
Abstract [en]

The term "case study" is not used consistently when describing studies and, most importantly, is not used according to the established definitions. Given the misuse of the term "case study", we critically analyse articles that cite case study guidelines and report case studies. We find that only about 50% of the studies labelled "case study" are correctly labelled, and about 40% of studies labelled "case study" are actually better understood as "small-scale evaluations". Based on our experiences conducting the analysis, we formulate support for ensuring and assuring the correct labelling of case studies. We develop a checklist and a self-assessment scheme. The checklist is intended to complement existing definitions and to encourage researchers to use the term "case study" correctly. The self-assessment scheme is intended to help researchers identify when their empirical study is a "small-scale evaluation" and, again, encourages them to label their studies correctly. Finally, we develop and evaluate a smell indicator to automatically suggest when a reported case study may not actually be a case study. These three instruments have been developed to help ensure and assure that only studies that are actually case studies are labelled as "case study".
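
As an illustration of how such a self-assessment scheme might be mechanised, a minimal sketch follows. The questions are paraphrases anchored only in the abstract's emphasis on a real-life context; the paper's actual checklist items and decision rule are not reproduced here and should be treated as assumptions.

# Hypothetical self-assessment sketch for the "is it a case study?" decision.
# Question wording and the decision rule are placeholders, not the paper's scheme.
QUESTIONS = (
    "Is a contemporary phenomenon studied in its real-life context?",
    "Does the case exist independently of the study (not set up purely for it)?",
    "Are multiple sources of evidence used?",
)

def self_assess(answers):
    # All "yes" -> plausibly a case study; otherwise re-check the label.
    if all(answers):
        return "plausibly a case study"
    return "likely a small-scale evaluation; re-check the label"

print(self_assess([True, True, False]))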

Place, publisher, year, edition, pages
Elsevier Inc., 2022
Keywords
Case study, Checklist, Citation analysis, Guidelines, Small-scale evaluation, Smell indicator, Assessment scheme, Case-studies, Critical analysis, Guideline, Labelings, Self-assessment
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-23502 (URN), 10.1016/j.jss.2022.111395 (DOI), 000829490300010 (ISI), 2-s2.0-85132744710 (Scopus ID)
Note

open access

Available from: 2022-08-11. Created: 2022-08-11. Last updated: 2022-08-11. Bibliographically approved.
Rainer, A. & Wohlin, C. (2022). Recruiting credible participants for field studies in software engineering research. Information and Software Technology, 151, Article ID 107002.
2022 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 151, article id 107002. Article in journal (Refereed). Published.
Abstract [en]

Context: Software practitioners are a primary provider of information for field studies in software engineering. Researchers typically recruit practitioners through some kind of sampling, but sampling may not in itself recruit the "right" participants. Objective: To assess existing guidance on participant recruitment, and to propose and illustrate a framework for recruiting professional practitioners as credible participants in field studies of software engineering. Methods: We review existing guidelines, checklists and other advisory sources on recruiting participants for field studies. We develop a framework, based partly on our prior research and partly on the research of others. We search for and select three exemplar studies (a case study, an interview study and a survey study) and use them to illustrate the framework. Results: Whilst existing guidance recognises the importance of recruiting participants, there is limited guidance on how to recruit the "right" ones. The framework suggests conceptualising participants as "research instruments" or, alternatively, as a sampling frame for items of interest. The exemplars suggest that at least some members of the research community are aware of the need to carefully recruit the "right" participants. Conclusions: The framework is intended to encourage researchers to think differently about the involvement of practitioners in field studies of software engineering. The framework also identifies a number of characteristics not explicitly addressed by existing guidelines.

Place, publisher, year, edition, pages
Elsevier, 2022
Keywords
Credibility, Validity, Reliability, Data collection, Sampling, Subjects, Participants, Recruitment
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-23746 (URN), 10.1016/j.infsof.2022.107002 (DOI), 000859493100003 (ISI)
Note

open access

Available from: 2022-10-14. Created: 2022-10-14. Last updated: 2022-10-14. Bibliographically approved.
Wohlin, C., Kalinowski, M., Romero Felizardo, K. & Mendes, E. (2022). Successful combination of database search and snowballing for identification of primary studies in systematic literature studies. Information and Software Technology, 147, Article ID 106908.
2022 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 147, article id 106908. Article in journal (Refereed). Published.
Abstract [en]

Background: A good search strategy is essential for a successful systematic literature study. Historically, database searches have been the norm, later complemented by snowball searches. Our conjecture is that we can perform even better searches by combining these two approaches, referred to as a hybrid search strategy. Objective: Our main objective was to compare and evaluate a hybrid search strategy. Furthermore, we compared four alternative hybrid search strategies to assess whether we could identify more cost-efficient ways of searching for relevant primary studies. Methods: To compare and evaluate the hybrid search strategy, we replicated the search procedure of a systematic literature review (SLR) on industry–academia collaboration in software engineering. The SLR used a more "traditional" approach to searching for relevant articles, while our replication was executed using a hybrid search strategy. Results: In our evaluation, the hybrid search strategy was superior in identifying relevant primary studies. It identified 30% more primary studies, and even more when focusing only on peer-reviewed articles. To embrace individual viewpoints when assessing research articles and to minimise the risk of missing primary studies, we introduced two new concepts, wild cards and borderline articles, for performing systematic literature studies. Conclusions: The hybrid search strategy is a strong contender for use when performing systematic literature studies. Furthermore, alternative hybrid search strategies may be viable if selected wisely in relation to the start set for snowballing. Finally, the two new concepts were judged essential to cater for different individual judgements and to minimise the risk of excluding primary studies that ought to be included.
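
The control flow of such a hybrid search can be sketched as a database search that seeds a start set, followed by iterated backward and forward snowballing until no new relevant papers appear. This is a minimal sketch; fetch_database_hits, references_of and citations_of are hypothetical stand-ins for real database and citation-index queries.

# Sketch of a hybrid search strategy: a database search forms the start set,
# then backward/forward snowballing iterates to a fixed point.
def hybrid_search(query, is_relevant, fetch_database_hits, references_of, citations_of):
    included = {p for p in fetch_database_hits(query) if is_relevant(p)}  # start set
    frontier = set(included)
    while frontier:  # one snowballing iteration per loop
        candidates = set()
        for paper in frontier:
            candidates |= set(references_of(paper))  # backward snowballing
            candidates |= set(citations_of(paper))   # forward snowballing
        new = {p for p in candidates - included if is_relevant(p)}
        included |= new
        frontier = new  # stop when an iteration adds nothing new
    return included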

Place, publisher, year, edition, pages
Elsevier, 2022
Keywords
Hybrid search, Scopus, Snowballing, Systematic literature reviews, Information retrieval, Risk assessment, Cost-efficient, Database searches, Hybrid search strategies, Literature studies, Search procedures, Search strategies, Systematic literature review, Search engines
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-22852 (URN), 10.1016/j.infsof.2022.106908 (DOI), 000912894400002 (ISI), 2-s2.0-85127808772 (Scopus ID)
Note

open access

Available from: 2022-04-22. Created: 2022-04-22. Last updated: 2023-12-04. Bibliographically approved.
Wohlin, C. (2021). Case Study Research in Software Engineering: It is a Case, and it is a Study, but is it a Case Study? Information and Software Technology, 133, Article ID 106514.
2021 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 133, article id 106514. Article in journal (Refereed). Published.
Abstract [en]

Background: Case studies are regularly published in the software engineering literature, and guidelines for conducting case studies are available. Based on a perception that the label "case study" is assigned to studies that are not case studies, an investigation was conducted. Objective: The aim was to investigate whether or not the label "case study" is used correctly in software engineering research. Method: To address the objective, 100 recent articles found through Scopus when searching for case studies in software engineering were investigated and classified. Results: Unfortunately, the perception of misuse of the label "case study" is correct: close to 50% of the articles investigated were judged not to be case studies according to the definition of a case study. Conclusions: We either need to ensure correct use of the label "case study", or we need another label for its definition. Given that "case study" is a well-established label, it is probably impossible to change. Thus, we introduce an alternative definition of case study emphasising its real-life context, and urge researchers to carefully follow the definitions of different research methods when presenting their research.

Place, publisher, year, edition, pages
Elsevier B.V., 2021
Keywords
Case study, Empirical, Misuse, Software engineering, Case study research, Case-studies, Real-life contexts
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-21057 (URN), 10.1016/j.infsof.2021.106514 (DOI), 000620932900006 (ISI), 2-s2.0-85100091468 (Scopus ID)
Note

open access

Available from: 2021-02-12. Created: 2021-02-12. Last updated: 2021-03-18. Bibliographically approved.
Wohlin, C. & Rainer, A. W. (2021). Challenges and recommendations to publishing and using credible evidence in software engineering. Information and Software Technology, 134, Article ID 106555.
2021 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 134, article id 106555. Article in journal (Refereed). Published.
Abstract [en]

Context: An evidence-based scientific discipline should produce, consume and disseminate credible evidence. Unfortunately, mistakes are sometimes made, resulting in the production, consumption and dissemination of invalid or otherwise questionable evidence. In the worst cases, such questionable evidence achieves the status of accepted knowledge. There is, therefore, a need to ensure that producers and consumers seek to identify and rectify such situations. Objectives: To raise awareness of the negative impact of misinterpreting evidence and of propagating that misinterpreted evidence, and to provide guidance on how to address the types of issues identified. Methods: We use a case-based approach to present and analyse the production, consumption and dissemination of evidence. The cases are based on the literature and our professional experience. They illustrate a range of challenges confronting evidence-based researchers, as well as the consequences for research when invalid evidence is not corrected in a timely way. Results: We use the cases and the challenges to formulate a framework and a set of recommendations to help the community produce and consume credible evidence. Conclusions: We encourage the community to collectively remain alert to the emergence and dissemination of invalid, or otherwise questionable, evidence, and to proactively seek to identify and rectify it.

Place, publisher, year, edition, pages
Elsevier B.V., 2021
Keywords
Credible evidence, EBSE, Evidence-based software engineering, Relevance, Validity, Case-based approach, Evidence-based, Professional experiences, Provide guidances, Scientific discipline, Software engineering
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-21172 (URN), 10.1016/j.infsof.2021.106555 (DOI), 000634797600005 (ISI), 2-s2.0-85101314361 (Scopus ID)
Note

open access

Available from: 2021-03-04. Created: 2021-03-04. Last updated: 2021-04-29. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-0460-5253
