Publications (10 of 181)
Usman, M., Ali, N. b. & Wohlin, C. (2023). A Quality Assessment Instrument for Systematic Literature Reviews in Software Engineering. e-Informatica Software Engineering Journal, 17(1), Article ID 230105.
A Quality Assessment Instrument for Systematic Literature Reviews in Software Engineering
2023 (English). In: e-Informatica Software Engineering Journal, ISSN 1897-7979, E-ISSN 2084-4840, Vol. 17, no 1, article id 230105. Article in journal (Refereed). Published.
Abstract [en]

Background: Systematic literature reviews (SLRs) have become a standard practice as part of software engineering (SE) research, although their quality varies. To build on the reviews, both for future research and industry practice, they need to be of high quality. Aim: To assess the quality of SLRs in SE, we put forward an appraisal instrument for SLRs. Method: A well-established appraisal instrument from research in healthcare was used as a starting point to develop the instrument. It was adapted to SE using guidelines, checklists, and experiences from SE. The first version was reviewed by four external experts on SLRs in SE and updated based on their feedback. To demonstrate its use, the updated version was also used by the authors to assess a sample of six selected systematic literature studies. Results: The outcome of the research is an appraisal instrument for quality assessment of SLRs in SE. The instrument includes 15 items with different options to capture the quality. The instrument also supports consolidating the items into groups, which are then used to assess the overall quality of an SLR. Conclusion: The presented instrument may be a helpful support for an appraiser in assessing the quality of SLRs in SE.
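As an illustration only, the sketch below shows one way an instrument of this shape (items consolidated into groups, groups rolled up into an overall rating) could be encoded in a review tool. The item texts, group names, response scale, and roll-up rules are assumptions for illustration, not the published 15-item instrument.

```python
from dataclasses import dataclass

# Illustrative sketch only: item wording, groups, response scale, and the
# roll-up rules below are assumptions, not the published instrument.

@dataclass
class Item:
    text: str
    group: str      # items are consolidated into groups
    response: str   # "yes", "partial", or "no"

ORDER = {"no": 0, "partial": 1, "yes": 2}

def group_rating(items, group):
    """Rate a group by its weakest item response."""
    return min((i.response for i in items if i.group == group),
               key=ORDER.get)

def overall_quality(items):
    """Roll the per-group ratings up into an overall SLR quality rating."""
    ratings = [group_rating(items, g) for g in {i.group for i in items}]
    if all(r == "yes" for r in ratings):
        return "high"
    return "low" if "no" in ratings else "moderate"

appraisal = [
    Item("Was the search strategy comprehensive?", "search", "yes"),
    Item("Was study selection performed in duplicate?", "selection", "partial"),
]
print(overall_quality(appraisal))  # -> moderate
```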

Place, publisher, year, edition, pages
Wroclaw University of Technology, 2023
Keywords
Systematic reviews, quality assessment, critical appraisal, AMSTAR 2, systematic literature review, tertiary study
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-24415 (URN)
10.37190/e-Inf230105 (DOI)
000944209900001 ()
2-s2.0-85152967598 (Scopus ID)
Funder
Knowledge Foundation, 20180127
Knowledge Foundation, 20190081
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Available from: 2023-04-05. Created: 2023-04-05. Last updated: 2023-04-28. Bibliographically approved.
Rainer, A. & Wohlin, C. (2023). Case study identification: A trivial indicator outperforms human classifiers. Information and Software Technology, 161, Article ID 107252.
Case study identification: A trivial indicator outperforms human classifiers
2023 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 161, article id 107252. Article in journal (Refereed). Published.
Abstract [en]

Context: The definition and term “case study” are not being applied consistently by software engineering researchers. We previously developed a trivial “smell indicator” to help detect the misclassification of primary studies as case studies. Objective: To evaluate the performance of the indicator. Methods: We compare the performance of the indicator against human classifiers for three datasets: two datasets comprising classifications by both authors of systematic literature studies and primary studies, and one dataset comprising only primary-study author classifications. Results: The indicator outperforms the human classifiers for all datasets. Conclusions: The indicator is successful because human classifiers “fail” to properly classify their own, and others’, primary studies. Consequently, reviewers of primary studies and authors of systematic literature studies could use the classifier as a “sanity” check for primary studies. Moreover, authors might use the indicator to double-check how they classified a study, as part of their analysis, and prior to submitting their manuscript for publication. We challenge the research community both to beat the indicator and to improve its ability to identify true case studies. © 2023 The Author(s)
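As a hedged illustration of what a trivial, term-frequency "smell" heuristic could look like, consider the sketch below; the term pattern and threshold are assumptions for illustration, not the indicator evaluated in the article.

```python
import re

# Hypothetical sketch of a trivial "smell indicator": flag a paper that
# labels itself a case study but rarely uses case-study terminology.
# The pattern and threshold are illustrative assumptions, not the
# indicator evaluated in the article.

CASE_STUDY_TERM = re.compile(r"\bcase stud(?:y|ies)\b", re.IGNORECASE)

def smells_misclassified(full_text: str, min_mentions: int = 5) -> bool:
    """True if a self-labelled case study looks like a misclassification."""
    return len(CASE_STUDY_TERM.findall(full_text)) < min_mentions

# Usage as a "sanity" check before submission or during review:
paper_text = "We performed a small-scale evaluation of the prototype ..."
print(smells_misclassified(paper_text))  # -> True
```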

Place, publisher, year, edition, pages
Elsevier, 2023
Keywords
Case study, Evaluation, Primary study, Smell indicator, Systematic review, Software engineering, Case-studies, Classifieds, Literature studies, Misclassifications, Performance, Sanity check, Classification (of information)
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-24692 (URN)
10.1016/j.infsof.2023.107252 (DOI)
001052976000001 ()
2-s2.0-85159783823 (Scopus ID)
Available from: 2023-06-02. Created: 2023-06-02. Last updated: 2023-09-08. Bibliographically approved.
Wohlin, C. & Rainer, A. (2022). Is it a case study?—A critical analysis and guidance. Journal of Systems and Software, 192, Article ID 111395.
Is it a case study?—A critical analysis and guidance
2022 (English). In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 192, article id 111395. Article in journal (Refereed). Published.
Abstract [en]

The term “case study” is not used consistently when describing studies and, most importantly, is not used according to the established definitions. Given the misuse of the term “case study”, we critically analyse articles that cite case study guidelines and report case studies. We find that only about 50% of the studies labelled “case study” are correctly labelled, and about 40% of studies labelled “case study” are actually better understood as “small-scale evaluations”. Based on our experiences conducting the analysis, we formulate support for ensuring and assuring the correct labelling of case studies. We develop a checklist and a self-assessment scheme. The checklist is intended to complement existing definitions and to encourage researchers to use the term “case study” correctly. The self-assessment scheme is intended to help the researcher identify when their empirical study is a “small-scale evaluation” and, again, encourages researchers to label their studies correctly. Finally, we develop and evaluate a smell indicator to automatically suggest when a reported case study may not actually be a case study. These three instruments have been developed to help ensure and assure that only those studies that are actually case studies are labelled as “case study”. © 2022 The Author(s)

Place, publisher, year, edition, pages
Elsevier Inc., 2022
Keywords
Case study, Checklist, Citation analysis, Guidelines, Small-scale evaluation, Smell indicator, Assessment scheme, Case-studies, Critical analysis, Guideline, Labelings, Self-assessment
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-23502 (URN)
10.1016/j.jss.2022.111395 (DOI)
000829490300010 ()
2-s2.0-85132744710 (Scopus ID)
Note

open access

Available from: 2022-08-11. Created: 2022-08-11. Last updated: 2022-08-11. Bibliographically approved.
Rainer, A. & Wohlin, C. (2022). Recruiting credible participants for field studies in software engineering research. Information and Software Technology, 151, Article ID 107002.
Recruiting credible participants for field studies in software engineering research
2022 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 151, article id 107002. Article in journal (Refereed). Published.
Abstract [en]

Context: Software practitioners are a primary provider of information for field studies in software engineering. Researchers typically recruit practitioners through some kind of sampling. But sampling may not in itself recruit the "right" participants. Objective: To assess existing guidance on participant recruitment, and to propose and illustrate a framework for recruiting professional practitioners as credible participants in field studies of software engineering. Methods: We review existing guidelines, checklists and other advisory sources on recruiting participants for field studies. We develop a framework, partly based on our prior research and on the research of others. We search for and select three exemplar studies (a case study, an interview study and a survey study) and use those to illustrate the framework. Results: Whilst existing guidance recognises the importance of recruiting participants, there is limited guidance on how to recruit the "right" participants. The framework suggests the conceptualisation of participants as "research instruments" or, alternatively, as a sampling frame for items of interest. The exemplars suggest that at least some members of the research community are aware of the need to carefully recruit the "right" participants. Conclusions: The framework is intended to encourage researchers to think differently about the involvement of practitioners in field studies of software engineering. Also, the framework identifies a number of characteristics not explicitly addressed by existing guidelines.

Place, publisher, year, edition, pages
Elsevier, 2022
Keywords
Credibility, Validity, Reliability, Data collection, Sampling, Subjects, Participants, Recruitment
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-23746 (URN)
10.1016/j.infsof.2022.107002 (DOI)
000859493100003 ()
Note

open access

Available from: 2022-10-14. Created: 2022-10-14. Last updated: 2022-10-14. Bibliographically approved.
Wohlin, C., Kalinowski, M., Romero Felizardo, K. & Mendes, E. (2022). Successful combination of database search and snowballing for identification of primary studies in systematic literature studies. Information and Software Technology, 147, Article ID 106908.
Successful combination of database search and snowballing for identification of primary studies in systematic literature studies
2022 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 147, article id 106908. Article in journal (Refereed). Published.
Abstract [en]

Background: A good search strategy is essential for a successful systematic literature study. Historically, database searches have been the norm, later complemented with snowball searches. Our conjecture is that we can perform even better searches by combining these two search approaches, referred to as a hybrid search strategy. Objective: Our main objective was to compare and evaluate a hybrid search strategy. Furthermore, we compared four alternative hybrid search strategies to assess whether we could identify more cost-efficient ways of searching for relevant primary studies. Methods: To compare and evaluate the hybrid search strategy, we replicated the search procedure in a systematic literature review (SLR) on industry–academia collaboration in software engineering. The SLR used a more “traditional” approach to searching for relevant articles for an SLR, while our replication was executed using a hybrid search strategy. Results: In our evaluation, the hybrid search strategy was superior in identifying relevant primary studies. It identified 30% more primary studies and even more studies when focusing only on peer-reviewed articles. To embrace individual viewpoints when assessing research articles and minimise the risk of missing primary studies, we introduced two new concepts, wild cards and borderline articles, when performing systematic literature studies. Conclusions: The hybrid search strategy is a strong contender for being used when performing systematic literature studies. Furthermore, alternative hybrid search strategies may be viable if selected wisely in relation to the start set for snowballing. Finally, the two new concepts were judged as essential to cater for different individual judgements and to minimise the risk of excluding primary studies that ought to be included. © 2022 The Authors
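To make the idea of a hybrid search strategy concrete, the sketch below combines a database result set with backward and forward snowballing and parks "borderline" articles for a second reviewer. The parameters `references_of`, `citations_of` and `is_relevant`, and the control flow, are assumptions for illustration; the article prescribes the strategy, not this code.

```python
# Illustrative sketch of a hybrid search strategy: a database search seeds
# backward and forward snowballing, and "borderline" articles are parked
# for a second reviewer. All parameter names and data shapes here are
# assumptions for illustration, not tooling from the article.

def hybrid_search(database_hits, references_of, citations_of, is_relevant):
    """is_relevant(paper) returns "yes", "no", or "borderline"."""
    included, borderline, seen = set(), set(), set()
    frontier = set(database_hits)
    while frontier:                          # snowball until no new papers
        paper = frontier.pop()
        if paper in seen:
            continue
        seen.add(paper)
        verdict = is_relevant(paper)
        if verdict == "yes":
            included.add(paper)
            # backward + forward snowballing from every included paper
            frontier |= set(references_of(paper)) | set(citations_of(paper))
        elif verdict == "borderline":
            borderline.add(paper)            # second reviewer decides later
    return included, borderline
```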

Place, publisher, year, edition, pages
Elsevier, 2022
Keywords
Hybrid search, Scopus, Snowballing, Systematic literature reviews, Information retrieval, Risk assessment, Cost-efficient, Database searches, Hybrid search strategies, Literature studies, Search procedures, Search strategies, Systematic literature review, Search engines
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-22852 (URN)
10.1016/j.infsof.2022.106908 (DOI)
000912894400002 ()
2-s2.0-85127808772 (Scopus ID)
Note

open access

Available from: 2022-04-22. Created: 2022-04-22. Last updated: 2023-12-04. Bibliographically approved.
Wohlin, C. (2021). Case Study Research in Software Engineering: It is a Case, and it is a Study, but is it a Case Study?. Information and Software Technology, 133, Article ID 106514.
Case Study Research in Software Engineering: It is a Case, and it is a Study, but is it a Case Study?
2021 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 133, article id 106514. Article in journal (Refereed). Published.
Abstract [en]

Background: Case studies are regularly published in the software engineering literature, and guidelines for conducting case studies are available. Based on a perception that the label “case study” is assigned to studies that are not case studies, an investigation has been conducted. Objective: The aim was to investigate whether or not the label “case study” is correctly used in software engineering research. Method: To address the objective, 100 recent articles found through Scopus when searching for case studies in software engineering have been investigated and classified. Results: Unfortunately, the perception of misuse of the label “case study” is correct. Close to 50% of the articles investigated were judged as not being case studies according to the definition of a case study. Conclusions: We either need to ensure correct use of the label “case study”, or we need another label for its definition. Given that “case study” is a well-established label, it is probably impossible to change the label. Thus, we introduce an alternative definition of case study emphasising its real-life context, and urge researchers to carefully follow the definition of different research methods when presenting their research. © 2021 The Author

Place, publisher, year, edition, pages
Elsevier B.V., 2021
Keywords
Case study, Empirical, Misuse, Software engineering, Case study research, Case-studies, Real-life contexts
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-21057 (URN)
10.1016/j.infsof.2021.106514 (DOI)
000620932900006 ()
2-s2.0-85100091468 (Scopus ID)
Note

open access

Available from: 2021-02-12. Created: 2021-02-12. Last updated: 2021-03-18. Bibliographically approved.
Wohlin, C. & Rainer, A. W. (2021). Challenges and recommendations to publishing and using credible evidence in software engineering. Information and Software Technology, 134, Article ID 106555.
Challenges and recommendations to publishing and using credible evidence in software engineering
2021 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 134, article id 106555. Article in journal (Refereed). Published.
Abstract [en]

Context: An evidence-based scientific discipline should produce, consume and disseminate credible evidence. Unfortunately, mistakes are sometimes made, resulting in the production, consumption and dissemination of invalid or otherwise questionable evidence. In the worst cases, such questionable evidence achieves the status of accepted knowledge. There is, therefore, the need to ensure that producers and consumers seek to identify and rectify such situations. Objectives: To raise awareness of the negative impact of misinterpreting evidence and of propagating that misinterpreted evidence, and to provide guidance on how to improve on the type of issues identified. Methods: We use a case-based approach to present and analyse the production, consumption and dissemination of evidence. The cases are based on the literature and our professional experience. These cases illustrate a range of challenges confronting evidence-based researchers as well as the consequences to research when invalid evidence is not corrected in a timely way. Results: We use the cases and the challenges to formulate a framework and a set of recommendations to help the community in producing and consuming credible evidence. Conclusions: We encourage the community to collectively remain alert to the emergence and dissemination of invalid, or otherwise questionable, evidence, and to proactively seek to identify and rectify it. © 2021 The Authors

Place, publisher, year, edition, pages
Elsevier B.V., 2021
Keywords
Credible evidence, EBSE, Evidence-based software engineering, Relevance, Validity, Case-based approach, Evidence-based, Professional experiences, Provide guidances, Scientific discipline, Software engineering
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-21172 (URN)
10.1016/j.infsof.2021.106555 (DOI)
000634797600005 ()
2-s2.0-85101314361 (Scopus ID)
Note

open access

Available from: 2021-03-04. Created: 2021-03-04. Last updated: 2021-04-29. Bibliographically approved.
Wohlin, C. & Runeson, P. (2021). Guiding the selection of research methodology in industry–academia collaboration in software engineering. Information and Software Technology, 140, Article ID 106678.
Guiding the selection of research methodology in industry–academia collaboration in software engineering
2021 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 140, article id 106678. Article in journal (Refereed). Published.
Abstract [en]

Background: The literature concerning research methodologies and methods has increased in software engineering in the last decade. However, there is limited guidance on selecting an appropriate research methodology for a given research study or project. Objective: Based on a selection of research methodologies suitable for software engineering research in collaboration between industry and academia, we present, discuss and compare the methodologies, aiming to provide guidance on which research methodology to choose in a given situation to ensure successful industry–academia collaboration in research. Method: Three research methodologies were chosen for two main reasons. Design Science and Action Research were selected for their usage in software engineering. We also chose a model emanating from software engineering, i.e., the Technology Transfer Model. An overview of each methodology is provided. It is followed by a discussion and an illustration concerning their use in industry–academia collaborative research. The three methodologies are then compared using a set of criteria as a basis for our guidance. Results: The discussion and comparison of the three research methodologies revealed general similarities and distinct differences. All three research methodologies are easily mapped to the general research process describe–solve–practice, while the main driver behind the formulation of the research methodologies is different. Thus, we offer guidance on selecting a research methodology given the primary research objective of a research study or project conducted in collaboration between industry and academia. Conclusions: We observe that the three research methodologies have different main objectives and differ in some characteristics, although still having a lot in common. We conclude that it is vital to make an informed decision concerning which research methodology to use. The presentation and comparison aim to guide the selection of an appropriate research methodology when conducting research in collaboration between industry and academia. © 2021 The Authors

Place, publisher, year, edition, pages
Elsevier B.V., 2021
Keywords
Action Research, Design Science, Industry–academia collaboration, Research methodology, Selecting research methodology, Technology Transfer Model, Technology transfer, Collaborative research, Informed decision, Provide guidances, Research methodologies, Research objectives, Research process, Research studies, Software engineering
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-21996 (URN)
10.1016/j.infsof.2021.106678 (DOI)
000701752600002 ()
2-s2.0-85110034189 (Scopus ID)
Funder
ELLIIT - The Linköping‐Lund Initiative on IT and Mobile Communications
Note

open access

Available from: 2021-08-07. Created: 2021-08-07. Last updated: 2021-10-15. Bibliographically approved.
Wohlin, C., Papatheocharous, E., Carlson, J., Petersen, K., Alégroth, E., Axelsson, J., . . . Gorschek, T. (2021). Towards evidence-based decision-making for identification and usage of assets in composite software: A research roadmap. Journal of Software: Evolution and Process, 33(6), Article ID e2345.
Towards evidence-based decision-making for identification and usage of assets in composite software: A research roadmap
2021 (English). In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 33, no 6, article id e2345. Article in journal (Refereed). Published.
Abstract [en]

Software engineering is decision intensive. Evidence-based software engineering is suggested for decision-making concerning the use of methods and technologies when developing software. Software development often includes the reuse of software assets, for example, open-source components. Which components to use has implications for the quality of the software (e.g., maintainability). Thus, research is needed to support decision-making for composite software. This paper presents a roadmap for research required to support evidence-based decision-making for choosing and integrating assets in composite software systems. The roadmap is developed as an output from a 5-year project in the area, including researchers from three different organizations. The roadmap is developed in an iterative process and is based on (1) systematic literature reviews of the area; (2) investigations of the state of practice, including a case survey and a survey; and (3) development and evaluation of solutions for asset identification and selection. The research activities resulted in identifying 11 areas in need of research. The areas are grouped into two categories: areas enabling evidence-based decision-making and those related to supporting the decision-making. The roadmap outlines research needs in these 11 areas. The research challenges and research directions presented in this roadmap are key areas for further research to support evidence-based decision-making for composite software. © 2021 The Authors. Journal of Software: Evolution and Process published by John Wiley & Sons Ltd.

Place, publisher, year, edition, pages
John Wiley and Sons Ltd, 2021
Keywords
asset origins, component-based software engineering (CBSE), decision-making, evidence-based software engineering, software architecture, Computer software reusability, Iterative methods, Open source software, Open systems, Software design, Surveys, Asset identification, Evidence Based Software Engineering, Evidence- based decisions, Iterative process, Open-source components, Research activities, Research challenges, Systematic literature review, Decision making
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-21293 (URN)
10.1002/smr.2345 (DOI)
000630322200001 ()
2-s2.0-85102713035 (Scopus ID)
Note

open access

Available from: 2021-03-26. Created: 2021-03-26. Last updated: 2022-09-16. Bibliographically approved.
Wohlin, C., Mendes, E., Felizardo, K. & Kalinowski, M. (2020). Guidelines for the search strategy to update systematic literature reviews in software engineering. Information and Software Technology, 127, Article ID 106366.
Guidelines for the search strategy to update systematic literature reviews in software engineering
2020 (English). In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 127, article id 106366. Article in journal (Refereed). Published.
Abstract [en]

Context: Systematic Literature Reviews (SLRs) have been adopted within Software Engineering (SE) for more than a decade to provide meaningful summaries of evidence on several topics. Many of these SLRs are now potentially not fully up-to-date, and there are no standard proposals on how to update SLRs in SE. Objective: The objective of this paper is to propose guidelines on how to best search for evidence when updating SLRs in SE, and to evaluate these guidelines using an SLR that was not employed during the formulation of the guidelines. Method: To propose our guidelines, we compare and discuss outcomes from applying different search strategies to identify primary studies in a published SLR, an SLR update, and two replications in the area of effort estimation. These guidelines are then evaluated using an SLR in the area of software ecosystems, its update and a replication. Results: Using a single iteration of forward snowballing with Google Scholar, employing the original SLR and its primary studies as the seed set, is the most cost-effective way to search for new evidence when updating SLRs. Furthermore, the results highlight the importance of having more than one researcher involved in the selection of papers when applying the inclusion and exclusion criteria. Conclusions: Our proposed guidelines, formulated based upon an effort estimation SLR, its update and two replications, were supported when using an SLR in the area of software ecosystems, its update and a replication. Therefore, we put forward that our guidelines ought to be adopted for updating SLRs in SE. © 2020
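A minimal sketch of the recommended update search follows, under the assumption that `citations_of` stands in for a Google Scholar "cited by" lookup and `screen` applies the inclusion and exclusion criteria; both helpers are hypothetical.

```python
# Minimal sketch of the recommended update search: one iteration of
# forward snowballing over a seed set consisting of the original SLR and
# its primary studies. `citations_of` stands in for a Google Scholar
# "cited by" lookup and `screen` for the inclusion/exclusion decision
# (ideally made by more than one researcher); both are assumed helpers.

def update_search(original_slr, primary_studies, citations_of, screen):
    seed_set = [original_slr, *primary_studies]
    candidates = set()
    for paper in seed_set:   # single iteration: candidates are not re-snowballed
        candidates |= set(citations_of(paper))
    already_known = {original_slr, *primary_studies}
    return [paper for paper in candidates - already_known if screen(paper)]
```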

Place, publisher, year, edition, pages
Elsevier B.V., 2020
Keywords
Searching for evidence, Snowballing, Software engineering, Systematic literature review update, Systematic literature reviews, Cost effectiveness, Ecosystems, Iterative methods, Cost effective, Effort Estimation, Google scholar, Inclusion and exclusions, Search strategies, Seed set, Software ecosystems, Systematic literature review
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-20026 (URN)
10.1016/j.infsof.2020.106366 (DOI)
000571236700010 ()
Note

open access

Available from: 2020-06-29. Created: 2020-06-29. Last updated: 2023-12-04. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-0460-5253
