Publications (10 of 23)
Ali, N. b. & Usman, M. (2019). A critical appraisal tool for systematic literature reviews in software engineering. Information and Software Technology, 112, 48-50
2019 (English) In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 112, p. 48-50. Article, review/survey (Refereed) Published
Abstract [en]

Context: Methodological research on systematic literature reviews (SLRs) in Software Engineering (SE) has so far focused on developing and evaluating guidelines for conducting systematic reviews. However, the support for quality assessment of completed SLRs has not received the same level of attention. Objective: To raise awareness of the need for a critical appraisal tool (CAT) for assessing the quality of SLRs in SE, and to initiate a community-based effort towards the development of such a tool. Method: We reviewed the literature on the quality assessment of SLRs to identify the frequently used CATs in SE and other fields. Results: We identified that the CATs currently used in SE were borrowed from medicine but have not kept pace with substantial advancements in that field. Conclusion: In this paper, we have argued for the need for a CAT for the quality appraisal of SLRs in SE. We have also identified a tool that has the potential for application in SE. Furthermore, we have presented our approach for adapting this state-of-the-art CAT for assessing SLRs in SE. © 2019 The Authors

Place, publisher, year, edition, pages
Elsevier B.V., 2019
Keywords
AMSTAR, Critical appraisal tools, Quality assessment, Software engineering, Systematic literature reviews, Community-based, Methodological research, Quality appraisals, State of the art, Systematic literature review, Systematic Review, Quality management
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-17898 (URN), 10.1016/j.infsof.2019.04.006 (DOI), 000469899100004 (), 2-s2.0-85064816057 (Scopus ID)
Note

open access

Available from: 2019-05-21 Created: 2019-05-21 Last updated: 2019-06-20. Bibliographically approved
Tanveer, B., Vollmer, A. M., Braun, S. & Ali, N. b. (2019). An evaluation of effort estimation supported by change impact analysis in agile software development. Journal of Software: Evolution and Process, 31(5), Article ID e2165.
2019 (English) In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 31, no. 5, article id e2165. Article in journal (Refereed) Published
Abstract [en]

In agile software development, functionality is added to the system in an incremental and iterative manner. Practitioners often rely on expert judgment to estimate the effort in this context. However, the impact of a change on the existing system can provide objective information that helps practitioners arrive at an informed estimate. In this regard, we have developed a hybrid method that utilizes change impact analysis information to improve effort estimation. We also developed an estimation model based on gradient boosted trees (GBT). In this study, we evaluate the performance and usefulness of our tool-supported hybrid method and the GBT model in a live iteration at Insiders Technologies GmbH, a German software company. Additionally, the solution was assessed for perceived usefulness and understandability in a study with graduate and post-graduate students. The results from the industrial evaluation show that the proposed method produces more accurate estimates than purely expert-based or purely model-based estimates. Furthermore, both students and practitioners perceived the usefulness and understandability of the method positively.

Place, publisher, year, edition, pages
WILEY, 2019
Keywords
agile, case study, change impact analysis, effort estimation, expert-based
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-18019 (URN), 10.1002/smr.2165 (DOI), 000468316500004 ()
Available from: 2019-06-14 Created: 2019-06-14 Last updated: 2019-06-17. Bibliographically approved
Ali, N. b., Engström, E., Taromirad, M., Mousavi, M. R., Minhas, N. M., Helgesson, D., . . . Varshosaz, M. (2019). On the search for industry-relevant regression testing research. Empirical Software Engineering
2019 (English) In: Empirical Software Engineering, ISSN 1382-3256, E-ISSN 1573-7616. Article in journal (Refereed) Epub ahead of print
Abstract [en]

Regression testing is a means to assure that a change in the software, or its execution environment, does not introduce new defects. It involves the expensive undertaking of rerunning test cases. Several techniques have been proposed to reduce the number of test cases to execute in regression testing; however, there is no research on how to assess the industrial relevance and applicability of such techniques. We conducted a systematic literature review with the following two goals: firstly, to enable researchers to design and present regression testing research with a focus on industrial relevance and applicability, and secondly, to facilitate the industrial adoption of such research by addressing the attributes of concern from the practitioners' perspective. Using a reference-based search approach, we identified 1068 papers on regression testing. We then reduced the scope to only include papers with explicit discussions about relevance and applicability (i.e., mainly studies involving industrial stakeholders). Uniquely in this literature review, practitioners were consulted at several steps to increase the likelihood of achieving our aim of identifying factors important for relevance and applicability. We have summarised the results of these consultations and an analysis of the literature in three taxonomies, which capture aspects of industrial relevance regarding the regression testing techniques. Based on these taxonomies, we mapped 38 papers reporting the evaluation of 26 regression testing techniques in industrial settings.

Place, publisher, year, edition, pages
Springer-Verlag New York, 2019
Keywords
Industrial relevance, Recommendations, Regression testing, Systematic literature review, Taxonomy
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-17364 (URN), 10.1007/s10664-018-9670-1 (DOI)
Available from: 2018-11-29 Created: 2018-11-29 Last updated: 2019-03-07. Bibliographically approved
Mendes, E., Ali, N. b., Counsell, S. & Baldassarre, M. T. (2019). Special issue on evaluation and assessment in software engineering. Journal of Systems and Software, 151, 224-225
2019 (English) In: Journal of Systems and Software, ISSN 0164-1212, E-ISSN 1873-1228, Vol. 151, p. 224-225. Article in journal, Editorial material (Refereed) Published
Place, publisher, year, edition, pages
Elsevier Inc., 2019
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-17703 (URN), 10.1016/j.jss.2019.01.066 (DOI), 000462105200014 (), 2-s2.0-85061811984 (Scopus ID)
Available from: 2019-03-07 Created: 2019-03-07 Last updated: 2019-04-18. Bibliographically approved
Ali, N. b. & Usman, M. (2018). Reliability of search in systematic reviews: Towards a quality assessment framework for the automated-search strategy. Information and Software Technology, 99, 133-147
2018 (English) In: Information and Software Technology, ISSN 0950-5849, E-ISSN 1873-6025, Vol. 99, p. 133-147. Article in journal (Refereed) Published
Abstract [en]

Context: The trust in systematic literature reviews (SLRs) to provide credible recommendations is critical for establishing evidence-based software engineering (EBSE) practice. The reliability of the SLR as a method is not a given and largely depends on the rigor of the attempt to identify, appraise and aggregate evidence. Previous research, by comparing SLRs on the same topic, has identified search as one of the reasons for discrepancies in the included primary studies. This affects the reliability of an SLR, as the papers identified and included in it are likely to influence its conclusions. Objective: We aim to propose a comprehensive evaluation checklist to assess the reliability of an automated-search strategy used in an SLR. Method: Using a literature review, we identified guidelines for designing and reporting automated search as a primary search strategy. Using the aggregated design, reporting and evaluation guidelines, we formulated a comprehensive evaluation checklist. The value of this checklist was demonstrated by assessing the reliability of search in 27 recent SLRs. Results: Using the proposed evaluation checklist, several additional issues (not captured by the current evaluation checklist) related to the reliability of search in recent SLRs were identified. These issues severely limit the coverage of literature by the search and also the possibility to replicate it. Conclusion: Instead of solely relying on expensive replications to assess the reliability of SLRs, this work provides means to objectively assess the likely reliability of a search strategy used in an SLR. It highlights the often-assumed aspect of repeatability of search when using automated search. Furthermore, by explicitly considering repeatability and consistency as sub-characteristics of a reliable search, it provides a more comprehensive evaluation checklist than the ones currently used in EBSE. © 2018 Elsevier B.V.

Keywords
Credibility Guidelines, Reliability, Search strategies, Secondary studies, Systematic literature reviews
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-15974 (URN), 10.1016/j.infsof.2018.02.002 (DOI), 000432767900012 ()
Available from: 2018-03-22 Created: 2018-03-22 Last updated: 2018-06-07. Bibliographically approved
Josyula, J., Panamgipalli, S., Usman, M., Britto, R. & Ali, N. b. (2018). Software Practitioners' Information Needs and Sources: A Survey Study. In: Proceedings - 2018 9th International Workshop on Empirical Software Engineering in Practice, IWESEP 2018. Paper presented at the 9th International Workshop on Empirical Software Engineering in Practice (IWESEP), December 4, 2018, Nara, Japan (pp. 1-6). IEEE
2018 (English) In: Proceedings - 2018 9th International Workshop on Empirical Software Engineering in Practice, IWESEP 2018, IEEE, 2018, p. 1-6. Conference paper, Published paper (Refereed)
Abstract [en]

Software engineering practitioners have information needs to support strategic, tactical and operational decision-making. However, there is scarce research on understanding which information needs exist and how they are currently fulfilled in practice. This study aims to identify the information needs, the frequency of their occurrence, the sources of information used to satisfy the needs, and the perception of practitioners regarding the usefulness of the sources currently used. For this purpose, a literature review was conducted to aggregate the current state of understanding in this area. We built on the results of the literature review and developed further insights through in-depth interviews with 17 practitioners. We further triangulated the findings from these two investigations by conducting a web-based survey (with 83 completed responses). Based on the results, we infer that information needs regarding product design, product architecture and requirements gathering are the most frequently faced. Software practitioners mostly use blogs, community forums, product documentation, and discussions with colleagues to address their information needs.

Place, publisher, year, edition, pages
IEEE, 2018
Series
International Workshop on Empirical Software Engineering in Practice, ISSN 2333-519X
Keywords
evidence-based software engineering, EBSE, Information retrieval, knowledge sharing
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-17830 (URN), 10.1109/IWESEP.2018.00009 (DOI), 000462914400001 (), 978-1-7281-0439-3 (ISBN)
Conference
9th International Workshop on Empirical Software Engineering in Practice (IWESEP), December 4, 2018, Nara, Japan
Available from: 2019-04-18 Created: 2019-04-18 Last updated: 2019-06-28. Bibliographically approved
Molléri, J. S., Ali, N. b., Petersen, K., Minhas, T. N. & Chatzipetrou, P. (2018). Teaching students critical appraisal of scientific literature using checklists. In: ACM International Conference Proceeding Series. Paper presented at the 3rd European Conference of Software Engineering Education, ECSEE, Seeon Monastery, Germany (pp. 8-17). Association for Computing Machinery
2018 (English) In: ACM International Conference Proceeding Series, Association for Computing Machinery, 2018, p. 8-17. Conference paper, Published paper (Refereed)
Abstract [en]

Background: Teaching students to critically appraise scientific literature is an important goal for a postgraduate research methods course. Objective: To investigate whether the application of checklists for assessing the scientific rigor of empirical studies supports students in reviewing case study research and experiments. Methods: We employed an experimental design where 76 students (in pairs) used two checklists to evaluate two papers (reporting a case study and an experiment) each. We compared the students' assessments against ratings from more senior researchers. We also collected data on students' perception of using the checklists. Results: The consistency of the students' ratings and their accuracy when compared to the ratings from seniors varied. One factor seemed to be that the clearer the reporting, the easier it was for students to judge the quality of studies. Students perceived checklist items related to data analysis as difficult to assess. Conclusion: As expected, this study reinforces the need for clear reporting, as it is important that authors write to enable synthesis and quality assessment. With clearer reporting, the novices performed well in assessing the quality of the empirical work, which supports the continued use of checklists in the course as a means of introducing scientific reviews. © 2018 Association for Computing Machinery.

Place, publisher, year, edition, pages
Association for Computing Machinery, 2018
Keywords
Case study, Checklist, Critical appraisal, Experiment, Student, Design of experiments, Engineering education, Experiments, Software engineering, Teaching, Case study research, Continued use, Empirical studies, Post-graduate research, Quality assessment, Scientific literature, Students
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-16892 (URN), 10.1145/3209087.3209099 (DOI), 2-s2.0-85049867400 (Scopus ID), 9781450363839 (ISBN)
Conference
3rd European Conference of Software Engineering Education, ECSEE, Seeon Monastery, Germany
Available from: 2018-08-20 Created: 2018-08-20 Last updated: 2019-04-24. Bibliographically approved
Jabbari, R., Ali, N. b., Petersen, K. & Tanveer, B. (2018). Towards a benefits dependency network for DevOps based on a systematic literature review. Journal of Software: Evolution and Process, 30(11), Article ID e1957.
2018 (English) In: Journal of Software: Evolution and Process, ISSN 2047-7473, E-ISSN 2047-7481, Vol. 30, no. 11, article id e1957. Article in journal (Refereed) Published
Abstract [en]

DevOps, as a new way of thinking for software development and operations, has received much attention in industry, while it has not yet been thoroughly investigated in academia. The objective of this study is to characterize DevOps by exploring its central components in terms of principles, practices and their relations to the principles, challenges of DevOps adoption, and benefits reported in the peer-reviewed literature. As a key objective, we also aim to identify the relations between DevOps practices and benefits in a systematic manner. A systematic literature review was conducted. We also used the concept of a benefits dependency network to synthesize the findings, in particular, to specify dependencies between DevOps practices and link the practices to benefits. We found that in many cases, DevOps characteristics, i.e., principles, practices, benefits, and challenges, were not defined in sufficient detail in the peer-reviewed literature. In addition, only a few empirical studies are available, which can be attributed to the nascency of DevOps research. An initial version of the DevOps benefits dependency network has also been derived. The definition of DevOps principles and practices should be emphasized given the novelty of the concept. Further empirical studies are needed to improve the benefits dependency network presented in this study. © 2018 John Wiley & Sons, Ltd.

Place, publisher, year, edition, pages
John Wiley and Sons Ltd, 2018
Keywords
Benefits and values, Challenges, Development and operations, DevOps, Principles and practices, Systematic literature review, Computer software, Software design
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-16924 (URN), 10.1002/smr.1957 (DOI), 000450237000008 (), 2-s2.0-85050720397 (Scopus ID)
Available from: 2018-08-21 Created: 2018-08-21 Last updated: 2018-11-29. Bibliographically approved
Minhas, N. M., Petersen, K., Ali, N. b. & Wnuk, K. (2017). Regression testing goals: View of practitioners and researchers. In: 24th Asia-Pacific Software Engineering Conference Workshops (APSECW). Paper presented at the 24th Asia-Pacific Software Engineering Conference, Nanjing (pp. 25-32). IEEE
2017 (English) In: 24th Asia-Pacific Software Engineering Conference Workshops (APSECW), IEEE, 2017, p. 25-32. Conference paper, Published paper (Refereed)
Abstract [en]

Context: Regression testing is a well-researched area. However, the majority of regression testing techniques proposed by researchers are not getting the attention of practitioners. Communication gaps between industry and academia and disparity in regression testing goals are the main reasons. Close collaboration can help in bridging the communication gaps and resolving the disparities. Objective: The study aims at exploring the views of academics and practitioners about the goals of regression testing. The purpose is to investigate the commonalities and differences in their viewpoints and to define some common goals for the success of regression testing. Method: We conducted a focus group study with 7 testing experts from industry and academia: 4 testing practitioners from 2 companies and 3 researchers from 2 universities participated in the study. We followed the GQM approach to elicit the regression testing goals, information needs, and measures. Results: 43 regression testing goals were identified by the participants, which were reduced to 10 on the basis of similarity among the identified goals. During the priority assignment process, 5 goals were discarded because the priority assigned to them was very low. Participants identified 47 information needs/questions required to evaluate the success of regression testing with reference to goal G5 (confidence), which were then reduced to 10 on the basis of similarity. Finally, we identified measures to gauge the information needs/questions corresponding to goal G5. Conclusions: We observed that the participation level of practitioners and researchers during the elicitation of goals and questions was the same. We found a certain level of agreement between the participants regarding the regression testing definitions and goals, but some disagreement regarding the priorities of the goals. We also identified the need to implement a regression testing evaluation framework in the participating companies.

Place, publisher, year, edition, pages
IEEE, 2017
Keywords
Regression testing, Regression testing goals, GQM, Focus group
National Category
Computer Systems
Identifiers
urn:nbn:se:bth-16009 (URN), 10.1109/APSECW.2017.23 (DOI), 000428319200008 (), 978-1-5386-2649-8 (ISBN)
Conference
24th Asia-Pacific Software Engineering Conference, Nanjing
Projects
EASE (Embedded Applications Software Engineering, ease.cs.lth.se)
Available from: 2018-03-22 Created: 2018-03-22 Last updated: 2018-12-05. Bibliographically approved
Engström, E., Petersen, K., Ali, N. & Bjarnason, E. (2017). SERP-test: a taxonomy for supporting industry-academia communication. Software quality journal, 25(4), 1269-1305
2017 (English) In: Software Quality Journal, ISSN 0963-9314, E-ISSN 1573-1367, Vol. 25, no. 4, p. 1269-1305. Article in journal (Refereed) Published
Abstract [en]

This paper presents the construction and evaluation of SERP-test, a taxonomy aimed at improving communication between researchers and practitioners in the area of software testing. SERP-test can be utilized for direct communication in industry–academia collaborations. It may also facilitate indirect communication between practitioners adopting software engineering research and researchers who are striving for industry relevance. SERP-test was constructed through a systematic and goal-oriented approach which included literature reviews and interviews with practitioners and researchers. SERP-test was evaluated through an online survey and by utilizing it in an industry–academia collaboration project. SERP-test comprises four facets along which both research contributions and practical challenges may be classified: Intervention, Scope, Effect target and Context constraints. This paper explains the available categories for each of these facets (i.e., their definitions and rationales) and presents examples of categorized entities. Several tasks may benefit from SERP-test, such as formulating research goals from a problem perspective, describing practical challenges in a researchable fashion, analyzing primary studies in a literature review, or identifying relevant points of comparison and generalization of research.

Place, publisher, year, edition, pages
Springer-Verlag New York, 2017
Keywords
Classification (of information), Software engineering, Taxonomies, Testing, Context, Industry relevance, Intervention, Methodology, Scope, Software testing
National Category
Software Engineering
Identifiers
urn:nbn:se:bth-13103 (URN), 10.1007/s11219-016-9322-x (DOI), 000415973100007 (), 2-s2.0-84976367380 (Scopus ID)
Available from: 2016-10-04 Created: 2016-10-03 Last updated: 2019-03-08. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-7266-5632