Considering rigor and relevance when evaluating test driven development: A systematic review
Blekinge Institute of Technology, School of Computing.
2014 (English). In: Information and Software Technology, ISSN 0950-5849, Vol. 56, no 4, p. 375-394. Article in journal (Refereed), Published
Abstract [en]

Context: Test driven development (TDD) has been extensively researched and compared to traditional approaches (test last development, TLD). Existing literature reviews show varying results for TDD.

Objective: This study investigates how the conclusions of existing literature reviews change when taking two study quality dimensions into account, namely rigor and relevance.

Method: In this study a systematic literature review has been conducted, and the results of the identified primary studies have been analyzed with respect to rigor and relevance scores using the assessment rubric proposed by Ivarsson and Gorschek (2011). Rigor and relevance are rated on a scale, which is explained in this paper. Four categories of studies were defined based on high/low rigor and relevance.

Results: We found that studies in the four categories come to different conclusions. In particular, studies with high rigor and relevance scores show clear results for improvement in external quality, which seems to come with a loss of productivity. At the same time, high rigor and relevance studies only investigate a small set of variables. Other categories contain many studies showing no difference, hence biasing the results negatively for the overall set of primary studies. Given the classification, differences to previous literature reviews could be highlighted.

Conclusion: Strong indications are obtained that external quality is positively influenced, which has to be further substantiated by industry experiments and longitudinal case studies. Future studies in the high rigor and relevance category would contribute greatly by focusing on a wider set of outcome variables (e.g. internal code quality). We also conclude that considering rigor and relevance in TDD evaluation is important, given the differences in results between categories and in comparison to previous reviews.
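The four-category classification described in the abstract can be sketched as follows. The scoring scale, the high/low threshold, and the example studies below are illustrative assumptions for the sketch, not values taken from the paper or from the Ivarsson and Gorschek rubric.

```python
# Hypothetical sketch: place primary studies into one of four quadrants
# based on high/low rigor and relevance scores. Threshold and study
# scores are invented for illustration only.

def categorize(rigor: float, relevance: float, threshold: float = 0.5) -> str:
    """Return the rigor/relevance quadrant for one study."""
    r = "high" if rigor >= threshold else "low"
    v = "high" if relevance >= threshold else "low"
    return f"{r} rigor / {v} relevance"

# Hypothetical primary studies with normalized (0..1) scores.
studies = {
    "Study A": (0.8, 0.9),
    "Study B": (0.7, 0.2),
    "Study C": (0.3, 0.6),
    "Study D": (0.1, 0.1),
}

for name, (rig, rel) in studies.items():
    print(name, "->", categorize(rig, rel))
```

Conclusions can then be compared per quadrant rather than pooled across all primary studies, which is the core of the review's analysis.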

Place, publisher, year, edition, pages
Elsevier, 2014. Vol. 56, no 4, p. 375-394.
Keyword [en]
External code quality, Internal code quality, Productivity, Test-driven development (TDD), Test-last development (TLD)
National Category
Software Engineering
Identifiers
URN: urn:nbn:se:bth-6724
DOI: 10.1016/j.infsof.2014.01.002
ISI: 000332904000001
Local ID: oai:bth.se:forskinfo06DBBEDC7A01C1C9C1257CA60037CAF4
OAI: oai:DiVA.org:bth-6724
DiVA: diva2:834257
Available from: 2014-04-23. Created: 2014-03-25. Last updated: 2015-06-30. Bibliographically approved.

Open Access in DiVA

No full text

By author/editor
Petersen, Kai
By organisation
School of Computing
In the same journal
Information and Software Technology