Is it possible to disregard obsolete requirements? A family of experiments in software effort estimation
2021 (English). In: Requirements Engineering, ISSN 0947-3602, E-ISSN 1432-010X, no. 3, p. 459-480. Article in journal (Refereed). Published.
Abstract [en]
Expert judgement is a common method for software effort estimation in practice today. Estimators are often shown extra, obsolete requirements together with the real ones to be implemented, yet only one previous study has examined whether such practices bias the estimates. We conducted a family of six experiments, with both students and practitioners as research subjects (N = 461), to study and quantify the effect of obsolete requirements on software effort estimation, using a Bayesian data analysis approach to investigate different aspects of this effect. We also argue for, and exemplify, how a Bayesian approach lets us be more confident in our results and enables further studies with small sample sizes. We found that the presence of obsolete requirements triggered an over-estimation of effort across all experiments, although the effect was smaller in a field setting than with students as subjects. The over-estimations triggered by the obsolete requirements were systematically around twice the percentage of obsolete requirements included, but with a large 95% credible interval. The results have implications for both research and practice: the systematic error we found should be accounted for in studies on software estimation and, perhaps more importantly, in estimation practice, to avoid over-estimation due to this error. We partly explain this error as stemming from the cognitive bias of anchoring-and-adjustment, i.e. the obsolete requirements anchored the estimate to a much larger piece of software. However, further studies are needed to accurately predict this effect. © 2021, The Author(s).
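To make the quantified effect concrete: if, say, 10% of the requirements shown to an estimator are obsolete, the reported multiplier of roughly two would predict an over-estimation of about 20%, with wide uncertainty. The Python sketch below is not the authors' actual model; the data, priors, and variable names are illustrative assumptions. It only shows how such a multiplier and its 95% credible interval could be estimated with a Bayesian approach (here using PyMC and ArviZ).

    import numpy as np
    import pymc as pm
    import arviz as az

    # Illustrative data only (not from the paper): observed per-subject
    # over-estimation (%) for tasks where 10% of the shown requirements
    # were obsolete.
    obsolete_pct = 10.0
    overestimation = np.array([18.0, 25.0, 22.0, 15.0, 30.0, 19.0])

    with pm.Model():
        # k is the multiplier relating the obsolete share to the
        # over-estimation; the paper reports k around 2 with a wide
        # 95% credible interval, which motivates this weak prior.
        k = pm.Normal("k", mu=2.0, sigma=1.0)
        sigma = pm.HalfNormal("sigma", sigma=10.0)
        pm.Normal("obs", mu=k * obsolete_pct, sigma=sigma,
                  observed=overestimation)
        idata = pm.sample(2000, tune=1000, random_seed=1)

    # 95% highest-density (credible) interval for the multiplier k
    print(az.hdi(idata, var_names=["k"], hdi_prob=0.95))

A small model like this also illustrates the paper's methodological point: with few observations, the posterior and its credible interval express the remaining uncertainty directly, rather than forcing a binary significance decision.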
Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2021. no. 3, p. 459-480
Keywords [en]
Expert judgement, Family of experiments, Software effort estimation, Systematic error, Bayesian networks, Students, 95% credible intervals, Bayesian approaches, Bayesian data analysis, Research subjects, Small Sample Size, Software estimation, Errors
National Category
Software Engineering
Identifiers
URN: urn:nbn:se:bth-21375
DOI: 10.1007/s00766-021-00351-7
ISI: 000639369700001
Scopus ID: 2-s2.0-85104452176
OAI: oai:DiVA.org:bth-21375
DiVA, id: diva2:1548391
Part of project
SERT - Software Engineering ReThought, Knowledge Foundation
Note
open access
A correction to this paper has been published: https://doi.org/10.1007/s00766-021-00354-4
Available from: 2021-04-30. Created: 2021-04-30. Last updated: 2022-01-11. Bibliographically approved.