Test Set Diameter: Quantifying the Diversity of Sets of Test Cases
2016 (English). In: Proceedings - 2016 IEEE International Conference on Software Testing, Verification and Validation (ICST), IEEE Computer Society, 2016. Conference paper (Refereed)
A common and natural intuition among software testers is that test cases need to differ if a software system is to be tested properly and its quality ensured. Consequently, much research has gone into formulating distance measures for how test cases, their inputs and/or their outputs differ. However, common to these proposals is that they are data type specific and/or calculate the diversity only between pairs of test inputs, traces or outputs. We propose a new metric to measure the diversity of sets of tests: the test set diameter (TSDm). It extends our earlier, pairwise test diversity metrics based on recent advances in information theory regarding the calculation of the normalized compression distance (NCD) for multisets. A key advantage is that TSDm is a universal measure of diversity and so can be applied to any test set regardless of data type of the test inputs (and, moreover, to other test-related data such as execution traces). But this universality comes at the cost of greater computational effort compared to competing approaches. Our experiments on four different systems show that the test set diameter can help select test sets with higher structural and fault coverage than random selection even when only applied to test inputs. This can enable early test design and selection, prior to even having a software system to test, and complement other types of test automation and analysis. We argue that this quantification of test set diversity creates a number of opportunities to better understand software quality and provides practical ways to increase it.
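The abstract describes TSDm as an application of the normalized compression distance (NCD) for multisets to whole test sets. The record does not specify the compressor or the exact algorithm, so the following is only a minimal illustrative sketch: it assumes zlib as the (approximate) compressor and uses the multiset NCD formula NCD(X) = (C(X) − min C(x)) / max C(X∖{x}), where C is compressed length and X∖{x} drops one element. The function names (`compressed_len`, `tsdm`) are hypothetical, not from the paper.

```python
import zlib

def compressed_len(data: bytes) -> int:
    """Approximate Kolmogorov complexity C(.) by zlib-compressed length."""
    return len(zlib.compress(data, 9))

def tsdm(test_inputs: list[bytes]) -> float:
    """Sketch of a test set diameter: multiset NCD over serialized test inputs.

    Higher values suggest a more diverse set; near 0 suggests redundancy.
    """
    whole = compressed_len(b"".join(test_inputs))
    min_single = min(compressed_len(x) for x in test_inputs)
    # C of each leave-one-out sub-multiset, approximated by concatenation.
    max_subset = max(
        compressed_len(b"".join(test_inputs[:i] + test_inputs[i + 1:]))
        for i in range(len(test_inputs))
    )
    return (whole - min_single) / max_subset

# A set of dissimilar inputs should score higher than a set of near-duplicates.
diverse = [b"a" * 200, b"b" * 200, b"c" * 200]
redundant = [b"a" * 200, b"a" * 200, b"a" * 200]
```

Because the measure only needs a byte serialization of each input, the same sketch applies to strings, traces, or structured data, which is the universality the abstract claims; a real implementation would use a stronger compressor and the selection procedure from the paper rather than this concatenation shortcut.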
Place, publisher, year, edition, pages
IEEE Computer Society, 2016. (Preprint: arXiv abs/1506.03482)
Keywords
Empirical study; Information theory; Software testing; Test selection
Identifiers
URN: urn:nbn:se:bth-11201
DOI: 10.1109/ICST.2016.33
ISBN: 9781509018260
OAI: oai:DiVA.org:bth-11201
DiVA: diva2:894204
Conference
9th IEEE International Conference on Software Testing, Verification and Validation, ICST 2016; Chicago