When evaluating data mining algorithms applied to real-world problems, there are often several conflicting criteria that need to be considered. We investigate the concept of generic multi-criteria (MC) classifier and algorithm evaluation and compare existing methods. This comparison makes explicit some of the important characteristics of MC analysis and focuses on determining which method is most suitable for further development. Generic MC methods can be described as frameworks for combining evaluation metrics; they are generic in the sense that the metrics used are not dictated by the method, and the choice of metrics instead depends on the problem at hand. We discuss some scenarios that benefit from the application of generic MC methods and synthesize what we believe are attractive properties of the reviewed methods into a new method called the candidate evaluation function (CEF). Finally, we present a case study in which we apply CEF to trade off several criteria when solving a real-world problem.
Traditionally, learning algorithms and classifiers are evaluated with respect to a single criterion. This has been shown to be too limited for many learning problems. This study investigates existing alternatives for generic multi-criteria evaluation and presents a new method together with a case study.