The requirements of real-world data mining problems vary extensively. It is plausible to assume that some of these requirements can be expressed as application-specific performance metrics. An algorithm designed to maximize performance according to one learning metric may not produce the best possible result according to such application-specific metrics. We have implemented A Metric-based One Rule Inducer (AMORI), for which the learning metric can be selected. We have compared the performance of this algorithm when embedding each of three learning metrics (classification accuracy, the F-measure, and the area under the ROC curve) on 19 UCI data sets. In addition, we have compared the results of AMORI with those obtained using an existing rule learning algorithm of similar complexity (One Rule) and a state-of-the-art rule learner (Ripper). The experiments show that a performance gain is achieved, for all included metrics, when identical metrics are used for learning and evaluation. We also show that each AMORI/metric combination outperforms One Rule when identical learning and evaluation metrics are used. AMORI also performs acceptably when compared with Ripper. Overall, the results suggest that metric-based learning is a viable approach.
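
To illustrate the core idea, the following is a minimal sketch of One Rule induction parameterised by a learning metric, assuming discrete feature values and binary 0/1 labels. The function and variable names (`induce_one_rule`, `metric_fn`) and the use of scikit-learn metric functions are illustrative assumptions, not AMORI's actual implementation.

```python
# Minimal sketch: One Rule induction parameterised by a learning metric.
# Assumes discrete feature values and binary 0/1 class labels; this is an
# illustrative reconstruction, not AMORI's actual implementation.
from collections import Counter, defaultdict
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

def induce_one_rule(X, y, metric_fn):
    """Select the single feature whose majority-class rule scores best
    under metric_fn(y_true, y_pred) on the training data."""
    default = Counter(y).most_common(1)[0][0]   # fallback for unseen values
    best = None                                 # (score, feature, value -> class)
    for f in range(len(X[0])):
        # Count class labels per value of feature f.
        buckets = defaultdict(Counter)
        for row, label in zip(X, y):
            buckets[row[f]][label] += 1
        # One Rule: each value of the feature predicts its majority class.
        table = {v: counts.most_common(1)[0][0] for v, counts in buckets.items()}
        y_pred = [table.get(row[f], default) for row in X]
        score = metric_fn(y, y_pred)            # learning metric drives selection
        if best is None or score > best[0]:
            best = (score, f, table)
    return best

# Toy usage: the same inducer is run with each candidate learning metric.
X = [("sunny", "hot"), ("sunny", "mild"), ("rain", "mild"), ("rain", "cool")]
y = [0, 0, 1, 1]
for name, fn in [("accuracy", accuracy_score),
                 ("F-measure", f1_score),
                 ("AUC", roc_auc_score)]:       # AUC over hard labels is coarse;
    print(name, induce_one_rule(X, y, fn))     # a real learner would rank scores
```

The point of the sketch is that rule selection, not the rule representation, changes with the embedded metric: the same candidate rules are generated in every case, but whichever metric is supplied decides which candidate is kept.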