In this thesis we present a theoretical investigation of the feasibility of using a problem-specific inductive bias for backpropagation-trained neural networks. We argue that if a learning algorithm is biased towards optimizing a certain performance measure, it is plausible that it will achieve a higher score when evaluated with that same measure. We use the term measure function for a multi-criteria evaluation function that can also serve as the internal objective of a learning algorithm, customizing the algorithm's bias to a specific problem; hence the term measure-based learning algorithms. We discuss the characteristics of the most commonly used performance measures and establish similarities among them. The characteristics of individual measures and the established similarities are then correlated with the characteristics of the backpropagation algorithm, in order to explore the applicability of introducing a measure function into backpropagation-trained neural networks. Our study shows that certain characteristics of the error backpropagation mechanism and the inherent gradient search method (for example, gradient search requires a differentiable objective) limit the set of measures that can serve as the measure function. We also highlight the significance of taking the representational bias of the neural network into account when developing methods for measure-based learning. The overall analysis shows that measure-based learning is a promising area of research with potential for further exploration, and we suggest directions for future research that might help realize measure-based neural networks.
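To make the idea concrete, the sketch below (ours, not from the thesis) treats a differentiable surrogate of the F-measure as the measure function and optimizes it directly by generic gradient ascent. The names (soft_f1, train_measure_based) and the use of finite-difference gradients are illustrative assumptions, chosen so that any smooth measure function can be plugged in unchanged.

```python
# A minimal sketch of measure-based training, assuming a differentiable
# "soft" F1 surrogate stands in for the discrete F-measure. All names
# here are illustrative, not taken from the thesis.
import numpy as np

rng = np.random.default_rng(0)

def predict(w, X):
    """Logistic model: probability of the positive class."""
    return 1.0 / (1.0 + np.exp(-X @ w))

def soft_f1(w, X, y):
    """Differentiable surrogate of F1: uses probabilities in place of
    hard 0/1 predictions, so gradient search can optimize it."""
    p = predict(w, X)
    tp = np.sum(p * y)          # soft true positives
    fp = np.sum(p * (1 - y))    # soft false positives
    fn = np.sum((1 - p) * y)    # soft false negatives
    return 2 * tp / (2 * tp + fp + fn + 1e-12)

def train_measure_based(measure, X, y, steps=500, lr=0.5, eps=1e-5):
    """Gradient *ascent* on an arbitrary differentiable measure.
    Gradients are taken by central finite differences so any smooth
    measure function can be substituted without rederiving anything."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i in range(len(w)):
            d = np.zeros_like(w)
            d[i] = eps
            grad[i] = (measure(w + d, X, y) - measure(w - d, X, y)) / (2 * eps)
        w += lr * grad  # ascend: a higher measure score is better
    return w

# Toy imbalanced data: two informative features, ~10% positives.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 1.3).astype(float)
X = np.hstack([X, np.ones((200, 1))])  # bias column

w = train_measure_based(soft_f1, X, y)
print("soft F1 after training:", round(soft_f1(w, X, y), 3))
```

The design point the sketch illustrates is exactly the thesis's constraint: the measure function had to be replaced by a smooth surrogate before gradient search could touch it.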
The study is an investigation of the feasibility of using a generic inductive bias for backpropagation artificial neural networks, one that could incorporate any single problem-specific performance metric, or a combination of them, as the objective to be optimized. We have identified several limitations of both the standard error backpropagation mechanism and the inherent gradient search approach. These limitations suggest exploring methods other than backpropagation, as well as using global search methods instead of gradient search. We also emphasize the importance of taking the representational bias of the neural network into consideration, since only a combination of procedural and representational bias can yield highly optimized solutions.
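As a contrast with the gradient-based sketch above, the following sketch (again ours rather than the thesis's) shows the kind of global search this paragraph alludes to: a simple (1+1) evolution strategy that optimizes the exact, non-differentiable F1 of hard 0/1 predictions, something gradient-based backpropagation cannot use directly. The names (hard_f1, one_plus_one_es) and the toy data are illustrative assumptions.

```python
# A minimal sketch of the global-search alternative, assuming a simple
# (1+1) evolution strategy. It optimizes the *exact* F1 of thresholded
# predictions, which is piecewise constant in the weights and therefore
# useless to gradient search. Names are illustrative, not from the thesis.
import numpy as np

rng = np.random.default_rng(1)

def hard_f1(w, X, y):
    """Exact F1 on hard 0/1 predictions: its gradient is zero almost
    everywhere, so only derivative-free search can optimize it."""
    pred = (X @ w > 0).astype(float)
    tp = np.sum(pred * y)
    fp = np.sum(pred * (1 - y))
    fn = np.sum((1 - pred) * y)
    return 2 * tp / (2 * tp + fp + fn + 1e-12)

def one_plus_one_es(measure, X, y, steps=2000, sigma=0.3):
    """(1+1)-ES: mutate the weights, keep the mutant if the measure
    does not decrease. Needs only measure *values*, no gradients."""
    w = rng.normal(size=X.shape[1])
    best = measure(w, X, y)
    for _ in range(steps):
        cand = w + rng.normal(scale=sigma, size=w.shape)
        score = measure(cand, X, y)
        if score >= best:
            w, best = cand, score
    return w, best

# Toy linearly separable-ish data with a bias column.
X = rng.normal(size=(200, 2))
y = (X[:, 0] - X[:, 1] > 0.8).astype(float)
X = np.hstack([X, np.ones((200, 1))])

w, f1 = one_plus_one_es(hard_f1, X, y)
print("exact F1 found by (1+1)-ES:", round(f1, 3))
```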