ABSTRACT

Artificial neural networks have been proposed as a tool for machine learning [1,2,3,4], and many results have been obtained regarding their application to practical problems in robotics control, vision, pattern recognition, grammatical inference, and other areas [5,6,7,8]. In these roles, a neural network is trained to recognize complex associations between inputs and outputs that are presented during a supervised training cycle. These associations are incorporated into the weights of the network, which encode a distributed representation of the information contained in the input patterns. Once trained, the network computes an input/output mapping which, if the training data were representative enough, will closely match the unknown rule that produced the original data. Massive parallelism of computation, as well as noise and fault tolerance, are often offered as justifications for the use of neural networks as a learning paradigm.
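
To make the supervised setting described above concrete, the following minimal sketch (not part of the original paper; the architecture, toy data set, and learning rate are illustrative assumptions) trains a small feedforward network by gradient descent on a handful of input/output pairs and then evaluates the learned mapping.

import numpy as np

# Toy training set: a small collection of input/output associations
# (illustrative only; any supervised data set could be substituted).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for epoch in range(5000):
    # Forward pass: the network's current input/output mapping.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradient of the squared error with respect
    # to the weights, which hold the distributed representation.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

# After training, the weights encode the learned associations; the
# network now approximates the rule that generated the training data.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))

The sketch uses a single hidden layer and squared-error gradient descent only for brevity; the paper's discussion applies equally to other architectures and training procedures.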