Among the many models of neurons and their interconnections, feedforward artificial neural networks (FFANNs) are the most popular because of their simplicity and effectiveness. Some obstacles, however, must still be cleared before they become truly reliable, smart information-processing systems. Difficulties such as long learning times and local minima may not affect FFANNs as much as the question of generalization ability does, because a network needs only one training session and may then be used for a long time. The generalization ability of ANNs, however, is of great interest for both theoretical understanding and practical use. This paper reports our observations about randomness in the generalization ability of FFANNs. A novel method for measuring generalization ability is defined; it can be used to identify the degree of randomness in the generalization ability of learning systems. If an FFANN architecture shows randomness in its generalization ability for a given problem, multiple networks can be used to improve it. We have developed a model, called the voting model, for predicting the generalization ability of multiple networks. We show that if the correct-classification probability of a single network is greater than one half, then the generalization ability of a voting network increases as the number of networks in it is increased. Further analysis shows that the VC dimension of the voting-network model may increase monotonically as the number of networks is increased. This result is counterintuitive, since it is generally believed that the smaller the VC dimension, the better the generalization ability.
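The abstract's voting-model claim follows the pattern of the Condorcet jury theorem: if each network is correct with probability greater than one half, a majority vote over more networks is correct more often. A minimal sketch of that calculation, assuming independent, identically accurate networks with a hypothetical per-network accuracy `p` (the paper's actual model and its independence assumptions are not given here):

```python
from math import comb

def majority_vote_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent classifiers,
    each correct with probability p, produces the correct label.
    n is assumed odd so that ties cannot occur."""
    assert n % 2 == 1, "use an odd number of voters to avoid ties"
    # Majority is correct when k > n/2 of the n voters are correct:
    # sum the binomial probabilities for k = (n//2)+1 .. n.
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# With p = 0.6, accuracy rises with ensemble size:
# n = 1 -> 0.6, n = 3 -> 0.648, n = 5 -> 0.68256
for n in (1, 3, 5):
    print(n, majority_vote_accuracy(0.6, n))
```

For `p < 0.5` the same formula decreases with `n`, which matches the abstract's condition that the single-network accuracy must exceed one half for voting to help.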
ASJC Scopus subject areas
- Computer Science Applications
- Computer Networks and Communications
- Artificial Intelligence