Abstract
Among the many models of neurons and their interconnections, feedforward artificial neural networks (FFANNs), being quite simple, are the most popular. However, several obstacles must still be cleared before they become truly reliable, intelligent information-processing systems. Difficulties such as long learning times and local minima may matter less than generalization ability, because a network needs to be trained only once and may then be used for a long time. The generalization ability of ANNs is therefore of great interest for both theoretical understanding and practical use. This paper reports our observations on randomness in the generalization ability of FFANNs. A novel method for measuring generalization ability is defined; it can be used to identify the degree of randomness in the generalization ability of learning systems. If an FFANN architecture shows randomness in generalization ability for a given problem, then multiple networks can be used to improve it. We have developed a model, called the voting model, for predicting the generalization ability of multiple networks. It is shown that if the correct-classification probability of a single network is greater than one half, then the generalization ability increases as the number of networks increases.
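The abstract's final claim is a Condorcet-style majority-vote argument: if each network is independently correct with probability p > 1/2, the probability that a majority of n networks is correct grows with n. A minimal numeric sketch of that binomial calculation, assuming independent, identically accurate networks (the helper `majority_correct` is illustrative, not from the paper):

```python
from math import comb

def majority_correct(p: float, n: int) -> float:
    """Probability that a strict majority of n independent networks,
    each correct with probability p, classifies correctly.
    n is assumed odd so a strict majority always exists."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With p = 0.7 > 1/2, ensemble accuracy rises toward 1 as n grows.
for n in (1, 3, 5, 11):
    print(n, round(majority_correct(0.7, n), 4))
```

For p = 0.7 the sequence is increasing (0.7, 0.784, 0.8369, ...), consistent with the paper's claim; for p < 1/2 the same formula decreases with n, which is why the p > 1/2 condition matters.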
Original language | English (US) |
---|---|
Pages | 131-136 |
Number of pages | 6 |
State | Published - 1994 |
Event | Proceedings of the 1994 IEEE International Conference on Neural Networks. Part 1 (of 7) - Orlando, FL, USA Duration: Jun 27 1994 → Jun 29 1994 |
Other | Proceedings of the 1994 IEEE International Conference on Neural Networks. Part 1 (of 7) |
---|---|
City | Orlando, FL, USA |
Period | 6/27/94 → 6/29/94 |
ASJC Scopus subject areas
- Software