Randomness in generalization ability: a source to improve it?

Research output: Contribution to conference › Paper

1 Citation (Scopus)

Abstract

Among the many models of neurons and their interconnections, feedforward artificial neural networks (FFANNs), being quite simple, are the most popular. However, several obstacles must still be cleared before they become truly reliable, smart information-processing systems. Difficulties such as long learning times and local minima may matter less than the question of generalization ability, because a network needs only one training session and may then be used for a long time. The generalization ability of ANNs is therefore of great interest for both theoretical understanding and practical use. This paper reports our observations on randomness in the generalization ability of FFANNs. A novel method for measuring generalization ability is defined; it can be used to identify the degree of randomness in the generalization ability of learning systems. If an FFANN architecture shows randomness in generalization ability for a given problem, then multiple networks can be used to improve it. We have developed a model, called the voting model, for predicting the generalization ability of multiple networks. It is shown that if the probability of correct classification by a single network is greater than one half, then the generalization ability increases as the number of networks increases.
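The paper's voting model is not reproduced here, but its closing claim — that ensemble accuracy grows with the number of networks whenever a single network is correct with probability greater than one half — can be illustrated with a standard majority-vote calculation. This is a sketch under the usual assumption that the networks' errors are independent; the function name is ours, not from the paper.

```python
from math import comb

def majority_accuracy(p: float, n: int) -> float:
    """Probability that a majority of n independent classifiers,
    each correct with probability p, votes for the right class.
    n is assumed odd so that ties cannot occur."""
    k = n // 2 + 1  # minimum number of correct votes for a majority
    return sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(k, n + 1))

# With p > 1/2, the ensemble accuracy grows toward 1 as n increases.
for n in (1, 3, 5, 11, 21):
    print(n, round(majority_accuracy(0.6, n), 4))
```

For p = 0.6, three networks already beat one (0.648 vs. 0.6), consistent with the paper's observation that adding networks helps precisely when a single network is better than chance.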

Original language: English (US)
Pages: 131-136
Number of pages: 6
State: Published - Dec 1 1994
Event: Proceedings of the 1994 IEEE International Conference on Neural Networks. Part 1 (of 7) - Orlando, FL, USA
Duration: Jun 27 1994 - Jun 29 1994

Other

Other: Proceedings of the 1994 IEEE International Conference on Neural Networks. Part 1 (of 7)
City: Orlando, FL, USA
Period: 6/27/94 - 6/29/94

Fingerprint

  • Neural networks
  • Network architecture
  • Neurons
  • Learning systems

ASJC Scopus subject areas

  • Software

Cite this

Sarkar, D. (1994). Randomness in generalization ability: a source to improve it? Paper presented at the Proceedings of the 1994 IEEE International Conference on Neural Networks. Part 1 (of 7), Orlando, FL, USA. pp. 131-136.

