Learning in connectionist networks using the Alopex algorithm
Main Authors: | |
---|---|
Format: | Conference Proceeding |
Language: | English |
Summary: | The Alopex algorithm is described as a universal learning algorithm. The algorithm is stochastic and can be used for learning in networks of any topology, including those with feedback. The neurons may use any transfer function, and learning may involve minimization of any error measure. The efficacy of the algorithm is investigated by applying it to multilayer perceptrons to solve problems such as XOR, parity, and encoder. These results are compared with results obtained using a backpropagation learning algorithm. Taking the specific case of the XOR problem, it is shown that a smoother error surface with fewer local minima can be obtained by using an information-theoretic error measure. An appropriate 'annealing' scheme for the algorithm is described, and it is shown that Alopex can escape from local minima. |
---|---|
DOI: | 10.1109/IJCNN.1992.287068 |
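As a rough illustration of the kind of update the summary describes, the sketch below applies an Alopex-like rule to a tiny 2-2-1 perceptron on XOR. It assumes the correlation-based form commonly associated with Alopex: every weight is perturbed by ±δ at each iteration, the sign of each weight's next move is chosen stochastically from how its last move correlated with the change in the global error, and a temperature T is gradually lowered. The network size, step size, sum-squared error, and geometric cooling schedule are assumptions of this sketch, not the paper's exact rule or annealing scheme.

```python
import numpy as np

# Hypothetical sketch of an Alopex-style update on the XOR problem.
# Assumptions not taken from the summary above: the 2-2-1 network,
# sum-squared error, step size delta, logistic move probability, and
# the geometric temperature decay used as a stand-in for annealing.

rng = np.random.default_rng(0)

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def error(w):
    """Sum-squared error of a 2-2-1 MLP whose 9 parameters are packed in w."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = sigmoid(X @ W1 + b1)          # hidden layer
    out = sigmoid(h @ W2 + b2)        # one output per input pattern
    return np.sum((out - y) ** 2)

delta, T = 0.01, 1.0                  # step size and initial temperature
w = rng.uniform(-1.0, 1.0, size=9)
step = rng.choice([-delta, delta], size=9)
E_prev = error(w)

for _ in range(20000):
    w = w + step                          # every weight moves by +/- delta
    E = error(w)
    C = step * (E - E_prev)               # correlate each move with the error change
    p_neg = 1.0 / (1.0 + np.exp(-C / T))  # probability that the next move is -delta
    step = np.where(rng.random(9) < p_neg, -delta, delta)
    E_prev = E
    T = max(1e-3, T * 0.9995)             # assumed annealing: slowly cool T

print("final XOR error:", error(w))
```

The hyperparameters here are purely illustrative; in practice the step size, temperature schedule, and iteration count would need tuning, and the paper's own annealing scheme should be consulted for the intended behavior.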