Morphogenic neural networks encode abstract rules by data
Published in: Information Sciences 2002-05, Vol. 142 (1), p. 249-273
Main Authors: ,
Format: Article
Language: English
Summary: The classical McCulloch and Pitts neural unit is widely used today in artificial neural networks (NNs) and essentially acts as a non-linear filter. Classical NNs can only approximate a mapping between inputs and outputs in the form of a lookup table or "black box", so the underlying abstract relationships between inputs and outputs remain hidden. Motivated by the need, in the study of neural and neurofuzzy architectures, for a more general concept than the neural unit, or node, originally introduced by McCulloch and Pitts, we developed in our previous work the concept of the morphogenetic neural (MN) network. In this paper we show that, in contrast to the classical NN, the MN network can encode abstract, symbolic expressions that characterize the mapping between inputs and outputs, and thus reveal the internal structure hidden in the data. Because of its more general nature, the MN network is capable of abstraction, data reduction, and the discovery of often implicit relationships. Uncertainty can be expressed by combining evidence theory, concepts of quantum mechanics, and a morphogenetic neural network. With the proposed morphogenetic neural network it is possible to perform both rigorous and approximate computations (i.e., including semantic uncertainty). The internal structure in data can be discovered by identifying "invariants", i.e., by finding (generally implicit) dependencies between variables and parameters in the model.
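The abstract's contrast between a black-box lookup table and an explicit symbolic invariant can be illustrated with a toy sketch. This is my own illustration, not code or an algorithm from the paper: all names and the candidate-expression search are hypothetical, and a real MN network would be far more general than this brute-force check.

```python
# Toy illustration (not from the paper): discovering an "invariant",
# i.e. an implicit symbolic dependency between variables, directly from data.
# A classical NN would only fit a black-box mapping y = f(x1, x2); here we
# instead test simple candidate symbolic expressions g(x1, x2, y) = 0 and
# keep the one whose residual vanishes on every sample.

# Sample data secretly generated from the implicit relation y = x1 * x2.
samples = [(x1, x2, x1 * x2)
           for x1 in (1.0, 2.0, 3.0)
           for x2 in (0.5, 1.5, 2.5)]

# Candidate invariants, each written as a residual that should be zero.
candidates = {
    "y - (x1 + x2)": lambda x1, x2, y: y - (x1 + x2),
    "y - x1 * x2":   lambda x1, x2, y: y - x1 * x2,
    "y - x1 ** x2":  lambda x1, x2, y: y - x1 ** x2,
}

def holds(g, data, tol=1e-9):
    """True if the candidate invariant is (numerically) zero on all samples."""
    return all(abs(g(x1, x2, y)) <= tol for x1, x2, y in data)

discovered = [name for name, g in candidates.items() if holds(g, samples)]
print(discovered)  # → ['y - x1 * x2']
```

The surviving expression is an explicit, human-readable rule rather than a table of input-output pairs, which is the kind of "internal structure hidden in the data" the abstract refers to.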
ISSN: 0020-0255; 1872-6291
DOI: 10.1016/S0020-0255(02)00168-8