Morphogenic neural networks encode abstract rules by data
The classical McCulloch and Pitts neural unit is widely used today in artificial neural networks (NNs) and essentially acts as a non-linear filter. Classical NNs are only capable of approximating a mapping between inputs and outputs in the form of a lookup table or “black box”, and the underlying abstract relationships between inputs and outputs remain hidden. Motivated by the need, in the study of neural and neurofuzzy architectures, for a more general concept than that of the neural unit, or node, originally introduced by McCulloch and Pitts, we developed in our previous work the concept of the morphogenetic neural (MN) network. In this paper we show that, in contrast to the classical NN, the MN network can encode abstract, symbolic expressions that characterize the mapping between inputs and outputs, and thus reveal the internal structure hidden in the data. Because of the more general nature of the MN, MN networks are capable of abstraction, data reduction and discovering, often implicit, relationships. Uncertainty can be expressed by a combination of evidence theory, concepts of quantum mechanics and a morphogenetic neural network. With the proposed morphogenetic neural network it is possible to perform both rigorous and approximate computations (i.e. including semantic uncertainty). The internal structure in data can be discovered by identifying “invariants”, i.e. by finding (generally implicit) dependencies between variables and parameters in the model.
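The abstract's notion of an “invariant” — an implicit dependency between observed variables — can be illustrated with a minimal sketch. This is not the paper's algorithm; it only shows, under the assumption of an exact linear dependency, how such a hidden relation can be recovered from data via the null space of the data matrix:

```python
# Illustrative sketch (NOT the paper's method): recover an implicit linear
# dependency ("invariant") among observed variables from data alone.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = rng.normal(size=100)
z = 2.0 * x - 3.0 * y            # hidden invariant: 2x - 3y - z = 0
data = np.column_stack([x, y, z])

# The right-singular vector with (near-)zero singular value gives
# coefficients c with data_centered @ c ~ 0, i.e. a dependency among columns.
centered = data - data.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
c = vt[-1]                       # direction of the invariant

print(s[-1])                     # near zero: a dependency exists
print(c / c[0])                  # proportional to [2, -3, -1]
```

Normalizing by the first coefficient makes the recovered relation comparable to the planted one regardless of the sign and scale ambiguity inherent in singular vectors.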
Published in: | Information sciences 2002-05, Vol.142 (1), p.249-273 |
---|---|
Main Authors: | Resconi, G. ; van der Wal, A.J. |
Format: | Article |
Language: | English |
Subjects: | Evidence theory ; Filtering ; Internal structure of data ; Invariant ; Morphogenetic neuron ; Neural network ; Orthogonality ; Quantum computing ; Semantic uncertainty ; Synergy |
DOI: | 10.1016/S0020-0255(02)00168-8 |
ISSN: | 0020-0255 |
EISSN: | 1872-6291 |
Publisher: | Elsevier Inc. |