Training neural networks with heterogeneous data
Data pruning and ordered training are two methods and the results of a small theory that attempts to formalize neural network training with heterogeneous data. Data pruning is a simple process that attempts to remove noisy data. Ordered training is a more complex method that partitions the data into a number of categories and assigns training times to those assuming that data size and training time have a polynomial relation. Both methods derive from a set of premises that form the ‘axiomatic’ basis of our theory. Both methods have been applied to a time-delay neural network—which is one of the main learners in Microsoft's Tablet PC handwriting recognition system. Their effect is presented in this paper along with a rough estimate of their effect on the overall multi-learner system. The handwriting data and the chosen language are Italian.
Published in: | Neural networks, 2005-07, Vol.18 (5), p.595-601 |
---|---|
Main Authors: | Drakopoulos, John A.; Abdulkader, Ahmad |
Format: | Article |
Language: | English |
Subjects: | Algorithms ; Applied sciences ; Artificial Intelligence ; Boosting ; Classification ; Computer science; control theory; systems ; Connectionism. Neural networks ; Data emphasizing ; Data Interpretation, Statistical ; Exact sciences and technology ; Game Theory ; Growing cell structure ; Heterogeneous data ; Models, Statistical ; Neural gas ; Neural networks ; Neural Networks (Computer) ; Training schedule |
container_end_page | 601 |
---|---|
container_issue | 5 |
container_start_page | 595 |
container_title | Neural networks |
container_volume | 18 |
creator | Drakopoulos, John A.; Abdulkader, Ahmad |
description | Data pruning and ordered training are two methods and the results of a small theory that attempts to formalize neural network training with heterogeneous data. Data pruning is a simple process that attempts to remove noisy data. Ordered training is a more complex method that partitions the data into a number of categories and assigns training times to those assuming that data size and training time have a polynomial relation. Both methods derive from a set of premises that form the ‘axiomatic’ basis of our theory. Both methods have been applied to a time-delay neural network—which is one of the main learners in Microsoft's Tablet PC handwriting recognition system. Their effect is presented in this paper along with a rough estimate of their effect on the overall multi-learner system. The handwriting data and the chosen language are Italian. (An abbreviated version of some portions of this article appeared in Drakopoulos and Abdulkader, 2005, published under the IEEE copyright.) A minimal illustrative code sketch of the two methods appears after the record fields below. |
doi_str_mv | 10.1016/j.neunet.2005.06.011 |
format | article |
publisher | Elsevier Ltd, Oxford |
identifier | ISSN: 0893-6080; EISSN: 1879-2782; DOI: 10.1016/j.neunet.2005.06.011; PMID: 16095874 |
ispartof | Neural networks, 2005-07, Vol.18 (5), p.595-601 |
issn | 0893-6080; 1879-2782 |
language | eng |
source | ScienceDirect Freedom Collection |
subjects | Algorithms ; Applied sciences ; Artificial Intelligence ; Boosting ; Classification ; Computer science; control theory; systems ; Connectionism. Neural networks ; Data emphasizing ; Data Interpretation, Statistical ; Exact sciences and technology ; Game Theory ; Growing cell structure ; Heterogeneous data ; Models, Statistical ; Neural gas ; Neural networks ; Neural Networks (Computer) ; Training schedule |
title | Training neural networks with heterogeneous data |
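The abstract describes two methods: data pruning (remove data judged noisy) and ordered training (partition the data into categories and assign each a training time, assuming a polynomial relation between data size and training time). The sketch below illustrates only that description; the noise-scoring heuristic, the exponent value, and all function and parameter names are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def prune_data(samples, labels, noise_score, threshold=0.9):
    """Data pruning, per the abstract: drop samples judged noisy.

    `noise_score` is an assumed caller-supplied heuristic mapping a
    (sample, label) pair to [0, 1]; the paper's actual criterion is
    not reproduced here. `samples` and `labels` are numpy arrays.
    """
    scores = np.array([noise_score(x, y) for x, y in zip(samples, labels)])
    keep = scores < threshold  # boolean mask of samples to retain
    return samples[keep], labels[keep]

def ordered_training_times(category_sizes, total_time, exponent=0.5):
    """Ordered training, per the abstract: split a training-time budget
    across data categories, assuming training time grows polynomially
    with data size (t_i proportional to n_i ** exponent).

    The exponent value is an illustrative assumption; the paper derives
    the relation from its own premises.
    """
    weights = np.asarray(category_sizes, dtype=float) ** exponent
    return total_time * weights / weights.sum()
```

For example, `ordered_training_times([1000, 250, 50], total_time=10.0)` splits a 10-hour budget into roughly 5.8, 2.9, and 1.3 hours across the three categories, so larger categories get more time but sub-linearly under the assumed square-root relation.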