Deep Learning Methods for Underwater Target Feature Extraction and Recognition
The classification and recognition of underwater acoustic signals have long been an important research topic in underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel-frequency cepstral coefficients are used as methods of underwater acoustic signal feature extraction.
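A minimal sketch of extracting the Mel-frequency cepstral coefficient (MFCC) features named above from a single recording, assuming the `librosa` library and a placeholder file name `ship_noise.wav` (both are illustrative assumptions, not details from this record):

```python
# Minimal sketch: MFCC features for one underwater recording.
# Assumes the librosa library; "ship_noise.wav" is a placeholder path.
import numpy as np
import librosa


def extract_mfcc(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Return an (n_mfcc, n_frames) matrix of MFCCs for one recording."""
    y, sr = librosa.load(path, sr=None)  # keep the native sampling rate
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)


if __name__ == "__main__":
    feats = extract_mfcc("ship_noise.wav")  # hypothetical input file
    print(feats.shape)
```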
Published in: | Computational intelligence and neuroscience, 2018-01, Vol. 2018 (2018), p. 1-10 |
---|---|
Main Authors: | Kang, Baolin; Shi, Jianfei; Qiu, Mengran; Wang, Kejun; Hu, Gang; Peng, Yuan |
Format: | Article |
Language: | English |
container_end_page | 10 |
container_issue | 2018 |
container_start_page | 1 |
container_title | Computational intelligence and neuroscience |
container_volume | 2018 |
creator | Kang, Baolin; Shi, Jianfei; Qiu, Mengran; Wang, Kejun; Hu, Gang; Peng, Yuan |
description | The classification and recognition of underwater acoustic signals have long been an important research topic in underwater acoustic signal processing. Currently, the wavelet transform, the Hilbert-Huang transform, and Mel-frequency cepstral coefficients are used as methods of underwater acoustic signal feature extraction. In this paper, a method for feature extraction and identification of underwater noise data based on a CNN and an ELM is proposed. An automatic feature-extraction method for underwater acoustic signals is developed using a deep convolutional network, and an underwater target recognition classifier is built on an extreme learning machine. Although convolutional neural networks can perform both feature extraction and classification, that capability relies mainly on a fully connected layer trained by gradient descent, whose generalization ability is limited and suboptimal; an extreme learning machine (ELM) is therefore used in the classification stage. First, the CNN learns deep and robust features; the fully connected layers are then removed, and an ELM fed with the CNN features is used as the classifier. Experiments on an actual data set of civil ships achieved a 93.04% recognition rate; compared with traditional Mel-frequency cepstral coefficient and Hilbert-Huang features, the recognition rate is greatly improved. |
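As a rough sketch of the classification stage described in the abstract, the code below fits an extreme learning machine on precomputed feature vectors standing in for the CNN's convolutional-layer outputs (with the fully connected layers removed). The hidden-layer size, feature dimension, and toy data are assumptions for illustration, not values from the paper.

```python
# Illustrative ELM classifier over precomputed "CNN" features.
# Sizes and data below are placeholders, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)


def elm_fit(X, y, n_hidden=256, n_classes=4):
    """Random hidden layer + closed-form (least-squares) output weights."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    T = np.eye(n_classes)[y]                     # one-hot targets
    beta = np.linalg.pinv(H) @ T                 # output weights via pseudoinverse
    return W, b, beta


def elm_predict(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)


# Toy usage with random stand-in features for four ship classes.
X_train = rng.normal(size=(200, 128))            # 200 samples, 128-dim features
y_train = rng.integers(0, 4, size=200)
W, b, beta = elm_fit(X_train, y_train)
print(elm_predict(X_train[:5], W, b, beta))
```

The output weights are solved in a single least-squares step rather than by gradient descent, which is the contrast the abstract draws with the CNN's gradient-trained fully connected layers.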
doi_str_mv | 10.1155/2018/1214301 |
format | article |
contributor | Köker, Raşit |
publisher | Hindawi Publishing Corporation, Cairo, Egypt |
pmid | 29780407 |
rights | Copyright © 2018 Gang Hu et al.; open access article distributed under the Creative Commons Attribution License |
orcidid | 0000-0003-0511-1271; 0000-0002-8755-8829 |
fulltext | fulltext |
identifier | ISSN: 1687-5265 |
ispartof | Computational intelligence and neuroscience, 2018-01, Vol.2018 (2018), p.1-10 |
issn | 1687-5265 (print); 1687-5273 (electronic) |
language | eng |
recordid | cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_5892262 |
source | Wiley Online Library Open Access; Publicly Available Content (ProQuest) |
subjects | Acoustic noise; Acoustics; Analysis; Automatic classification; Classification; Classifiers; Convolution; Data processing; Deep learning; Feature extraction; Feature recognition; Learning algorithms; Machine Learning; Methods; Neural networks; Neural Networks, Computer; Object recognition (Computers); Pattern recognition; Pattern Recognition, Automated - methods; Ships; Signal processing; Signal Processing, Computer-Assisted; Target recognition; Teaching methods; Underwater acoustics; Voice recognition; Water; Wavelet transforms |
title | Deep Learning Methods for Underwater Target Feature Extraction and Recognition |