Harnessing the Potential of Advanced Large Vision Models to Enhance the Detection of Optoelectronic Imaging Signals
This study focuses on exploring the use of SAM (Segment Anything Model), an advanced visual foundation model, to enhance the detection of optoelectronic imaging signals. We fine-tuned the mask encoder of SAM and used the Electron Microscopy Dataset as the experimental dataset. To evaluate the effect, the U-net model was also used as a comparison benchmark. The experimental results show that the IoU metrics of SAM outperform those of U-net when only a small amount of data is available, demonstrating that the fine-tuned SAM has a unique advantage in recognizing photoelectric imaging signals.
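The comparison described in the abstract rests on the IoU (intersection-over-union) metric between predicted and ground-truth segmentation masks. The record does not include the authors' evaluation code, so the following is only a minimal sketch of the standard binary-mask IoU computation, assuming NumPy; the function and variable names are illustrative.

```python
import numpy as np

def binary_iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Standard intersection-over-union for two binary masks of equal shape."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        # Both masks empty: score 1.0 by convention (nothing was missed).
        return 1.0
    intersection = np.logical_and(pred, target).sum()
    return float(intersection) / float(union)

# Example: prediction covers 2 of the 3 true foreground pixels and nothing else.
pred = np.zeros((4, 4), dtype=np.uint8)
target = np.zeros((4, 4), dtype=np.uint8)
pred[0, 0:2] = 1
target[0, 0:3] = 1
print(binary_iou(pred, target))  # 2 / 3 ≈ 0.667
```

Averaging this score over a held-out set is the usual way such IoU comparisons between SAM and U-net would be reported, though the paper's exact evaluation protocol is not given in this record.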
Main Authors: Liang, Dunyou; Chang, Xin; Peng, Feng; Wu, Bing; Cui, Xiaojun; Zuo, Xin; Ma, JianChao; Zhang, Guoyu
Format: Conference Proceeding
Language: English
Subjects: Analytical models; Data models; Deep learning; Image segmentation; Large Vision Models; Measurement; Optical fiber communication; Optoelectronic imaging; Training data; Visualization
Online Access: Request full text
container_end_page | 3 |
container_start_page | 1 |
creator | Liang, Dunyou; Chang, Xin; Peng, Feng; Wu, Bing; Cui, Xiaojun; Zuo, Xin; Ma, JianChao; Zhang, Guoyu |
description | This study focuses on exploring the use of SAM (Segment Anything Model), an advanced visual foundation model, to enhance the detection of optoelectronic imaging signals. We fine-tuned the mask encoder of SAM and used the Electron Microscopy Dataset as the experimental dataset. To evaluate the effect, the U-net model was also used as a comparison benchmark. The experimental results show that the IoU metrics of SAM outperform those of U-net when only a small amount of data is available, demonstrating that the fine-tuned SAM has a unique advantage in recognizing photoelectric imaging signals. |
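The description states that the "mask encoder" of SAM was fine-tuned; the released Segment Anything architecture consists of an image encoder, a prompt encoder, and a lightweight mask decoder, and the mask decoder is the component typically fine-tuned, so that is what the sketch below assumes. This is not the authors' code: it uses Meta's segment-anything package, and the checkpoint filename, box-prompt input, learning rate, and loss function are illustrative assumptions rather than details from the paper.

```python
# Hypothetical sketch: freeze SAM's heavy encoders and fine-tune only the
# lightweight mask decoder on a small segmentation dataset.
import torch
from segment_anything import sam_model_registry  # Meta's segment-anything package

# Public ViT-B checkpoint; the paper's SAM variant is not stated in this record.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")

for p in sam.image_encoder.parameters():
    p.requires_grad = False          # image encoder stays frozen
for p in sam.prompt_encoder.parameters():
    p.requires_grad = False          # prompt encoder stays frozen

optimizer = torch.optim.Adam(sam.mask_decoder.parameters(), lr=1e-4)  # lr assumed
loss_fn = torch.nn.BCEWithLogitsLoss()

def train_step(image_embedding, gt_mask, box_prompt):
    """One illustrative optimization step on a precomputed image embedding."""
    sparse, dense = sam.prompt_encoder(points=None, boxes=box_prompt, masks=None)
    low_res_masks, _ = sam.mask_decoder(
        image_embeddings=image_embedding,
        image_pe=sam.prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse,
        dense_prompt_embeddings=dense,
        multimask_output=False,
    )
    # Upsample the decoder's low-resolution mask logits to the label size.
    pred = torch.nn.functional.interpolate(
        low_res_masks, size=gt_mask.shape[-2:], mode="bilinear", align_corners=False
    )
    loss = loss_fn(pred, gt_mask.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the encoders keeps the number of trainable parameters small, which is consistent with the abstract's claim that the fine-tuned model holds an advantage when only a small amount of data is available.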
doi_str_mv | 10.1109/ICOCN63276.2024.10647230 |
format | conference_proceeding |
fullrecord | (raw IEEE Xplore source record omitted; recoverable details: publisher IEEE; conference date 2024-07-26; pages 1-3; EISBN 9798350367652) |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2771-3059 |
ispartof | 2024 22nd International Conference on Optical Communications and Networks (ICOCN), 2024, p.1-3 |
issn | 2771-3059 |
language | eng |
recordid | cdi_ieee_primary_10647230 |
source | IEEE Xplore All Conference Series |
subjects | Analytical models; Data models; Deep learning; Image segmentation; Large Vision Models; Measurement; Optical fiber communication; Optoelectronic imaging; Training data; Visualization |
title | Harnessing the Potential of Advanced Large Vision Models to Enhance the Detection of Optoelectronic Imaging Signals |