Adaptive Tensor-Based Feature Extraction for Pupil Segmentation in Cataract Surgery
Cataract surgery remains the only definitive treatment for visually significant cataracts, which are a major cause of preventable blindness worldwide. Successful performance of cataract surgery relies on stable dilation of the pupil. Automated pupil segmentation from surgical videos can assist surgeons in detecting risk factors for pupillary instability prior to the development of surgical complications. However, surgical illumination variations, surgical instrument obstruction, and lens material hydration during cataract surgery can limit pupil segmentation accuracy. To address these problems, we propose a novel method named adaptive wavelet tensor feature extraction (AWTFE). AWTFE is designed to enhance the accuracy of deep learning-powered pupil recognition systems. First, we represent the correlations among spatial information, color channels, and wavelet subbands by constructing a third-order tensor. We then utilize higher-order singular value decomposition to eliminate redundant information adaptively and estimate pupil feature information. We evaluated the proposed method by conducting experiments with state-of-the-art deep learning segmentation models on our BigCat dataset, consisting of 5,700 annotated intraoperative images from 190 cataract surgeries, and the public CaDIS dataset. The experimental results reveal that the AWTFE method effectively identifies features relevant to the pupil region and improved the overall performance of segmentation models by up to 2.26% (BigCat) and 3.31% (CaDIS). Incorporation of the AWTFE method led to statistically significant improvements in segmentation performance (P < 1.29 × 10⁻¹⁰ for each model) and yielded the highest-performing model overall (Dice coefficients of 94.74% and 96.71% for the BigCat and CaDIS datasets, respectively). In performance comparisons, AWTFE consistently outperformed other feature extraction methods in enhancing model performance. In addition, the proposed AWTFE method significantly improved pupil recognition performance by up to 2.87% in particularly challenging phases of cataract surgery.
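The abstract outlines the feature-extraction pipeline only at a high level: a 2-D wavelet decomposition per color channel, a third-order tensor over spatial locations, color channels, and wavelet subbands, and a higher-order SVD (HOSVD) that discards low-energy, redundant components. The Python sketch below (NumPy + PyWavelets) illustrates that general idea; it is not the authors' AWTFE implementation, and the one-level Haar decomposition, the 0.95 energy threshold, and all function names are illustrative assumptions.

```python
import numpy as np
import pywt  # PyWavelets, for the 2-D discrete wavelet transform


def unfold(tensor, mode):
    """Mode-n unfolding: move `mode` to the front and flatten the rest."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)


def fold(matrix, mode, shape):
    """Inverse of `unfold` for a tensor of the given target shape."""
    rest = [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(matrix.reshape([shape[mode]] + rest), 0, mode)


def mode_product(tensor, matrix, mode):
    """n-mode product: multiply the tensor along `mode` by `matrix`."""
    new_shape = tensor.shape[:mode] + (matrix.shape[0],) + tensor.shape[mode + 1:]
    return fold(matrix @ unfold(tensor, mode), mode, new_shape)


def wavelet_tensor_features(rgb_image, wavelet="haar", energy=0.95):
    """Hypothetical wavelet-tensor feature map (not the published AWTFE).

    1. One-level 2-D DWT of each color channel -> 4 subbands per channel.
    2. Stack into a third-order tensor with modes (spatial, color, subband).
    3. Truncated HOSVD: for each mode, keep the leading singular vectors
       covering `energy` of the spectrum and project the tensor onto them,
       discarding low-energy (redundant) components.
    """
    per_channel = []
    for c in range(rgb_image.shape[2]):                       # R, G, B
        cA, (cH, cV, cD) = pywt.dwt2(rgb_image[:, :, c].astype(float), wavelet)
        per_channel.append(np.stack([cA, cH, cV, cD], axis=-1))
    stacked = np.stack(per_channel, axis=-2)                  # (h, w, 3, 4) at half resolution
    h, w = stacked.shape[:2]
    T = stacked.reshape(h * w, 3, 4)                          # modes: spatial, color, subband

    approx = T
    for mode in range(3):
        U, s, _ = np.linalg.svd(unfold(approx, mode), full_matrices=False)
        k = int(np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), energy)) + 1
        Uk = U[:, :k]
        # Rank-k filter along this mode: project onto Uk's span and map back.
        approx = mode_product(mode_product(approx, Uk.T, mode), Uk, mode)

    # One scalar feature per spatial location (at DWT resolution).
    return np.linalg.norm(approx.reshape(h, w, -1), axis=-1)
```

Truncating each mode by cumulative singular-value energy is one simple way to make the rank selection "adaptive"; the paper's actual selection criterion and how the resulting features are fed to the segmentation network may differ.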
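Segmentation quality is summarized by Dice coefficients (94.74% on BigCat and 96.71% on CaDIS). Below is a minimal, self-contained sketch of that overlap metric with a toy mask pair; the function name and the epsilon smoothing term are illustrative choices, not taken from the paper.

```python
import numpy as np


def dice_coefficient(pred_mask, gt_mask, eps=1e-7):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return (2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps)


# Toy example: a predicted pupil mask shifted one row against its annotation.
pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True
gt = np.zeros((8, 8), dtype=bool); gt[3:7, 2:6] = True
print(f"Dice = {dice_coefficient(pred, gt):.4f}")   # 0.7500 for this pair
```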
Published in: | IEEE Journal of Biomedical and Health Informatics, 2024-03, Vol. 28 (3), pp. 1599-1610 |
Main Authors: | Giap, Binh Duong; Srinivasan, Karthik; Mahmoud, Ossama; Mian, Shahzad I.; Tannen, Bradford L.; Nallasamy, Nambi |
Format: | Article |
Language: | English |
container_end_page | 1610 |
container_issue | 3 |
container_start_page | 1599 |
container_title | IEEE journal of biomedical and health informatics |
container_volume | 28 |
creator | Giap, Binh Duong; Srinivasan, Karthik; Mahmoud, Ossama; Mian, Shahzad I.; Tannen, Bradford L.; Nallasamy, Nambi |
doi_str_mv | 10.1109/JBHI.2023.3345837 |
format | article |
fulltext | fulltext |
identifier | ISSN: 2168-2194 |
ispartof | IEEE journal of biomedical and health informatics, 2024-03, Vol.28 (3), p.1599-1610 |
issn | 2168-2194 (print); 2168-2208 (electronic) |
language | eng |
recordid | cdi_proquest_miscellaneous_2905514561 |
source | IEEE Electronic Library (IEL) Journals |
subjects | Accuracy; Cataract - diagnostic imaging; Cataract Extraction - methods; Cataract surgery; Cataracts; Datasets; Deep learning; Eye surgery; Feature extraction; Humans; Image Processing, Computer-Assisted; Lenses; Performance assessment; Pupil; pupil segmentation; Pupils; Recognition; Risk factors; Segmentation; Singular value decomposition; Spatial data; Statistical analysis; Surgery; Surgical instruments; Surgical outcomes; tensor; Tensors; wavelet transform |
title | Adaptive Tensor-Based Feature Extraction for Pupil Segmentation in Cataract Surgery |