A Biologically Interpretable Two-stage Deep Neural Network (BIT-DNN) For Vegetation Recognition From Hyperspectral Imagery
Published in: arXiv.org, 2021-04
Main Authors: Shi, Yue; Han, Liangxiu; Huang, Wenjiang; Chang, Sheng; Dong, Yingying; Dancey, Darren; Han, Lianghao
Format: Article
Language: English
container_title | arXiv.org |
creator | Shi, Yue ; Han, Liangxiu ; Huang, Wenjiang ; Chang, Sheng ; Dong, Yingying ; Dancey, Darren ; Han, Lianghao |
description | Spectral-spatial deep learning models have recently proven effective in hyperspectral image (HSI) classification for various earth monitoring applications, such as land cover classification and agricultural monitoring. However, because of their "black-box" representation, how to explain and interpret the learning process and the model decision, especially for vegetation classification, remains an open challenge. This study proposes a novel interpretable deep learning model, a biologically interpretable two-stage deep neural network (BIT-DNN), which incorporates a prior-knowledge-based (i.e. biophysical and biochemical attributes and their hierarchical structures of target entities) spectral-spatial feature transformation into the framework, achieving both high accuracy and interpretability on HSI-based classification tasks. The proposed model introduces a two-stage feature learning process: in the first stage, an enhanced interpretable feature block extracts the low-level spectral features associated with the biophysical and biochemical attributes of target entities; in the second stage, an interpretable capsule block extracts and encapsulates the high-level joint spectral-spatial features representing the hierarchical structure of those biophysical and biochemical attributes, which gives the model improved classification performance and intrinsic interpretability with reduced computational complexity. We have tested and evaluated the model on four real HSI datasets for four separate tasks (plant species classification, land cover classification, urban scene recognition, and crop disease recognition) and compared it with five state-of-the-art deep learning models. |
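The two-stage process described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: the attribute bases, patch size, capsule dimensions, and random weights are all hypothetical stand-ins for the learned interpretable feature block and capsule block.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical HSI patch: 5x5 pixels, 30 spectral bands.
patch = rng.random((5, 5, 30))

# Stage 1 (sketch): project each pixel's spectrum onto K interpretable
# attribute bases -- stand-ins for biophysical/biochemical absorption features.
K = 4
attribute_basis = rng.random((30, K))      # one basis column per attribute
low_level = patch @ attribute_basis        # (5, 5, K) attribute response maps

# Stage 2 (sketch): pool spatial context, then group attribute responses into
# capsule-like vectors; a capsule's length acts as a class-presence score.
pooled = low_level.mean(axis=(0, 1))       # (K,) joint spectral-spatial summary
n_classes, caps_dim = 3, 2
caps_weights = rng.random((n_classes, caps_dim, K))
capsules = caps_weights @ pooled           # (n_classes, caps_dim)
scores = np.linalg.norm(capsules, axis=1)  # one score per candidate class
predicted = int(scores.argmax())
print(predicted, scores.shape)
```

In the actual model these projections and capsule weights would be learned end-to-end; the point of the sketch is only the data flow, spectra to interpretable attribute maps to spatially pooled capsule scores.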
doi_str_mv | 10.48550/arxiv.2004.08886 |
format | article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-04 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2392840550 |
source | Publicly Available Content (ProQuest) |
subjects | Agricultural land ; Artificial neural networks ; Biochemistry ; Classification ; Deep learning ; Feature extraction ; Hyperspectral imaging ; Image classification ; Land cover ; Machine learning ; Model accuracy ; Monitoring ; Neural networks ; Plant diseases ; Recognition ; Spectra ; Structural hierarchy ; Tensors |
title | A Biologically Interpretable Two-stage Deep Neural Network (BIT-DNN) For Vegetation Recognition From Hyperspectral Imagery |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-23T05%3A44%3A41IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Biologically%20Interpretable%20Two-stage%20Deep%20Neural%20Network%20(BIT-DNN)%20For%20Vegetation%20Recognition%20From%20Hyperspectral%20Imagery&rft.jtitle=arXiv.org&rft.au=Shi,%20Yue&rft.date=2021-04-14&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2004.08886&rft_dat=%3Cproquest%3E2392840550%3C/proquest%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-a520-d63f171544331dd628aa9178a3d8fc93c12ea73584e7e5173325b8ef7b567d483%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_pqid=2392840550&rft_id=info:pmid/&rfr_iscdi=true |