HyperSFormer: A Transformer-Based End-to-End Hyperspectral Image Classification Method for Crop Classification

Bibliographic Details
Published in: Remote sensing (Basel, Switzerland), 2023-07, Vol. 15 (14), p. 3491
Main Authors: Xie, Jiaxing; Hua, Jiajun; Chen, Shaonan; Wu, Peiwen; Gao, Peng; Sun, Daozong; Lyu, Zhendong; Lyu, Shilei; Xue, Xiuyun; Lu, Jianqiang
Format: Article
Language: English
ISSN: 2072-4292
DOI: 10.3390/rs15143491
Subjects: Ablation; Accuracy; Adaptive sampling; Agricultural land; Agriculture; Artificial neural networks; Classification; Coders; crop classification; Crops; Deep learning; Embedding; Environmental aspects; Environmental monitoring; hyperspectral image classification; Hyperspectral imaging; Identification and classification; Image classification; Image enhancement; Image segmentation; Learning; Methods; Modules; Neural networks; Recurrent neural networks; Remote sensing; Semantic segmentation; Semantics; Spatial data; Spatial discrimination learning; transformer

Description
Crop classification of large-scale agricultural land is crucial for crop monitoring and yield estimation. Hyperspectral image classification has proven to be an effective method for this task. Most current popular hyperspectral image classification methods are based on image classification, specifically on convolutional neural networks (CNNs) and recurrent neural networks (RNNs). In contrast, this paper focuses on methods based on semantic segmentation and proposes a new transformer-based approach called HyperSFormer for crop hyperspectral image classification. The key enhancement of the proposed method is the replacement of the encoder in SegFormer with an improved Swin Transformer while keeping the SegFormer decoder. The entire model adopts a simple and uniform transformer architecture. Additionally, the paper introduces the hyper patch embedding (HPE) module to extract spectral and local spatial information from the hyperspectral images, which enhances the effectiveness of the features used as input for the model. To ensure detailed model processing and achieve end-to-end hyperspectral image classification, the transpose padding upsample (TPU) module is proposed for the model’s output. In order to address the problem of insufficient and imbalanced samples in hyperspectral image classification, the paper designs an adaptive min log sampling (AMLS) strategy and a loss function that incorporates dice loss and focal loss to assist model training. Experimental results using three public hyperspectral image datasets demonstrate the strong performance of HyperSFormer, particularly in the presence of imbalanced sample data, complex negative samples, and mixed sample classes. HyperSFormer outperforms state-of-the-art methods, including fast patch-free global learning (FPGA), a spectral–spatial-dependent global learning framework (SSDGL), and SegFormer, by at least 2.7% in the mean intersection over union (mIoU). It also improves the overall accuracy and average accuracy values by at least 0.9% and 0.3%, respectively, and the kappa coefficient by at least 0.011. Furthermore, ablation experiments were conducted to determine the optimal hyperparameter and loss function settings for the proposed method, validating the rationality of these settings and the fusion loss function.
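
The abstract's fused loss (dice plus focal) is the most formula-like part of the description. The record does not include the authors' implementation, so the sketch below is only a plausible PyTorch rendering of that idea for multi-class segmentation: the weighting factor `lam`, the focal parameters `alpha` and `gamma`, and all function names are illustrative assumptions, not values or code from the paper.

```python
# Minimal sketch of a dice + focal fused segmentation loss (assumed form,
# not the HyperSFormer release). logits: (N, C, H, W); target: (N, H, W) ints.
import torch
import torch.nn.functional as F

def dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft multi-class dice loss: 1 minus mean per-class overlap ratio."""
    num_classes = logits.shape[1]
    probs = logits.softmax(dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(0, 2, 3))
    union = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def focal_loss(logits: torch.Tensor, target: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Focal loss: cross-entropy down-weighted on easy (high-confidence) pixels."""
    ce = F.cross_entropy(logits, target, reduction="none")  # (N, H, W)
    pt = torch.exp(-ce)                                     # prob of the true class
    return (alpha * (1.0 - pt) ** gamma * ce).mean()

def fused_loss(logits: torch.Tensor, target: torch.Tensor, lam: float = 0.5) -> torch.Tensor:
    """Convex combination of the two terms; `lam` is an assumed weight."""
    return lam * dice_loss(logits, target) + (1.0 - lam) * focal_loss(logits, target)
```

Applied per batch to the segmentation head's logits, a loss of this shape pairs naturally with the imbalanced-sample setting the abstract emphasizes: the dice term tracks region overlap, which rare classes can still dominate, while the focal term keeps per-pixel gradients concentrated on hard examples.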