A vision transformer for fine-grained classification by reducing noise and enhancing discriminative information

Bibliographic Details
Published in: Pattern Recognition, 2024-01, Vol. 145, Article 109979
Main Authors: Zhang, Zi-Chao, Chen, Zhen-Duo, Wang, Yongxin, Luo, Xin, Xu, Xin-Shun
Format: Article
Language:English
Description
Summary: Recently, several Vision Transformer (ViT) based methods have been proposed for Fine-Grained Visual Classification (FGVC). These methods significantly surpass existing CNN-based ones, demonstrating the effectiveness of ViT in FGVC tasks. However, there are some limitations when applying ViT directly to FGVC. First, ViT splits images into patches and computes attention between every pair of patches, which may introduce heavy noise calculation during the training phase and yield unsatisfactory performance on fine-grained images with complex backgrounds and small objects. Second, complementary information is important for FGVC, but a standard ViT uses only the class token of the final layer for classification, which is not enough to extract comprehensive fine-grained information at different levels. Third, the class token fuses the information of all patches in the same manner; in other words, it treats every patch equally, whereas the discriminative parts should be weighted more heavily. To address these issues, we propose ACC-ViT, which includes three novel components: Attention Patch Combination (APC), Critical Regions Filter (CRF), and Complementary Tokens Integration (CTI). Among them, APC pieces together informative patches from two images to generate a new image, mitigating the noisy calculation and reinforcing the differences between images. CRF emphasizes tokens corresponding to discriminative regions to generate a new class token for subtle feature learning. To extract comprehensive information, CTI integrates complementary information captured by the class tokens of different ViT layers. We conduct comprehensive experiments on four widely used datasets, and the results demonstrate that ACC-ViT achieves competitive performance. The source code is available at https://github.com/Hector0426/fine-grained-image-classification-with-vit.
•A framework is proposed based on the analysis of the limitations of applying ViT to FGVC.
•The Attention Patch Combination (APC) module reduces the influence of noise computation.
•The Critical Regions Filter (CRF) module enhances the model's learning of critical parts.
•The Complementary Tokens Integration (CTI) module extracts complementary features.
•Extensive experiments demonstrate the superiority of ACC-ViT.
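The CRF and CTI modules summarized above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the top-k selection for CRF, the concatenation rule for CTI, and all names, dimensions, and the value of k are assumptions made for illustration only.

```python
import numpy as np

def critical_regions_filter(tokens, attn, k=4):
    """CRF sketch: keep the k patch tokens that receive the highest
    class-token attention and average them into a new class token.
    tokens: (N, D) patch tokens; attn: (N,) class->patch attention."""
    top = np.argsort(attn)[-k:]        # indices of the k most attended patches
    return tokens[top].mean(axis=0)    # (D,) new class token

def complementary_tokens_integration(class_tokens):
    """CTI sketch: fuse class tokens from several ViT layers by simple
    concatenation (the paper's exact fusion rule may differ)."""
    return np.concatenate(class_tokens, axis=0)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 8))  # 16 patch tokens of dimension 8
attn = rng.random(16)                  # attention weights from the class token
cls = critical_regions_filter(tokens, attn, k=4)
fused = complementary_tokens_integration([cls, cls])
print(cls.shape, fused.shape)          # (8,) (16,)
```

The sketch only conveys the shape of the two ideas: CRF builds a class token from discriminative patches rather than all patches, and CTI pools class tokens across layers rather than using the final layer alone.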
ISSN: 0031-3203
eISSN: 1873-5142
DOI: 10.1016/j.patcog.2023.109979