Adaptive granular data compression and interval granulation for efficient classification

Bibliographic Details
Published in: Information Sciences, 2025-02, Vol. 691, p. 121644, Article 121644
Main Authors: Cai, Kecan; Zhang, Hongyun; Li, Miao; Miao, Duoqian
Format: Article
Language:English
Description
Summary: Efficiency is crucial in deep learning and has garnered significant attention in the field of green deep learning. However, existing methods often sacrifice efficiency for slight accuracy improvements, requiring extensive computational resources. This paper proposes an adaptive granular data compression and interval granulation method that improves classification efficiency without compromising accuracy. The approach comprises two main components: Adaptive Granular Data Compression (AG) and Interval Granulation (IG). Specifically, AG employs the principle of justifiable granularity to adaptively generate granular data, extracting an abstract granular subset representation from the original dataset that captures its essential features and thereby reduces computational complexity. The quality of the generated granular data is evaluated with coverage and specificity, the standard criteria for assessing information granules. IG, in turn, applies the AG operation to the input data at regular intervals during training; these repeated granulation operations increase sample diversity and help the model train more effectively. Notably, the proposed method can be extended to any convolution-based or attention-based classification neural network. Extensive experiments on benchmark datasets demonstrate that the proposed method significantly enhances classification efficiency without compromising accuracy.
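The coverage and specificity criteria mentioned in the abstract can be illustrated with a minimal sketch of the principle of justifiable granularity for one-dimensional interval granules. This is a generic textbook-style formulation, not the paper's exact algorithm: the function names (`coverage_specificity`, `justifiable_granule`) and the product-based objective are illustrative assumptions.

```python
import numpy as np

def coverage_specificity(data, a, b):
    """Quality criteria for an interval information granule [a, b].

    coverage: fraction of the data falling inside the interval.
    specificity: how narrow the interval is relative to the data range
    (close to 1 for a near-point interval, 0 for the whole range).
    """
    coverage = float(np.mean((data >= a) & (data <= b)))
    full_range = data.max() - data.min()
    specificity = 1.0 - (b - a) / full_range if full_range > 0 else 1.0
    return coverage, specificity

def justifiable_granule(data):
    """Build an interval granule around the median by maximizing the
    product coverage * specificity separately for each bound
    (a common form of the principle of justifiable granularity)."""
    med = float(np.median(data))
    candidates = np.unique(data)
    # Best upper bound b >= median, then best lower bound a <= median.
    upper = max((b for b in candidates if b >= med),
                key=lambda b: np.prod(coverage_specificity(data, med, b)))
    lower = max((a for a in candidates if a <= med),
                key=lambda a: np.prod(coverage_specificity(data, a, med)))
    return float(lower), float(upper)
```

In the paper's setting, a granule built this way would stand in for a cluster of raw samples, so training operates on the compact granular representation; IG would simply re-run such a granulation step every few epochs on the incoming data.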
ISSN: 0020-0255
DOI: 10.1016/j.ins.2024.121644