
Split-Boost Neural Networks

Bibliographic Details
Published in: IFAC-PapersOnLine 2024, Vol. 58 (15), p. 241-246
Main Authors: Cestari, Raffaele G., Maroni, Gabriele, Cannelli, Loris, Piga, Dario, Formentin, Simone
Format: Article
Language:English
Summary: The calibration and training of a neural network is a complex and time-consuming procedure that requires significant computational resources to achieve satisfactory results. Key obstacles are the large number of hyperparameters to select and the onset of overfitting when only a small amount of data is available. In this framework, we propose an innovative training strategy for feed-forward architectures, called split-boost, that improves performance and automatically induces a regularizing behaviour without modeling it explicitly. This ultimately allows us to avoid an explicit regularization term, decreasing the total number of hyperparameters and speeding up the tuning phase. The proposed strategy is tested on a real-world (anonymized) dataset within a benchmark medical insurance design problem.
ISSN: 2405-8963
DOI: 10.1016/j.ifacol.2024.08.535