Towards optimal model evaluation: enhancing active testing with actively improved estimators

Bibliographic Details
Published in: Scientific Reports, 2024-05, Vol. 14 (1), Article 10690
Main Authors: Lee, JooChul; Kolla, Likhitha; Chen, Jinbo
Format: Article
Language: English
Description
Summary: With rapid advancements in machine learning and statistical models, ensuring the reliability of these models through accurate evaluation has become imperative. Traditional evaluation methods often rely on fully labeled test data, a requirement that is becoming increasingly impractical as datasets grow. In this work, we address this issue by extending existing work on active testing (AT), a class of methods designed to sequentially sample and label data for evaluating pre-trained models. We propose two novel estimators, the Actively Improved Levelled Unbiased Risk (AILUR) and the Actively Improved Inverse Probability Weighting (AIIPW) estimators, both derived from nonparametric smoothing estimation. In addition, a model recalibration process is designed for the AIIPW estimator to optimize the sampling probability within the AT framework. We evaluate the proposed estimators on four real-world datasets and demonstrate that they consistently outperform existing AT methods. Our study also shows that the proposed methods are robust to changes in subsample size and effective at reducing labeling costs.
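The record does not reproduce the paper's estimators, so the following is only a minimal sketch of the generic inverse-probability-weighted (IPW) risk estimate that the active-testing framework builds on: draw labeling indices from an acquisition distribution q over the unlabeled test pool, then reweight each observed loss by 1/(N·q) so the average is unbiased for the full-data risk. The synthetic data, the stand-in "pre-trained" model, and the uncertainty-proportional choice of q are all illustrative assumptions; the paper's AILUR and AIIPW refinements (and its recalibration of q) are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pool of N test points; labels are only "revealed" when sampled.
N = 1000
x_pool = rng.normal(size=(N, 2))
true_w = np.array([1.5, -2.0])
y_pool = (x_pool @ true_w + rng.normal(scale=0.5, size=N) > 0).astype(float)

def model_prob(x):
    # Hypothetical stand-in for a pre-trained classifier's predicted P(y=1|x).
    return 1.0 / (1.0 + np.exp(-(x @ np.array([1.0, -1.5]))))

def log_loss(p, y, eps=1e-12):
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Acquisition distribution q over the pool, here proportional to predicted
# uncertainty p(1-p) -- a common AT heuristic, assumed for illustration.
p_pool = model_prob(x_pool)
q = p_pool * (1 - p_pool)
q = q / q.sum()

# Sample M indices i_1..i_M ~ q, label them, and form the IPW estimate:
#   R_hat = (1/M) * sum_m loss(i_m) / (N * q[i_m]),
# which is unbiased for the mean loss over all N points.
M = 100
idx = rng.choice(N, size=M, replace=True, p=q)
losses = log_loss(model_prob(x_pool[idx]), y_pool[idx])
r_hat = np.mean(losses / (N * q[idx]))

r_true = np.mean(log_loss(p_pool, y_pool))  # full-label reference value
print(f"IPW estimate from {M} labels: {r_hat:.4f}  (full-data risk: {r_true:.4f})")
```

With only 100 of 1000 labels, the weighted estimate tracks the full-data risk; the variance of such estimators depends strongly on how well q matches the loss distribution, which is the lever the abstract's recalibration step targets.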
ISSN: 2045-2322
DOI: 10.1038/s41598-024-58633-3