Assessing the Generalizability of a Performance Predictive Model

Bibliographic Details
Published in:arXiv.org 2023-05
Main Authors: Nikolikj, Ana, Cenikj, Gjorgjina, Ispirova, Gordana, Vermetten, Diederick, Lang, Ryan Dieter, Engelbrecht, Andries Petrus, Doerr, Carola, Korošec, Peter, Eftimov, Tome
Format: Article
Language:English
Description
Summary: A key component of automated algorithm selection and configuration, which in most cases are performed using supervised machine learning (ML) methods, is a well-performing predictive model. The predictive model uses the feature representation of a set of problem instances as input data and predicts the algorithm performance achieved on them. Common ML models struggle to make predictions for instances whose feature representations are not covered by the training data, resulting in poor generalization to unseen problems. In this study, we propose a workflow to estimate how well a predictive model of algorithm performance, trained on one benchmark suite, generalizes to another suite. The workflow has been tested by training predictive models across benchmark suites, and the results show that generalizability patterns in the landscape feature space are reflected in the performance space.
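The cross-suite evaluation the summary describes can be sketched as follows. This is an illustrative outline only, not the authors' implementation: the data here is synthetic, the feature shift between "suites" is simulated with a shifted sampling distribution, and `RandomForestRegressor` merely stands in for whichever performance model the study actually trains on landscape features.

```python
# Sketch: train a performance model on one (synthetic) benchmark suite and
# measure how its error degrades on a second suite whose instances occupy a
# feature region not covered by the training data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
weights = np.array([1.0, -2.0, 0.5, 0.0, 3.0])  # synthetic feature-to-performance map

# "Suite A": training instances (rows = instances, cols = landscape features).
X_a = rng.normal(size=(200, 5))
y_a = X_a @ weights + rng.normal(scale=0.1, size=200)

# "Suite B": instances drawn from a shifted distribution, mimicking a suite
# whose feature representations fall outside the training coverage.
X_b = rng.normal(loc=2.0, size=(80, 5))
y_b = X_b @ weights + rng.normal(scale=0.1, size=80)

model = RandomForestRegressor(random_state=0).fit(X_a, y_a)

mae_in = mean_absolute_error(y_a, model.predict(X_a))    # within-suite error
mae_out = mean_absolute_error(y_b, model.predict(X_b))   # cross-suite error
print(f"within-suite MAE:  {mae_in:.3f}")
print(f"cross-suite MAE:   {mae_out:.3f}")
```

Because tree ensembles cannot extrapolate beyond the feature range seen in training, the cross-suite error is substantially larger than the within-suite error, which is the generalization gap the proposed workflow aims to estimate in advance.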
ISSN:2331-8422