Assessing the Generalizability of a Performance Predictive Model


Bibliographic Details
Published in: arXiv.org, 2023-05
Main Authors: Nikolikj, Ana; Cenikj, Gjorgjina; Ispirova, Gordana; Vermetten, Diederick; Lang, Ryan Dieter; Engelbrecht, Andries Petrus; Doerr, Carola; Korošec, Peter; Eftimov, Tome
Format: Article
Language: English
Subjects: Algorithms; Benchmarks; Machine learning; Performance prediction; Prediction models; Representations; Supervised learning; Training; Workflow
Description: A key component of automated algorithm selection and configuration, which are in most cases performed using supervised machine learning (ML) methods, is a well-performing predictive model. The predictive model uses the feature representation of a set of problem instances as input data and predicts the algorithm performance achieved on them. Common machine learning models struggle to make predictions for instances whose feature representations are not covered by the training data, resulting in poor generalization to unseen problems. In this study, we propose a workflow to estimate how well a predictive model for algorithm performance, trained on one benchmark suite, generalizes to another. The workflow has been tested by training predictive models across benchmark suites, and the results show that generalizability patterns in the landscape feature space are reflected in the performance space.
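The workflow described in the abstract lends itself to a simple illustration. Below is a minimal sketch of the cross-suite idea: train a performance predictor on the landscape features of one benchmark suite and compare its error on that suite with its error on a second, unseen suite. The synthetic feature matrices, performance values, and the choice of a random-forest regressor are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of cross-suite generalizability estimation.
# All data here is synthetic; in practice the feature matrices would hold
# landscape features (e.g. ELA features) of real benchmark problem instances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)

# Stand-ins for feature representations of two benchmark suites
# (rows = problem instances, columns = landscape features).
X_suite_a = rng.normal(size=(120, 10))                              # training suite
X_suite_b = X_suite_a[:40] + rng.normal(scale=0.5, size=(40, 10))   # shifted, unseen suite

# Stand-ins for measured algorithm performance on each instance.
y_suite_a = 2.0 * X_suite_a[:, 0] + rng.normal(scale=0.1, size=120)
y_suite_b = 2.0 * X_suite_b[:, 0] + rng.normal(scale=0.1, size=40)

# Train the performance predictor on suite A only.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_suite_a, y_suite_a)

# Compare in-suite error with cross-suite error; the gap is a rough
# proxy for how well the model generalizes to the unseen suite.
err_in = mean_absolute_error(y_suite_a, model.predict(X_suite_a))
err_cross = mean_absolute_error(y_suite_b, model.predict(X_suite_b))
print(f"MAE on training suite: {err_in:.3f}")
print(f"MAE on unseen suite:   {err_cross:.3f}")
```

A large gap between the two errors would indicate that the unseen suite occupies regions of the feature space the model was never trained on, which is the feature-space/performance-space correspondence the abstract alludes to.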
Identifier: EISSN 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org
Source: Publicly Available Content Database