
Active learning for efficiently training emulators of computationally expensive mathematical models

Bibliographic Details
Published in: Statistics in Medicine 2020-11, Vol. 39 (25), p. 3521-3548
Main Authors: Ellis, Alexandra G., Iskandar, Rowan, Schmid, Christopher H., Wong, John B., Trikalinos, Thomas A.
Format: Article
Language: English
Description
Summary: An emulator is a fast‐to‐evaluate statistical approximation of a detailed mathematical model (simulator). When used in lieu of simulators, emulators can expedite tasks that require many repeated evaluations, such as sensitivity analyses, policy optimization, model calibration, and value‐of‐information analyses. Emulators are developed using the output of simulators at specific input values (design points). Developing an emulator that closely approximates the simulator can require many design points, which becomes computationally expensive. We describe a self‐terminating active learning algorithm to efficiently develop emulators tailored to a specific emulation task, and compare it with algorithms that optimize geometric criteria (random Latin hypercube sampling and maximum projection designs) and other active learning algorithms (treed Gaussian Processes that optimize typical active learning criteria). We compared the algorithms' root mean square error (RMSE) and maximum absolute deviation from the simulator (MAX) for seven benchmark functions and in a prostate cancer screening model. In the empirical analyses, in simulators with greatly varying smoothness over the input domain, active learning algorithms resulted in emulators with smaller RMSE and MAX for the same number of design points. In all other cases, all algorithms performed comparably. The proposed algorithm attained satisfactory performance in all analyses, had smaller variability than the treed Gaussian Processes, and, on average, performed similarly to or better than the treed Gaussian Processes in six of seven benchmark functions and in the prostate cancer model.
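The self-terminating active learning idea summarized above can be illustrated in miniature. The sketch below is an illustrative assumption, not the paper's actual algorithm: it uses a piecewise-linear emulator of a toy one-dimensional simulator (the paper uses Gaussian process emulators), adds the design point where the emulator and simulator disagree most, stops when that disagreement falls below a tolerance, and reports the two accuracy metrics the paper compares (RMSE and MAX).

```python
import math

def simulator(x):
    # stand-in for an expensive model; varying smoothness over [0, 1]
    return math.sin(10 * x) * math.exp(-x)

def emulate(design, x):
    # piecewise-linear emulator built from (input, output) design points
    pts = sorted(design)
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= x <= x1:
            t = (x - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return pts[-1][1]

def active_learning(tol=1e-3, max_points=200):
    # start from the domain endpoints, then repeatedly evaluate the
    # simulator at the interval midpoint where emulator and simulator
    # disagree most (a crude acquisition rule, for illustration only)
    design = [(0.0, simulator(0.0)), (1.0, simulator(1.0))]
    while len(design) < max_points:
        pts = sorted(design)
        mids = [(a[0] + b[0]) / 2 for a, b in zip(pts, pts[1:])]
        errs = [abs(emulate(design, m) - simulator(m)) for m in mids]
        worst = max(errs)
        if worst < tol:  # self-terminating criterion
            break
        m = mids[errs.index(worst)]
        design.append((m, simulator(m)))  # one new (expensive) evaluation
    return design

def rmse_and_max(design, n=1000):
    # the two accuracy metrics compared in the paper, on a dense test grid
    xs = [i / (n - 1) for i in range(n)]
    devs = [emulate(design, x) - simulator(x) for x in xs]
    rmse = math.sqrt(sum(d * d for d in devs) / n)
    return rmse, max(abs(d) for d in devs)
```

In this toy setting the acquisition rule concentrates design points where the simulator is least smooth, which mirrors the empirical finding that active learning pays off most when smoothness varies greatly over the input domain.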
ISSN: 0277-6715, 1097-0258
DOI: 10.1002/sim.8679