Poster in Workshop: Bayesian Decision-making and Uncertainty: from probabilistic and spatiotemporal modeling to sequential experiment design
An Active Learning Performance Model for Parallel Bayesian Calibration of Expensive Simulations
Özge Sürer · Stefan M. Wild
Keywords: [ sequential design ] [ algorithm comparison ] [ computational benchmarking ] [ uncertainty quantification ]
Estimating the parameters of a simulation model from observed data is especially challenging when the model's computational expense, necessitated by faithfully capturing the real system, limits the learning process. When simulation models are expensive to evaluate, emulators are often built to efficiently approximate model outputs during calibration. Computing-informed active learning, guided by intelligent acquisition functions, can improve data collection for emulators and thereby enhance the calibration's efficiency. However, the performance of active learning strategies depends on computational factors such as the computing environment (e.g., the parallel resources available), tradeoffs in the calibration and simulation algorithms' abilities to benefit from parallelism, and the computational expense of the simulation models. In addition to surveying these considerations, this work provides examples illustrating the tradeoffs that make such learning difficult.
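As a hedged illustration only (not the authors' algorithm, which the abstract does not specify), a generic active-learning loop for emulator data collection might look like the sketch below. The toy simulator, the max-min-distance acquisition standing in for a real acquisition function, and all function names are hypothetical.

```python
import numpy as np

def expensive_simulation(theta):
    # Hypothetical cheap stand-in for an expensive simulation model.
    return np.sin(3.0 * theta) + 0.5 * theta

def acquire(candidates, design):
    # Crude uncertainty proxy: pick the candidate parameter farthest
    # from all existing design points. A real strategy would use an
    # emulator-based acquisition function instead.
    dists = np.min(np.abs(candidates[:, None] - design[None, :]), axis=1)
    return candidates[np.argmax(dists)]

def active_learning_loop(budget=5, seed=0):
    # Sequentially grow the emulator's training set one run at a time.
    rng = np.random.default_rng(seed)
    design = np.array([0.0, 1.0])           # initial design points
    outputs = expensive_simulation(design)  # initial simulation runs
    for _ in range(budget):
        candidates = rng.uniform(0.0, 1.0, size=100)
        theta_next = acquire(candidates, design)
        design = np.append(design, theta_next)
        outputs = np.append(outputs, expensive_simulation(theta_next))
    return design, outputs
```

In a parallel setting, the inner step would instead select a batch of parameters per iteration, which is where the tradeoffs between acquisition quality and available parallel resources discussed above arise.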