Description
Bayesian Optimization has emerged as a useful addition to the DOE toolbox, well suited to industrial R&D, where resource constraints demand that complex optimization problems be solved with as few experiments as possible.
While Bayesian Optimization is simple to use in principle, the experimenter still has to make choices about strategy and algorithm setup. The question is: how sensitive is optimization performance to these choices?
This presentation examines how these strategic choices affect optimization speed and reliability across benchmarks that mimic physical processes. The simulations were used to investigate choices such as initial data set sizing, acquisition function selection, replication strategy and termination criteria. The results demonstrate that initial data set size mainly affects the consistency of outcomes across different problems rather than the average run time. The findings provide practical guidance for users seeking to apply Bayesian Optimization effectively in industrial settings.
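To make these configuration choices concrete, the minimal sketch below (not taken from the talk) shows where knobs such as initial data set size, acquisition function, and a fixed experiment budget typically appear in a Bayesian Optimization library call. The use of scikit-optimize and the toy objective `run_experiment` are assumptions for illustration only.

```python
# Illustrative sketch only: shows where the configuration choices discussed in
# the talk surface in a typical Bayesian Optimization run using scikit-optimize.
from skopt import gp_minimize
from skopt.space import Real

# Hypothetical stand-in for an expensive physical experiment (lower is better).
def run_experiment(x):
    temperature, concentration = x
    return (temperature - 60.0) ** 2 + (concentration - 0.3) ** 2

search_space = [
    Real(20.0, 100.0, name="temperature"),
    Real(0.0, 1.0, name="concentration"),
]

result = gp_minimize(
    run_experiment,
    search_space,
    n_initial_points=10,   # initial data set sizing
    acq_func="EI",         # acquisition function selection (e.g. "EI", "PI", "LCB")
    n_calls=40,            # simple termination criterion: fixed experiment budget
    random_state=0,
)

print("Best inputs:", result.x, "best observed value:", result.fun)
```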
| Classification | Mainly application |
| --- | --- |
| Keywords | Bayesian Optimization, algorithm configuration, practical implementation |