Description
This work formulates model selection as an infinite-armed bandit problem: a decision maker iteratively selects one of infinitely many fixed choices (i.e., arms) whose properties are only partially known at the time of allocation and become better understood over time through the rewards obtained.
Here, the arms are machine learning models to train and selecting an arm corresponds to a partial training of the model (resource allocation).
The reward is the accuracy of the selected model after its partial training.
We aim to identify the best model at the end of a finite number of resource allocations and thus consider the best arm identification setup.
We propose the algorithm Mutant-UCB that incorporates operators from evolutionary algorithms into the UCB-E (Upper Confidence Bound Exploration) bandit algorithm introduced by Audibert et al. (2010).
Tests carried out on three open-source image classification data sets attest to the relevance of this novel hybrid approach, which outperforms the state of the art for a fixed budget.
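The combination described above can be sketched in code. The following is a minimal, illustrative Python sketch (not the paper's exact Mutant-UCB): it runs the UCB-E index rule of Audibert et al. (2010), treats an arm pull as one partial-training step returning a validation accuracy, and occasionally replaces the worst arm with a mutant of the current best, an assumed mutation scheme supplied via a user callback. The `train_step` interface and the `mutate` callback are hypothetical names introduced for this sketch.

```python
import math
import random

def ucb_e_with_mutation(models, budget, a, mutation_prob=0.2, mutate=None):
    """Illustrative UCB-E best-arm identification with an optional
    evolutionary mutation step (a sketch, not the paper's algorithm)."""
    # Per-arm statistics: number of pulls and running mean reward.
    counts = [0] * len(models)
    means = [0.0] * len(models)

    for _ in range(budget):
        # UCB-E index: empirical mean + exploration bonus sqrt(a / T_i);
        # unpulled arms get an infinite index so each is tried once.
        def index(i):
            if counts[i] == 0:
                return float("inf")
            return means[i] + math.sqrt(a / counts[i])

        i = max(range(len(models)), key=index)

        # "Pulling" an arm = partially training model i and reading its
        # validation accuracy (here via an assumed `train_step` method).
        reward = models[i].train_step()
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]

        # Evolutionary twist (illustrative): occasionally replace the
        # worst-performing arm with a mutant of the current best arm,
        # resetting its statistics.
        if mutate is not None and random.random() < mutation_prob:
            worst = min(range(len(models)), key=lambda j: means[j])
            best = max(range(len(models)), key=lambda j: means[j])
            models[worst] = mutate(models[best])
            counts[worst], means[worst] = 0, 0.0

    # Best arm identification: recommend the arm with the highest mean.
    return max(range(len(models)), key=lambda j: means[j])
```

With deterministic rewards the index rule quickly concentrates pulls on the highest-accuracy model, which is then recommended at the end of the budget.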
Type of presentation | Talk
---|---
Classification | Both methodology and application
Keywords | Infinite-armed bandits, Best arm identification, Model selection, Neural architecture optimisation, Hyperparameter optimisation, Evolutionary algorithm, Image classification, AutoML, Online learning