Speaker
Description
Errors-in-Variables is a statistical concept for modelling errors in the input variables, caused for example by measurement noise. It is well known in statistics that not accounting for such errors can bias the fitted model. However, most existing deep learning approaches have so far not taken Errors-in-Variables into account, which might be due to the increased numerical burden or the challenge of assigning an appropriate prior in a Bayesian treatment. We propose a scalable method for handling Errors-in-Variables in Bayesian deep learning based on a variational inference scheme. The presented approach thereby exploits a relevant but generally overlooked source of uncertainty. We discuss the approach on various simulated and real examples and observe that using an Errors-in-Variables model leads to an increase in the estimated uncertainty. For the case of image classification we show how an appropriate Bayesian treatment of the input can yield a significant improvement in prediction performance compared to models without Errors-in-Variables.
| Classification | Mainly methodology |
|---|---|
| Keywords | Deep Learning, Errors-in-Variables, Uncertainty |
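As a rough, minimal sketch of what such a variational Errors-in-Variables treatment can look like (this is not the presented method; the PyTorch setup, the `EiVRegressor` class, the 1-D regression task, and the fixed input-noise level are all assumptions for illustration), the unobserved true input is treated as a latent variable with a Gaussian variational posterior centred at the noisy observation, and the training objective combines a reconstruction term with a KL penalty against an input-noise prior:

```python
import torch
import torch.nn as nn

# Hypothetical sketch of an Errors-in-Variables (EiV) treatment:
# the true input zeta is latent and we observe x = zeta + noise.
# A variational posterior q(zeta | x) = N(x, sigma_q^2) is assumed,
# and zeta is sampled via the reparameterisation trick.

class EiVRegressor(nn.Module):
    def __init__(self, input_noise_std=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1)
        )
        # log std of q(zeta | x); learned, initialised at the prior level
        self.log_sigma_q = nn.Parameter(
            torch.log(torch.tensor(input_noise_std))
        )
        self.prior_std = input_noise_std  # assumed known input-noise level

    def forward(self, x):
        # Reparameterised sample of the latent "true" input
        sigma_q = self.log_sigma_q.exp()
        zeta = x + sigma_q * torch.randn_like(x)
        return self.net(zeta)

    def kl_input(self, x):
        # KL( N(x, sigma_q^2) || N(x, prior_std^2) ): both Gaussians
        # share the mean x, so only the variances enter the KL term
        sigma_q = self.log_sigma_q.exp()
        var_ratio = (sigma_q / self.prior_std) ** 2
        return 0.5 * (var_ratio - 1.0 - var_ratio.log()) * x.numel()

# Toy training loop on regression data with noisy inputs
torch.manual_seed(0)
zeta_true = torch.linspace(-2, 2, 200).unsqueeze(1)
x_obs = zeta_true + 0.1 * torch.randn_like(zeta_true)  # noisy observations
y = torch.sin(zeta_true) + 0.05 * torch.randn_like(zeta_true)

model = EiVRegressor(input_noise_std=0.1)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for step in range(500):
    opt.zero_grad()
    pred = model(x_obs)
    # Negative ELBO surrogate: reconstruction term plus per-point input KL
    loss = ((pred - y) ** 2).mean() + model.kl_input(x_obs) / x_obs.numel()
    loss.backward()
    opt.step()
```

At test time one could average predictions over several samples of the latent input to propagate the input uncertainty into the predictive distribution, consistent with the observation above that Errors-in-Variables models yield increased uncertainty.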