Description
Solving inverse problems within the Bayesian paradigm relies on a sensible choice of the prior, yet eliciting expert knowledge and formulating physical constraints in a probabilistic sense is often challenging. Recently, advances in machine learning and statistical generative models have been used to develop novel approaches to Bayesian inference that rely on data-driven, highly informative priors. A generative model synthesizes new data that resemble the properties of a given data set; famous examples include the generation of high-quality images of faces of people who do not exist. For an inverse problem, the underlying data set should reflect the properties of the sought solution, such as typical tissue structures of the human brain in MR imaging. Such a data distribution can often be assumed to be embedded in a low-dimensional manifold of the original data space. Typically, the inference is carried out on the manifold determined by the generative model, since the lower dimensionality favors the optimization. However, this procedure lacks important statistical properties, such as the existence of a posterior probability density function or the consistency of Bayes estimators. We therefore explore an alternative approach to Bayesian inference in the original high-dimensional space, based on probabilistic generative models, that admits the aforementioned properties. In addition, based on a Laplace approximation, the posterior can be estimated in a numerically efficient way, and for linear Gaussian models even analytically. We perform numerical experiments on typical data sets from machine learning and confirm our theoretical findings. In conjunction with our asymptotic analysis, we present heuristic guidance on the choice of method.
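As a minimal sketch of the linear Gaussian case mentioned above (all symbols below are our own illustrative notation, not taken from the talk): if the prior is induced by a probabilistic linear generative model and the observations are linear with Gaussian noise, the posterior follows from standard Gaussian conjugacy.

```latex
% Sketch of the analytic posterior in a linear Gaussian setting
% (illustrative notation; the talk's precise model may differ).
% Prior induced by a probabilistic linear generative model:
%   x = W z + \mu + \varepsilon,  z ~ N(0, I),  \varepsilon ~ N(0, \sigma^2 I)
%   =>  x ~ N(\mu, \Sigma)  with  \Sigma = W W^T + \sigma^2 I.
% The \sigma^2 I term makes \Sigma full rank, so the prior (and hence the
% posterior) has a density in the original high-dimensional space.
% Linear forward model with Gaussian noise:
%   y = A x + \eta,  \eta ~ N(0, \Gamma).
\begin{align*}
  x \mid y &\sim \mathcal{N}\!\left(m_{\mathrm{post}}, \Sigma_{\mathrm{post}}\right), \\
  \Sigma_{\mathrm{post}} &= \left( A^{\top} \Gamma^{-1} A + \Sigma^{-1} \right)^{-1}, \\
  m_{\mathrm{post}} &= \Sigma_{\mathrm{post}} \left( A^{\top} \Gamma^{-1} y + \Sigma^{-1} \mu \right).
\end{align*}
```

For a nonlinear generative model this conjugate step no longer applies, and a Laplace approximation, i.e. a Gaussian fitted at the posterior mode, presumably provides the numerically efficient posterior estimate the abstract refers to.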
Keywords: High-dimensional Bayesian inference, generative models, asymptotic analysis