Fast ε-free Inference of Simulation Models with Bayesian Conditional Density Estimation

George Papamakarios and Iain Murray.

Many statistical models can be simulated forwards but have intractable likelihoods. Approximate Bayesian Computation (ABC) methods are used to infer properties of these models from data. Traditionally these methods approximate the posterior over parameters by conditioning on data being inside an ε-ball around the observed data, which is only correct in the limit ε→0. Monte Carlo methods can then draw samples from the approximate posterior to approximate predictions or error bars on parameters. These algorithms critically slow down as ε→0, and in practice draw samples from a broader distribution than the posterior. We propose a new approach to likelihood-free inference based on Bayesian conditional density estimation. Preliminary inferences based on limited simulation data are used to guide later simulations. In some cases, learning an accurate parametric representation of the entire true posterior distribution requires fewer model simulations than Monte Carlo ABC methods need to produce a single sample from an approximate posterior.
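To make the contrast concrete, here is a minimal, self-contained sketch (not the paper's code) of both approaches on a toy problem where the exact posterior is known: prior θ ~ N(0, 2²) and simulator x | θ ~ N(θ, 1), treated as a black box. All names (simulate, rejection_abc) and numbers are illustrative assumptions; the paper itself fits a mixture-density network and uses early rounds of simulations to refine a proposal prior, which this single least-squares fit does not attempt.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta):
    """Toy forward simulator: cheap to run, likelihood treated as intractable."""
    return theta + rng.normal(0.0, 1.0, size=np.shape(theta))

x_obs = 1.5

# --- Baseline: rejection ABC, conditioning on an eps-ball around x_obs. ---
def rejection_abc(eps, n_wanted):
    accepted, n_sims = [], 0
    while len(accepted) < n_wanted:
        theta = rng.normal(0.0, 2.0)             # draw parameter from the prior
        n_sims += 1
        if abs(simulate(theta) - x_obs) < eps:   # keep only near-misses
            accepted.append(theta)
    return np.array(accepted), n_sims

abc_samples, abc_cost = rejection_abc(eps=0.1, n_wanted=200)
print(f"ABC: {abc_cost} simulations for {len(abc_samples)} samples "
      f"(cost grows without bound as eps -> 0)")

# --- Proposed direction: fit a parametric conditional density q(theta | x)
# --- to simulated (theta, x) pairs, then evaluate it at x_obs. Every
# --- simulation is used for training, and no eps-ball is involved.
thetas = rng.normal(0.0, 2.0, size=2000)
xs = simulate(thetas)
A = np.stack([xs, np.ones_like(xs)], axis=1)     # model the mean as a*x + b
(a, b), *_ = np.linalg.lstsq(A, thetas, rcond=None)
post_std = np.std(thetas - (a * xs + b))
print(f"Fitted q(theta | x_obs): mean = {a * x_obs + b:.3f}, std = {post_std:.3f}")
print("Exact posterior here:    mean = 1.200, std = 0.894")
```

On this toy problem the fitted conditional Gaussian recovers the exact posterior from its 2000 simulations, while rejection ABC discards the vast majority of its simulations to obtain a few hundred approximate samples, and the gap widens as ε shrinks.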

Advances in Neural Information Processing Systems 29, 2016.
[PDF, DjVu, GoogleViewer, arXiv, BibTeX].

Code: github, snapshot.

Discussion by Dennis Prangle (2016).

Update 2019:

Greenberg, Nonnenmacher and Macke’s APT (Automatic Posterior Transformation; ICML, 2019) looks like a better way to get a direct neural estimate of a posterior distribution. I would now try that instead of the method in this paper.

Also check out SNL (Sequential Neural Likelihood; AISTATS, 2019), our method that learns a neural surrogate for the likelihood, which can be better or worse than APT, depending on the situation.