Advances in Approximate Bayesian Inference

NIPS 2017 Workshop; December 8, 2017
Seaside Ballroom, Long Beach Convention Center, Long Beach, USA


The workshop's recording is available on YouTube.

Session 1

8:30 - 8:35 Introduction
8:35 - 9:00 Invited Iain Murray: Learning priors, likelihoods, or posteriors Slides
9:00 - 9:15 Contributed Josip Djolonga: Learning Implicit Generative Models Using Differentiable Graph Tests
9:15 - 9:40 Invited Yingzhen Li: Gradient Estimators for Implicit Models Slides
9:40 - 10:00 Invited Dawen Liang: Variational Autoencoders for Recommendation Slides
10:00 - 10:30 Poster Spotlights
10:30 - 11:25 Coffee Break and Poster Session

Session 2

11:25 - 11:45 Invited Cedric Archambeau: Approximate Inference in Industry: Two Applications at Amazon Slides
11:45 - 12:00 Contributed Futoshi Futami: Variational Inference based on Robust Divergences Slides
12:00 - 1:00 Lunch Break

Session 3

1:00 - 2:05 Poster Session
2:05 - 2:20 Contributed Kira Kempinska: Adversarial Sequential Monte Carlo Slides
2:20 - 2:35 Contributed Florian Wenzel: Scalable Logit Gaussian Process Classification Slides
2:35 - 3:00 Invited Andreas Damianou: Variational inference in deep Gaussian processes Slides
3:00 - 3:30 Coffee Break and Poster Session

Session 4

3:30 - 3:45 Contributed Andrew Miller: Taylor Residual Estimators via Automatic Differentiation Slides
3:45 - 4:10 Invited Antti Honkela: Differential privacy and Bayesian learning Slides
4:10 - 4:25 Contributed Yixin Wang: Frequentist Consistency of Variational Bayes
4:25 - 5:30 Panel: On the Foundations and Future of Approximate Inference
Tim Salimans, Katherine Heller, David Blei, Max Welling, Zoubin Ghahramani
Moderator: Matt Hoffman

Abstracts


Iain Murray (University of Edinburgh)

Learning priors, likelihoods, or posteriors

Abstract. As the description of the workshop states, variational and Monte Carlo methods are currently the mainstream techniques for approximate Bayesian inference. However, we can also apply machine learning models to solve inference problems in several ways. Firstly, there's no point doing careful Bayesian inference if the model is silly. We can represent good models, often with hard-to-specify priors or expensive likelihoods, with surrogates learned from data. Secondly, we can learn how to do inference from experience or simulated data. That said, this is a workshop, so we can have a friendly conversation... There's a huge choice of what to do here; frankly, it's often not clear what the best approach is, and many theoretical questions remain open. I'll give some thoughts, but may raise more questions than answers.
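To make the second route concrete, the sketch below learns a posterior from simulated data for a toy conjugate model: simulate (theta, x) pairs, then fit a crude amortised conditional Gaussian q(theta | x) by least squares. This is an editorial illustration, not the speaker's method; the simulator, the Gaussian form of q, and all names are assumptions, and practical versions replace the linear fit with flexible (e.g. neural) conditional density estimators.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy simulator: theta ~ N(0, 1), x | theta ~ N(theta, 0.5^2)
    theta = rng.standard_normal(10_000)
    x = theta + 0.5 * rng.standard_normal(10_000)

    # Fit an amortised posterior q(theta | x) = N(w0 + w1 * x, s^2) from the simulations
    X = np.stack([np.ones_like(x), x], axis=1)
    w, *_ = np.linalg.lstsq(X, theta, rcond=None)   # posterior mean as a linear function of x
    s = (theta - X @ w).std()                       # shared posterior standard deviation (crude)

    x_obs = 1.0
    print(f"q(theta | x_obs) ~ N({w[0] + w[1] * x_obs:.2f}, {s:.2f}^2)")
    # Analytic posterior for this conjugate toy is N(0.8 * x_obs, 0.2), i.e. roughly N(0.80, 0.45^2)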

Yingzhen Li (University of Cambridge)

Gradient Estimators for Implicit Models

Abstract. This talk is organised in two parts. First, I will revisit fundamental tractability issues in Bayesian computation and argue that density evaluation of the approximate posterior is mostly unnecessary. Then I will present our recent work on an algorithm for fitting implicit posterior distributions. In a nutshell, we propose a gradient estimation method that allows variational inference to be applied to approximate distributions without a tractable density.
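As an illustration of the kind of quantity such an estimator provides, the sketch below implements a kernelised, Stein-identity-based score estimator: given samples from an implicit q, it estimates grad_x log q(x) at those samples, which is exactly the term standard variational inference cannot evaluate when q has no tractable density. The RBF kernel, the ridge parameter eta, and all names are assumptions for illustration; see the talk and the accompanying paper for the actual estimator and its tuning.

    import numpy as np

    def kernel_score_estimator(x, sigma=1.0, eta=1e-3):
        """Estimate g_i = grad_{x_i} log q(x_i) from samples x (shape K x D) of an
        implicit distribution q, using a Monte Carlo version of Stein's identity
        with an RBF kernel. Purely illustrative; hyperparameters are assumptions."""
        K, D = x.shape
        diff = x[:, None, :] - x[None, :, :]                         # diff[i, j] = x_i - x_j, shape (K, K, D)
        K_mat = np.exp(-np.sum(diff ** 2, -1) / (2 * sigma ** 2))    # RBF Gram matrix, shape (K, K)
        # <grad, K>[j] = sum_i d k(x_i, x_j) / d x_i = -sum_i K_ij (x_i - x_j) / sigma^2
        grad_K = -np.einsum('ij,ijd->jd', K_mat, diff) / sigma ** 2
        # Stein's identity gives K_mat @ G ~= -<grad, K>; solve with a ridge term for stability
        return -np.linalg.solve(K_mat + eta * np.eye(K), grad_K)     # (K, D) estimated scores

    # Quick check on a case with a known score: for q = N(0, I), grad log q(x) = -x
    samples = np.random.default_rng(0).standard_normal((200, 2))
    print(kernel_score_estimator(samples)[:3])
    print(-samples[:3])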

Dawen Liang (Netflix)

Variational Autoencoders for Recommendation

Abstract. In this talk, I will present how we extend variational autoencoders (VAEs) to collaborative filtering for implicit feedback. We introduce a different regularization parameter for the learning objective, which proves to be crucial for achieving competitive performance. The resulting model and learning algorithm have information-theoretic connections to maximum entropy discrimination and the information bottleneck principle, as well as to much recent work on understanding the trade-offs in learning latent variable models with VAEs. Empirically, we show that the proposed approach significantly outperforms state-of-the-art baselines on several real-world datasets. Finally, we identify the pros and cons of employing a principled Bayesian inference approach and characterize settings where it provides the most significant improvements.
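For reference, the regularized objective the abstract alludes to can be sketched as a standard VAE bound with the KL term scaled by a separate weight. The snippet below writes out such a beta-weighted objective for one user under a multinomial likelihood; the shapes, the default value of beta, and the variable names are illustrative assumptions, not the talk's exact implementation.

    import numpy as np

    def beta_weighted_objective(x, logits, mu, logvar, beta=0.2):
        """Sketch of a per-user VAE objective with a KL weight beta (an assumption here).
        x:          (D,) implicit-feedback vector (e.g. binary clicks) for one user
        logits:     (D,) decoder outputs defining a multinomial over the D items
        mu, logvar: (K,) mean and log-variance of the Gaussian variational posterior q(z | x)
        """
        log_softmax = logits - (logits.max() + np.log(np.exp(logits - logits.max()).sum()))
        log_lik = np.sum(x * log_softmax)                               # multinomial log-likelihood (up to constants)
        kl = 0.5 * np.sum(np.exp(logvar) + mu ** 2 - 1.0 - logvar)      # KL(q(z | x) || N(0, I))
        return log_lik - beta * kl                                      # beta < 1 weakens the prior regularisation

Setting beta = 1 recovers the usual evidence lower bound; the abstract's point is that treating this weight as a free regularization parameter matters in practice.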

Andreas Damianou (Amazon)

Variational inference in deep Gaussian processes

Abstract. Combining deep nets with probabilistic reasoning is challenging, because uncertainty needs to be propagated across the neural network during inference. This comes in addition to the (easier) propagation of gradients. In this talk I will discuss a family of variational approximation methods developed to tackle this computational issue in deep Gaussian processes, which can be seen as non-parametric Bayesian neural networks.
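The propagation problem can be pictured with a toy stack of GP layers: each layer returns a predictive mean and variance, and uncertainty is carried forward by sampling the hidden representation rather than passing only the mean. The sketch below does this by naive Monte Carlo and is only an illustration of the issue, not the variational scheme discussed in the talk; the kernel, the inducing-point construction, and all names are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def gp_layer(x, z, u, lengthscale=1.0, jitter=1e-2):
        """Toy GP layer: RBF-kernel predictive mean and variance at inputs x (N, D),
        conditioned on hypothetical inducing inputs z (M, D) and outputs u (M, P)."""
        k = lambda a, b: np.exp(-0.5 * np.sum((a[:, None] - b[None]) ** 2, -1) / lengthscale ** 2)
        A = np.linalg.solve(k(z, z) + jitter * np.eye(len(z)), k(x, z).T).T            # (N, M)
        mean = A @ u                                                                   # (N, P)
        var = np.clip(1.0 - np.sum(A * k(x, z), axis=1, keepdims=True), 1e-8, None)    # (N, 1)
        return mean, var

    def propagate(x, layers, n_samples=20):
        """Carry uncertainty through stacked GP layers by sampling each hidden layer."""
        outs = []
        for _ in range(n_samples):
            h = x
            for z, u in layers:
                mean, var = gp_layer(h, z, u)
                h = mean + np.sqrt(var) * rng.standard_normal(mean.shape)   # sample, don't just pass the mean
            outs.append(h)
        return np.stack(outs)                       # (n_samples, N, P): an empirical output distribution

    # Example: two stacked 1-D layers with 5 random inducing points each
    layers = [(rng.standard_normal((5, 1)), rng.standard_normal((5, 1))) for _ in range(2)]
    samples = propagate(np.linspace(-2, 2, 10)[:, None], layers)
    print(samples.mean(0).ravel(), samples.std(0).ravel())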

Antti Honkela (University of Helsinki)

Differential privacy and Bayesian learning

Abstract. Differential privacy allows deriving strong privacy guarantees for algorithms using private data. In my talk I will introduce and review different approaches to differentially private Bayesian learning, building on different forms of exact and approximate inference.
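One family of such approaches perturbs the sufficient statistics of a model before a conjugate Bayesian update, using a standard mechanism such as the Gaussian mechanism. The toy sketch below does this for a Beta-Bernoulli model; it is meant only to illustrate the flavour of combining differential privacy with exact (conjugate) inference, and the sensitivity assumption, hyperparameters, and names are editorial assumptions, not the talk's.

    import numpy as np

    rng = np.random.default_rng(1)

    def dp_beta_bernoulli_posterior(x, epsilon, delta, a0=1.0, b0=1.0):
        """Toy (epsilon, delta)-DP posterior for a Beta-Bernoulli model: perturb the
        sufficient statistic (number of ones) with the Gaussian mechanism, then do the
        usual conjugate update. Assumes each individual contributes one binary record,
        so the L2 sensitivity of the statistic is 1."""
        s = float(np.sum(x))                                       # sufficient statistic
        sigma = np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon      # Gaussian mechanism noise scale
        s_noisy = float(np.clip(s + sigma * rng.standard_normal(), 0.0, len(x)))
        return a0 + s_noisy, b0 + len(x) - s_noisy                 # Beta(a, b) pseudo-posterior

    # Example: 1000 coin flips with p = 0.3, released under (1.0, 1e-5)-differential privacy
    a, b = dp_beta_bernoulli_posterior(rng.random(1000) < 0.3, epsilon=1.0, delta=1e-5)
    print("posterior mean of p:", a / (a + b))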