In *Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS)*, 2018

We introduce a dynamic mechanism for the solution of analytically-tractable substructure in probabilistic programs, to reduce variance in Monte Carlo estimators. For inference with Sequential Monte Carlo, it yields improvements such as locally-optimal proposals and Rao-Blackwellization, while requiring little modification to model code. A directed graph is maintained alongside the running program, evolving dynamically as the program triggers operations upon it. Nodes of the graph represent random variables, and edges the analytically-tractable relationships between them (e.g. conjugate priors and affine transformations). Each random variable is held in the graph for as long as possible, and sampled only when the program uses it in a context that cannot be resolved analytically. This allows each variable to be analytically conditioned on as many observations as possible before sampling. We have implemented the approach in both Anglican and a new probabilistic programming language named Birch. We demonstrate it on a number of small examples, and on a larger mixed linear-nonlinear state-space model.
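The core idea can be sketched for a single conjugate pair. In this minimal Python illustration (class and method names are my own, not the paper's), a Gaussian-mean random variable stays in a symbolic, marginalized state while observations with known-variance Gaussian likelihoods are absorbed by the analytic Normal-Normal posterior update; it is sampled only when the program forces its concrete value:

```python
import random

# Hypothetical sketch of delayed sampling for one conjugate relationship:
# a Gaussian prior on a mean, with known-variance Gaussian observations.

class DelayedGaussian:
    """A random variable kept marginalized until its value is needed."""

    def __init__(self, mean, var):
        self.mean, self.var = mean, var  # current marginal (marginalized state)
        self.value = None                # set once the variable is realized

    def observe(self, y, lik_var):
        # Analytic conditioning: Normal-Normal conjugate posterior update
        # for an observation y ~ N(mean, lik_var). No sampling occurs.
        assert self.value is None, "cannot condition after sampling"
        prec = 1.0 / self.var + 1.0 / lik_var
        self.mean = (self.mean / self.var + y / lik_var) / prec
        self.var = 1.0 / prec

    def force(self):
        # Sample only when the program needs the concrete value,
        # i.e. from the posterior given all observations absorbed so far.
        if self.value is None:
            self.value = random.gauss(self.mean, self.var ** 0.5)
        return self.value

# Usage: three observations are conditioned on analytically before
# the single sample is drawn from the exact posterior.
mu = DelayedGaussian(0.0, 1.0)
for y in [0.9, 1.1, 1.0]:
    mu.observe(y, lik_var=0.25)
x = mu.force()
```

The paper generalizes this to a dynamically evolving graph of such variables, where edges record conjugate and affine relationships and realizing one node may trigger updates in its neighbors.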

```
@InProceedings{murray18a,
title = {Delayed Sampling and Automatic {R}ao-{B}lackwellization of Probabilistic Programs},
author = {Lawrence M. Murray and Daniel Lundén and Jan Kudlicka and David Broman and Thomas B. Schön},
booktitle = {Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS)},
series = {Proceedings of Machine Learning Research},
volume = {84},
pages = {1037--1046},
year = {2018},
url = {http://proceedings.mlr.press/v84/murray18a.html}
}
```