Tuan Anh Le

Neural Adaptive Sequential Monte Carlo

19 February 2018

Goal: Given a state space model $p(x_{1:T}, y_{1:T})$, we want to learn proposal distributions $q_\phi(x_{1:T} \given y_{1:T})$.
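For concreteness (this factorization is the standard one for state space models with per-step proposals, and is assumed here rather than stated above; the symbols $\mu$, $f$, $g$ are not used elsewhere in these notes), the model and proposal can be taken to factorize as \begin{align} p(x_{1:T}, y_{1:T}) &= \mu(x_1) g(y_1 \given x_1) \prod_{t = 2}^T f(x_t \given x_{t - 1}) g(y_t \given x_t), \\ q_\phi(x_{1:T} \given y_{1:T}) &= \prod_{t = 1}^T q_\phi(x_t \given x_{1:t - 1}, y_{1:t}), \end{align} where $\mu$, $f$, and $g$ denote the initial, transition, and observation densities respectively.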

We minimize $\KL{p(x_{1:T} \given y_{1:T})}{q_\phi(x_{1:T} \given y_{1:T})}$ using stochastic gradient descent, where the gradient is: \begin{align} \frac{\partial}{\partial \phi} \KL{p(x_{1:T} \given y_{1:T})}{q_\phi(x_{1:T} \given y_{1:T})} &= \frac{\partial}{\partial \phi} \int p(x_{1:T} \given y_{1:T}) \log \frac{p(x_{1:T} \given y_{1:T})}{q_\phi(x_{1:T} \given y_{1:T})} \,\mathrm dx_{1:T} \\ &= -\int p(x_{1:T} \given y_{1:T}) \frac{\partial}{\partial \phi} \log q_\phi(x_{1:T} \given y_{1:T}) \,\mathrm dx_{1:T}. \label{eq:gradient} \end{align}
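The second equality holds because the posterior $p(x_{1:T} \given y_{1:T})$ does not depend on $\phi$; expanding the logarithm of the ratio makes this explicit: \begin{align} \frac{\partial}{\partial \phi} \int p(x_{1:T} \given y_{1:T}) \left[\log p(x_{1:T} \given y_{1:T}) - \log q_\phi(x_{1:T} \given y_{1:T})\right] \mathrm dx_{1:T} = -\int p(x_{1:T} \given y_{1:T}) \frac{\partial}{\partial \phi} \log q_\phi(x_{1:T} \given y_{1:T}) \,\mathrm dx_{1:T}, \end{align} since the first term inside the brackets is constant in $\phi$.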

This gradient can be estimated using importance sampling (or sequential Monte Carlo): \begin{align} -\sum_{k = 1}^K \bar w_T^k \frac{\partial}{\partial \phi} \log q_\phi(x_{1:T}^k \given y_{1:T}), \label{eq:gradient-estimator-1} \end{align} where $\bar w_T^k$ are the normalized weights and $x_{1:T}^k$ are the particle values of an importance sampler (or the final normalized weights and particle values of SMC). This gradient estimator is biased but consistent.
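As a concrete illustration (not taken from the paper), the sketch below computes \eqref{eq:gradient-estimator-1} with self-normalized importance sampling in PyTorch. The one-dimensional linear-Gaussian model, the Gaussian proposal centred on the observations, and all names (`phi`, `log_joint`, `propose`) are assumptions made purely for this example:

```python
import torch

# Sketch of the self-normalized importance sampling gradient estimator above.
# Assumed toy model: x_1 ~ N(0, 1), x_t ~ N(x_{t-1}, 1), y_t ~ N(x_t, 1).
torch.manual_seed(0)
T, K = 5, 100  # time steps, particles
y = torch.randn(T)

# Proposal parameters phi: a shift and log-scale applied to the observations.
phi = torch.zeros(2, requires_grad=True)

def log_joint(x):
    # x: (K, T) particle trajectories; returns log p(x_{1:T}, y_{1:T}) per particle.
    lp = torch.distributions.Normal(0.0, 1.0).log_prob(x[:, 0])
    lp = lp + torch.distributions.Normal(x[:, :-1], 1.0).log_prob(x[:, 1:]).sum(-1)
    lp = lp + torch.distributions.Normal(x, 1.0).log_prob(y).sum(-1)
    return lp

def propose(K):
    # q_phi(x_t | y_t) = N(y_t + shift, exp(log_scale)^2), independently per step.
    q = torch.distributions.Normal(y + phi[0], phi[1].exp())
    x = q.sample((K,))             # samples carry no gradient
    log_q = q.log_prob(x).sum(-1)  # log q_phi(x_{1:T}^k | y_{1:T}), differentiable in phi
    return x, log_q

x, log_q = propose(K)
log_w = log_joint(x) - log_q.detach()   # unnormalized log weights (no gradient)
w_bar = torch.softmax(log_w, dim=0)     # normalized weights \bar w_T^k
loss = -(w_bar * log_q).sum()           # - sum_k \bar w_T^k log q_phi(x_{1:T}^k | y_{1:T})
loss.backward()                         # phi.grad now holds the gradient estimate
```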

In the case of SMC, one could also estimate \eqref{eq:gradient} as: \begin{align} -\sum_{k = 1}^K \left(\bar w_1^k \frac{\partial}{\partial \phi} \log q_\phi(x_1^k \given y_1) + \sum_{t = 2}^T \bar w_t^k \frac{\partial}{\partial \phi} \log q_\phi(x_t^k \given x_{1:t - 1}^{a_{t - 1}^k}, y_{1:t}) \right), \label{eq:gradient-estimator-2} \end{align} where $\bar w_t^k$ and $x_t^k$ are the intermediate normalized particle weights and values, and $a_{t - 1}^k$ are the ancestor indices. This is what is done in the paper. This gradient estimator is biased, and it is not clear whether it is consistent. It is also not clear whether the bias and/or variance of \eqref{eq:gradient-estimator-2} is smaller than that of \eqref{eq:gradient-estimator-1}.
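Under the same toy-model assumptions as above, a sketch of \eqref{eq:gradient-estimator-2} accumulates one weighted term per time step inside an SMC sweep with multinomial resampling; again, the model, the proposal family, and all names are illustrative rather than taken from the paper:

```python
import torch

# Sketch of the per-step SMC gradient estimator above, for the same toy
# linear-Gaussian model and per-step Gaussian proposal q_phi(x_t | y_t).
torch.manual_seed(0)
T, K = 5, 100
y = torch.randn(T)
phi = torch.zeros(2, requires_grad=True)  # (shift, log-scale)

def proposal(t):
    return torch.distributions.Normal(y[t] + phi[0], phi[1].exp())

loss = 0.0
x_prev = None
log_w = torch.zeros(K)
for t in range(T):
    if t > 0:
        # Multinomial resampling: draw ancestor indices a_{t-1}^k, reset weights.
        a = torch.multinomial(torch.softmax(log_w, dim=0), K, replacement=True)
        x_prev = x_prev[a]
        log_w = torch.zeros(K)

    q_t = proposal(t)
    x_t = q_t.sample((K,))
    log_q_t = q_t.log_prob(x_t)  # log q_phi(x_t^k | ...), differentiable in phi

    # Weight update: initial/transition density times likelihood over proposal.
    log_f = torch.distributions.Normal(
        x_prev if t > 0 else torch.zeros(K), 1.0).log_prob(x_t)
    log_g = torch.distributions.Normal(x_t, 1.0).log_prob(y[t])
    log_w = log_w + log_f + log_g - log_q_t.detach()

    # Accumulate - sum_k \bar w_t^k d/dphi log q_phi(x_t^k | ...).
    w_bar_t = torch.softmax(log_w, dim=0)
    loss = loss - (w_bar_t * log_q_t).sum()

    x_prev = x_t

loss.backward()  # phi.grad now holds the per-step gradient estimate
```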
