# analysis-by-synthesis by learning to invert generative black boxes, wip

19 April 2017

notes on (Nair, Susskind, & Hinton, 2008).

understanding: 5/10
code: ?

given:

• training data $\{y_n\}$
• black box generative model $p(y \given x)$

goal: approximate posterior $q_{\phi}(x \given y) \approx p(x \given y)$. (not exactly the posterior, since the black box comes with no defined prior over $x$).

## approach

1. pick $y_n$ from training data set.
2. run it through the recognition network and perturb the output to obtain $x$.
3. obtain $y$ from black box generator.
4. perform supervised learning on $(x, y)$:
   1. get $x'$ from the recognition network.
   2. calculate the loss between $x$ and $x'$.
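the loop above can be sketched in a toy setting. the black-box generator, the linear recognition model, and all hyperparameters below are my own illustrative assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
dx, dy = 3, 5                      # code and observation dimensions (assumed)

A = rng.normal(size=(dy, dx))
def generator(x):
    # "black box": we may sample y given x, but never differentiate through it
    return A @ np.tanh(x)

# training observations y_n, produced by codes we pretend not to know
data = np.array([generator(x) for x in rng.normal(size=(100, dx))])

W = np.zeros((dx, dy))             # linear recognition model: x' = W y
def recognize(y):
    return W @ y

lr, sigma = 0.05, 0.3
for step in range(5000):
    y_n = data[rng.integers(len(data))]                # 1. pick y_n
    x = recognize(y_n) + sigma * rng.normal(size=dx)   # 2. recognize, perturb
    y = generator(x)                                   # 3. run the black box
    err = recognize(y) - x                             # 4. supervised step on (x, y)
    W -= lr * np.outer(err, y)                         #    squared-error gradient
```

the point of step 3 is that it only *samples* from the box, yet it hands the recognition model exact $(x, y)$ pairs to regress on, so no gradients ever flow through the generator.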
Nair, V., Susskind, J., & Hinton, G. E. (2008). Analysis-by-synthesis by learning to invert generative black boxes. In ICANN.