analysis-by-synthesis by learning to invert generative black boxes, wip
19 April 2017
notes on (Nair et al., 2008).
understanding: 5/10
code: ?
task
given:
- training data \(\{y_n\}\)
- black box generative model \(p(y \given x)\)
goal: learn an approximate posterior \(q_{\phi}(x \given y) \approx p(x \given y)\). (not exactly a posterior, since no prior over \(x\) is defined.)
approach
- pick \(y_n\) from the training data set.
- run it through the recognition network and perturb the resulting code to obtain \(x\).
- obtain \(y\) by running \(x\) through the black-box generator.
- perform supervised learning on the pair \((x, y)\) (see the sketch below):
  - get \(x'\) by feeding \(y\) to the recognition network.
  - calculate the loss between \(x\) and \(x'\).
  - take a gradient step on \(\phi\).
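a minimal sketch of one training step, assuming a PyTorch recognition network `recognizer`, a callable black-box generator `generate`, a Gaussian perturbation of scale `sigma`, and a squared-error loss (all of these names and choices are assumptions for illustration, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def train_step(recognizer, generate, y_n, optimizer, sigma=0.1):
    """One step of learning to invert a generative black box (sketch)."""
    with torch.no_grad():
        x_hat = recognizer(y_n)                      # code for the training example
        x = x_hat + sigma * torch.randn_like(x_hat)  # perturb to get a nearby code x
        y = generate(x)                              # run x through the black-box generator
    # supervised learning on the (x, y) pair:
    x_prime = recognizer(y)                          # x' from the recognition network
    loss = F.mse_loss(x_prime, x)                    # loss between x and x'
    optimizer.zero_grad()
    loss.backward()                                  # gradient step on phi
    optimizer.step()
    return loss.item()
```

note that gradients never flow through the black box; it is only used to produce \((x, y)\) pairs for ordinary supervised training of the recognition network.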
references
- Nair, V., Susskind, J., & Hinton, G. E. (2008). Analysis-by-Synthesis by Learning to Invert Generative Black Boxes.
@inproceedings{nair2008analysis,
title = {Analysis-by-Synthesis by Learning to Invert Generative Black Boxes},
author = {Nair, Vinod and Susskind, Josh and Hinton, Geoffrey E.},
year = {2008}
}