Deep generative models of natural images
1 Spring 2016
3 Outline: Motivation; Variational autoencoders; Generative adversarial networks; Generative moment matching networks; Evaluating generative models
4 Generative models
- Have access to x ∼ p_data(x) through a training set
- Want to learn a model x ∼ p_model(x)
- Want p_model to be similar to p_data:
  - Samples drawn from p_model reflect the structure of p_data
  - Samples from the true data distribution have high likelihood under p_model
5 Why do generative modeling?
- Unsupervised representation learning: can transfer the learned representation to discriminative tasks, retrieval, clustering, etc.
- Train networks with both discriminative and generative criteria; utilize unlabeled data, regularize
- Understand data
- Density estimation
- Data augmentation...
6 Focus of this talk Generative modeling is a HUGE field... I will focus on (a selected set of) deep directed models of natural images
7 Outline 1 Motivation 2 3 Variational autoencoders Generative adversarial networks Generative moment matching networks Evaluating generative models 4
8 Directed graphical models (z → x)
- We assume data is generated by: z ∼ p(z), then x ∼ p(x|z)
- z is latent/hidden; x is observed (the image)
- Use θ to denote the parameters of the generative model
9 Deep directed graphical models
- p(x) = Σ_h p(x|h) p(h)
- This sum is intractable, so we can't optimize the data likelihood directly
10 Bounding the marginal likelihood
Recall Jensen's inequality: when f is concave, f(E[x]) ≥ E[f(x)]. Then:
log p(x) = log Σ_z p(x, z)
         = log Σ_z q(z) · p(x, z)/q(z)
         ≥ Σ_z q(z) log [p(x, z)/q(z)]   (by Jensen's inequality)
         = Σ_z q(z) log p(x, z) − Σ_z q(z) log q(z)
         = E_{q(z)}[log p(x, z)] + H(q(z))   (expectation of joint distribution + entropy)
         = L(x; θ, φ)
11 Evidence Lower BOund (ELBO)
The bound is tight when the variational approximation matches the true posterior:
log p(x) − L(x; θ, φ) = Σ_z q(z) log p(x) − Σ_z q(z) log [p(x, z)/q(z)]
                      = Σ_z q(z) log [q(z) p(x) / p(x, z)]
                      = Σ_z q(z) log [q(z) / p(z|x)]
                      = D_KL(q(z; φ) ‖ p(z|x))
12 Summary
- Assume the existence of q(z; φ)
- Bound log p(x; θ) with L(x; θ, φ)
- The bound is tight when D_KL(q(z; φ) ‖ p(z|x)) = 0, i.e. q(z; φ) = p(z|x)
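Both facts above (the ELBO lower-bounds log p(x), and the bound is tight at the true posterior) can be checked numerically on a toy model with a single binary latent variable. The probabilities below are made up for illustration:

```python
import numpy as np

# Toy model with a binary latent z: p(z) and p(x|z) fixed, x observed.
# Verifies log p(x) >= ELBO(q) for any q(z), with equality when q
# equals the true posterior p(z|x).
p_z = np.array([0.6, 0.4])          # prior over z in {0, 1}
p_x_given_z = np.array([0.2, 0.7])  # likelihood of the observed x under each z

p_xz = p_z * p_x_given_z            # joint p(x, z)
log_px = np.log(p_xz.sum())         # exact marginal log-likelihood

def elbo(q):
    """ELBO = E_q[log p(x,z)] + H(q) = sum_z q(z) log(p(x,z)/q(z))."""
    return np.sum(q * (np.log(p_xz) - np.log(q)))

q_bad = np.array([0.5, 0.5])        # arbitrary variational distribution
q_opt = p_xz / p_xz.sum()           # true posterior p(z|x)

assert elbo(q_bad) <= log_px              # bound holds for any q
assert np.isclose(elbo(q_opt), log_px)    # tight at the posterior
```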
13 Learning directed graphical models
Maximize the bound on the likelihood of the data:
max_θ Σ_{i=1}^N log p(x_i; θ)  ≥  max_{θ, φ_1,...,φ_N} Σ_{i=1}^N L(x_i; θ, φ_i)
- Historically, a different φ_i was used for every data point (but we'll move away from this soon...)
- q(z; φ) is typically a factorized distribution
- For more info see Blei et al. (2003)
14 New method of learning: approximate inference model
- Instead of having different variational parameters for each data point, fit a conditional parametric function
- The output of this function is the parameters of the variational distribution q(z|x)
- Instead of q(z) we have q_φ(z|x)
- The ELBO becomes: L(x; θ, φ) = E_{q_φ(z|x)}[log p_θ(x, z)] + H(q_φ(z|x))   (expectation of joint distribution + entropy)
16 Variational autoencoder
- Encoder network maps from image space to latent space; outputs the parameters of q_φ(z|x)
- Decoder network maps from latent space back to image space; outputs the parameters of p_θ(x|z)
[Kingma & Welling (2013)]
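A shape-level sketch of this encoder/decoder pipeline, using random weights and hypothetical layer sizes (purely illustrative; not the architecture of Kingma & Welling):

```python
import numpy as np

# Untrained VAE forward pass. The encoder emits the parameters
# (mu, log sigma^2) of q(z|x); the decoder emits Bernoulli pixel
# probabilities parameterizing p(x|z). Layer sizes are made up.
rng = np.random.default_rng(0)
x_dim, h_dim, z_dim = 784, 128, 20

W_enc, W_mu, W_lv = (rng.normal(0, 0.01, s) for s in
                     [(x_dim, h_dim), (h_dim, z_dim), (h_dim, z_dim)])
W_dec = rng.normal(0, 0.01, (z_dim, h_dim))
W_out = rng.normal(0, 0.01, (h_dim, x_dim))

x = rng.random(x_dim)                     # a fake flattened image
h = np.tanh(x @ W_enc)
mu, log_var = h @ W_mu, h @ W_lv          # parameters of q(z|x)

z = mu + np.exp(0.5 * log_var) * rng.standard_normal(z_dim)  # sample z
p_x = 1 / (1 + np.exp(-np.tanh(z @ W_dec) @ W_out))          # Bernoulli means

assert mu.shape == (z_dim,) and p_x.shape == (x_dim,)
assert np.all((p_x > 0) & (p_x < 1))      # valid pixel probabilities
```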
17 Variational autoencoder
Rearranging the ELBO:
L(x; θ, φ) = Σ_z q_φ(z|x) log [p(x, z)/q_φ(z|x)]
           = Σ_z q_φ(z|x) log [p(x|z) p(z)/q_φ(z|x)]
           = Σ_z q_φ(z|x) log p(x|z) + Σ_z q_φ(z|x) log [p(z)/q_φ(z|x)]
           = E_{q(z|x)}[log p(x|z)] − E_{q(z|x)}[log (q(z|x)/p(z))]
           = E_{q(z|x)}[log p(x|z)] − D_KL(q(z|x) ‖ p(z))
             (reconstruction term)    (prior term)
18 Variational autoencoder
- Inference network outputs the parameters of q_φ(z|x)
- Generative network outputs the parameters of p_θ(x|z)
- Optimize θ and φ jointly by maximizing the ELBO:
  L(x; θ, φ) = E_{q(z|x)}[log p(x|z)] − D_KL(q(z|x) ‖ p(z))   (reconstruction term − prior term)
19 Stochastic gradient variational Bayes (SGVB) estimator
- Reparameterization trick: re-parameterize z ∼ q_φ(z|x) as z = g_φ(x, ε) with ε ∼ p(ε)
- For example, for a Gaussian we can write z ∼ N(µ, σ²) as z = µ + σε with ε ∼ N(0, 1)
[Kingma & Welling (2013); Rezende et al. (2014)]
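A minimal numerical check of the trick for the Gaussian case: the noise ε carries all the randomness, so z is a deterministic function of (µ, σ) through which gradients can flow, yet has the intended distribution:

```python
import numpy as np

# Reparameterize z ~ N(mu, sigma^2) as z = mu + sigma * eps, eps ~ N(0, 1).
rng = np.random.default_rng(0)
mu, sigma = 2.0, 0.5

eps = rng.standard_normal(100_000)
z = mu + sigma * eps        # deterministic in (mu, sigma) given eps

# Empirical moments match the target distribution N(mu, sigma^2).
assert abs(z.mean() - mu) < 0.01
assert abs(z.std() - sigma) < 0.01
```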
20 Stochastic gradient variational Bayes (SGVB) estimator
L(x; θ, φ) = E_{q(z|x)}[log p(x|z)] − D_KL(q(z|x) ‖ p(z))   (reconstruction term − prior term)
- Using the reparameterization trick we form a Monte Carlo estimate of the reconstruction term:
  E_{q_φ(z|x)}[log p_θ(x|z)] = E_{p(ε)}[log p_θ(x | g_φ(x, ε))] ≈ (1/L) Σ_{l=1}^L log p_θ(x | g_φ(x, ε_l)), where ε_l ∼ p(ε)
- The KL divergence term can often be computed analytically (e.g. Gaussian)
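For the Gaussian case, the analytic KL term is D_KL(N(µ, σ²) ‖ N(0, 1)) = ½(µ² + σ² − 1 − log σ²), and a Monte Carlo estimate built from reparameterized samples should agree with it (µ and σ below are arbitrary):

```python
import numpy as np

# Analytic KL between q(z|x) = N(mu, sigma^2) and the prior p(z) = N(0, 1),
# compared with a Monte Carlo estimate E_q[log q(z) - log p(z)].
rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.8

kl_analytic = 0.5 * (mu**2 + sigma**2 - 1.0 - np.log(sigma**2))

z = mu + sigma * rng.standard_normal(200_000)   # reparameterized samples
log_q = -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)
log_p = -0.5 * z**2 - 0.5 * np.log(2 * np.pi)
kl_mc = np.mean(log_q - log_p)

assert abs(kl_mc - kl_analytic) < 0.02
```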
21 VAE learned manifold [Kingma & Welling (2013)]
22 VAE samples [Kingma & Welling (2013)]
23 VAE tradeoffs
Pros:
- Theoretically pleasing
- Optimizes a bound on the likelihood
- Easy to implement
Cons:
- Samples tend to be blurry
- Maximum likelihood minimizes D_KL(p_data ‖ p_model) [Theis et al. (2016)]
24 Generative adversarial networks
- Don't focus on optimizing p(x); just learn to sample
- Two networks pitted against one another:
  - Generative model G captures the data distribution
  - Discriminative model D distinguishes between real and fake samples
[Goodfellow et al. (2014)]
25 Generative adversarial networks
- D is trained to estimate the probability that a sample came from the data distribution rather than from G
- G is trained to maximize the probability of D making a mistake:
  min_G max_D  E_{x ∼ p_data(x)}[log D(x)] + E_{z ∼ p_noise(z)}[log(1 − D(G(z)))]
26 GAN samples
27 GAN tradeoffs
Pros:
- Very powerful model
- High quality samples
Cons:
- Tricky to train (no clear objective to track; instability)
- Can ignore large parts of image space, because training approximately minimizes the Jensen-Shannon divergence
[Goodfellow et al. (2014); Theis et al. (2016); Huszar (2015)]
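The Jensen-Shannon connection can be checked directly: for a fixed G, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)), and plugging it into the GAN value function gives −log 4 + 2·JSD(p_data ‖ p_g). A numerical verification on small discrete distributions (made-up probabilities):

```python
import numpy as np

# Discrete p_data and p_g over a 3-point support.
p_data = np.array([0.5, 0.3, 0.2])
p_g = np.array([0.2, 0.3, 0.5])

# Optimal discriminator for fixed G, and the resulting value function
# V(D*, G) = E_data[log D*] + E_g[log(1 - D*)].
d_star = p_data / (p_data + p_g)
value = np.sum(p_data * np.log(d_star)) + np.sum(p_g * np.log(1 - d_star))

# Jensen-Shannon divergence between p_data and p_g.
m = 0.5 * (p_data + p_g)
kl = lambda p, q: np.sum(p * np.log(p / q))
jsd = 0.5 * kl(p_data, m) + 0.5 * kl(p_g, m)

assert np.isclose(value, -np.log(4) + 2 * jsd)
```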
28 Generative moment matching networks
- Same idea as GANs, but a different optimization method: match the moments of the data and generative distributions
- Maximum mean discrepancy (MMD): an estimator for answering whether two samples come from the same distribution
- Evaluate MMD on generated samples
[Li et al. (2015); Dziugaite et al. (2015)]
29 Generative moment matching networks
L_MMD² = ‖ (1/N) Σ_{i=1}^N φ(x_i) − (1/M) Σ_{j=1}^M φ(y_j) ‖²
       = (1/N²) Σ_{i=1}^N Σ_{i'=1}^N φ(x_i)ᵀφ(x_{i'}) − (2/NM) Σ_{i=1}^N Σ_{j=1}^M φ(x_i)ᵀφ(y_j) + (1/M²) Σ_{j=1}^M Σ_{j'=1}^M φ(y_j)ᵀφ(y_{j'})
- Can make use of the kernel trick
- If φ is the identity, this matches means; a more complex φ can match higher order moments
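A minimal biased MMD² estimator in the kernel-trick form above, using an RBF kernel with a hypothetical fixed bandwidth (not the multi-scale kernels used in the cited papers):

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased MMD^2 estimate between samples x and y via an RBF kernel."""
    def k(a, b):
        # Pairwise squared distances, then the Gaussian kernel.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth**2))
    # Kernel-trick expansion of ||mean phi(x) - mean phi(y)||^2.
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
same = rng.standard_normal((500, 2))
also_same = rng.standard_normal((500, 2))
shifted = rng.standard_normal((500, 2)) + 2.0

# Samples from the same distribution give a much smaller MMD than
# samples from different distributions.
assert mmd2(same, also_same) < mmd2(same, shifted)
```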
30 GMMN samples
31 GMMN tradeoffs
Pros:
- Theoretically pleasing
Cons:
- Batch size is very important
- Samples aren't great (they get better when combined with an autoencoder)
[Theis et al. (2016)]
32 How to evaluate a generative model?
- Log likelihood on held-out data
  - Makes sense when the goal is density estimation
  - Many approaches don't have a tractable likelihood, or it isn't explicitly represented; have to resort to Parzen window estimates [Breuleux et al. (2009)]
- Quality of samples
  - But a lookup table of training images will succeed here...
- Best: evaluate in the context of a particular application
See Theis et al. (2016) for more details
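A sketch of the Parzen window protocol: place a Gaussian kernel on each model sample and score held-out data under the resulting mixture. The bandwidth below is a made-up choice; in practice it is tuned on a validation set:

```python
import numpy as np

def parzen_log_likelihood(samples, data, sigma=0.2):
    """Mean log-likelihood of `data` under a Gaussian kernel density
    centered on each row of `samples` (isotropic, bandwidth sigma)."""
    d = samples.shape[1]
    sq = ((data[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    log_k = -sq / (2 * sigma**2) - 0.5 * d * np.log(2 * np.pi * sigma**2)
    # log mean_j k(x_i, s_j), computed stably with log-sum-exp.
    m = log_k.max(axis=1, keepdims=True)
    log_p = m.squeeze(1) + np.log(np.exp(log_k - m).mean(axis=1))
    return log_p.mean()

rng = np.random.default_rng(0)
model_samples = rng.standard_normal((2000, 2))   # "samples from the model"
held_out = rng.standard_normal((200, 2))         # matches the model
far_away = rng.standard_normal((200, 2)) + 5.0   # does not match

# Data matching the sample distribution scores higher than mismatched data.
assert parzen_log_likelihood(model_samples, held_out) > \
       parzen_log_likelihood(model_samples, far_away)
```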
34 DRAW: Deep Recurrent Attentive Writer
Basic idea:
- Iteratively construct the image; observe the image through a sequence of glimpses
- Recurrent encoder and decoder; optimizes a variational bound
- Attention mechanism determines the input region observed by the encoder and the output region modified by the decoder
[Gregor et al. (2015)]
35 DRAW samples
36 Generating images from captions
- Language model: bidirectional RNN
- Image model: conditional DRAW network
- Image sharpening with a Laplacian pyramid adversarial network
[Mansimov et al. (2016)]
37 Generating images from captions [Mansimov et al. (2016)]
38 Laplacian pyramid of adversarial networks
- Difficult to generate large images in one shot; break the problem up into a sequence of manageable steps
- Samples are drawn in a coarse-to-fine fashion
- Each scale is a convnet trained using the GAN framework
[Denton et al. (2015)]
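The coarse-to-fine decomposition can be sketched with a minimal Laplacian pyramid (box filtering for brevity rather than the blur kernels used in LAPGAN). Each level stores the residual detail that a generator at that scale would have to produce, and reconstruction from base plus residuals is exact:

```python
import numpy as np

def down(img):
    # 2x downsample by averaging 2x2 blocks.
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean((1, 3))

def up(img):
    # 2x upsample by pixel replication.
    return img.repeat(2, axis=0).repeat(2, axis=1)

def build_pyramid(img, levels):
    """Residuals (fine to coarse) plus the low-resolution base image."""
    residuals = []
    for _ in range(levels):
        coarse = down(img)
        residuals.append(img - up(coarse))   # high-frequency detail at this scale
        img = coarse
    return residuals, img

def reconstruct(residuals, base):
    """Coarse-to-fine: upsample, then add back the stored detail."""
    img = base
    for r in reversed(residuals):
        img = up(img) + r
    return img

rng = np.random.default_rng(0)
x = rng.random((16, 16))
residuals, base = build_pyramid(x, levels=3)
assert np.allclose(reconstruct(residuals, base), x)   # exact reconstruction
```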
39 LAPGAN training procedure (figure: images I_0...I_3 at successive scales; each generator G_k takes noise z_k and an upsampled coarser image l_k and produces the detail h̃_k; each discriminator D_k classifies its level as real or generated)
40 LAPGAN samples
41 LAPGAN samples
42 Deep convolutional generative adversarial networks (DCGAN) Radford et al. (2016) propose several tricks to make GAN training more stable
43 DCGAN vector arithmetic
44 Texture synthesis with spatial LSTMs
- Two-dimensional LSTM
- Sequentially predict pixels in an image conditioned on previous pixels
[Theis & Bethge (2015)]
45 Texture synthesis with spatial LSTMs [Theis & Bethge (2015)]
46 Pixel recurrent neural networks
- Sequentially predict pixels in an image conditioned on previous pixels
- Uses a spatial LSTM
[van den Oord et al. (2016)]
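The sequential factorization behind these models is log p(x) = Σ_i log p(x_i | x_<i). It can be illustrated with a toy binary image and a made-up conditional standing in for the LSTM; the product of conditionals defines a valid distribution over whole images:

```python
import itertools
import numpy as np

def cond_prob_one(prev):
    # p(x_i = 1 | x_{i-1} = prev); made-up numbers, not a trained model.
    return 0.8 if prev == 1 else 0.3

def log_likelihood(pixels):
    """log p(x) = sum_i log p(x_i | x_<i) for a flattened binary image,
    conditioning only on the immediately previous pixel for simplicity."""
    logp = np.log(0.5)                  # uniform distribution over the first pixel
    for prev, x in zip(pixels, pixels[1:]):
        p1 = cond_prob_one(prev)
        logp += np.log(p1 if x == 1 else 1.0 - p1)
    return logp

# Summing p(x) over all 2^n binary images gives 1: the pixel-by-pixel
# factorization is a proper density over whole images.
n = 6
total = sum(np.exp(log_likelihood(bits))
            for bits in itertools.product([0, 1], repeat=n))
assert np.isclose(total, 1.0)
```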
47 Appendix: References
- Blei, David M., Ng, Andrew Y., & Jordan, Michael I. (2003). Latent Dirichlet Allocation. Journal of Machine Learning Research.
- Breuleux, O., Bengio, Y., & Vincent, P. (2009). Unlearning for better mixing. Technical report, Universite de Montreal.
- Denton, Emily, Chintala, Soumith, Szlam, Arthur, & Fergus, Rob (2015). Deep generative image models using a Laplacian pyramid of adversarial networks. In: NIPS.
- Dziugaite, Gintare Karolina, Roy, Daniel M., & Ghahramani, Zoubin (2015). Training generative neural networks via Maximum Mean Discrepancy optimization. In: UAI.
- Goodfellow, Ian J., Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron C., & Bengio, Yoshua (2014). Generative adversarial networks. In: NIPS.
- Gregor, Karol, Danihelka, Ivo, Graves, Alex, & Wierstra, Daan (2015). DRAW: A Recurrent Neural Network For Image Generation. CoRR.
- Huszar, F. (2015). How (not) to train your generative model: scheduled sampling, likelihood, adversary? arXiv preprint.
- Kingma, Diederik P., & Welling, Max (2013). Auto-encoding variational Bayes. arXiv preprint.
- Li, Yujia, Swersky, Kevin, & Zemel, Richard (2015). Generative Moment Matching Networks. In: ICML.
- Mansimov, Elman, Parisotto, Emilio, Ba, Jimmy, & Salakhutdinov, Ruslan (2016). Generating Images from Captions with Attention. In: ICLR.
- Radford, Alec, Metz, Luke, & Chintala, Soumith (2016). Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. In: ICLR.
- Rezende, Danilo Jimenez, Mohamed, Shakir, & Wierstra, Daan (2014). Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint.
- Theis, Lucas, & Bethge, Matthias (2015). Generative Image Modeling Using Spatial LSTMs. In: NIPS.
- Theis, Lucas, van den Oord, Aaron, & Bethge, Matthias (2016). A note on the evaluation of generative models. In: ICLR.
- van den Oord, Aaron, Kalchbrenner, Nal, & Kavukcuoglu, Koray (2016). Pixel Recurrent Neural Networks. In: ICML.
More informationSemi-Amortized Variational Autoencoders
Semi-Amortized Variational Autoencoders Yoon Kim Sam Wiseman Andrew Miller David Sontag Alexander Rush Code: https://github.com/harvardnlp/sa-vae Background: Variational Autoencoders (VAE) (Kingma et al.
More informationProgressive Generative Hashing for Image Retrieval
Progressive Generative Hashing for Image Retrieval Yuqing Ma, Yue He, Fan Ding, Sheng Hu, Jun Li, Xianglong Liu 2018.7.16 01 BACKGROUND the NNS problem in big data 02 RELATED WORK Generative adversarial
More informationGENERATIVE ADVERSARIAL NETWORK-BASED VIR-
GENERATIVE ADVERSARIAL NETWORK-BASED VIR- TUAL TRY-ON WITH CLOTHING REGION Shizuma Kubo, Yusuke Iwasawa, and Yutaka Matsuo The University of Tokyo Bunkyo-ku, Japan {kubo, iwasawa, matsuo}@weblab.t.u-tokyo.ac.jp
More informationGeometric Enclosing Networks
Geometric Enclosing Networks Trung Le, Hung Vu, Tu Dinh Nguyen and Dinh Phung Faculty of Information Technology, Monash University Center for Pattern Recognition and Data Analytics, Deakin University,
More informationThe Multi-Entity Variational Autoencoder
The Multi-Entity Variational Autoencoder Charlie Nash 1,2, S. M. Ali Eslami 2, Chris Burgess 2, Irina Higgins 2, Daniel Zoran 2, Theophane Weber 2, Peter Battaglia 2 1 Edinburgh University 2 DeepMind Abstract
More informationCoupled Generative Adversarial Nets
MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Coupled Generative Adversarial Nets Liu, M.-Y.; Tuzel, O. TR206-070 June 206 Abstract We propose the coupled generative adversarial nets (CoGAN)
More informationGenerative Models II. Phillip Isola, MIT, OpenAI DLSS 7/27/18
Generative Models II Phillip Isola, MIT, OpenAI DLSS 7/27/18 What s a generative model? For this talk: models that output high-dimensional data (Or, anything involving a GAN, VAE, PixelCNN, etc) Useful
More informationDeep Generative Models and a Probabilistic Programming Library
Deep Generative Models and a Probabilistic Programming Library Discriminative (Deep) Learning Learn a (differentiable) function mapping from input to output x f(x; θ) y Gradient back-propagation Generative
More informationGenerative Adversarial Networks (GANs)
Generative Adversarial Networks (GANs) Hossein Azizpour Most of the slides are courtesy of Dr. Ian Goodfellow (Research Scientist at OpenAI) and from his presentation at NIPS 2016 tutorial Note. I am generally
More informationarxiv: v1 [cs.cv] 20 Mar 2017
I2T2I: LEARNING TEXT TO IMAGE SYNTHESIS WITH TEXTUAL DATA AUGMENTATION Hao Dong, Jingqing Zhang, Douglas McIlwraith, Yike Guo arxiv:1703.06676v1 [cs.cv] 20 Mar 2017 Data Science Institute, Imperial College
More informationLecture 19: Generative Adversarial Networks
Lecture 19: Generative Adversarial Networks Roger Grosse 1 Introduction Generative modeling is a type of machine learning where the aim is to model the distribution that a given set of data (e.g. images,
More informationarxiv: v1 [cs.cv] 17 Nov 2016
Inverting The Generator Of A Generative Adversarial Network arxiv:1611.05644v1 [cs.cv] 17 Nov 2016 Antonia Creswell BICV Group Bioengineering Imperial College London ac2211@ic.ac.uk Abstract Anil Anthony
More informationarxiv: v1 [stat.ml] 31 May 2018
Ratio Matching MMD Nets: Low dimensional projections for effective deep generative models arxiv:1806.00101v1 [stat.ml] 31 May 2018 Akash Srivastava School of Informatics akash.srivastava@ed.ac.uk Kai Xu
More informationFlow-GAN: Bridging implicit and prescribed learning in generative models
Aditya Grover 1 Manik Dhar 1 Stefano Ermon 1 Abstract Evaluating the performance of generative models for unsupervised learning is inherently challenging due to the lack of welldefined and tractable objectives.
More informationAutoencoders. Stephen Scott. Introduction. Basic Idea. Stacked AE. Denoising AE. Sparse AE. Contractive AE. Variational AE GAN.
Stacked Denoising Sparse Variational (Adapted from Paul Quint and Ian Goodfellow) Stacked Denoising Sparse Variational Autoencoding is training a network to replicate its input to its output Applications:
More informationTexture Synthesis with Spatial Generative Adversarial Networks
Texture Synthesis with Spatial Generative Adversarial Networks Nikolay Jetchev Urs Bergmann Roland Vollgraf Zalando Research {nikolay.jetchev,urs.bergmann,roland.vollgraf}@zalando.de 1 Abstract Generative
More informationModeling and Transforming Speech using Variational Autoencoders
Modeling and Transforming Speech using Variational Autoencoders Merlijn Blaauw, Jordi Bonada Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain merlijn.blaauw@upf.edu, jordi.bonada@upf.edu
More informationVisual Recommender System with Adversarial Generator-Encoder Networks
Visual Recommender System with Adversarial Generator-Encoder Networks Bowen Yao Stanford University 450 Serra Mall, Stanford, CA 94305 boweny@stanford.edu Yilin Chen Stanford University 450 Serra Mall
More informationarxiv: v1 [cs.cv] 7 Mar 2018
Accepted as a conference paper at the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN) 2018 Inferencing Based on Unsupervised Learning of Disentangled
More informationAn Empirical Study of Generative Adversarial Networks for Computer Vision Tasks
An Empirical Study of Generative Adversarial Networks for Computer Vision Tasks Report for Undergraduate Project - CS396A Vinayak Tantia (Roll No: 14805) Guide: Prof Gaurav Sharma CSE, IIT Kanpur, India
More informationAutoencoding Beyond Pixels Using a Learned Similarity Metric
Autoencoding Beyond Pixels Using a Learned Similarity Metric International Conference on Machine Learning, 2016 Anders Boesen Lindbo Larsen, Hugo Larochelle, Søren Kaae Sønderby, Ole Winther Technical
More informationGANosaic: Mosaic Creation with Generative Texture Manifolds
GANosaic: Mosaic Creation with Generative Texture Manifolds Nikolay Jetchev nikolay.jetchev@zalando.de Zalando Research Urs Bergmann urs.bergmann@zalando.de Zalando Research Abstract Calvin Seward calvin.seward@zalando.de
More information