Implicit generative models: dual vs. primal approaches

1 Implicit generative models: dual vs. primal approaches. Ilya Tolstikhin, MPI for Intelligent Systems. Machine Learning Summer School 2017, Tübingen, Germany.

2 Contents

1. Unsupervised generative modelling and implicit models
2. Distances on probability measures
3. GAN and f-GAN: minimizing f-divergences (dual formulation)
4. WGAN: minimizing the optimal transport (dual formulation)
5. VAE: minimizing the KL-divergence (primal formulation)
6. POT: minimizing the optimal transport (primal formulation)
7. Dual vs. primal: precision vs. recall? Unifying VAE and GAN

Most importantly: WE NEED AN ADEQUATE WAY TO EVALUATE THE MODELS

4 The task: there exists an unknown distribution $P_X$ over the data space $\mathcal{X}$, and we have an i.i.d. sample $X_1, \dots, X_n$ from $P_X$. Find a model distribution $P_G$ over $\mathcal{X}$ similar to $P_X$.

We will work with latent variable models $P_G$ defined by two steps:
1. Sample a code $Z$ from the latent space $\mathcal{Z}$;
2. Map $Z$ to $G(Z) \in \mathcal{X}$ with a (random) transformation $G\colon \mathcal{Z} \to \mathcal{X}$.

$$p_G(x) := \int_{\mathcal{Z}} p_G(x|z)\, p_Z(z)\, dz.$$

All techniques mentioned in this talk share two features: while $P_G$ has no analytical expression, it is easy to sample from, and the objective allows for SGD training.
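
To make the two-step sampling concrete, here is a minimal sketch of drawing from such an implicit model (the architecture and sizes are hypothetical; PyTorch is assumed):

import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # hypothetical sizes

# A deterministic generator G: Z -> X. P_G has no tractable density,
# but sampling from it is trivial.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))

z = torch.randn(16, latent_dim)  # step 1: sample codes Z ~ P_Z = N(0, I)
x = G(z)                         # step 2: map codes to the data space, x ~ P_G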

6 How to measure a similarity between $P_X$ and $P_G$?

f-divergences: take any convex $f\colon (0, \infty) \to \mathbb{R}$ with $f(1) = 0$.

$$D_f(P\|Q) := \int_{\mathcal{X}} f\!\left(\frac{p(x)}{q(x)}\right) q(x)\, dx$$

Integral probability metrics: take any class $\mathcal{F}$ of bounded real-valued functions on $\mathcal{X}$.

$$\gamma_{\mathcal{F}}(P, Q) := \sup_{f \in \mathcal{F}} \mathbb{E}_P[f(X)] - \mathbb{E}_Q[f(Y)]$$

Optimal transport: take any cost $c(x, y)\colon \mathcal{X} \times \mathcal{X} \to \mathbb{R}_+$.

$$W_c(P, Q) := \inf_{\Gamma \in \mathcal{P}(X\sim P,\, Y\sim Q)} \mathbb{E}_{(X,Y)\sim\Gamma}[c(X, Y)],$$

where $\mathcal{P}(X\sim P, Y\sim Q)$ is the set of all joint distributions of $(X, Y)$ with marginals $P$ and $Q$ respectively.
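
For concreteness, two standard members of these families (well-known facts, not from the slides): $f(u) = u\log u$ recovers the KL divergence, and $f(u) = u\log u - (u+1)\log\frac{u+1}{2}$ recovers the Jensen-Shannon divergence (up to normalization); taking $\mathcal{F}$ to be the unit ball of an RKHS gives MMD, while taking $\mathcal{F}$ to be the 1-Lipschitz functions gives the 1-Wasserstein distance. For instance,

$$D_{u\log u}(P\|Q) = \int_{\mathcal{X}} \frac{p(x)}{q(x)}\log\frac{p(x)}{q(x)}\, q(x)\, dx = KL(P\|Q).$$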

8 The goal: minimize $D_f(P_X\|P_G)$ with respect to $P_G$.

Variational (dual) representation of f-divergences:

$$D_f(P\|Q) = \sup_{T\colon \mathcal{X}\to\mathrm{dom}(f^*)} \mathbb{E}_{X\sim P}[T(X)] - \mathbb{E}_{Y\sim Q}\big[f^*\big(T(Y)\big)\big],$$

where $f^*(x) := \sup_u \{x\cdot u - f(u)\}$ is the convex conjugate of $f$. Solving $\inf_{P_G} D_f(P_X\|P_G)$ is therefore equivalent to

$$\inf_G \sup_T \mathbb{E}_{X\sim P_X}[T(X)] - \mathbb{E}_{Z\sim P_Z}\big[f^*\big(T(G(Z))\big)\big] \qquad (*)$$

1. Estimate the expectations with samples:
$$\inf_G \sup_T \frac{1}{N}\sum_{i=1}^N T(X_i) - \frac{1}{M}\sum_{j=1}^M f^*\big(T(G(Z_j))\big).$$
2. Parametrize $T = T_\omega$ and $G = G_\theta$ using any flexible functions (e.g. deep nets) and run SGD on (*).
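
A minimal sketch of the sample-based saddle-point objective from step 1, using the KL conjugate $f^*(t) = e^{t-1}$ as an example (the networks and training loop are hypothetical; PyTorch is assumed):

import torch

def f_star_kl(t):
    # Convex conjugate of f(u) = u log u (the KL case): f*(t) = exp(t - 1)
    return torch.exp(t - 1.0)

def fgan_estimate(T, G, x_batch, z_batch, f_star=f_star_kl):
    # Monte Carlo estimate of E_P[T(X)] - E_{P_Z}[f*(T(G(Z)))]
    return T(x_batch).mean() - f_star(T(G(z_batch))).mean()

# Training alternates SGD ascent steps on T's parameters (the sup)
# with descent steps on G's parameters (the inf).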

10 Original Generative Adversarial Networks

Variational (dual) representation of f-divergences:

$$D_f(P_X\|P_G) = \sup_{T\colon \mathcal{X}\to\mathrm{dom}(f^*)} \mathbb{E}_{X\sim P_X}[T(X)] - \mathbb{E}_{Z\sim P_Z}\big[f^*\big(T(G(Z))\big)\big],$$

where $f^*(x) := \sup_u \{x\cdot u - f(u)\}$ is the convex conjugate of $f$.

1. Take $f(x) = x\log x - (x+1)\log\frac{x+1}{2}$, so that $f^*(t) = -\log(2 - e^t)$ with domain $(-\infty, \log 2)$;
2. Take $T = g_f \circ T_\omega$, where $g_f(v) = \log 2 - \log(1 + e^{-v})$;
3. Parametrize $G = G_\theta$ and $T_\omega$ with deep nets.

Up to an additive $2\log 2$ term, $\inf_{P_G} D_f(P_X\|P_G)$ is then equivalent to

$$\inf_{G_\theta} \sup_{T_\omega} \mathbb{E}_{X\sim P_X}\left[\log\frac{1}{1+e^{-T_\omega(X)}}\right] + \mathbb{E}_{Z\sim P_Z}\left[\log\frac{1}{1+e^{T_\omega(G_\theta(Z))}}\right].$$

Compare to the original GAN objective:

$$\inf_{G_\theta} \sup_{T_\omega} \mathbb{E}_{X\sim P_d}[\log T_\omega(X)] + \mathbb{E}_{Z\sim P_Z}\big[\log\big(1 - T_\omega(G_\theta(Z))\big)\big].$$
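
A compact sketch of the resulting adversarial game, with the sigmoid of $T_\omega$ playing the discriminator (architectures and hyperparameters are hypothetical; PyTorch is assumed):

import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, data_dim = 64, 784  # hypothetical sizes
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
T = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, 1))  # logits T_omega
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_t = torch.optim.Adam(T.parameters(), lr=1e-4)

def gan_step(x_real):
    z = torch.randn(x_real.size(0), latent_dim)
    x_fake = G(z)

    # Discriminator ascent on E[log s(T(X))] + E[log(1 - s(T(G(Z))))],
    # using log s(t) = -softplus(-t) and log(1 - s(t)) = -softplus(t).
    loss_t = F.softplus(-T(x_real)).mean() + F.softplus(T(x_fake.detach())).mean()
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()

    # Generator step (non-saturating heuristic: maximize log s(T(G(Z))))
    loss_g = F.softplus(-T(G(z))).mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()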

11 Theory vs. practice: do we know what GANs do?

Variational (dual) representation of f-divergences:

$$D_f(P_X\|P_G) = \sup_{T\colon \mathcal{X}\to\mathrm{dom}(f^*)} \mathbb{E}_{X\sim P_X}[T(X)] - \mathbb{E}_{Z\sim P_Z}\big[f^*\big(T(G(Z))\big)\big],$$

where $f^*(x) := \sup_u \{x\cdot u - f(u)\}$ is the convex conjugate of $f$.

$$\inf_{G_\theta} \sup_{T_\omega} \mathbb{E}_{X\sim P_d}[\log T_\omega(X)] + \mathbb{E}_{Z\sim P_Z}\big[\log\big(1 - T_\omega(G_\theta(Z))\big)\big]$$

GANs are not precisely solving $\inf_{P_G} \mathrm{JS}(P_X\|P_G)$, because:
1. GANs replace expectations with sample averages. Uniform laws of large numbers may not apply, as our function classes are huge;
2. Instead of taking the supremum over all possible witness functions $T$, GANs optimize over classes of DNNs;
3. In practice GANs never optimize $T_\omega$ to optimality, for various computational/numerical reasons.

12 A possible criticism of f-divergences: when $P_X$ and $P_G$ are supported on disjoint manifolds, f-divergences often max out. This leads to numerical instabilities: no useful gradients for $G$.

Consider $P_G$ and $P'_G$ supported on manifolds $M$ and $M'$. Suppose $d(M, M_X) < d(M', M_X)$, where $M_X$ is the true manifold. f-divergences will often assign the same value to both models.

Possible solutions:
1. Smoothing: add noise to both $P_X$ and $P_G$ before comparing;
2. Use other divergences, including IPMs and the optimal transport.

13 Minimizing MMD between $P_X$ and $P_G$

Take any reproducing kernel $k\colon \mathcal{X}\times\mathcal{X}\to\mathbb{R}$ and let $B_k$ be the unit ball of the corresponding RKHS $\mathcal{H}_k$. Maximum Mean Discrepancy is the following IPM:

$$\gamma_k(P_X, P_G) := \sup_{T\in B_k} \mathbb{E}_{P_X}[T(X)] - \mathbb{E}_{P_G}[T(Y)] \qquad (\mathrm{MMD})$$

This optimization problem has a closed-form analytical solution. One can play the adversarial game using (MMD) instead of $D_f(P_X\|P_G)$:
- No need to train the discriminator $T$;
- On the other hand, $B_k$ is a rather restricted class;
- One can also train $k$ adversarially, resulting in a stronger objective: $\inf_{P_G} \max_k \gamma_k(P_X, P_G)$.
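
The closed form referred to above is the standard kernel expansion $\gamma_k^2(P, Q) = \mathbb{E}[k(X, X')] - 2\,\mathbb{E}[k(X, Y)] + \mathbb{E}[k(Y, Y')]$. A minimal (biased) estimator with a Gaussian RBF kernel; the bandwidth value is a hypothetical default:

import torch

def mmd2_rbf(x, y, sigma=1.0):
    # Biased estimate of the squared MMD between samples x ~ P and y ~ Q.
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2            # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() - 2 * k(x, y).mean() + k(y, y).mean()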

15 Minimizing the 1-Wasserstein distance

The 1-Wasserstein distance is defined by

$$W_1(P, Q) := \inf_{\Gamma\in\mathcal{P}(X\sim P,\, Y\sim Q)} \mathbb{E}_{(X,Y)\sim\Gamma}[d(X, Y)],$$

where $\mathcal{P}(X\sim P, Y\sim Q)$ is the set of all joint distributions of $(X, Y)$ with marginals $P$ and $Q$ respectively, and $(\mathcal{X}, d)$ is a metric space.

Kantorovich-Rubinstein duality:

$$W_1(P, Q) = \sup_{T\in\mathcal{F}_L} \mathbb{E}_{P_X}[T(X)] - \mathbb{E}_{P_G}[T(Y)], \qquad (\mathrm{KR})$$

where $\mathcal{F}_L$ is the set of all bounded 1-Lipschitz functions on $(\mathcal{X}, d)$.

WGAN: in order to solve $\inf_{P_G} W_1(P_X, P_G)$, play the adversarial training card on (KR). Parametrize $T = T_\omega$ and enforce the Lipschitz constraint with weight clipping or a gradient penalty. Unfortunately, (KR) holds only for the 1-Wasserstein distance.
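
A minimal sketch of one WGAN critic update with weight clipping (the 0.01 clipping constant follows the WGAN paper; everything else is a hypothetical setup, PyTorch assumed):

import torch

def critic_step(T, G, opt_t, x_real, latent_dim, clip=0.01):
    z = torch.randn(x_real.size(0), latent_dim)
    # Ascend the (KR) objective E[T(X)] - E[T(G(Z))], i.e. descend its negation
    loss = -(T(x_real).mean() - T(G(z).detach()).mean())
    opt_t.zero_grad(); loss.backward(); opt_t.step()
    # Crude Lipschitz control: clip all critic weights to [-clip, clip]
    with torch.no_grad():
        for p in T.parameters():
            p.clamp_(-clip, clip)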

17 VAE: maximizing the marginal log-likelihood

$$\inf_{P_G} KL(P_X\|P_G) \;\equiv\; \inf_{P_G} -\mathbb{E}_{P_X}[\log p_G(X)].$$

Variational upper bound: for any conditional distribution $Q(Z|X)$,

$$-\mathbb{E}_{P_X}[\log p_G(X)] \le \mathbb{E}_{P_X}\big[KL\big(Q(Z|X), P_Z\big) - \mathbb{E}_{Q(Z|X)}[\log p_G(X|Z)]\big].$$

In particular, if $Q$ is not restricted, the gap $\mathbb{E}_{P_X}\big[KL\big(Q(Z|X), P_G(Z|X)\big)\big]$ vanishes and

$$-\mathbb{E}_{P_X}[\log p_G(X)] = \inf_Q \mathbb{E}_{P_X}\big[KL\big(Q(Z|X), P_Z\big) - \mathbb{E}_{Q(Z|X)}[\log p_G(X|Z)]\big].$$

Variational Auto-Encoders use the upper bound and:
- latent variable models with any $P_G(X|Z)$, e.g. $\mathcal{N}(X; G(Z), \sigma^2 I)$;
- set $P_Z(Z) = \mathcal{N}(Z; 0, I)$ and $Q(Z|X) = \mathcal{N}(Z; \mu(X), \Sigma(X))$;
- parametrize $G = G_\theta$, $\mu$, and $\Sigma$ with deep nets. Run SGD.
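
A minimal sketch of the resulting training objective with a Gaussian encoder and the reparameterization trick (architectures and sizes are hypothetical; PyTorch is assumed):

import torch
import torch.nn as nn

latent_dim, data_dim = 20, 784  # hypothetical sizes
enc = nn.Linear(data_dim, 2 * latent_dim)  # outputs mu(X) and log-variance
dec = nn.Linear(latent_dim, data_dim)      # mean of the Gaussian decoder G(Z)

def neg_elbo(x):
    mu, logvar = enc(x).chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
    # KL( N(mu, diag(exp(logvar))) || N(0, I) ) in closed form
    kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(dim=1)
    # -E_Q[log p_G(x|z)] up to constants, for a fixed-variance Gaussian decoder
    rec = 0.5 * ((x - dec(z)) ** 2).sum(dim=1)
    return (kl + rec).mean()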

18 AVB: reducing the gap in the upper bound

Variational upper bound:

$$-\mathbb{E}_{P_X}[\log p_G(X)] \le \inf_{Q\in\mathcal{Q}} \mathbb{E}_{P_X}\big[KL\big(Q(Z|X), P_Z\big) - \mathbb{E}_{Q(Z|X)}[\log p_G(X|Z)]\big]$$

Adversarial Variational Bayes reduces the variational gap by:
- allowing for flexible encoders $Q_e(Z|X)$, defined implicitly by random variables $e(X, \epsilon)$, where $\epsilon\sim P_\epsilon$;
- replacing the KL divergence in the objective with an adversarial approximation (any of the ones discussed above);
- parametrizing $e$ with a deep net. Run SGD.

Downsides of VAE and AVB:
- The literature reports blurry samples. This is caused by the combination of the KL objective and the Gaussian decoder;
- Importantly, $P_G(X|Z)$ is trained only for encoded training points, i.e. for $Z\sim Q(Z|X)$ with $X\sim P_X$. But we sample from $Z\sim P_Z$.
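
A sketch of the implicit encoder idea: samples from $Q_e(Z|X)$ are produced by pushing noise through a network together with the input, so the encoder has no tractable density (architecture hypothetical):

import torch
import torch.nn as nn

data_dim, noise_dim, latent_dim = 784, 32, 20  # hypothetical sizes
e = nn.Sequential(nn.Linear(data_dim + noise_dim, 256), nn.ReLU(),
                  nn.Linear(256, latent_dim))

def sample_q(x):
    # z = e(x, eps): an implicit encoder; only sampling is available
    eps = torch.randn(x.size(0), noise_dim)
    return e(torch.cat([x, eps], dim=1))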

19 Unregularized auto-encoders

Variational upper bound:

$$-\mathbb{E}_{P_X}[\log p_G(X)] \le \inf_{Q\in\mathcal{Q}} \mathbb{E}_{P_X}\big[KL\big(Q(Z|X), P_Z\big) - \mathbb{E}_{Q(Z|X)}[\log p_G(X|Z)]\big]$$

- The KL term in the upper bound may be viewed as a regularizer;
- Dropping it results in classical auto-encoders, where the encoder-decoder pair tries to reconstruct all training images;
- In this case training images $X$ often end up mapped to spots chaotically scattered across the $\mathcal{Z}$ space;
- As a result, $Z$ captures no useful representation, and sampling is hard.

21 Minimizing the optimal transport

The optimal transport for a cost function $c(x, y)\colon \mathcal{X}\times\mathcal{X}\to\mathbb{R}_+$ is

$$W_c(P_X, P_G) := \inf_{\Gamma\in\mathcal{P}(X\sim P_X,\, Y\sim P_G)} \mathbb{E}_{(X,Y)\sim\Gamma}[c(X, Y)].$$

If $P_G(Y|Z = z) = \delta_{G(z)}$ for all $z\in\mathcal{Z}$, where $G\colon \mathcal{Z}\to\mathcal{X}$, we have

$$W_c(P_X, P_G) = \inf_{Q\colon Q_Z = P_Z} \mathbb{E}_{P_X}\mathbb{E}_{Q(Z|X)}\big[c\big(X, G(Z)\big)\big],$$

where $Q_Z$ is the marginal distribution of $Z$ when $X\sim P_X$ and $Z\sim Q(Z|X)$.

23 Relaxing the constraint

$$W_c(P_X, P_G) = \inf_{Q\colon Q_Z = P_Z} \mathbb{E}_{P_X}\mathbb{E}_{Q(Z|X)}\big[c\big(X, G(Z)\big)\big]$$

Penalized Optimal Transport replaces the constraint with a penalty:

$$\mathrm{POT}(P_X, P_G) := \inf_Q \mathbb{E}_{P_X}\mathbb{E}_{Q(Z|X)}\big[c\big(X, G(Z)\big)\big] + \lambda\, D(Q_Z, P_Z)$$

and uses adversarial training in the $\mathcal{Z}$ space to approximate $D$.

- For the 2-Wasserstein distance, $c(X, Y) = \|X - Y\|_2^2$, POT recovers Adversarial Auto-Encoders;
- For the 1-Wasserstein distance, $c(X, Y) = \|X - Y\|_2$, POT and WGAN solve the same problem from the primal and dual forms respectively;
- Importantly, unlike VAE, POT does not force $Q(Z|X = x)$ to intersect for different $x$, which is known to lead to blurriness.
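
A minimal sketch of the penalized objective with an adversarial latent-space penalty: a latent critic D_z stands in for $D(Q_Z, P_Z)$, and the encoder-decoder pair minimizes reconstruction plus the penalty (all architectures hypothetical; PyTorch assumed):

import torch
import torch.nn.functional as F

def pot_losses(enc, dec, D_z, x, lam=1.0):
    z_q = enc(x)                        # Z ~ Q_Z (deterministic encoder for simplicity)
    z_p = torch.randn_like(z_q)         # Z ~ P_Z = N(0, I)
    rec = ((x - dec(z_q)) ** 2).sum(dim=1).mean()  # c(X, G(Z)) = ||X - G(Z)||_2^2
    # GAN-style approximation of D(Q_Z, P_Z), trained in the latent space
    d_loss = F.softplus(-D_z(z_p)).mean() + F.softplus(D_z(z_q.detach())).mean()
    # Auto-encoder loss: reconstruction + lambda * (fool the latent critic)
    ae_loss = rec + lam * F.softplus(-D_z(z_q)).mean()
    return ae_loss, d_loss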

25 GANs approach the problem from a dual perspective. They are known to produce very sharp-looking images.

$$\max_G \mathbb{E}_{Z\sim P_Z}[T(G(Z))]$$

But they are notoriously hard to train and unstable (although many would disagree), and sometimes lead to mode collapse. GANs come without an encoder.

26 (Gulrajani et al., 2017) aka Improved WGAN, 32×32 CIFAR-10

27 (Radford et al., 2015) aka DCGAN, 64×64 LSUN

28 VAEs approach the problem from its primal. They enjoy very stable training and often lead to diverse samples.

$$\min_G \mathbb{E}_{X\sim P_X}\mathbb{E}_{Z\sim Q(Z|X)}\big[c\big(X, G(Z)\big)\big]$$

But the samples look blurry. VAEs come with encoders. Various papers try to combine the stability and recall of VAEs with the precision of GANs:
- choose an adversarially trained cost function $c$;
- combine AE costs with the GAN criteria;
- ...

29 (Mescheder et al., 2017) aka AVB, CelebA

30 VAE trained on CIFAR-10, Z of 20 dim.

31 (Bousquet et al., 2017) aka POT, CIFAR-10, same architecture

32 (Bousquet et al., 2017) aka POT, CIFAR-10, test reconstruction

34 Literature

1. Nowozin, Cseke, Tomioka. f-GAN: Training generative neural samplers using variational divergence minimization, 2016.
2. Goodfellow et al. Generative adversarial nets, 2014.
3. Arjovsky, Chintala, Bottou. Wasserstein GAN, 2017.
4. Arjovsky, Bottou. Towards principled methods for training generative adversarial networks, 2017.
5. Li et al. MMD GAN: Towards deeper understanding of moment matching network, 2017.
6. Dziugaite, Roy, Ghahramani. Training generative neural networks via maximum mean discrepancy optimization, 2015.
7. Kingma, Welling. Auto-encoding variational Bayes, 2014.
8. Makhzani et al. Adversarial autoencoders, 2015.
9. Mescheder, Nowozin, Geiger. Adversarial variational Bayes: Unifying variational autoencoders and generative adversarial networks, 2017.
10. Bousquet et al. From optimal transport to generative modeling: the VEGAN cookbook, 2017.
