Stochastic Simulation with Generative Adversarial Networks


Stochastic Simulation with Generative Adversarial Networks
Lukas Mosser, Olivier Dubrule, Martin J. Blunt
lukas.mosser15@imperial.ac.uk, o.dubrule@imperial.ac.uk, m.blunt@imperial.ac.uk

(Deep) Generative Methods
Task: draw (new) samples from an unknown density, given a set of samples.
Main problem: how to find the generative model?
- Generative Adversarial Networks (GAN): two competing neural networks
- Variational Autoencoders (VAE): Bayesian graphical model of the data distribution
- Autoregression (PixelCNN): conditional distribution on every sample
- Many more...
[Figure: latent variables feed a generative model whose samples are compared with the training set]

Generative Adversarial Networks - Toy Example (Goodfellow et al. 2014)
[Figure: a latent vector z drawn from a noise prior feeds the generator G(z); training data x is drawn from p_data(x); the discriminator D(x) compares the two and sends gradient-based feedback to the generator]

Generative Adversarial Networks - Training
Requirements: a training set of data; the generator creates samples G(z); the discriminator evaluates samples against a cost function.
GAN training is a two-step procedure carried out in a supervised way:
- Discriminator training step (generator fixed): train on real data samples and on fake samples.
- Generator training step (discriminator fixed): push the generator towards real images.
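The two-step alternating procedure can be sketched on a deliberately tiny toy problem: a linear "generator" G(z) = a*z + b and a logistic "discriminator" D(x) = sigmoid(w*x + c), trained with analytic gradients. This is a minimal illustration of the alternating updates, not the 3D convolutional networks used in the talk; all parameter names and learning rates are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-np.clip(t, -60, 60)))

# Toy setup: real data ~ N(4, 1); generator G(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.0, 0.0          # discriminator parameters
lr, n = 0.05, 64

for step in range(3000):
    x = rng.normal(4.0, 1.0, n)          # real samples
    z = rng.normal(0.0, 1.0, n)          # latent noise
    g = a * z + b                        # fake samples G(z)

    # Discriminator step (generator fixed): minimize -log D(x) - log(1 - D(G(z)))
    dr, df = sigmoid(w * x + c), sigmoid(w * g + c)
    grad_w = np.mean(-(1 - dr) * x) + np.mean(df * g)
    grad_c = np.mean(-(1 - dr)) + np.mean(df)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step (discriminator fixed): non-saturating loss, minimize -log D(G(z))
    df = sigmoid(w * g + c)
    dgen = -(1 - df) * w                 # dL/dG(z)
    a -= lr * np.mean(dgen * z)
    b -= lr * np.mean(dgen)

# The generator's offset b drifts from 0 toward the data mean (4.0).
print(b)
```

The discriminator is pushed to separate real from fake, and its gradient then pushes the generator's samples toward the real distribution, which is exactly the feedback loop the slide describes.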

Ketton Limestone - Dataset and Preprocessing
Oolitic limestone: intergranular pores, intragranular micro-porosity, ellipsoidal grains, 99% calcite.
Image size: 900^3 voxels at 26.7 μm resolution.
Extract non-overlapping training images (64^3 voxels) to form the training set.
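The preprocessing step of cutting non-overlapping 64^3 cubes from a large 3D image is a straightforward blocking operation; a minimal numpy sketch (using a smaller synthetic volume so it fits in memory) could look like this:

```python
import numpy as np

def extract_subvolumes(volume, size):
    """Split a 3D image into non-overlapping cubic training images of shape (size, size, size)."""
    nx, ny, nz = (d // size for d in volume.shape)
    patches = [
        volume[i*size:(i+1)*size, j*size:(j+1)*size, k*size:(k+1)*size]
        for i in range(nx) for j in range(ny) for k in range(nz)
    ]
    return np.stack(patches)

# A 900^3 micro-CT image yields (900 // 64)^3 = 14^3 = 2744 training cubes of 64^3 voxels.
# Demonstrated here on a smaller synthetic binary volume:
vol = np.random.default_rng(0).integers(0, 2, size=(256, 256, 256), dtype=np.uint8)
cubes = extract_subvolumes(vol, 64)
print(cubes.shape)  # (64, 64, 64, 64): 4^3 = 64 cubes of 64^3 voxels
```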

Network Architecture - 3D Convolutional Network
Represent G(z) and D(x) as deep neural networks.
Generator: 3D convolutional network architecture.
Discriminator: binary classification network -> real / fake.
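The slide does not give exact layer specifications; a common DCGAN-style assumption is a generator built from transposed 3D convolutions with kernel 4, stride 2, padding 1, each of which doubles the spatial size. The arithmetic for growing a latent tensor up to a 64^3 output can be checked directly:

```python
def conv_transpose_out(size, kernel=4, stride=2, padding=1):
    """Spatial output size of a transposed convolution along one axis:
    out = (in - 1) * stride - 2 * padding + kernel."""
    return (size - 1) * stride - 2 * padding + kernel

# Hypothetical DCGAN-style generator: the first layer (stride 1, no padding) maps a
# 1x1x1 latent tensor to 4^3; each subsequent k=4, s=2, p=1 layer doubles the size.
size = conv_transpose_out(1, kernel=4, stride=1, padding=0)   # 1 -> 4
sizes = [size]
while size < 64:
    size = conv_transpose_out(size)                           # 4 -> 8 -> 16 -> 32 -> 64
    sizes.append(size)
print(sizes)  # [4, 8, 16, 32, 64]
```

The discriminator typically mirrors this with strided 3D convolutions shrinking 64^3 back down to a single real/fake logit.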

Reconstruction Quality - Unconditional Simulation
[Figure: Ketton training image vs. GAN-generated sample, showing intergranular porosity, moldic features and micro-porosity]
Training time: 8 hours; generation: 5 seconds per sample.
High visual quality, but quantitative measures are needed.

Reconstruction Quality Criteria
Statistical properties:
- Two-point probability function S2(r), radially averaged and directional.
- Minkowski functionals (3D): porosity φ, specific surface area Sv, integral of mean curvature, specific Euler characteristic χv. Computed as a function of image gray-level => characteristic curves.
Flow properties: solve Stokes flow in the pore domain -> permeability and velocity distributions.
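For a binary pore/grain image, the directional two-point probability function S2(r) is the probability that two points separated by a lag r both fall in the pore phase, and the zeroth Minkowski functional (porosity) is simply the pore fraction. A minimal numpy sketch with periodic wrap-around (an assumption for simplicity) is:

```python
import numpy as np

def two_point_probability(img, max_lag, axis=0):
    """Directional two-point probability S2(r) = P(x in pore AND x + r in pore)
    for a binary image (1 = pore, 0 = grain), with periodic boundaries."""
    img = np.asarray(img, dtype=float)
    return np.array([np.mean(img * np.roll(img, r, axis=axis)) for r in range(max_lag)])

rng = np.random.default_rng(0)
# Uncorrelated synthetic medium with target porosity ~0.3:
img = (rng.random((128, 128, 128)) < 0.3).astype(np.uint8)

phi = img.mean()                     # porosity (Minkowski functional of order 0)
s2 = two_point_probability(img, 20)
# S2(0) equals the porosity exactly; for an uncorrelated medium S2(r) decays to phi^2.
print(phi, s2[0], s2[-1])
```

On a real Ketton image S2(r) would show the hole-effect oscillations discussed on the next slide rather than the flat phi^2 plateau of this uncorrelated example.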

Ketton Comparison - Directional S2(r)
Isotropic covariance; pronounced oscillations (hole effect) captured by the GAN model.
Smaller variance of the GAN model.

Ketton Comparison - Permeability
Isotropic permeability. Range of effective (flowing) porosity: data 0.29-0.37, GAN 0.31-0.33.
Same order of magnitude and same k-φ relationship.

Opening the GAN Black Box
What does the generator learn? A multi-scale representation of the pore space.
Why the smaller variance in GAN-generated samples? The generator can miss modes of the data distribution (p_gen(x) covers only part of p_data(x)) -> mode collapse.

Latent Space Interpolation
Interpolate in latent space: z = β z_start + (1 − β) z_end, β ∈ [0, 1].
[Figure: visualization of samples along the interpolation path]
The smooth interpolation shows that the generator has learned a meaningful representation in a lower-dimensional space.
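The interpolation formula on the slide is a one-liner in practice; a small sketch (function name and step count are illustrative):

```python
import numpy as np

def interpolate_latent(z_start, z_end, n_steps):
    """Linear interpolation z(beta) = beta * z_start + (1 - beta) * z_end, beta in [0, 1]."""
    betas = np.linspace(1.0, 0.0, n_steps)   # beta = 1 gives z_start, beta = 0 gives z_end
    return np.array([b * z_start + (1 - b) * z_end for b in betas])

rng = np.random.default_rng(0)
z_start, z_end = rng.normal(size=100), rng.normal(size=100)
path = interpolate_latent(z_start, z_end, 8)
# Decoding each z on the path with the trained G(z) yields a gradual morph
# between the two corresponding pore geometries.
print(path.shape)  # (8, 100)
```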

Computational Effort
The main computational cost is training; it amortizes with the number of samples drawn, thanks to the low per-sample generation cost.

Image Inpainting (Yeh et al. 2016)
Task: restore missing details given a corrupted / masked image M ⊙ x. Use a generative model G(z) to find the missing details, conditional to the given information.
Contextual loss: L_content = λ ||M ⊙ G(z) − M ⊙ x||_2 (corresponds to a likelihood).
Perceptual loss: L_perc = log(1 − D(G(z))) (regularization / prior: stay close to real images).
Optimize the loss by gradient descent on the latent vector z.
[Figure: masked images (cat, dog, leopard, dachshund) restored by a human artist, by a plain L2 loss, and by L_content + L_perc. Credit: Kyle Kastner]
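The key mechanical point is that the generator weights stay fixed and only the latent vector z is optimized. A toy sketch with a hypothetical *linear* generator G(z) = A @ z (so the gradient of the contextual loss is available in closed form; a real G would be the trained GAN, differentiated by autodiff, and would also include the perceptual term, omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the trained generator: a fixed linear map from a 10-d latent
# vector to a 100-pixel "image". A is a hypothetical placeholder, not the real G.
A = rng.normal(size=(100, 10)) / 10.0
x_true = A @ rng.normal(size=10)                # image whose observed pixels we must honor
mask = (rng.random(100) < 0.5).astype(float)    # M: 1 where the image is observed
lam = 1.0

z = rng.normal(size=10)                         # random starting latent vector
for _ in range(1000):
    residual = mask * (A @ z - x_true)          # M * (G(z) - x): contextual mismatch
    grad = 2.0 * lam * A.T @ residual           # gradient of L_content = lam * ||M*(G(z)-x)||^2
    z -= 0.1 * grad                             # gradient descent on z; generator fixed

loss = lam * np.sum((mask * (A @ z - x_true)) ** 2)
print(loss)  # contextual loss on the observed pixels, driven toward zero
```

Because only z moves, every solution the optimizer finds is, by construction, an image the generator can produce, which is what keeps the restored details plausible.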

Conditioning - Pore Scale Example
Two-dimensional data are more abundant at the pore scale, e.g. thin sections. Combine the 3D generative model G(z) with 2D conditioning data.
Generative model: Ketton limestone GAN (Part 1).
Mask: three orthogonal cross-sections; honor the 2D data in a 3D image.
Contextual loss: L_content = ||M ⊙ G(z) − M ⊙ x||_2 on the orthogonal cross-sections.
Perceptual loss: L_perc = log(1 − D(G(z))) on the whole volumetric generated image G(z).
Total loss: L_total = λ L_content + L_perc.
Optimize the total loss by modifying the latent vector (GAN parameters fixed) -> many local minima at the error threshold -> stochastic volumes that honor the 2D data.

Conditioning - Pore Scale Example
[Figure: conditioning data, ground-truth volume, and two stochastic samples conditioned to the data]
The same 2D conditioning data lead to varied realizations in 3D.

Conditioning - Reservoir Scale Example
Maules Creek training image (credit: G. Mariethoz); pre-trained 3D generative adversarial network.
Condition to a single well (1D conditioning) from ground-truth data: no variance at the well.
[Figure: single realization, mean (N = 1000), and standard deviation (N = 1000)]

Conclusions
Generative Adversarial Networks are:
- Parametric: controlled by a latent vector.
- Differentiable: allow for optimization.
- Learned from training examples.
They allow continuous reparametrizations of geological models, can be conditioned to existing grid-block scale data, and are possibly very useful for solving stochastic inverse problems. Main idea: represent the prior with a (deep) generative model (arXiv preprint arXiv:1806.03720).
[Figure: unconditional prior, posterior samples conditioned to two acoustic shots and a single vertical well, and the ground truth]

Thank you!
References
Mosser, L., Dubrule, O., & Blunt, M. J. (2017). Reconstruction of three-dimensional porous media using generative adversarial neural networks. Physical Review E, 96(4), 043309.
Mosser, L., Dubrule, O., & Blunt, M. J. (2017). Stochastic reconstruction of an oolitic limestone by generative adversarial networks. Transport in Porous Media, 1-23.
Mosser, L., Dubrule, O., & Blunt, M. J. (2018). Conditioning of generative adversarial networks for pore and reservoir scale models. 80th EAGE Conference.
Mosser, L., Dubrule, O., & Blunt, M. J. (2018). Stochastic seismic waveform inversion using generative adversarial networks as a geological prior. arXiv preprint arXiv:1806.03720.