
1 Adversarial Examples in Deep Learning. Cho-Jui Hsieh, Computer Science & Statistics, UC Davis

2 Adversarial Example

3 Adversarial Example

4 More Examples Robustness is critical in real systems

5 More Examples Robustness is critical in real systems. Real-world adversarial examples: attacking a stop sign in the physical world (Evtimov et al., 2017); an adversarial turtle (Athalye et al., 2017)

6 Research Questions
Attack: how to generate adversarial examples? (AISec 17, arXiv 17)
Verification: how to evaluate the robustness of deep networks? (ICLR 18, arXiv 18)
Defense: how to improve robustness? (arXiv 17)

7 Notations and Attack Procedure

8 Attack as an optimization problem
Input image: $x_0 \in \mathbb{R}^d$. Adversarial image: $x \in \mathbb{R}^d$.
$$x^\star = \arg\min_x \; \mathrm{Dis}(x, x_0) + c \cdot g(x)$$

9 Attack as an optimization problem
Input image: $x_0 \in \mathbb{R}^d$. Adversarial image: $x \in \mathbb{R}^d$.
$$\arg\min_x \; \mathrm{Dis}(x, x_0) + c \cdot g(x)$$
$\mathrm{Dis}(x, x_0)$: the distortion, e.g., $\|x - x_0\|_2^2$

10 Attack as an optimization problem
Input image: $x_0 \in \mathbb{R}^d$. Adversarial image: $x \in \mathbb{R}^d$.
$$\arg\min_x \; \|x - x_0\|_2^2 + c \cdot g(x)$$
$\mathrm{Dis}(x, x_0)$: the distortion
$g(x)$: a loss measuring how successful the attack is

11 Attack as an optimization problem
Input image: $x_0 \in \mathbb{R}^d$. Adversarial image: $x \in \mathbb{R}^d$.
$$\arg\min_x \; \|x - x_0\|_2^2 + c \cdot g(x)$$
$\mathrm{Dis}(x, x_0)$: the distortion
$g(x)$: a loss measuring how successful the attack is
$c > 0$ controls the trade-off. See (Carlini & Wagner, 2017).

12 Defining a successful attack

13 Defining a successful attack
Untargeted attack: success if $\arg\max_j f_j(x) \ne y_0$
$$g(x) = \max\{f_{y_0}(x) - \max_{j \ne y_0} f_j(x),\; 0\}$$

14 Defining a successful attack
Targeted attack: success if $\arg\max_j f_j(x) = t$
$$g(x) = \max\{\max_{j \ne t} f_j(x) - f_t(x),\; 0\}$$
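To make the two losses concrete, here is a minimal numpy sketch (an illustration, not the talk's code); `logits` is assumed to be the score vector $f(x) \in \mathbb{R}^K$:

```python
import numpy as np

def untargeted_loss(logits, y0):
    """g(x) = max(f_{y0}(x) - max_{j != y0} f_j(x), 0):
    zero once some class other than y0 outscores the original class."""
    others = np.delete(logits, y0)
    return max(logits[y0] - others.max(), 0.0)

def targeted_loss(logits, t):
    """g(x) = max(max_{j != t} f_j(x) - f_t(x), 0):
    zero once the target class t has the highest score."""
    others = np.delete(logits, t)
    return max(others.max() - logits[t], 0.0)
```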

15 White box setting
$$x^\star = \arg\min_x \; \mathrm{Dis}(x, x_0) + c \cdot g(x) \quad (1)$$
The model $f(\cdot)$ is revealed to the attacker, so the gradient of $g(x)$ can be computed by back-propagation and the attacker minimizes (1) by gradient descent.

16 White box setting
$$x^\star = \arg\min_x \; \mathrm{Dis}(x, x_0) + c \cdot g(x) \quad (1)$$
The model $f(\cdot)$ is revealed to the attacker, so the gradient of $g(x)$ can be computed by back-propagation and the attacker minimizes (1) by gradient descent.
Can be generalized to other definitions of $\mathrm{Dis}(x, x_0)$ (e.g., $\ell_1$ penalty, elastic net, ...)
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples. [C, S, Z, Y, H], AAAI, 2018.
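As an illustration of the white-box procedure, a minimal PyTorch sketch of minimizing (1) with the $\ell_2$ distortion and the untargeted loss; `model`, the step count, and the learning rate are assumptions here, and real attacks (e.g., C&W) add box constraints and a search over $c$:

```python
import torch

def white_box_attack(model, x0, y0, c=1.0, steps=200, lr=0.01):
    """Gradient descent on ||x - x0||_2^2 + c * g(x) (untargeted)."""
    x = x0.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        logits = model(x)                                   # f(x), shape (1, K)
        mask = torch.arange(logits.size(1), device=logits.device) != y0
        g = torch.clamp(logits[0, y0] - logits[0, mask].max(), min=0.0)
        loss = ((x - x0) ** 2).sum() + c * g                # objective (1)
        opt.zero_grad(); loss.backward(); opt.step()
    return x.detach()
```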

17 Attack Image Captioning
Targeted attack: output a specified sentence
Keyword attack: output contains specified keywords
Show-and-Fool: Crafting Adversarial Examples for Neural Image Captioning. [C, Z, C, Y, H], 2017.

18 Attack Image Captioning Captioning models usually use a CNN+RNN network (figure from Show and Tell, Vinyals et al., 2014)

19 Attack Image Captioning Captioning models usually use a CNN+RNN network (figure from Show and Tell, Vinyals et al., 2014). Define the loss on the sequence output for the different attacks.

20 Black Box Setting In practice, the deep network's structure/parameters are not revealed to attackers, so the gradient $\nabla f(x)$ cannot be computed.

21 Black Box Setting In practice, the deep network's structure/parameters are not revealed to attackers, so the gradient $\nabla f(x)$ cannot be computed. The attacker can query the ML model and get the probability output.

22 Black Box Setting In practice, the deep network's structure/parameters are not revealed to attackers, so the gradient $\nabla f(x)$ cannot be computed. The attacker can query the ML model and get the probability output.
Previous approach (Papernot et al., 2016): train a substitute model using $(x_1, f(x_1)), \dots, (x_q, f(x_q))$ and attack this substitute model.

23 Black Box Setting
$$\arg\min_x \; \mathrm{Dis}(x, x_0) + c \cdot g(x)$$
We can actually solve the same optimization problem!
ZOO: Zeroth Order Optimization based Black-box Attacks. [C, Z, S, Y, H], CCS AI-Security 2017

24 Black Box Setting
$$\arg\min_x \; \mathrm{Dis}(x, x_0) + c \cdot g(x)$$
We can actually solve the same optimization problem!
Symmetric difference quotient to estimate the gradient:
$$\hat g_i \equiv \frac{\partial F(x)}{\partial x_i} \approx \frac{F(x + \epsilon e_i) - F(x - \epsilon e_i)}{2\epsilon}$$
ZOO: Zeroth Order Optimization based Black-box Attacks. [C, Z, S, Y, H], CCS AI-Security 2017

25 Black Box Setting
$$\arg\min_x \; \mathrm{Dis}(x, x_0) + c \cdot g(x)$$
We can actually solve the same optimization problem!
Symmetric difference quotient to estimate the gradient:
$$\hat g_i \approx \frac{\partial g(x)}{\partial x_i} \approx \frac{g(x + \epsilon e_i) - g(x - \epsilon e_i)}{2\epsilon}$$
Zeroth-order gradient descent:
$$x \leftarrow x - \eta \, (\hat g_1, \dots, \hat g_d)^\top$$
ZOO: Zeroth Order Optimization based Black-box Attacks. [C, Z, S, Y, H], CCS AI-Security 2017
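A minimal numpy sketch of the zeroth-order procedure (an illustration, not the talk's implementation); `obj` stands for the black-box objective $\mathrm{Dis}(x, x_0) + c\, g(x)$ evaluated through queries:

```python
import numpy as np

def zo_gradient(obj, x, eps=1e-4):
    """Symmetric difference quotient per coordinate:
    g_hat_i = (obj(x + eps*e_i) - obj(x - eps*e_i)) / (2*eps).
    Costs 2d queries for a d-dimensional input."""
    g_hat = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = eps
        g_hat.flat[i] = (obj(x + e) - obj(x - e)) / (2 * eps)
    return g_hat

def zo_descent(obj, x0, eta=0.01, iters=100):
    """Zeroth-order gradient descent: x <- x - eta * g_hat."""
    x = x0.copy()
    for _ in range(iters):
        x -= eta * zo_gradient(obj, x)
    return x
```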

26 Black Box Setting However, this needs $O(d)$ queries to estimate the gradient.

27 Black Box Setting However, this needs $O(d)$ queries to estimate the gradient.
ImageNet: $d = 299 \times 299 \times 3 > 268$K, so 100 iterations already require about 26 million queries.

28 Black Box Setting However, this needs $O(d)$ queries to estimate the gradient.
ImageNet: $d = 299 \times 299 \times 3 > 268$K, so 100 iterations already require about 26 million queries.
Reducing the number of queries:
Stochastic coordinate descent: update a small set of coordinates at a time
Greedy approach: select important coordinates first
Attack-space dimension reduction + hierarchical attack (sketched below)
(Figure: perturbation heatmaps over x/y pixel coordinates.)
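A sketch of the dimension-reduction trick (hedged: the exact resizing scheme in ZOO may differ): search over a low-resolution perturbation and upsample it into the full attack space, so the zeroth-order estimate covers far fewer coordinates:

```python
import numpy as np
from scipy.ndimage import zoom

def upsample(delta_small, out_shape):
    """Bilinearly interpolate a low-dimensional perturbation
    (e.g., 32x32x3) up to the full input shape (e.g., 299x299x3)."""
    factors = [o / s for o, s in zip(out_shape, delta_small.shape)]
    return zoom(delta_small, factors, order=1)

# Usage: run zo_descent on the reduced variable delta_small, with
# objective lambda d: obj(x0 + upsample(d, x0.shape)).
```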

29 Black-box Attack on MNIST & CIFAR10
~100% success rate, with distortion similar to the white-box attack.

MNIST                           Success Rate   Avg. L2   Avg. Time (per attack)
White-box                       100%           --        -- min
Black-box (Substitute Model)    --%            --        -- min (-- min)
Proposed black-box (ZOO)        98.9%          --        -- min

CIFAR10                         Success Rate   Avg. L2   Avg. Time (per attack)
White-box                       100%           --        -- min
Black-box (Substitute Model)    5.3%           --        -- min (-- min)
Proposed black-box (ZOO)        96.8%          --        -- min

30 Black-box Attack on ImageNet (Inception-v3)

31 Research Questions
Attack: how to attack?
Verification: how to evaluate the robustness of your model?
Defense: how to improve robustness?

32 Measuring Robustness How to measure the robustness of a neural network?

33 Measuring Robustness How to measure the robustness of a neural network? The measurement should be independent of attack algorithms!

34 Measuring Robustness How to measure the robustness of a neural network? The measurement should be independent of attack algorithms!
Robustness: for an example $x_0$, find $r$ such that $f(x_0 + \delta)$ is correct for all $\|\delta\| < r$.

35 Measuring Robustness
Theorem: For a model $f(\cdot)$ and an example $x_0$ with predicted class $y$,
$$f(x_0 + \delta) = y \quad \text{for all} \quad \|\delta\| < \min_{j \ne y} \frac{f_y(x_0) - f_j(x_0)}{L_j},$$
where $L_j = \max\{\|\nabla f_y(x) - \nabla f_j(x)\| : x \in S\}$ is the local Lipschitz constant of $f_y - f_j$ over the ball $S$.
Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach. [W, Z, Y, G, H, D], ICLR, 2018.
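The proof idea fits in two lines; a sketch, assuming $L_j$ is a valid Lipschitz constant of $h_j = f_y - f_j$ on $S$:

```latex
% For any perturbation with \|\delta\| < r := \min_{j \ne y} (f_y(x_0) - f_j(x_0)) / L_j:
\begin{align*}
h_j(x_0 + \delta) &\ge h_j(x_0) - L_j \|\delta\|   && \text{(Lipschitz continuity)} \\
                  &>   h_j(x_0) - L_j \cdot \frac{f_y(x_0) - f_j(x_0)}{L_j} = 0,
\end{align*}
% so f_y(x_0 + \delta) > f_j(x_0 + \delta) for every j \ne y: the label cannot flip.
```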

36 How to evaluate $L_j$?
$$L_j = \max\{\|\nabla h(x)\| : x \in S\}, \qquad h = f_y - f_j$$
First attempt: solve it as an optimization problem

37 How to evaluate $L_j$?
$$L_j = \max\{\|\nabla h(x)\| : x \in S\}$$
First attempt: solve it as an optimization problem
This doesn't work: $f(\cdot)$ is non-convex and often non-smooth

38 Sampling Approach
$$L_j = \max\{\|\nabla h(x)\| : x \in S\}$$
Naive sampling algorithm: sample $x_1, x_2, \dots, x_T \in S$ and take the max of $\|\nabla h(x_t)\|$

39 Sampling Approach
$$L_j = \max\{\|\nabla h(x)\| : x \in S\}$$
Naive sampling algorithm: sample $x_1, x_2, \dots, x_T \in S$ and take the max of $\|\nabla h(x_t)\|$
The dimension is too high: non-meaningful solutions even with > 1 million samples

40 Extreme Value Theory
Extreme Value Theorem (Fisher-Tippett-Gnedenko Theorem): Let $X_1, \dots, X_n$ be i.i.d. random variables and $M_n = \max\{X_1, \dots, X_n\}$. Define the CDF of $M_n$: $P(M_n \le x) = G_n(x)$. As $n \to \infty$, $G_n$ (suitably rescaled) belongs to either the Gumbel, Fréchet, or reverse Weibull family.

41 Evaluate robustness
Computing the CLEVER score:
$P \leftarrow \{\}$
For $i = 1, 2, \dots, M$:
  randomly sample $x_1, \dots, x_N \in S$
  compute $b \leftarrow \max_{k=1,\dots,N} \|\nabla h(x_k)\|$
  $P \leftarrow P \cup \{b\}$
$\hat a \leftarrow$ MLE of the location parameter of a reverse Weibull distribution fitted to $P$
$\hat r \leftarrow h(x_0)/\hat a$, using $L_j \approx \hat a$ (CLEVER score)
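A compact Python sketch of the procedure using SciPy's reverse Weibull (`weibull_max`); `grad_norm_h`, `h_x0`, and `sample_ball` are placeholders for $\|\nabla h(\cdot)\|$, $h(x_0)$, and a sampler over $S$:

```python
import numpy as np
from scipy.stats import weibull_max

def clever_score(grad_norm_h, h_x0, sample_ball, M=50, N=100):
    """Fit a reverse Weibull to batch maxima of gradient norms and use
    its location parameter (the distribution's right endpoint) as the
    estimate of the local Lipschitz constant L_j."""
    maxima = []
    for _ in range(M):
        batch = [grad_norm_h(sample_ball()) for _ in range(N)]
        maxima.append(max(batch))
    shape, loc, scale = weibull_max.fit(np.asarray(maxima))
    return h_x0 / loc   # estimated robustness lower bound r_hat
```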

42 Experiments
CW attack: upper bound on $r$. CLEVER: estimated lower bound on $r$. SLOPE: a previous algorithm for estimating $r$.

Network        CW Attack   CLEVER   SLOPE
MNIST-CNN      --          --       --
MNIST-DD       --          --       --
MNIST-BReLU    --          --       --
CIFAR-CNN      --          --       --
CIFAR-DD       --          --       --
CIFAR-BReLU    --          --       --
Resnet         --          --       --
MobileNet      --          --       --

DD, BReLU: defensive algorithms (defensive distillation and bounded ReLU)

43 Not a certified bound
The estimated lower bound can sometimes be larger than the true $r$.
It gives a safe zone only in probability.

44 Can we do better for ReLU networks?
State-of-the-art networks use the ReLU activation: $\theta(x) = \max(0, x)$
Goal: provide a certified, meaningful, and computationally efficient lower bound on $r$

45 Previous Work
ReLUplex (Katz et al., 2017): combinatorial optimization; a network with 300 neurons needs 3 hours
Linear programming (Kolter & Wong, 2017): a network with 5120 neurons needs 8 hours
State-of-the-art networks have millions of neurons

46 A linear time algorithm
Assume the perturbation is bounded: $\tilde x_i = x_i + \delta_i$ with $-0.1 \le \delta_i \le 0.1$
$$f_1(x) = W_4 \, \theta(W_3 \, \theta(W_2 \, \theta(W_1 (x + \delta))))$$

47 A linear time algorithm
Activated neuron: input $\ge 0$ for all perturbations, so output $= \max(0, \text{input}) = \text{input}$
If all neurons are activated, the network is linear:
$$f_1(x + \delta) = W_4 W_3 W_2 W_1 (x + \delta)$$

48 A linear time algorithm
Inactivated neuron: input $< 0$ for all perturbations, so output $= \max(0, \text{input}) = 0$
Remove those nodes; the network is still linear

49 A linear time algorithm
Unsure neuron: the input can be positive or negative
Handling both cases exactly leads to exponential-time algorithms (previous approaches)

50 A linear time algorithm Assume we know $l \le \text{input} \le u$; how can we approximate the neuron?

51 A linear time algorithm
Assume we know $l \le \text{input} \le u$; how can we approximate the neuron?
$$\max(0, \text{input}) \;\le\; \frac{u}{u - l}\,(\text{input} - l)$$
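Why the bound holds, as a short sketch: the right-hand side is the chord of the ReLU through $(l, 0)$ and $(u, u)$, and ReLU is convex, so it lies below the chord on $[l, u]$:

```latex
% For an unsure neuron, l < 0 < u:
\max(0, t) \;\le\; \frac{u}{u - l}\,(t - l), \qquad l \le t \le u.
% Endpoints: at t = l the chord equals 0 = \max(0, l);
% at t = u it equals u = \max(0, u); convexity handles the interior.
```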

52 A linear time algorithm Replace the unsure ReLU by this linear function, which introduces an added bias term

53 A linear time algorithm Form the linear approximation layer by layer (see the sketch below). Upper/lower bounds on $f_j(x)$ can then be computed easily.
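To show the layer-by-layer structure in code, here is a sketch of plain interval bound propagation, a cruder relative of the linear bounds described above (Fast-Lin propagates linear functions, not just intervals); `Ws` and `bs` are the layer weights and biases:

```python
import numpy as np

def interval_bounds(Ws, bs, x0, eps):
    """Propagate elementwise bounds l <= activation <= u through a ReLU
    network, starting from an l_inf ball ||delta|| <= eps around x0.
    One forward pass, same cost as inference."""
    l, u = x0 - eps, x0 + eps
    for i, (W, b) in enumerate(zip(Ws, bs)):
        Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)   # split +/- weights
        l, u = Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b
        if i < len(Ws) - 1:                           # ReLU on hidden layers
            l, u = np.maximum(l, 0), np.maximum(u, 0)
    return l, u   # bounds on the logits f(x0 + delta)
```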

54 Moreover Same time complexity as forward propagation!

55 Moreover Same time complexity as forward propagation! We also have an algorithm to compute the Lipschitz constant (an upper bound on the gradient), similar to back-propagation!

56 Experiments
Fast-Lin: our method (Python). ReluPlex: exponential-time algorithm for computing the exact bound (Katz et al., 2017). LP: linear programming for a lower bound (Kolter and Wong, 2017).

Network       Method     Time   δ
MNIST 2×20    Fast-Lin   --     --
              ReluPlex   --     --
              LP         --     --
MNIST 3×20    Fast-Lin   --     --
              ReluPlex   --     --
              LP         --     --
MNIST         Fast-Lin   --     --
              LP         --     --
CIFAR         Fast-Lin   --     --

57 Research Questions
Attack: how to attack?
Verification: how to evaluate the robustness of your model?
Defense: how to improve robustness?

58 Defense from Adversarial Attacks
Adversarial retraining (Goodfellow et al., 2015): add adversarial examples to the training set
Robust optimization + BReLU (Zantedeschi et al., 2017):
$$\min_w \; \mathbb{E}_{(x,y)\sim D} \, \mathbb{E}_{\Delta x \sim N(0, \sigma^2)} \, \mathrm{loss}(f_w(x + \Delta x), y)$$
Security community: detect adversarial examples, e.g., MagNet (Meng and Chen, 2017), (Li and Li, 2017), ...

59 Defense from Adversarial Attacks
Adversarial retraining (Goodfellow et al., 2015): add adversarial examples to the training set
Robust optimization + BReLU (Zantedeschi et al., 2017):
$$\min_w \; \mathbb{E}_{(x,y)\sim D} \, \mathbb{E}_{\Delta x \sim N(0, \sigma^2)} \, \mathrm{loss}(f_w(x + \Delta x), y)$$
Security community: detect adversarial examples, e.g., MagNet (Meng and Chen, 2017), (Li and Li, 2017), ...
However, they are still vulnerable if the attack is chosen well (Carlini and Wagner, 2017: MagNet and Efficient Defenses Against Adversarial Attacks are Not Robust to Adversarial Examples)

60 Adding randomness to the model All the attacks are based on gradient computation

61 Adding randomness to the model
All the attacks are based on gradient computation
Idea I: add randomness to fool the attackers: $f(x) \to f_\epsilon(x)$ where $\epsilon$ is random
Resistant to black-box attacks: the gradient cannot be estimated, since
$$\frac{f_{\epsilon_1}(x + h e_i) - f_{\epsilon_2}(x - h e_i)}{2h} \not\approx \nabla_i f(x)$$

62 Adding randomness to the model
Naive approach: adding randomness only in the testing phase drops prediction accuracy from 87% to 20% (CIFAR-10 with VGG)

63 Ensemble How to boost the accuracy?

64 Ensemble How to boost the accuracy?
Idea II: an ensemble can improve the robustness of the model: ensemble $T$ models $f_{\epsilon_1}(x), \dots, f_{\epsilon_T}(x)$
Our approach: random + ensemble

65 Our approach (RSE)
Prediction: ensemble of random models
$$p = \sum_{j=1}^{T} f_{\epsilon_j}(w, x), \qquad \hat y = \arg\max_k p_k$$

66 Our approach (RSE)
Prediction: ensemble of random models
$$p = \sum_{j=1}^{T} f_{\epsilon_j}(w, x), \qquad \hat y = \arg\max_k p_k$$
Training: minimize an upper bound
$$\mathbb{E}_{(x,y)\sim D} \, \mathbb{E}_{\epsilon \sim N} \, \mathrm{loss}(f_\epsilon(x), y) \;\ge\; \mathbb{E}_{(x,y)\sim D} \, \mathrm{loss}\big(\mathbb{E}_{\epsilon \sim N} f_\epsilon(x), y\big)$$
Training by SGD: sample $(x, y)$ and $\epsilon$, then update $w \leftarrow w - \eta \, \nabla_w \mathrm{loss}(f_\epsilon(x), y)$

67 Our approach (RSE)
Prediction: ensemble of random models
$$p = \sum_{j=1}^{T} f_{\epsilon_j}(w, x), \qquad \hat y = \arg\max_k p_k$$
Training: minimize an upper bound
$$\mathbb{E}_{(x,y)\sim D} \, \mathbb{E}_{\epsilon \sim N} \, \mathrm{loss}(f_\epsilon(x), y) \;\ge\; \mathbb{E}_{(x,y)\sim D} \, \mathrm{loss}\big(\mathbb{E}_{\epsilon \sim N} f_\epsilon(x), y\big)$$
Training by SGD: sample $(x, y)$ and $\epsilon$, then update $w \leftarrow w - \eta \, \nabla_w \mathrm{loss}(f_\epsilon(x), y)$
Implicit regularization of the Lipschitz constant:
$$\mathbb{E}_{\epsilon \sim N(0, \sigma^2)} \, \mathrm{loss}(f_\epsilon(w, x_i), y_i) \;\approx\; \mathrm{loss}(f_0(w, x_i), y_i) + \frac{\sigma^2}{2} \, L_{\,l \circ f_0},$$
where $L_{\,l \circ f_0}$ is a Lipschitz-type term of the loss composed with the noise-free model $f_0$.
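A minimal PyTorch sketch of RSE-style prediction and the SGD step (simplified: noise is injected only at the input here, whereas the paper inserts noise layers inside the network, hence the separate sigma_init and sigma_inner):

```python
import torch

def rse_predict(model, x, sigma=0.1, T=10):
    """p = sum_j f_{eps_j}(w, x); predict y_hat = argmax_k p_k."""
    with torch.no_grad():
        p = sum(model(x + sigma * torch.randn_like(x)) for _ in range(T))
    return p.argmax(dim=1)

def rse_train_step(model, loss_fn, opt, x, y, sigma=0.1):
    """One SGD step on the upper bound: sample eps, then
    w <- w - eta * grad_w loss(f_eps(x), y)."""
    opt.zero_grad()
    loss = loss_fn(model(x + sigma * torch.randn_like(x)), y)
    loss.backward()
    opt.step()
    return loss.item()
```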

68 Experiments
We fix $\sigma_{\mathrm{init}} = 0.4$ and $\sigma_{\mathrm{inner}} = 0.1$ for all the datasets/networks.
(Figure: accuracy (%) and average distortion versus the C&W attack parameter $c$, comparing No defense, Adversarial retraining, Robust Opt+BReLU, RSE, and Distillation on STL-10 + ModelA, CIFAR10 + VGG16, and CIFAR10 + ResNeXt.)

69 Targeted attack
(Images: original image and its targeted-attack result.)

70 References
[1] P.-Y. Chen, H. Zhang, Y. Sharma, J. Yi, C.-J. Hsieh. ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models. AISec, 2017.
[2] H. Chen, H. Zhang, P.-Y. Chen, J. Yi, C.-J. Hsieh. Show-and-Fool: Crafting Adversarial Examples for Neural Image Captioning. arXiv, 2017.
[3] X. Liu, M. Cheng, H. Zhang, C.-J. Hsieh. Towards Robust Neural Networks via Random Self-ensemble. arXiv, 2017.
[4] P.-Y. Chen, Y. Sharma, H. Zhang, J. Yi, C.-J. Hsieh. EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples. AAAI, 2018.
[5] T.-W. Weng, H. Zhang, P.-Y. Chen, D. Su, Y. Gao, J. Yi, C.-J. Hsieh, L. Daniel. Evaluating the Robustness of Neural Networks: An Extreme Value Theory Approach. ICLR, 2018.
[6] N. Carlini, D. Wagner. Towards Evaluating the Robustness of Neural Networks. IEEE S&P, 2017.
[7] I. Goodfellow, J. Shlens, C. Szegedy. Explaining and Harnessing Adversarial Examples. ICLR, 2015.
[8] V. Zantedeschi, M. Nicolae, A. Rawat. Efficient Defenses Against Adversarial Attacks. arXiv, 2017.
[9] D. Meng, H. Chen. MagNet: A Two-Pronged Defense against Adversarial Examples. CCS, 2017.
[10] N. Carlini, D. Wagner. MagNet and Efficient Defenses Against Adversarial Attacks are Not Robust to Adversarial Examples. arXiv, 2017.
