# Introduction to Deep Learning

1 ENEE698A: Machine Learning Seminar. Introduction to Deep Learning. Raviteja Vemulapalli. Image credit: [LeCun 1998]

2 Resources. Unsupervised Feature Learning and Deep Learning (UFLDL) tutorial. Y. Bengio, Learning Deep Architectures for AI, Foundations and Trends in Machine Learning, Vol. 2, No. 1, 2009, Now Publishers.

3 Overview
- Linear, logistic and softmax regression
- Feed-forward neural networks
- Why deep models?
- Why is training deep networks (except CNNs) difficult?
- What makes convolutional neural networks (CNNs) relatively easier to train?
- How are deep networks trained today?
- Autoencoders and denoising autoencoders

4 Linear Regression
Input $x \in \mathbb{R}^d$, output $y \in \mathbb{R}$. Parameters: $\theta = \{w, b\}$. Model: $y = w^\top x + b$.
Squared-error cost function: $\ell(\theta) = \sum_{j=1}^{N} \| y_j - (w^\top x_j + b) \|_2^2$.
The cost function is convex and differentiable.
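
As a concrete illustration (not from the slides), here is a minimal NumPy sketch that fits this model by gradient descent on the squared-error cost; the toy data, learning rate and iteration count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2*x1 - 3*x2 + 1, plus a little noise (illustrative only).
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -3.0]) + 1.0 + 0.1 * rng.normal(size=100)

w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(500):
    err = X @ w + b - y                     # residuals of the current model
    w -= lr * 2.0 / len(y) * (X.T @ err)    # gradient of the squared-error cost
    b -= lr * 2.0 / len(y) * err.sum()

print(w, b)  # should approach [2, -3] and 1
```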

5 Logistic Regression
Used for binary classification. Input $x \in \mathbb{R}^d$, output $y \in \{0, 1\}$. Parameters: $\theta = \{w, b\}$.
Model: $P(y = 1 \mid x, \theta) = \dfrac{1}{1 + e^{-(w^\top x + b)}}$.
NLL cost function: $\ell(\theta) = -\log \prod_{j=1}^{N} P(y = y_j \mid x_j, \theta) = -\sum_{j=1}^{N} \log P(y = y_j \mid x_j, \theta)$.
The cost function is convex and differentiable.

6 Softmax Regression
Used for multi-class classification. Input $x \in \mathbb{R}^d$, output $y \in \{1, 2, \dots, C\}$. Parameters: $\theta = \{(w_i, b_i)\}_{i=1}^{C}$.
Model: $P(y = i \mid x, \theta) = \dfrac{e^{w_i^\top x + b_i}}{\sum_{k=1}^{C} e^{w_k^\top x + b_k}}$.
NLL cost function: $\ell(\theta) = -\log \prod_{j=1}^{N} P(y = y_j \mid x_j, \theta) = -\sum_{j=1}^{N} \log P(y = y_j \mid x_j, \theta)$.
The cost function is convex and differentiable.
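
A hedged NumPy sketch of the softmax model and its NLL cost (the toy data and shapes are illustrative assumptions; `W` stacks the $w_i$ as columns):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(W, b, X, y):
    """Negative log-likelihood of labels y in {0,...,C-1} under softmax regression."""
    p = softmax(X @ W + b)                 # p[j, i] = P(y = i | x_j)
    return -np.log(p[np.arange(len(y)), y]).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                # N = 5 samples, d = 3
y = np.array([0, 2, 1, 2, 0])              # C = 3 classes
W, b = rng.normal(size=(3, 3)), np.zeros(3)
print(nll(W, b, X, y))
```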

7 Regularization & Optimization
$\ell_2$-regularization (weight decay) helps to avoid overfitting: $\min_{w,b} \big[\ell(w, b) + \lambda \|w\|_2^2\big]$.
$\ell_2$-regularization is equivalent to MAP estimation with a zero-mean Gaussian prior on $w$.
Optimization: $\ell(w, b) + \lambda \|w\|_2^2$ is convex and differentiable in the case of linear, logistic and softmax regression, so gradient-based optimization techniques can be used to find the global optimum.
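
For instance, a sketch of one gradient step on the $\ell_2$-regularized logistic-regression cost (the function name and hyperparameter values are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def l2_logreg_step(w, b, X, y, lr=0.1, lam=1e-3):
    """One gradient step on sum_j -log P(y_j | x_j) + lam * ||w||^2.
    The lam * ||w||^2 term is the weight-decay regularizer; the bias b
    is conventionally left unregularized."""
    p = sigmoid(X @ w + b)             # P(y = 1 | x) for each sample
    gw = X.T @ (p - y) + 2 * lam * w   # gradient w.r.t. w
    gb = (p - y).sum()                 # gradient w.r.t. b
    return w - lr * gw, b - lr * gb
```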

8 Feed Forward Neural Networks

9 Feed Forward Neural Networks
Artificial neuron: $h_{w,b}(x) = g(w^\top x + b)$, where $g$ is a non-linear function referred to as the activation function. The logistic function $g(z) = 1/(1 + e^{-z})$ and the hyperbolic tangent $g(z) = \tanh(z)$ are two commonly used activation functions. (Figure: plots of the logistic and hyperbolic tangent functions.) Image credit: UFLDL Tutorial
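
A minimal sketch of a single neuron with these two activations (function names are my own):

```python
import numpy as np

def logistic(z):
    """Logistic (sigmoid) activation: squashes its input to (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def neuron(w, b, x, g=logistic):
    """A single artificial neuron: h_{w,b}(x) = g(w.x + b).
    Pass g=np.tanh for the hyperbolic tangent activation, range (-1, 1)."""
    return g(np.dot(w, x) + b)

print(neuron(np.array([0.5, -0.2]), 0.1, np.array([1.0, 2.0])))
print(neuron(np.array([0.5, -0.2]), 0.1, np.array([1.0, 2.0]), g=np.tanh))
```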

10 Feed Forward Neural Networks
(Figure: a neural network with 2 hidden layers: input layer (3 inputs), hidden layer 1 (3 units), hidden layer 2 (2 units), output layer (2 units). Hidden units use a non-linear activation, logistic or tanh. Image credit: UFLDL Tutorial)

| Application | Output layer model |
| --- | --- |
| Regression | Linear regression |
| Binary classification | Logistic regression |
| Multi-class classification | Softmax regression |

Note that if the activation function is linear, then the entire neural network is effectively equivalent to the output layer.

11 Feed Forward Neural Networks - Prediction
Forward propagation. Notation: $L$ is the number of layers (including the input and output layers); $n_i$ is the number of units in the $i$-th layer; $a^{(i)} \in \mathbb{R}^{n_i}$ is the output of the $i$-th layer; $W^{(i)} \in \mathbb{R}^{n_{i+1} \times n_i}$ and $b^{(i)} \in \mathbb{R}^{n_{i+1}}$ are the parameters of the mapping from layer $i$ to layer $i+1$; $S$ is the output layer model (linear, logistic or softmax).
$a^{(1)} = x$, $a^{(i+1)} = g(W^{(i)} a^{(i)} + b^{(i)})$ for $i = 1, \dots, L-2$, and $h_{W,b}(x) = a^{(L)} = S(W^{(L-1)} a^{(L-1)} + b^{(L-1)})$.
Image credit: UFLDL Tutorial
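
The forward pass translates directly into code; a sketch under the notation above, with a linear (regression) output layer and layer sizes matching the earlier figure:

```python
import numpy as np

def forward(x, weights, biases, g=np.tanh, S=lambda z: z):
    """Forward propagation: a^(1) = x, a^(i+1) = g(W^(i) a^(i) + b^(i)),
    and h_{W,b}(x) = S(W^(L-1) a^(L-1) + b^(L-1)). Here S is the identity,
    i.e. a linear output layer."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = g(W @ a + b)
    return S(weights[-1] @ a + biases[-1])

rng = np.random.default_rng(0)
sizes = [3, 3, 2, 2]   # 3 inputs, hidden layers of 3 and 2 units, 2 outputs
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(forward(rng.normal(size=3), weights, biases))
```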

12 Feed Forward Neural Networks - Learning
Training samples: $\{(x_i, y_i)\}_{i=1}^{N}$. Parameters: $W$ (weights) and $b$ (biases). Cost function per sample:
Regression (linear output layer): $J(W, b; x_i, y_i) = \|h_{W,b}(x_i) - y_i\|_2^2$.
Binary classification (logistic output layer, with $h_{W,b}(x_i) = P(y_i = 1 \mid x_i, W, b)$): $J(W, b; x_i, y_i) = -\big[y_i \log h_{W,b}(x_i) + (1 - y_i) \log(1 - h_{W,b}(x_i))\big]$.
Multi-class classification (softmax output layer, with $h_{W,b}(x_i)_k = P(y_i = k \mid x_i, W, b)$): $J(W, b; x_i, y_i) = -\sum_{k=1}^{C} \mathbb{I}(y_i = k) \log h_{W,b}(x_i)_k$.

13 Feed Forward Neural Networks - Learning
$\min_{W,b} \sum_{i=1}^{N} J(W, b; x_i, y_i) + \lambda \|W\|_2^2$ (the second term is the regularization).
$J$ is non-convex but differentiable. The gradient of the cost function with respect to $W$ and $b$ can be computed using the backpropagation algorithm. Stochastic gradient descent is commonly used to find a local optimum. The weights are randomly initialized to small values around zero.
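
A compact sketch of this procedure for a tanh network with a linear output layer and squared-error cost; the architecture, learning rate and weight-decay value below are illustrative assumptions, not the slides' settings:

```python
import numpy as np

def train_sgd(X, Y, sizes, lr=0.05, lam=1e-4, epochs=500, seed=0):
    """SGD with backpropagation for a tanh MLP with a linear output layer,
    minimizing squared error plus L2 weight decay (a sketch)."""
    rng = np.random.default_rng(seed)
    # Random initialization with small values around zero, as the slide notes.
    Ws = [0.1 * rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
    bs = [np.zeros(m) for m in sizes[1:]]
    for _ in range(epochs):
        for x, y in zip(X, Y):
            # Forward pass, caching the layer outputs a^(i).
            acts = [x]
            for W, b in zip(Ws[:-1], bs[:-1]):
                acts.append(np.tanh(W @ acts[-1] + b))
            acts.append(Ws[-1] @ acts[-1] + bs[-1])        # linear output
            # Backward pass: error signal at the output layer.
            delta = 2 * (acts[-1] - y)
            for i in reversed(range(len(Ws))):
                gW = np.outer(delta, acts[i]) + 2 * lam * Ws[i]
                gb = delta
                if i > 0:
                    # Propagate the error through the tanh non-linearity.
                    delta = (Ws[i].T @ delta) * (1 - acts[i] ** 2)
                Ws[i] -= lr * gW
                bs[i] -= lr * gb
    return Ws, bs

# Example: fit y = sin(x) on a few points with a 1-16-1 network.
X = np.linspace(-2, 2, 40).reshape(-1, 1)
Y = np.sin(X)
Ws, bs = train_sgd(X, Y, sizes=[1, 16, 1])
```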

14 How powerful are neural networks?

15 Expressive Power of Neural Networks (Photo: Professor Kurt Hornik)

16 Expressive Power of Neural Networks
Universal Approximation Theorem [Hornik 1991]. Let $g$ be any continuous, bounded, non-constant function. Let $X \subset \mathbb{R}^m$ be a closed and bounded set, and let $C(X)$ denote the space of continuous functions on $X$. Then, given any function $f \in C(X)$ and $\varepsilon > 0$, there exist an integer $N$ and real constants $\alpha_i, b_i \in \mathbb{R}$, $w_i \in \mathbb{R}^m$ for $i = 1, \dots, N$, such that
$F(x) = \sum_{i=1}^{N} \alpha_i\, g(w_i^\top x + b_i)$
(a weighted sum of hidden layer outputs) satisfies $|F(x) - f(x)| < \varepsilon$ for all $x \in X$.
Kurt Hornik, Approximation capabilities of multilayer feedforward networks, Neural Networks, vol. 4, pp. 251-257, 1991.

17 Deep Neural Network A deep neural network is a neural network with many hidden layers.

18 Why Deep Networks? The universal approximation theorem says that one hidden layer is enough to represent any continuous function. Why, then, use deep networks?

19 Why Deep Networks? Computational efficiency: functions that can be compactly represented by a depth-$k$ architecture (with a number of computational units polynomial in the number of inputs) might require an exponentially large number of computational units to be represented by a shallower (depth $< k$) architecture. In other words, for the same number of computational units, deep architectures have more expressive power than shallow architectures.

20 Why Deep Networks? Biological motivation: the visual cortex is organized in a deep architecture (with 5-10 levels), with a given input percept represented at multiple levels of abstraction, each level corresponding to a different area of cortex [Serre 2007].
T. Serre, G. Kreiman, M. Kouh, C. Cadieu, U. Knoblich, and T. Poggio, A quantitative theory of immediate visual recognition, Progress in Brain Research, Computational Neuroscience: Theoretical Insights into Brain Function, vol. 165, 2007.

21 Training Deep Neural Networks Prior to 2006, the standard approach to training a neural network was to use gradient-based techniques starting from a random initialization. Researchers had reported positive experimental results with one or two hidden layers, but supervised training of deeper networks from random initialization consistently yielded poor results (except for convolutional neural networks). Why is training deep neural networks difficult?

22 Training Deep Neural Networks Lack of sufficient labeled data: deep neural networks represent very complex data transformations, and supervised training of such complex models requires a large number of labeled training samples. Local optima: supervised training of deep networks involves optimizing a cost function that is highly non-convex in its parameters, and gradient-based approaches with random initialization usually converge to a poor local minimum.

23 Training Deep Neural Networks Diffusion of gradients: the gradients that are propagated backwards rapidly diminish in magnitude as the depth of the network increases, so the weights of the earlier layers change very slowly and the earlier layers fail to learn well. When the earlier layers do not learn well, a deep network is equivalent to a shallow network (the top layers) operating on corrupted input (the output of the badly trained earlier layers), and hence may perform worse than a shallow network on the original input.

24 Training Deep Neural Networks What to do?

25 Training Deep Neural Networks There is lots of free unlabeled data on the internet these days. Can we use unlabeled data to train deep networks?

26 Unsupervised Layer-wise Pre-training In 2006, Hinton et al. introduced an unsupervised training approach known as pre-training. The main idea of pre-training is to train the layers of the network one at a time, in an unsupervised manner. Various unsupervised pre-training approaches have since been proposed, based on restricted Boltzmann machines [Hinton 2006], autoencoders [Bengio 2006], sparse autoencoders [Ranzato 2006], denoising autoencoders [Vincent 2008], etc. After unsupervised training of each layer, the learned weights are used for initialization and the entire deep network is fine-tuned in a supervised manner using the labeled training data.
G. E. Hinton, S. Osindero, and Y. Teh, A fast learning algorithm for deep belief nets, Neural Computation, vol. 18, 2006.
Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle, Greedy layer-wise training of deep networks, NIPS, 2006.
M. Ranzato, C. Poultney, S. Chopra, and Y. LeCun, Efficient learning of sparse representations with an energy-based model, NIPS, 2006.
P. Vincent, H. Larochelle, Y. Bengio, and P.-A. Manzagol, Extracting and composing robust features with denoising autoencoders, ICML, 2008.

27 Unsupervised Layer-wise Pre-training (Pipeline: unlabeled data, unsupervised layer-wise pre-training, learned weights, then supervised fine-tuning using backpropagation on labeled data to get the trained network.) The weights learned using the layer-wise unsupervised training approach provide a very good initialization compared to random weights. Supervised fine-tuning from this initialization usually produces a good local optimum for the entire deep network, because the unlabeled data has already provided a significant amount of prior information.

28 Convolutional Neural Networks (CNNs) (Figure: a progression from fully connected layers, to local connections with different weights, to local connections sharing the same weights; the last is equivalent to convolution by a filter. A convolutional layer performs multiple convolutions using different filters. Image credit: CVPR12 Tutorial on deep learning)
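
To make the weight-sharing point concrete, here is a sketch of a single "valid" convolution (as in CNN practice, actually a cross-correlation); the filter is an arbitrary example:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: every output unit is locally connected to a
    kernel-sized patch, and all output units share the same kernel weights."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
edge_filter = np.array([[1.0, -1.0]])  # one filter; a conv layer applies several
print(conv2d(image, edge_filter))
```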

29 Convolutional Neural Networks Subsampling or pooling reduces computational complexity for the upper layers, and pooling layers make the output of convolutional networks more robust to local translations. (Figure: convolution followed by pooling, where each pooling unit takes the max or average of its inputs. Image credit: CVPR12 Tutorial on deep learning)
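
A sketch of non-overlapping max/average pooling (the 2x2 window is an assumed, typical choice):

```python
import numpy as np

def pool2d(feature_map, size=2, op=np.max):
    """Non-overlapping pooling: each output is the max (or, with op=np.mean,
    the average) of a size x size block, which subsamples the feature map
    and adds robustness to small local translations."""
    H, W = feature_map.shape
    H2, W2 = H // size, W // size
    blocks = feature_map[:H2 * size, :W2 * size].reshape(H2, size, W2, size)
    return op(blocks, axis=(1, 3))

fm = np.arange(16, dtype=float).reshape(4, 4)
print(pool2d(fm))               # max pooling
print(pool2d(fm, op=np.mean))   # average pooling
```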

30 Convolutional Neural Networks CNNs were inspired by the visual system's architecture. The modern understanding of the functioning of the visual system is consistent with the processing style found in CNNs [Serre 2007]. (Figure: the LeNet architecture used for character/digit recognition [LeCun 1998].)
T. Serre, G. Kreiman, M. Kouh, C. Cadieu, U. Knoblich, and T. Poggio, A quantitative theory of immediate visual recognition, Progress in Brain Research, Computational Neuroscience: Theoretical Insights into Brain Function, vol. 165, 2007.

31 Training Deep CNNs Supervised training of deep neural networks generally yielded poor results, except for CNNs: long before 2006, researchers were successful in the supervised training of deep CNNs (5-6 hidden layers) without any pre-training. What makes CNNs easier to train? The small fan-in of the neurons in a CNN could be helping the gradients propagate through many layers without diffusing so much as to become useless. The local connectivity structure is also a very strong prior that is particularly appropriate for vision tasks; it sets the parameters of the whole network in a favorable region (with all non-connections corresponding to zero weights) from which gradient-based optimization works well.

32 Unsupervised Pre-training

33 Pre-training (Pipeline: unlabeled data, unsupervised layer-wise pre-training, learned weights, then supervised fine-tuning using backpropagation on labeled data to get the trained network.) Unsupervised approaches: restricted Boltzmann machines (RBM), autoencoders (AE), sparse autoencoders (SAE), denoising autoencoders (DAE). In this talk we will focus on autoencoders and denoising autoencoders.

34 Autoencoders An autoencoder neural network is an unsupervised learning algorithm that applies backpropagation, setting the target values to be equal to the inputs. (Figure: an encoder $(W_e, b_e)$ followed by a decoder $(W_d, b_d)$, with input $x \in [0,1]^d$. Image credit: UFLDL Tutorial.) If $W_d = W_e^\top$, the autoencoder is said to have tied weights.

35 Autoencoders
Encoder: $f(x) = g(W_e x + b_e)$. Decoder: $\hat{x} = h(f(x)) = g(W_d f(x) + b_d)$.
Autoencoders are trained using backpropagation. Two common cost functions:
In the case of real-valued inputs: $\sum_{i=1}^{N} \|x_i - \hat{x}_i\|_2^2$.
In the case of binary inputs: $-\sum_{i=1}^{N} \sum_{j=1}^{d} \big[x_{ij} \log \hat{x}_{ij} + (1 - x_{ij}) \log(1 - \hat{x}_{ij})\big]$.
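
A sketch of a tied-weight autoencoder ($W_d = W_e^\top$) for inputs in $[0,1]^d$, trained on the binary cross-entropy cost above by per-sample gradient descent; the sizes, learning rate and epoch count are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200, seed=0):
    """Tied-weight autoencoder with sigmoid units, trained with the
    cross-entropy reconstruction cost (an illustrative sketch)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    We = 0.1 * rng.normal(size=(n_hidden, d))
    be, bd = np.zeros(n_hidden), np.zeros(d)
    for _ in range(epochs):
        for x in X:
            f = sigmoid(We @ x + be)         # encoder f(x)
            xhat = sigmoid(We.T @ f + bd)    # decoder with tied weights Wd = We.T
            dout = xhat - x                  # cross-entropy + sigmoid output error
            dhid = (We @ dout) * f * (1 - f)
            # Tied weights: We gets gradients from both encoder and decoder paths.
            We -= lr * (np.outer(dhid, x) + np.outer(f, dout))
            be -= lr * dhid
            bd -= lr * dout
    return We, be, bd

rng = np.random.default_rng(1)
X = (rng.random((50, 8)) > 0.5).astype(float)   # toy binary data
We, be, bd = train_autoencoder(X, n_hidden=4)
```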

36 Autoencoders An autoencoder encodes the input $x$ into some representation $f(x)$ so that the input can be reconstructed from that representation. If we use neurons with a linear activation function and train the network using the mean squared error criterion, then the autoencoder behaves similarly to PCA. When the number of hidden units is smaller than the number of inputs, an autoencoder learns a compressed representation of the data by exploiting the correlations among the input variables; it can be interpreted as a non-linear dimensionality reduction algorithm. An autoencoder can also be used for denoising by training it with noisy input.

37 Unsupervised Pre-training with AEs (Figure: a neural network with 2 hidden layers. Image credit: UFLDL Tutorial)

38 Unsupervised Pre-training with AEs Step 1: train an autoencoder on the raw input to learn $(W_e^{(1)}, b_e^{(1)})$ and $(W_d^{(1)}, b_d^{(1)})$. Compute the primary features $h_1(x)$ for each input $x$ using the learned parameters $(W_e^{(1)}, b_e^{(1)})$. Image credit: UFLDL Tutorial

39 Unsupervised Pre-training with AEs Step 2: use the primary features $h_1(x)$ as input and train another autoencoder to learn $(W_e^{(2)}, b_e^{(2)})$ and $(W_d^{(2)}, b_d^{(2)})$. Compute the secondary features $h_2(x)$ for each input $h_1(x)$ using the learned parameters $(W_e^{(2)}, b_e^{(2)})$. Image credit: UFLDL Tutorial

40 Supervised Fine-tuning Step 3: use $(W_e^{(1)}, b_e^{(1)})$ to initialize the parameters between the input and the first hidden layer, and $(W_e^{(2)}, b_e^{(2)})$ to initialize the parameters between the first and second hidden layers. Randomly initialize the parameters of the final classification layer, then train the entire network in a supervised manner using the labeled training data. Image credit: UFLDL Tutorial
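
Steps 1 and 2 can be strung together in a greedy loop; a sketch that reuses the `train_autoencoder` sketch from slide 35 (that definition must be in scope, and the layer sizes are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def greedy_pretrain(X, hidden_sizes):
    """Greedy layer-wise pre-training: train an autoencoder on the current
    features, keep only its encoder parameters, and feed its hidden codes
    to the next layer."""
    params, feats = [], X
    for n_hidden in hidden_sizes:
        We, be, _ = train_autoencoder(feats, n_hidden)   # discard the decoder
        params.append((We, be))
        feats = sigmoid(feats @ We.T + be)               # h_k(x) for the next layer
    return params

# Step 3 (sketch): use params to initialize the hidden layers of the classifier,
# randomly initialize the final softmax layer, and fine-tune with backpropagation.
```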

41 Denoising Autoencoders A denoising autoencoder is simply an autoencoder trained to reconstruct a clean input from a corrupted one. Motivation: humans can recognize patterns from partially occluded or corrupted data, so a good representation should be robust to partial destruction of the data. Corrupt the input $x$ to get a partially destroyed version $\tilde{x}$ by means of some stochastic mapping $\tilde{x} \sim q(\tilde{x} \mid x)$, and train an autoencoder to recover $x$ from $\tilde{x}$. Pre-training deep networks using denoising autoencoders was shown to work better than pre-training using ordinary autoencoders on the MNIST digit dataset, where images were corrupted by randomly choosing some of the pixels and setting their values to zero.
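
The masking corruption described above is a one-liner; a sketch (the corruption level `p` is an arbitrary choice):

```python
import numpy as np

def corrupt(x, p=0.25, rng=None):
    """Stochastic corruption q(x_tilde | x): each input dimension is
    independently set to zero with probability p (masking noise)."""
    if rng is None:
        rng = np.random.default_rng()
    return x * (rng.random(x.shape) >= p)

# A denoising autoencoder encodes corrupt(x) but measures reconstruction
# error against the clean x.
```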

42 Thank You
Yaniv Taigman, Ming Yang, Marc'Aurelio Ranzato, and Lior Wolf, DeepFace: Closing the Gap to Human-Level Performance in Face Verification, CVPR 2014.
