Deep Learning
1 Data Mining
2 Deep Learning Deep Learning provided breakthrough results in speech recognition and image classification. Why? Because speech recognition and image classification are two basic examples of problems where learning is extremely hard: the input space has a huge number of dimensions, and the features can take a huge range of values.
3 Deep Learning So, 1. what exactly is deep learning? And, 2. why is it generally better than other methods on image, speech and certain other types of data? Answers: 1. Deep learning means using a neural network with several layers of nodes between input and output. 2. The series of layers between input and output perform feature identification and processing in a series of stages, just as our brains seem to.
4 Deep Learning OK, but: 3. multilayer neural networks have been around for 25 years. What's actually new? We have always had good algorithms for learning the weights in networks with 1 (at most 2) hidden layer(s), but these algorithms are not good at learning the weights for networks with more than 2 hidden layers. What's new: algorithms for training many-layer networks.
5 DNN [Diagram: a small deep network; weighted connections (W1 = -2.5, W2, W3, ...) transform the input into the output f(x) = 1.4]
6 DNN [Diagram: the network as a whole computes a function f(x) of the input x]
7 A dataset [Table: training records with several input fields and a class label]
8 Training the neural network [Diagram: the table of fields and class labels feeding the DNN]
9 Training data Initialise with random weights
10 Training data Present a training pattern
11 Training data Feed it through to get output
12 Training data Compare with target output (here, output 1.4, error 0.8)
13 Training data Adjust weights based on error
14 Training data Present a training pattern
15 Training data Feed it through to get output
16 Training data Compare with target output
17 Training data Adjust weights based on error
18 Training data And so on. Repeat this thousands, maybe millions, of times, each time taking a random training instance and making slight weight adjustments. Algorithms for weight adjustment are designed to make changes that will reduce the error.
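A minimal sketch of this loop, assuming a single sigmoid unit trained on squared error by stochastic gradient descent (the data, learning rate and step count below are illustrative, not values from the lecture):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def train(X, t, steps=100_000, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1]) * 0.01   # initialise with random weights
    for _ in range(steps):
        i = rng.integers(len(X))                 # take a random training instance
        y = sigmoid(X[i] @ w)                    # feed it through to get an output
        err = y - t[i]                           # compare with the target output
        w -= lr * err * y * (1 - y) * X[i]       # slight adjustment that reduces the error
    return w
```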
19 DNN The decision boundary perspective Initial random weights
20 DNN The decision boundary perspective Present a training instance / adjust the weights
21 DNN The decision boundary perspective Present a training instance / adjust the weights
22 DNN The decision boundary perspective Present a training instance / adjust the weights
23 DNN The decision boundary perspective Present a training instance / adjust the weights
24 DNN The decision boundary perspective Eventually.
25 DNN The point I am trying to make: weight-learning algorithms for NNs are dumb; they work by making thousands and thousands of tiny adjustments, each making the network do better on the most recent pattern, but perhaps a little worse on many others. But, hopefully, this tends to be good enough to learn effective classifiers for many real applications. If f(x) is non-linear, a network with 1 hidden layer can, in theory, learn any classification problem perfectly: a set of weights exists that can produce the targets from the inputs. The problem is finding them.
26 DNN NNs use a nonlinear f(x), so they can draw complex boundaries but keep the data unchanged. SVMs only draw straight lines, but they transform the data first in a way that makes that OK.
27 The virtually impossible
28 DNN text mining Feature detectors
29 DNN text mining What is this unit doing?
30 DNN text mining Hidden layer units become self-organised feature detectors [Diagram: one hidden unit over a 63-pixel input; heavy lines mark strong positive weights, light lines mark low/zero weights]
31 DNN text mining What does this unit detect? [Diagram: strong weights on the pixels of the top row]
32 DNN text mining It will send a strong signal for a horizontal line in the top row, ignoring everything elsewhere
33 DNN text mining What does this unit detect? [Diagram: strong weights on the top-left pixels]
34 DNN text mining Strong signal for a dark area in the top-left corner
35 DNN text mining What features might you expect a good NN to learn, when trained with data like this?
36 DNN text mining Vertical lines
37 DNN text mining Horizontal lines
38 DNN text mining Small circles
39 DNN text mining Small circles. But what about position invariance??? Our example unit detectors were tied to specific parts of the image.
40 DNN text mining Successive layers can learn higher-level features: early units detect lines in specific positions, etc.; higher-level detectors then respond to combinations of these ('horizontal line', 'vertical line', 'upper loop', etc.)
41 DNN So: multiple layers make sense. Your brain works that way.
42 DNN Many-layer neural network architectures should be capable of learning the true underlying features and feature logic, and therefore of generalising very well. But, until very recently, our weight-learning algorithms simply did not work on multi-layer architectures.
43 Deeper is better?

Layer X Size   Word Error Rate (%)        Layer X Size   Word Error Rate (%)
1 X 2k         24.2
2 X 2k         20.4
3 X 2k         18.4
4 X 2k         17.8
5 X 2k         17.2                       1 X 3772       22.5
7 X 2k         17.1                       1 X 4634       22.6
                                          1 X 16k        22.1

Not surprised: more parameters, better performance.

Seide, Frank, Gang Li, and Dong Yu. "Conversational Speech Transcription Using Context-Dependent Deep Neural Networks." Interspeech 2011.
44 Universality Theorem Any continuous function f : R^N → R^M can be realized by a network with one hidden layer (given enough hidden neurons). So why a deep neural network, and not a fat neural network?
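Read concretely, the theorem is about networks of the following shape. A minimal NumPy sketch (all sizes are illustrative assumptions): a single wide sigmoid layer followed by a linear read-out maps R^N to R^M, and the theorem says that with enough hidden neurons and the right weights this form can approximate any continuous f:

```python
import numpy as np

def one_hidden_layer(x, W1, b1, W2, b2):
    h = 1 / (1 + np.exp(-(W1 @ x + b1)))  # hidden layer: sigmoid units
    return W2 @ h + b2                    # linear output layer: a point in R^M

N, M, H = 3, 2, 1000                      # input dim, output dim, hidden width
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((H, N)), rng.standard_normal(H)
W2, b2 = rng.standard_normal((M, H)), rng.standard_normal(M)
y = one_hidden_layer(rng.standard_normal(N), W1, b1, W2, b2)  # y lives in R^M
```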
45 Fat+Short vs. Thin+Tall The same number of parameters: which one is better? [Diagram: a shallow network (one wide hidden layer) and a deep network (several narrow hidden layers), both over inputs x1, x2, ..., xN]
46 Fat+Short vs. Thin+Tall

Layer X Size   Word Error Rate (%)        Layer X Size   Word Error Rate (%)
1 X 2k         24.2
2 X 2k         20.4
3 X 2k         18.4
4 X 2k         17.8
5 X 2k         17.2                       1 X 3772       22.5
7 X 2k         17.1                       1 X 4634       22.6
                                          1 X 16k        22.1

Seide, Frank, Gang Li, and Dong Yu. "Conversational Speech Transcription Using Context-Dependent Deep Neural Networks." Interspeech 2011.
47 Why deep? Deep → Modularization. [Diagram: an image is fed to four independent classifiers: Classifier 1 'girls with long hair', Classifier 2 'boys with long hair' (weak, because there are few examples), Classifier 3 'girls with short hair', Classifier 4 'boys with short hair']
48 Why deep? Deep → Modularization, which can be trained with little data. [Diagram: basic classifiers ('boy or girl?', 'long or short hair?') are learned first and shared as modules by the following classifiers; even Classifier 2 'boys with long hair', which has little data, now works fine]
49 DNN The new way to train multi-layer NNs Train this layer first
50 DNN The new way to train multi-layer NNs Train this layer first, then this layer
51 DNN The new way to train multi-layer NNs Train this layer first, then this layer, then this layer
52 DNN The new way to train multi-layer NNs Train this layer first, then this layer, then this layer, then this layer
53 DNN The new way to train multi-layer NNs Train this layer first, then this layer, then this layer, then this layer, finally this layer
54 DNN The new way to train multi-layer NNs EACH of the (non-output) layers is trained to be an auto-encoder. Basically, it is forced to learn good features that describe what comes from the previous layer.
55 DNN auto-encoders An auto-encoder is trained, with an absolutely standard weight-adjustment algorithm, to reproduce the input.
56 DNN auto-encoders An auto-encoder is trained, with an absolutely standard weight-adjustment algorithm, to reproduce the input. By making this happen with (many) fewer units than the inputs, we force the hidden-layer units to become good feature detectors.
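A minimal sketch of such an auto-encoder, assuming Keras and 784-dimensional inputs (e.g. flattened 28 × 28 images); the 64-unit bottleneck and the training settings are illustrative assumptions:

```python
import numpy as np
from tensorflow.keras import layers, models

inputs = layers.Input(shape=(784,))
code = layers.Dense(64, activation="relu")(inputs)       # far fewer units than inputs
decoded = layers.Dense(784, activation="sigmoid")(code)  # try to reproduce the input
autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")

x = np.random.rand(1000, 784).astype("float32")  # stand-in data
autoencoder.fit(x, x, epochs=5, batch_size=32)   # note: the target is the input itself
```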
57 DNN auto-encoders The intermediate layers are each trained to be auto-encoders (or similar)
58 DNN auto-encoders The final layer is trained to predict the class based on the outputs from the previous layers
59 DNN That's the basic idea. There are many, many types of deep learning, different kinds of auto-encoder, variations on architectures and training algorithms, etc. It is a very fast-growing research area.
60 DNN 60
61 DNN 61
62 DNN 62
63 Convolutional DNN 63
64 Convolutional DNN History In 1995, Yann LeCun and Yoshua Bengio introduced the concept of convolutional neural networks. [Photos: Yann LeCun and Yoshua Bengio]
65 Convolutional DNN CNN: the input image is convolved with N trainable filters and biases to produce N feature maps at level C1. Each group of pixels in the feature maps is summed, weighted, combined with a bias and passed through a sigmoid function to produce the N feature maps at S2. These are again filtered to produce level C3. The hierarchy then produces S4 in the same way as S2. Finally, these pixels are presented as a single vector input to the conventional neural network at the output. C layers are convolutions, S layers pool/sample. One often starts with fairly raw features at the initial input and lets the CNN discover an improved feature layer for the final supervised learner, e.g. an MLP trained with backpropagation.
66 Convolutional DNN Recap of the convnet: a neural network with a specialized connectivity structure. Feed-forward processing: convolve the input; apply a non-linearity (Rectified Linear Unit, ReLU); pool (local max, or min, average, median, etc.). Supervised training: the convolutional filters are learned by back-propagating the classification error. [Diagram: input image → convolution (learned) → non-linearity → pooling → feature maps]
67 Convolutional DNN Connectivity & weight sharing depend on the layer. [Diagram: fully connected layers use all-different weights; the convolution layer uses shared weights] The convolution layer has a much smaller number of parameters thanks to local connections and weight sharing.
68 Convolutional DNN Convolution layer: detects the same feature at different positions in the input image. [Diagram: a filter (kernel) slides over the input to produce a feature map]
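As an illustration, here is a minimal NumPy sketch of this sliding-filter computation (strictly a cross-correlation, as in most deep-learning libraries); the 28 × 28 image and 5 × 5 kernel are illustrative assumptions:

```python
import numpy as np

def conv2d(image, kernel):
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # the SAME weights are applied at every position: weight sharing
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

image = np.random.rand(28, 28)
kernel = np.random.randn(5, 5)
feature_map = conv2d(image, kernel)  # shape (24, 24): the feature map
```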
69 Convolutional DNN Non-linearity: tanh; sigmoid: 1/(1+exp(-x)); rectified linear (ReLU): max(0, x), which simplifies backprop, makes learning faster, and makes the features sparse.
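For reference, the three non-linearities written out in NumPy:

```python
import numpy as np

def tanh(x):    return np.tanh(x)
def sigmoid(x): return 1 / (1 + np.exp(-x))
def relu(x):    return np.maximum(0.0, x)  # max(0, x): cheap, and yields sparse activations
```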
70 Convolutional DNN Sub-sampling layer: spatial pooling, usually average or max. Role of pooling: invariance to small transformations; reduces the effect of noise and of shifts or distortions. [Diagram: max and average pooling over local regions of a feature map]
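A minimal sketch of 2 × 2 max-pooling on a single feature map (trimming odd edges is an assumption; it is one of several common conventions):

```python
import numpy as np

def max_pool_2x2(fmap):
    H, W = fmap.shape
    fmap = fmap[:H // 2 * 2, :W // 2 * 2]        # trim odd edges if any
    blocks = fmap.reshape(H // 2, 2, W // 2, 2)  # group pixels into 2x2 blocks
    return blocks.max(axis=(1, 3))               # keep the maximum of each block
```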
71 Convolutional DNN Normalization: contrast normalization (within/across feature maps) equalizes the feature maps. [Figure: feature maps before and after contrast normalization]
72 Convolutional DNN For each pixel in the input image, we encode the pixel's intensity as the value of a corresponding neuron in the input layer. For 28 × 28 pixel images, this means our network has 784 (= 28 × 28) input neurons. We then train the network's weights and biases. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. Let's look at each of these ideas in turn. Local receptive fields: in the fully-connected layers shown earlier, the inputs were depicted as a vertical line of neurons. In a convolutional net, it helps instead to think of the inputs as a square of neurons, whose values correspond to the pixel intensities we're using as inputs. As usual, we'll connect the input pixels to a layer of hidden neurons. But we won't connect every input pixel to every hidden neuron. Instead, we only make connections in small, localized regions of the input image.
73 Convolutional DNN To be more precise, each neuron in the first hidden layer is connected to a small region of the input neurons, say a 5 × 5 region, corresponding to 25 input pixels. So, for a particular hidden neuron, we might have connections that look like those in the figure above. That region in the input image is called the local receptive field for the hidden neuron. It's a little window on the input pixels. Each connection learns a weight, and the hidden neuron learns an overall bias as well. You can think of that particular hidden neuron as learning to analyze its particular local receptive field.
74 Convolutional DNN We then slide the local receptive field across the entire input image. For each local receptive field, there is a different hidden neuron in the first hidden layer. 74
75 Convolutional DNN We then slide the local receptive field across the entire input image. For each local receptive field, there is a different hidden neuron in the first hidden layer. If we have a 28 × 28 input image and 5 × 5 local receptive fields, then there will be 24 × 24 neurons in the hidden layer. This is because we can only move the local receptive field 23 neurons across (or 23 neurons down) before colliding with the right-hand side (or bottom) of the input image.
76 Convolutional DNN Shared weights and biases: I've said that each hidden neuron has a bias and 5 × 5 weights connected to its local receptive field. What I did not yet mention is that we're going to use the same weights and bias for each of the hidden neurons. In other words, for the (j, k)-th hidden neuron, the output is: σ( b + Σ_{l=0}^{4} Σ_{m=0}^{4} w_{l,m} a_{j+l, k+m} ). Here, σ is the neural activation function, such as the sigmoid function; b is the shared value for the bias; w_{l,m} is a 5 × 5 array of shared weights; and, finally, we use a_{x,y} to denote the input activation at position (x, y). This means that all the neurons in the first hidden layer detect exactly the same feature. Informally, think of the feature detected by a hidden neuron as the kind of input pattern that will cause the neuron to activate: it might be an edge in the image, or some other type of shape.
77 Convolutional DNN Pooling layers: In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. 77
78 Convolutional DNN Pooling layers: In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers. Pooling layers are usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. A pooling layer takes each feature map output from the convolutional layer and prepares a condensed feature map. Each unit in the pooling layer may summarize a region of 2 × 2 neurons in the previous layer. As a concrete example, one common procedure for pooling is known as max-pooling: a max-pooling unit simply outputs the maximum activation in its 2 × 2 input region.
79 Convolutional DNN Putting it all together: we can now put all these ideas together to form a complete convolutional neural network. The network begins with 28 × 28 input neurons, which are used to encode the pixel intensities for the input image. This is then followed by a convolutional layer using a 5 × 5 local receptive field and 3 feature maps. The result is a layer of 3 × 24 × 24 hidden feature neurons. The next step is a max-pooling layer, applied to 2 × 2 regions, across each of the 3 feature maps. The result is a layer of 3 × 12 × 12 hidden feature neurons. The final layer of connections in the network is a fully-connected layer: it connects every neuron from the max-pooled layer to every one of the 10 output neurons.
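A minimal sketch of the complete network just described, assuming Keras; the layer sizes follow the text (28 × 28 inputs, one 5 × 5 convolution with 3 feature maps, 2 × 2 max-pooling, 10 outputs), while the activations and the optimizer are illustrative assumptions:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),                             # 28 x 28 input neurons
    layers.Conv2D(3, kernel_size=(5, 5), activation="sigmoid"),  # -> 3 x 24 x 24
    layers.MaxPooling2D(pool_size=(2, 2)),                       # -> 3 x 12 x 12
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),                      # 10 output neurons
])
model.compile(optimizer="sgd", loss="categorical_crossentropy")
model.summary()
```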
80 Softmax classification Softmax layer as the output layer. In an ordinary layer, y_i = σ(z_i); in general, the outputs of the network can be any values, which may not be easy to interpret.
81 Softmax classification Softmax layer as the output layer. The softmax layer turns the scores z_i into probabilities: y_i = e^{z_i} / Σ_j e^{z_j}, so that 1 > y_i > 0 and Σ_i y_i = 1. Example: for z = (3, 1, -3), e^z ≈ (20, 2.7, 0.05), giving y ≈ (0.88, 0.12, ≈0).
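The same computation as a short NumPy sketch, reproducing the slide's example values:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtracting the max avoids overflow; the result is unchanged
    return e / e.sum()

print(softmax(np.array([3.0, 1.0, -3.0])))  # ~ [0.88, 0.12, 0.00]
```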
82 Softmax classification - example [Diagram: a handwritten digit as a 16 × 16 = 256 pixel image; each pixel is an input x_i (ink → 1, no ink → 0), giving x_1 ... x_256. The network, with parameters θ = {W^1, b^1, W^2, b^2, ..., W^L, b^L}, ends in a softmax layer producing outputs y_1 ... y_10; the largest output, e.g. y_2, gives the prediction 'is 2']
83 Convolutional DNN in Astrophysics Convolutional neural networks for galaxy morphology prediction: Dieleman et al. 2015, MNRAS, 450, 2. Measuring the morphological parameters of galaxies is a key requirement for studying their formation and evolution. Surveys such as the SDSS have resulted in the availability of very large collections of images, which have permitted population-wide analyses of galaxy morphology. Morphological analysis has traditionally been carried out mostly via visual inspection by trained experts, which is time-consuming and does not scale to large (> 10^4) numbers of images.
84 Convolutional DNN in Astrophysics 84
85 Convolutional DNN One method for classifying galaxy morphology exploits the rotational symmetry of galaxy images; however, there are other invariances and symmetries (besides translational) that may be exploited for convolutional neural networks. The idea of deep learning is to build models that represent data at multiple levels of abstraction, and can discover accurate representations autonomously from the data itself. Deep learning models consist of several layers of processing that form a hierarchy: each subsequent layer extracts a progressively more abstract representation of the input data and builds upon the representation from the previous layer, typically by computing a non-linear transformation of its input. The parameters of these transformations are optimized by training the model on a dataset. 85
86 Convolutional DNN To determine how the parameters should be changed to reduce the prediction error across the dataset, gradient descent is used: each parameter θ is repeatedly updated as θ ← θ − η ∂E/∂θ, where E is the prediction error and η the learning rate. Convolutional neural networks contain two types of layers with restricted connectivity: convolutional layers and pooling layers. A convolutional layer takes a stack of feature maps (e.g. the colour channels of an image) as input and convolves each of these with a set of learnable filters to produce a stack of output feature maps. The output feature maps are represented as follows: X_n = b_n + Σ_{k=1}^{K} W_{n,k} * X_k.
87 Convolutional DNN Here, * represents the two-dimensional convolution operation, the matrices W_{n,k} represent the filters of layer l, and b_n represents the bias for each feature map. Note that a feature map X_n is obtained by computing a sum of K convolutions with the feature maps of the previous layer. By replacing the matrix product with a sum of convolutions, the connectivity of the layer is effectively restricted to take advantage of the input structure and to reduce the number of parameters. Each unit is only connected to a local subset of the units in the layer below, and each unit is replicated across the entire input. This means that each unit can be seen as detecting a particular feature across the input (for example, an oriented edge in an image).
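A minimal NumPy/SciPy sketch of this feature-map computation; the array shapes are illustrative assumptions, and, as in most libraries, a cross-correlation stands in for the convolution:

```python
import numpy as np
from scipy.signal import correlate2d

def conv_layer(X, W, b):
    """X: (K, H, W) input maps; W: (N, K, kH, kW) filters; b: (N,) biases."""
    N, K = W.shape[0], W.shape[1]
    out = []
    for n in range(N):
        # feature map n: bias plus a sum of K convolutions over the input maps
        z = b[n] + sum(correlate2d(X[k], W[n, k], mode="valid") for k in range(K))
        out.append(z)  # a non-linearity would typically be applied here
    return np.stack(out)  # (N, H - kH + 1, W - kW + 1) output maps
```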
88 Convolutional DNN Because convolutional layers are only able to model local correlations in the input, the dimensionality of the feature maps is often reduced between convolutional layers by inserting pooling layers. This allows higher layers to model correlations across a larger part of the input, at a lower resolution. A pooling layer reduces the dimensionality of a feature map by computing some aggregation function (typically the maximum or the mean) across small local regions of the input. By alternating convolutional and pooling layers, higher layers in the network see progressively coarser representations of the input. As a result, these layers are able to model higher-level abstractions more easily, because each unit is able to see a larger part of the input. This also makes the model invariant to small translations of the input, which is a desirable property for modelling images and many other types of data. Unlike convolutional layers, pooling layers typically do not have any trainable parameters.
89 Convolutional DNN The restricted connectivity patterns used in convolutional neural networks drastically reduce the number of parameters required to model large images, by exploiting translational symmetry. However, many other types of symmetries occur in images. For images of galaxies, rotating an image should not affect its morphological classification. This rotational symmetry is exploited by applying the same set of feature detectors to various rotated versions of the input. Rotating an image by an angle that is not a multiple of 90° requires interpolation and results in an image whose edges are not aligned with the rows and columns of the pixel grid. These complications make exploiting rotational symmetry more challenging.
90 Convolutional DNN We compute rotated and flipped versions of the input images, referred to as viewpoints, and process each of these separately with the same convolutional network architecture, consisting of alternating convolutional and pooling layers. The output feature maps of this network for the different viewpoints are then concatenated, and one or more dense layers are stacked on top. This arrangement allows the dense layers to aggregate high-level features extracted from the different viewpoints. In practice, we crop the top-left part of each viewpoint image to reduce redundancy between viewpoints and the size of the input images (and hence computation time). Images are cropped in such a way that each one contains the centre of the galaxy, the part of the image that is most informative.
91 Convolutional DNN
92 Convolutional DNN In terms of preprocessing, images are first cropped and rescaled to reduce the dimensionality of the input. It was useful to crop the images because the object of interest is in the middle of the image, surrounded by a large amount of sky background, and typically fits within a square with a side of approximately half the image height. We then rescaled the images to speed up training, with little to no effect on predictive performance. Images were cropped from 424 × 424 pixels to 207 × 207, and then downscaled by a factor of 3 to 69 × 69 pixels. It may occasionally happen that cropping partially removes interesting parts of the image; in such cases, an analysis of the Petrosian radius and the position of the object can help to re-centre and rescale the cropped images.
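A minimal sketch of this crop-and-downscale step (the central crop and the naive stride-3 downscale are assumptions; the paper's exact resampling method is not specified here):

```python
import numpy as np

def preprocess(img):
    # img: 424 x 424 (x channels) array; crop the central 207 x 207 square
    h, w = img.shape[:2]
    c = 207
    top, left = (h - c) // 2, (w - c) // 2
    img = img[top:top + c, left:left + c]
    return img[::3, ::3]  # naive 3x downscale -> 69 x 69
```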
93 Convolutional DNN Due to the limited size of the training set, performing data augmentation to artificially increase the number of training examples is essential. Each training example was randomly perturbed in five ways (a sketch follows the list):
- rotation: random rotation by an angle sampled uniformly between 0° and 360°, to exploit the rotational symmetry of the images.
- translation: random shift sampled uniformly between -4 and +4 pixels (relative to the original image size of 424 by 424) in the x and y directions. The size of the shift is limited to ensure that the object of interest remains in the centre of the image.
- scaling: random rescaling with a factor sampled log-uniformly between 1/1.3 and 1.3.
- flipping: the image is flipped with a probability of 0.5.
- brightness adjustment: the colour of the image is adjusted as described by Krizhevsky et al. (2012), with the standard deviation for the scale factor set to 0.5. In practice, this amounts to a brightness adjustment.
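A minimal sketch of these five perturbations using scipy.ndimage (illustrative only: the real pipeline recrops after scaling, and the Krizhevsky et al. step is a PCA-based colour perturbation, approximated here by a simple global scale factor):

```python
import numpy as np
from scipy import ndimage

def augment(img, rng=None):
    # img: H x W x 3 float array with values in [0, 1]
    if rng is None:
        rng = np.random.default_rng()
    img = ndimage.rotate(img, rng.uniform(0, 360), axes=(0, 1),
                         reshape=False, mode="nearest")        # random rotation
    img = ndimage.shift(img, (*rng.uniform(-4, 4, size=2), 0),
                        mode="nearest")                        # random translation
    scale = np.exp(rng.uniform(np.log(1 / 1.3), np.log(1.3)))
    img = ndimage.zoom(img, (scale, scale, 1), mode="nearest") # changes H, W: recrop in practice
    if rng.random() < 0.5:
        img = img[:, ::-1]                                     # horizontal flip
    img = img * (1.0 + 0.5 * rng.standard_normal())            # crude brightness jitter
    return np.clip(img, 0.0, 1.0)
```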
94 Convolutional DNN After preprocessing and augmentation, we performed the viewpoint extraction by rotating, flipping and cropping the input images. We extracted 16 different viewpoints for each image: first, two square-shaped crops were extracted from an input image, one at 0° and one at 45°. Both were also flipped horizontally to obtain 4 crops in total. Each of these crops is 69 × 69 pixels in size. Then, four overlapping corner patches of 45 × 45 pixels were extracted from each crop and rotated so that the centre of the galaxy is in the bottom-right corner of each patch. These 16 rotated patches constitute the viewpoints.
95 Convolutional DNN All viewpoints were presented to the network as 45 × 45 × 3 arrays of RGB values, scaled to the interval [0, 1], and processed by the same convolutional architecture. The resulting feature maps were then concatenated and processed by a stack of three fully connected layers to map them to the 37 answer probabilities.
96 Convolutional DNN 96
97 Convolutional DNN 97
98 Convolutional DNN