Data Mining
Deep Learning Deep Learning provided breakthrough results in speech recognition and image classification. Why? Because speech recognition and image classification are two basic examples of problems where learning is extremely hard, due to the huge dimensionality of the parameter space and the wide range of possible values their features can take.
Deep Learning So: 1. What exactly is deep learning? And 2. why is it generally better than other methods on image, speech and certain other types of data? Answers: 1. Deep learning means using a neural network with several layers of nodes between input and output. 2. The series of layers between input and output performs feature identification and processing in a series of stages, just as our brains seem to.
Deep Learning OK, but: 3. Multilayer neural networks have been around for 25 years. What's actually new? We have always had good algorithms for learning the weights in networks with one (at most two) hidden layers, but these algorithms are not good at learning the weights for networks with more than two hidden layers. What's new: algorithms for training many-layer networks.
DNN A single neuron computes a weighted sum of its inputs and passes it through an activation function f. For example, with inputs 2.7, 8.6 and 0.002 and weights W1 = -0.06, W2 = 2.5 and W3 = 1.4:

x = (-0.06 x 2.7) + (2.5 x 8.6) + (1.4 x 0.002) = 21.34

and the neuron outputs f(x).
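The weighted-sum computation can be sketched in a few lines of numpy. The choice of logistic sigmoid as the activation is an illustrative assumption; the slide only names it f:

```python
import numpy as np

# A single neuron: weighted sum of inputs, then an activation f.
# Inputs and weights taken from the slide's worked example.
inputs  = np.array([2.7, 8.6, 0.002])
weights = np.array([-0.06, 2.5, 1.4])   # W1, W2, W3

x = np.dot(weights, inputs)             # -0.06*2.7 + 2.5*8.6 + 1.4*0.002
print(round(x, 2))                      # 21.34

def f(x):
    # one common choice of activation: the logistic sigmoid
    return 1.0 / (1.0 + np.exp(-x))

print(f(x))  # very close to 1 for such a large x
```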
A dataset

Fields            class
1.4  2.7  1.9     0
3.8  3.4  3.2     0
6.4  2.8  1.7     1
4.1  0.1  0.2     0
etc.

Training the neural network (DNN):
- Initialise with random weights.
- Present a training pattern (e.g. 1.4 2.7 1.9).
- Feed it through to get an output (e.g. 0.8).
- Compare with the target output (0): error 0.8.
- Adjust the weights based on the error.
- Present the next training pattern (e.g. 6.4 2.8 1.7), feed it through, compare with the target output, adjust the weights again.
- And so on. Repeat this thousands, maybe millions of times, each time taking a random training instance and making slight weight adjustments. Algorithms for weight adjustment are designed to make changes that will reduce the error.
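The steps above can be sketched end-to-end in numpy. This is a minimal illustration, not the exact algorithm from the slides: the hidden-layer size, learning rate and squared-error gradient are all illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# The small dataset from the slides: three input fields, one binary class.
X = np.array([[1.4, 2.7, 1.9],
              [3.8, 3.4, 3.2],
              [6.4, 2.8, 1.7],
              [4.1, 0.1, 0.2]])
y = np.array([0.0, 0.0, 1.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Initialise with random weights (one hidden layer of 4 units).
W1 = rng.normal(0.0, 0.5, (3, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 0.5, (4, 1)); b2 = np.zeros(1)

def predict(X):
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()

err_before = np.mean((predict(X) - y) ** 2)

lr = 0.1
for step in range(5000):
    i = rng.integers(len(X))            # take a random training instance
    h = sigmoid(X[i] @ W1 + b1)         # feed it through ...
    out = sigmoid(h @ W2 + b2)[0]       # ... to get an output
    err = out - y[i]                    # compare with the target output
    # adjust the weights based on the error (gradient of squared error)
    d_out = err * out * (1.0 - out)
    d_h = d_out * W2[:, 0] * h * (1.0 - h)
    W2[:, 0] -= lr * d_out * h; b2 -= lr * d_out
    W1 -= lr * np.outer(X[i], d_h); b1 -= lr * d_h

err_after = np.mean((predict(X) - y) ** 2)
print(err_after < err_before)   # the tiny adjustments reduced the error
```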
DNN The decision boundary perspective:
- Initial random weights.
- Present a training instance / adjust the weights (repeated over and over).
- Eventually, the boundary settles into a good separation of the classes.
DNN The point I am trying to make: weight-learning algorithms for NNs are dumb. They work by making thousands and thousands of tiny adjustments, each making the network do better on the most recent pattern, but perhaps a little worse on many others. Hopefully, this tends to be good enough to learn effective classifiers for many real applications. If f(x) is non-linear, a network with one hidden layer can, in theory, learn any classification problem perfectly: a set of weights exists that can produce the targets from the inputs. The problem is finding them.
DNN NNs use a nonlinear f(x) so they can draw complex boundaries while keeping the data unchanged. SVMs only draw straight lines, but they transform the data first in a way that makes that OK.
The virtually impossible
DNN text mining Feature detectors: what is a hidden unit doing? Hidden layer units become self-organised feature detectors. Consider a unit whose incoming weights have strong values for the inputs along the top row of the image and low/zero weights everywhere else: it will send a strong signal for a horizontal line in the top row, ignoring everywhere else. A unit whose strong-value weights instead cluster in the top-left corner gives a strong signal for a dark area in the top left corner.
DNN text mining What features might you expect a good NN to learn, when trained with data like this? Vertical lines, horizontal lines, small circles. But what about position invariance? Our example unit detectors were tied to specific parts of the image.
DNN text mining Successive layers can learn higher-level features: early layers detect lines in specific positions, and higher-level detectors combine them (horizontal lines, vertical lines, upper loops, etc.), and so on up the hierarchy.
DNN So: multiple layers make sense. Your brain works that way.
DNN Many-layer neural network architectures should be capable of learning the true underlying features and feature logic, and therefore of generalising very well. But, until very recently, our weight-learning algorithms simply did not work on multi-layer architectures.
Deeper is better?

Layers x Size    Word Error Rate (%)
1 x 2k           24.2
2 x 2k           20.4
3 x 2k           18.4
4 x 2k           17.8
5 x 2k           17.2
7 x 2k           17.1
1 x 3772         22.5
1 x 4634         22.6
1 x 16k          22.1

Not surprising: more parameters, better performance?

Seide, Frank, Gang Li, and Dong Yu. "Conversational Speech Transcription Using Context-Dependent Deep Neural Networks." Interspeech, 2011.
Universality Theorem: any continuous function f : R^N -> R^M can be realized by a network with one hidden layer (given enough hidden neurons). So why a deep neural network and not a fat neural network?
Fat+Short vs. Thin+Tall: given the same number of parameters, which one is better, a shallow (fat and short) network or a deep (thin and tall) one?
Fat+Short vs. Thin+Tall: the word-error-rate table above answers the question. At comparable parameter budgets, the deep network (7 x 2k, 17.1% WER) clearly beats the shallow one (1 x 16k, 22.1% WER). Seide, Frank, Gang Li, and Dong Yu. "Conversational Speech Transcription Using Context-Dependent Deep Neural Networks." Interspeech, 2011.
Why deep? Deep means modularization. Suppose we want to classify images into four classes: girls with long hair, boys with long hair, girls with short hair, boys with short hair. Training one classifier per class directly is hard: some classes (e.g. boys with long hair) have few examples, so those classifiers are weak.

With modularization, we first train basic classifiers for the underlying attributes: boy or girl? long hair or short? Each of these has plenty of data. The four final classifiers are then built on top of these shared modules, so each can be fine even when trained with little data.
DNN The new way to train multi-layer NNs: train the layers one at a time. Train the first layer, then the next, then the next, and so on; finally train the output layer.
DNN Each of the (non-output) layers is trained to be an auto-encoder: it is forced to learn good features that describe what comes from the previous layer.
DNN auto-encoders An auto-encoder is trained, with an absolutely standard weight-adjustment algorithm, to reproduce its input. By making this happen with (many) fewer hidden units than inputs, the hidden layer units are forced to become good feature detectors. Intermediate layers are each trained to be auto-encoders (or similar). The final layer is trained to predict the class based on the outputs from the previous layers.
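A minimal auto-encoder in numpy, trained by a standard gradient-based weight adjustment to reproduce its input through a narrower hidden layer (the sizes, learning rate and tanh hidden activation are illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))          # toy data: 200 samples, 8 features

# Auto-encoder: 8 inputs -> 3 hidden units -> 8 outputs.
# The bottleneck forces the hidden units to learn compact features.
W_enc = rng.normal(0, 0.1, (8, 3)); b_enc = np.zeros(3)
W_dec = rng.normal(0, 0.1, (3, 8)); b_dec = np.zeros(8)

def forward(X):
    H = np.tanh(X @ W_enc + b_enc)     # encode
    return H, H @ W_dec + b_dec        # decode (linear output)

_, R0 = forward(X)
err0 = np.mean((R0 - X) ** 2)          # reconstruction error before training

lr = 0.01
for step in range(2000):
    H, R = forward(X)
    dR = 2 * (R - X) / len(X)          # gradient of the reconstruction error
    dH = (dR @ W_dec.T) * (1 - H ** 2) # backprop through tanh
    W_dec -= lr * H.T @ dR; b_dec -= lr * dR.sum(0)
    W_enc -= lr * X.T @ dH; b_enc -= lr * dH.sum(0)

_, R = forward(X)
err = np.mean((R - X) ** 2)
print(err < err0)   # reconstruction error has dropped
```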
DNN That's the basic idea. There are many, many types of deep learning: different kinds of auto-encoder, variations on architectures and training algorithms, etc. It is a very fast-growing research area.
Convolutional DNN
Convolutional DNN History: in 1995, Yann LeCun and Yoshua Bengio introduced the concept of convolutional neural networks.
Convolutional DNN CNN: the input image is convolved with N trainable filters and biases to produce N feature maps at the C1 level. Each group of pixels in the feature maps is added, weighted, combined with a bias, and passed through a sigmoid function to produce the N feature maps at S2. These are again filtered to produce the C3 level. The hierarchy then produces S4 in the same way as S2. Finally, these pixels are presented as a single vector input to the conventional neural network at the output. C layers are convolutions; S layers pool/sample. A CNN often starts with fairly raw features at the initial input and lets the network discover improved feature layers for the final supervised learner, e.g. an MLP trained with backpropagation.
Convolutional DNN Recap of convnets: a neural network with a specialized connectivity structure. Feed-forward pipeline: input image -> convolution (learned filters) -> non-linearity (e.g. Rectified Linear Unit, ReLU) -> pooling (local max, or min, average, median, etc.) -> feature maps. Supervised: the convolutional filters are trained by back-propagating the classification error.
Convolutional DNN Connectivity and weight sharing depend on the layer type: fully connected layers have all-different weights, while a convolution layer uses shared weights. Local connections and weight sharing give the convolution layer a much smaller number of parameters.
Convolutional DNN Convolution layer: detects the same feature at different positions in the input image. A filter (kernel) is slid over the input to produce a feature map.
Convolutional DNN Non-linearity: tanh; sigmoid, 1/(1+exp(-x)); rectified linear (ReLU), max(0, x). ReLU simplifies backprop, makes learning faster, and makes the features sparse.
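The three non-linearities, side by side in plain numpy:

```python
import numpy as np

def tanh(x):
    return np.tanh(x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

x = np.array([-2.0, 0.0, 3.0])
print(relu(x))       # [0. 0. 3.]
print(sigmoid(0.0))  # 0.5
# ReLU's gradient is 1 for x > 0 and 0 otherwise, so backprop is cheap
# and many activations are exactly zero (sparse features).
```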
Convolutional DNN Sub-sampling layer: spatial pooling, usually average or max. Role of pooling: invariance to small transformations; reduces the effect of noise and of shifts or distortions.
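A 2x2 pooling step in numpy; `pool2x2` is an illustrative helper, not a library function:

```python
import numpy as np

def pool2x2(fmap, op=np.max):
    """Downsample a feature map by applying `op` over 2x2 blocks."""
    h, w = fmap.shape
    blocks = fmap.reshape(h // 2, 2, w // 2, 2)
    return op(blocks, axis=(1, 3))

fmap = np.array([[1., 0., 2., 3.],
                 [4., 6., 6., 8.],
                 [3., 1., 1., 0.],
                 [1., 2., 2., 4.]])
print(pool2x2(fmap, np.max))    # [[6. 8.] [3. 4.]]
print(pool2x2(fmap, np.mean))   # [[2.75 4.75] [1.75 1.75]]
```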
Convolutional DNN Normalization: contrast normalization (between/across feature maps) equalizes the feature maps.
Convolutional DNN For each pixel in the input image, we encode the pixel's intensity as the value of a corresponding neuron in the input layer. For 28x28-pixel images, this means our network has 784 = 28x28 input neurons. We then train the network's weights and biases. Convolutional neural networks use three basic ideas: local receptive fields, shared weights, and pooling. Let's look at each of these ideas in turn. Local receptive fields: in the fully-connected layers shown earlier, the inputs were depicted as a vertical line of neurons. In a convolutional net, it helps instead to think of the inputs as a 28x28 square of neurons, whose values correspond to the 28x28 pixel intensities we're using as inputs. As usual, we'll connect the input pixels to a layer of hidden neurons. But we won't connect every input pixel to every hidden neuron. Instead, we only make connections in small, localized regions of the input image.
Convolutional DNN To be more precise, each neuron in the first hidden layer is connected to a small region of the input neurons, say, for example, a 5x5 region, corresponding to 25 input pixels. So, for a particular hidden neuron, we might have connections like those in the figure above. That region in the input image is called the local receptive field for the hidden neuron: a little window on the input pixels. Each connection learns a weight, and the hidden neuron learns an overall bias as well. You can think of that particular hidden neuron as learning to analyze its particular local receptive field.
Convolutional DNN We then slide the local receptive field across the entire input image. For each local receptive field, there is a different hidden neuron in the first hidden layer. If we have a 28x28 input image and 5x5 local receptive fields, then there will be 24x24 neurons in the hidden layer. This is because we can only move the local receptive field 23 neurons across (or 23 neurons down) before colliding with the right-hand side (or bottom) of the input image.
Convolutional DNN Shared weights and biases: I've said that each hidden neuron has a bias and 5x5 weights connected to its local receptive field. What I did not yet mention is that we're going to use the same weights and bias for each of the 24x24 hidden neurons. In other words, for the (j,k)-th hidden neuron, the output is:

output_{j,k} = sigma( b + sum_{l=0..4} sum_{m=0..4} w_{l,m} * a_{j+l, k+m} )

Here, sigma is the neural activation function, such as the sigmoid function; b is the shared value for the bias; w_{l,m} is a 5x5 array of shared weights; and a_{j,k} denotes the input activation at position (j,k). This means that all the neurons in the first hidden layer detect exactly the same feature, just at different positions. Informally, think of the feature detected by a hidden neuron as the kind of input pattern that will cause the neuron to activate: it might be an edge in the image, or some other type of shape.
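The shared-weight computation can be written directly from the formula. This is a deliberately naive loop, purely for illustration; real implementations use optimized convolution routines:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_map(a, w, b):
    """Hidden activations sigma(b + sum_{l,m} w[l,m] * a[j+l, k+m]).

    One shared 5x5 weight array w and one shared bias b produce a whole
    24x24 feature map from a 28x28 input: every hidden neuron detects
    the same feature at a different position."""
    k = w.shape[0]
    out = np.empty((a.shape[0] - k + 1, a.shape[1] - k + 1))
    for j in range(out.shape[0]):
        for m in range(out.shape[1]):
            out[j, m] = sigmoid(b + np.sum(w * a[j:j+k, m:m+k]))
    return out

a = np.random.default_rng(0).random((28, 28))      # input activations
w = np.random.default_rng(1).normal(0, 0.1, (5, 5))
fmap = conv_map(a, w, b=0.0)
print(fmap.shape)   # (24, 24)
```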
Convolutional DNN Pooling layers: in addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually used immediately after convolutional layers. What the pooling layers do is simplify the information in the output from the convolutional layer. A pooling layer takes each feature map output from the convolutional layer and prepares a condensed feature map. Each unit in the pooling layer may summarize a region of 2x2 neurons in the previous layer. As a concrete example, one common procedure for pooling is known as max-pooling: a pooling unit outputs the maximum activation in its 2x2 input region.
Convolutional DNN Putting it all together: we can now put all these ideas together to form a complete convolutional neural network. The network begins with 28x28 input neurons, which are used to encode the pixel intensities for the input image. This is then followed by a convolutional layer using a 5x5 local receptive field and 3 feature maps. The result is a layer of 3x24x24 hidden feature neurons. The next step is a max-pooling layer, applied to 2x2 regions across each of the 3 feature maps. The result is a layer of 3x12x12 hidden feature neurons. The final layer of connections in the network is a fully-connected layer: it connects every neuron from the max-pooled layer to every one of the 10 output neurons.
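The layer sizes above can be checked with a little arithmetic:

```python
# Sizes for the network described above.
in_side, field, n_maps, pool = 28, 5, 3, 2

conv_side = in_side - field + 1    # 24: sliding a 5x5 field over 28x28
pool_side = conv_side // pool      # 12: 2x2 max-pooling halves each side

hidden = n_maps * conv_side ** 2   # 3 x 24 x 24 feature neurons
pooled = n_maps * pool_side ** 2   # 3 x 12 x 12 after pooling
fc_weights = pooled * 10           # fully connected to the 10 outputs

print(conv_side, pool_side, hidden, pooled, fc_weights)
# 24 12 1728 432 4320
```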
Softmax classification Softmax layer as the output layer. With an ordinary output layer, y_i = sigma(z_i): in general, the outputs of the network can be any values, which may not be easy to interpret.
Softmax classification Softmax layer as the output layer. A softmax layer converts the scores z_i into probabilities y_i = e^{z_i} / sum_j e^{z_j}, so that 1 > y_i > 0 and sum_i y_i = 1. Example: for z = (3, 1, -3), e^3 is about 20, e^1 about 2.7 and e^-3 about 0.05, giving y of about (0.88, 0.12, 0.00).
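A softmax sketch in numpy, reproducing the worked example:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

z = np.array([3.0, 1.0, -3.0])
y = softmax(z)
print(np.round(y, 2))   # [0.88 0.12 0.  ]
print(y.sum())          # 1.0: the outputs form a probability distribution
```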
Softmax classification - example: a 16 x 16 = 256-pixel image of a handwritten digit (ink = 1, no ink = 0) is fed into the network as inputs x_1, ..., x_256. The network, with parameters theta = {W^1, b^1, W^2, b^2, ..., W^L, b^L}, ends in a softmax layer producing y_1, ..., y_10, one probability per digit class, e.g. y_1 = 0.1, y_2 = 0.7, ..., y_10 = 0.2. The input is classified as "2" because y_2 is the largest output.
Convolutional DNN in Astrophysics Convolutional neural networks for galaxy morphology prediction (Dieleman et al. 2015, MNRAS, 450, 2). Measuring the morphological parameters of galaxies is a key requirement for studying their formation and evolution. Surveys such as the SDSS have resulted in the availability of very large collections of images, which have permitted population-wide analyses of galaxy morphology. Morphological analysis has traditionally been carried out mostly via visual inspection by trained experts, which is time-consuming and does not scale to large (> 10^4) numbers of images.
Convolutional DNN One method for classifying galaxy morphology exploits the rotational symmetry of galaxy images; however, there are other invariances and symmetries (besides translational) that may be exploited for convolutional neural networks. The idea of deep learning is to build models that represent data at multiple levels of abstraction, and can discover accurate representations autonomously from the data itself. Deep learning models consist of several layers of processing that form a hierarchy: each subsequent layer extracts a progressively more abstract representation of the input data and builds upon the representation from the previous layer, typically by computing a non-linear transformation of its input. The parameters of these transformations are optimized by training the model on a dataset. 85
Convolutional DNN To determine how the parameters should be changed to reduce the prediction error across the dataset, gradient descent is used: each parameter theta is repeatedly updated as theta <- theta - eta * dE/dtheta, where E is the prediction error and eta is the learning rate. Convolutional neural networks contain two types of layers with restricted connectivity: convolutional layers and pooling layers. A convolutional layer takes a stack of feature maps (e.g. the colour channels of an image) as input and convolves each of these with a set of learnable filters to produce a stack of output feature maps. The output feature maps are represented as follows:

X_n^(l) = b_n + sum_{k=1..K} W_{n,k} * X_k^(l-1)

Here, * represents the two-dimensional convolution operation, the matrices W_{n,k} represent the filters of layer l, and b_n represents the bias for feature map n. Note that a feature map X_n^(l) is obtained by computing a sum of K convolutions with the feature maps of the previous layer. By replacing the matrix product with a sum of convolutions, the connectivity of the layer is effectively restricted to take advantage of the input structure and to reduce the number of parameters. Each unit is connected only to a local subset of the units in the layer below, and each unit is replicated across the entire input. This means that each unit can be seen as detecting a particular feature across the input (for example, an oriented edge in an image).
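A direct, deliberately naive numpy rendering of the multi-channel feature-map formula. As is common in deep-learning code, this computes a cross-correlation rather than a flipped convolution, which makes no difference once the filters are learned; the sizes are illustrative:

```python
import numpy as np

def conv_layer(X_prev, W, b):
    """Feature map n: X_n = b_n + sum_k (W_{n,k} * X_k), '*' = 2-D conv.

    X_prev: (K, H, Wd) stack of input feature maps (e.g. RGB channels)
    W:      (N, K, f, f) learnable filters; b: (N,) biases
    returns (N, H-f+1, Wd-f+1) stack of output feature maps."""
    K, H, Wd = X_prev.shape
    N, _, f, _ = W.shape
    out = np.zeros((N, H - f + 1, Wd - f + 1))
    for n in range(N):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                patch = X_prev[:, i:i+f, j:j+f]   # local subset of units
                out[n, i, j] = b[n] + np.sum(W[n] * patch)
    return out

rng = np.random.default_rng(0)
X = rng.random((3, 8, 8))               # 3 colour channels, 8x8 image
W = rng.normal(0, 0.1, (4, 3, 3, 3))    # 4 filters over the 3 channels
maps = conv_layer(X, W, np.zeros(4))
print(maps.shape)   # (4, 6, 6)
```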
Convolutional DNN Because convolutional layers are only able to model local correlations in the input, the dimensionality of the feature maps is often reduced between convolutional layers by inserting pooling layers. This allows higher layers to model correlations across a larger part of the input, at a lower resolution. A pooling layer reduces the dimensionality of a feature map by computing some aggregation function (typically the maximum or the mean) across small local regions of the input. By alternating convolutional and pooling layers, higher layers in the network see progressively coarser representations of the input. As a result, these layers are able to model higher-level abstractions more easily, because each unit sees a larger part of the input. This also makes the model invariant to small translations of the input, which is a desirable property for modelling images and many other types of data. Unlike convolutional layers, pooling layers typically do not have any trainable parameters.
Convolutional DNN The restricted connectivity patterns used in convolutional neural networks drastically reduce the number of parameters required to model large images, by exploiting translational symmetry. However, many other types of symmetries occur in images. For images of galaxies, rotating an image should not affect its morphological classification. This rotational symmetry is exploited by applying the same set of feature detectors to various rotated versions of the input. Rotating an image by an angle that is not a multiple of 90 degrees requires interpolation and results in an image whose edges are not aligned with the rows and columns of the pixel grid. These complications make exploiting rotational symmetry more challenging.
Convolutional DNN We compute rotated and flipped versions of the input images, referred to as viewpoints, and process each of these separately with the same convolutional network architecture, consisting of alternating convolutional and pooling layers. The output feature maps of this network for the different viewpoints are then concatenated, and one or more dense layers are stacked on top. This arrangement allows the dense layers to aggregate high-level features extracted from different viewpoints. In practice, we crop the top left part of each viewpoint image to reduce redundancy between viewpoints and the size of the input images (and hence computation time). Images are cropped in such a way that each one contains the center of the galaxy, the most informative part of the image.
Convolutional DNN In terms of preprocessing, images are first cropped and rescaled to reduce the dimensionality of the input. Cropping is useful because the object of interest sits in the middle of the image, surrounded by a large amount of sky background, and typically fits within a square with a side of approximately half the image height. The images are then rescaled to speed up training, with little to no effect on predictive performance. Images were cropped from 424 x 424 pixels to 207 x 207, and then downscaled 3 times to 69 x 69 pixels. Cropping may occasionally remove interesting parts of the image; in such cases, an analysis of the Petrosian radius and of the position of the object can help to recenter and rescale the cropped images.
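The crop-and-downscale step can be sketched as follows. `preprocess` is an illustrative helper, and block-averaging is an assumed resampling method; the paper's exact interpolation may differ:

```python
import numpy as np

def preprocess(img, crop=207, factor=3):
    """Center-crop a 424x424 image to 207x207, then downscale 3x
    (by block-averaging) to 69x69. Sizes taken from the slides."""
    h, w = img.shape
    top, left = (h - crop) // 2, (w - crop) // 2
    img = img[top:top + crop, left:left + crop]
    s = crop // factor                      # 207 // 3 = 69
    img = img[:s * factor, :s * factor]     # trim to a multiple of 3
    return img.reshape(s, factor, s, factor).mean(axis=(1, 3))

img = np.random.default_rng(0).random((424, 424))
print(preprocess(img).shape)   # (69, 69)
```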
Convolutional DNN Due to the limited size of the training set, performing data augmentation to artificially increase the number of training examples is instrumental. Each training example was randomly perturbed in five ways:
- rotation: random rotation with an angle sampled uniformly between 0 and 360 degrees, to exploit rotational symmetry in the images.
- translation: random shift sampled uniformly between -4 and +4 pixels (relative to the original image size of 424 by 424) in the x and y directions. The size of the shift is limited to ensure that the object of interest remains in the center of the image.
- scaling: random rescaling with a factor sampled log-uniformly between 1.3^-1 and 1.3.
- flipping: the image is flipped with a probability of 0.5.
- brightness adjustment: the colour of the image is adjusted as described by Krizhevsky et al. (2012), with the standard deviation for the scale factor set to 0.5. In practice, this amounts to a brightness adjustment.
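The five perturbations can be sampled as follows. This sketches only the sampling ranges from the list above; actually applying them to an image (rotation, warping, etc.) is omitted, and the dictionary keys are illustrative names:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_augmentation():
    """Draw one random perturbation, with the ranges from the slides."""
    return {
        "rotation_deg": rng.uniform(0, 360),       # rotational symmetry
        "shift_px": rng.integers(-4, 5, size=2),   # x/y translation
        "scale": np.exp(rng.uniform(np.log(1 / 1.3), np.log(1.3))),
        "flip": bool(rng.random() < 0.5),
        "brightness_sigma": 0.5,                   # Krizhevsky-style colour
    }

p = sample_augmentation()
print(sorted(p))
```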
Convolutional DNN After preprocessing and augmentation, we performed the viewpoint extraction by rotating, flipping and cropping the input images. We extracted 16 different viewpoints for each image: first, two square-shaped crops were extracted from an input image, one at 0 degrees and one at 45 degrees. Both were also flipped horizontally to obtain 4 crops in total. Each of these crops is 69 x 69 pixels in size. Then, four overlapping corner patches of 45 x 45 pixels were extracted from each crop and rotated so that the center of the galaxy is in the bottom right corner of each patch. These 16 rotated patches constitute the viewpoints.
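The count of 16 viewpoints follows from simple enumeration: 2 rotations x 2 flips x 4 corner patches. A sketch of that bookkeeping (the descriptor fields are illustrative, not from the paper):

```python
from itertools import product

# Enumerate the 16 viewpoints: 2 rotations x 2 flips x 4 corner patches.
viewpoints = [
    {"rotation_deg": rot, "flipped": flip, "corner": corner, "patch_px": 45}
    for rot, flip, corner in product(
        (0, 45),
        (False, True),
        ("top-left", "top-right", "bottom-left", "bottom-right"))
]
print(len(viewpoints))   # 16
```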
Convolutional DNN All viewpoints were presented to the network as 45 x 45 x 3 arrays of RGB values, scaled to the interval [0, 1], and processed by the same convolutional architecture. The resulting feature maps were then concatenated and processed by a stack of three fully connected layers to map them to the 37 answer probabilities.