Data Mining. Neural Networks

Data Mining Neural Networks

Goals for this Unit. A basic understanding of neural networks and how they work; the ability to use neural networks to solve real problems; understanding when neural networks may be most appropriate; understanding the strengths and weaknesses of neural network models. A detailed understanding of how they work is deferred until the Machine Learning course.

Biological Motivation. Can we simulate the human learning process? There are two schools: modeling the biological learning process, and obtaining highly effective algorithms independent of whether those algorithms mirror biological processes (this course). The biological learning system, the brain, is a complex network of neurons. ANNs are loosely motivated by biological neural systems; however, many features of ANNs are inconsistent with biological systems.

Neural Speed Constraints. What do you think the differences are in speed between the elements in a computer and in the brain? How fast are neurons? Neurons have a switching time on the order of a few milliseconds, compared to nanoseconds for current computing hardware. However, neural systems can perform complex cognitive tasks (vision, speech understanding) in tenths of a second. How? There is only time for about 100 serial steps in this time frame, compared to orders of magnitude more for current computers, so the brain must be exploiting massive parallelism. The human brain has about 30 billion neurons with an average of 10,000 connections each.

Artificial Neural Networks (ANN). An ANN is a network of simple units with real-valued inputs and outputs: many neuron-like threshold switching units, many weighted interconnections among units, highly parallel and distributed processing, and an emphasis on tuning the weights automatically.

Real Neurons (figure of a biological neuron). Do you know which way the signal flows? Answer on the next slide.

How Does Our Brain Work? A neuron is connected to other neurons via its input and output links. Dendrites receive the signal (input); axons transmit the signal (output). An axon comes close to roughly 10,000 dendrites, and the gap between them is called a synapse. Chemical signals are used to cross synapses: an axon contains about 50 different types of neurotransmitters, and the dendrites carry receptors keyed to each neurotransmitter, which can be excitatory or inhibitory.

How Does Our Brain Work? Each incoming neuron has an activation value, and each connection has an associated weight. The neuron sums the incoming weighted values, and this sum is the input to an activation function. The output of the activation function is the output of the neuron.

Neural Communication. The electrical potential across the cell membrane exhibits spikes called action potentials. A spike originates in the cell body, travels down the axon, and causes the synaptic terminals to release neurotransmitters. The chemical diffuses across the synapse to the dendrites of other neurons. Neurotransmitters can be excitatory or inhibitory. If the net input of neurotransmitters to a neuron from other neurons is excitatory and exceeds some threshold, it fires an action potential.

Real Neural Learning. To model the brain, we need to model a neuron. Each neuron performs a simple computation: it receives signals from its input links and uses these to compute its activation level (or output). This value is passed to other neurons via its output links.

Neural Network Learning. A learning approach based on modeling adaptation in biological neural systems: learning involves modifying the weights between neurons (note that in a biological system one can also add new connections). Perceptron: the initial algorithm for learning simple, single-layer neural networks, developed in the 1950s. Backpropagation: a more complex algorithm for learning multi-layer neural networks, developed in the 1980s.

Prototypical ANN. Units are interconnected in layers, forming a directed acyclic graph (DAG). The network structure is fixed; learning means adjusting the weights, via the backpropagation algorithm.

Strengths and Weaknesses. Instances are vectors of attributes with discrete or real values. The target function may be discrete, real, or vector-valued, so ANNs can handle both classification and regression, and they tolerate noisy data. Use them when long training times are acceptable, fast evaluation is needed, and the model need not be comprehensible or explainable: it is nearly impossible to interpret neural networks except for the simplest target functions. As we will see, they can learn complex decision boundaries.

Perceptrons. The perceptron is a type of artificial neural network that can be seen as the simplest kind of feedforward neural network: a linear classifier. It was introduced in the late 1950s. Perceptron convergence theorem (Rosenblatt, 1962): the perceptron will learn to classify any linearly separable set of inputs. The perceptron is a single-layer, feed-forward network: data only travels in one direction. Its key limitation is the XOR function, for which no linear separation exists.

ALVINN drives 70 mph on highways. See the ALVINN video: https://www.theverge.com/2016/11/27/13752344/alvinn-self-driving-car-1989-cmu-navlab

Artificial Neuron Model. Model the network as a graph with cells as nodes and synaptic connections as weighted edges; $w_{ji}$ is the weight on the edge from node $i$ to node $j$. The net input to cell $j$ is $net_j = \sum_i w_{ji} o_i$. The cell output is a step function of the net input: $o_j = 1$ if $net_j \geq T_j$, and $o_j = 0$ otherwise, where $T_j$ is the threshold for unit $j$. (Figure: unit 1 receiving inputs from units 2 through 6 via weights $w_{12}, \ldots, w_{16}$, and the 0/1 step output plotted against $net_j$.)
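As a concrete illustration, here is a minimal sketch of the threshold unit just described. The function name, weights, threshold, and inputs are our own illustrative choices, not values from the slides.

```python
# Minimal sketch of the linear threshold unit described above.
# Weights, threshold, and inputs are made-up illustrative values.

def threshold_unit(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= threshold else 0

# Unit j receiving three inputs o_i with weights w_ji:
outputs_from_other_units = [1, 0, 1]
weights_into_unit_j = [0.5, -0.2, 0.7]
print(threshold_unit(outputs_from_other_units, weights_into_unit_j, threshold=1.0))  # -> 1 (net = 1.2)
```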

Perceptron: Artificial Neuron Model. Again, model the network as a graph with cells as nodes and synaptic connections as weighted edges from node $i$ to node $j$, with weight $w_{ji}$. The input value received by a neuron is calculated by summing the weighted input values from its input links, $\sum_{i=0}^{n} w_i x_i$, and the sum is then passed through a threshold function. In vector notation, the output is a threshold function of $\vec{w} \cdot \vec{x}$.

Different Threshold Functions. Different threshold functions can be used to map the weighted sum $\vec{w} \cdot \vec{x}$ to the unit output $o(\vec{x})$; for example, a hard threshold that outputs 1 when $\vec{w} \cdot \vec{x} > 0$ and outputs 0 (or -1) otherwise.
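A small sketch of two such threshold variants, the 0/1 step and the +1/-1 sign function, applied to the same dot product. The weight vector and input below are made-up values, with the first component acting as a bias.

```python
import numpy as np

# Sketch of two common threshold (activation) functions for a perceptron;
# the example weight vector and input are made-up values.

def step_output(w, x):
    """0/1 step threshold on the dot product w . x."""
    return 1 if np.dot(w, x) > 0 else 0

def sign_output(w, x):
    """+1/-1 sign threshold on the dot product w . x."""
    return 1 if np.dot(w, x) > 0 else -1

w = np.array([-0.3, 0.8, 0.5])   # w[0] acts as a bias weight
x = np.array([1.0, 0.2, 0.1])    # x[0] = 1 is the constant bias input
print(step_output(w, x), sign_output(w, x))  # -> 0 -1 (w . x = -0.09)
```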

Examples (step activation function). Three functions a single unit can compute:

AND:
In1 In2 | Out
 0   0  |  0
 0   1  |  0
 1   0  |  0
 1   1  |  1

OR:
In1 In2 | Out
 0   0  |  0
 0   1  |  1
 1   0  |  1
 1   1  |  1

NOT:
In | Out
 0 |  1
 1 |  0

Neural Computation. McCulloch and Pitts (1943) showed how such model neurons could compute logical functions and be used to construct finite-state machines. They can be used to simulate logic gates. AND: let all $w_{ji}$ be $T_j/n$, where $n$ is the number of inputs. OR: let all $w_{ji}$ be $T_j$. NOT: let the threshold be 0, with a single input having a negative weight. Arbitrary logic circuits, sequential machines, and computers can be built with such gates.
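A quick check of these gate constructions, using the same kind of threshold unit as above (redefined here so the snippet is self-contained); the choice of $T = 1$ is arbitrary, any positive threshold works.

```python
# Sketch verifying the McCulloch-Pitts gate constructions described above.

def threshold_unit(inputs, weights, threshold):
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= threshold else 0

T, n = 1.0, 2                       # an arbitrary positive threshold, 2 inputs
AND = lambda a, b: threshold_unit([a, b], [T / n] * n, T)   # all weights T/n
OR  = lambda a, b: threshold_unit([a, b], [T] * n, T)       # all weights T
NOT = lambda a:    threshold_unit([a], [-1.0], 0.0)         # negative weight, threshold 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
    print("NOT", a, "=", NOT(a))
```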

Perceptron Training. Assume supervised training examples giving the desired output for a unit given a set of known input activations. The goal is to learn the weight vector (synaptic weights) that causes the perceptron to produce the correct +/-1 values. The perceptron uses an iterative update algorithm to learn a correct set of weights; two such rules are the perceptron training rule and the delta rule. Both algorithms are guaranteed to converge to somewhat different acceptable hypotheses, under somewhat different conditions.

Perceptron Training Rule. Update the weights by $w_i \leftarrow w_i + \Delta w_i$, where $\Delta w_i = \eta (t - o) x_i$. Here $\eta$ is the learning rate, a small value (e.g., 0.1) that is sometimes made to decay as the number of weight-tuning operations increases; $t$ is the target output for the current training example; and $o$ is the (linear) unit output for the current training example.
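As a concrete, made-up numerical example of this rule: suppose $\eta = 0.1$, the target is $t = 1$, the unit currently outputs $o = -1$, and the corresponding input is $x_i = 0.8$. Then $\Delta w_i = 0.1 \cdot (1 - (-1)) \cdot 0.8 = 0.16$, so $w_i$ is increased; if the output had already been correct, $t - o = 0$ and the weight would be left unchanged.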

Perceptron Training Rule (continued). Equivalent to the rules: if the output is correct, do nothing; if the output is too high, lower the weights on active inputs; if the output is too low, increase the weights on active inputs. One can prove it will converge if the training data is linearly separable and $\eta$ is small, and it works reasonably well even when the training data is not linearly separable.

Perceptron Learning Algorithm. Iteratively update the weights until convergence: initialize the weights to random values; then, until the outputs of all training examples are correct, for each training pair E compute the current output $o_j$ for E given its inputs, compare it to the target value $t_j$ for E, and update the synaptic weights and threshold using the learning rule. Each execution of the outer loop is typically called an epoch.
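A minimal runnable sketch of this algorithm, under a few assumptions of ours: +/-1 targets, a sign-threshold output, the threshold folded in as a bias weight $w_0$, and a small made-up dataset (logical AND).

```python
import numpy as np

def train_perceptron(X, t, eta=0.1, max_epochs=100):
    """Perceptron learning: w_i <- w_i + eta * (t - o) * x_i, with x_0 = 1 as bias."""
    X = np.hstack([np.ones((len(X), 1)), X])      # prepend constant bias input
    w = np.random.uniform(-0.5, 0.5, X.shape[1])  # small random initial weights
    for epoch in range(max_epochs):
        errors = 0
        for x_i, t_i in zip(X, t):
            o_i = 1 if np.dot(w, x_i) > 0 else -1  # sign-threshold output
            if o_i != t_i:
                w += eta * (t_i - o_i) * x_i       # perceptron training rule
                errors += 1
        if errors == 0:                            # all examples correct: converged
            return w, epoch + 1
    return w, max_epochs

# Made-up linearly separable data (logical AND with +/-1 targets).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([-1, -1, -1, 1])
w, epochs = train_perceptron(X, t)
print("weights:", w, "epochs:", epochs)
```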

Perceptron as a Linear Separator. Since the perceptron uses a linear threshold function, it searches for a linear separator that discriminates the classes, or more generally a hyperplane in n-dimensional space. (Figure: a line separating two classes of points in the $(o_2, o_3)$ plane.)

Concepts a Perceptron Cannot Learn. It cannot learn exclusive-or (XOR), or the parity function in general. (Figure: the XOR points in the $(o_2, o_3)$ plane, with positive examples at (0,1) and (1,0) and negative examples at (0,0) and (1,1); no single line separates them.)

General Structure of an ANN. (Figure: inputs $x_1, \ldots, x_5$ feed an input layer, which feeds a hidden layer, which feeds an output layer producing $y$. Each neuron $i$ takes inputs $I_1, I_2, I_3$ with weights $w_{i1}, w_{i2}, w_{i3}$, forms the weighted sum $S_i$, and applies an activation function $g(S_i)$ with threshold $t$ to produce its output $O_i$.) Perceptrons have no hidden layers; multilayer perceptrons may have many.
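A sketch of the forward pass through such a network. The layer sizes, random weights, and step activation are our own illustrative choices; real networks typically use a differentiable activation, as discussed on the following slides.

```python
import numpy as np

# Sketch of a forward pass through a small fully connected network of the kind
# drawn on this slide. Shapes and weights are made-up illustrative values.

def layer(inputs, weights, activation):
    """One layer: S = W @ inputs, then O = g(S) elementwise."""
    return activation(weights @ inputs)

step = lambda s: (s > 0).astype(float)       # threshold activation g

x = np.array([0.2, 0.5, 0.1, 0.9, 0.4])      # inputs x_1 .. x_5
W_hidden = np.random.uniform(-1, 1, (3, 5))  # 3 hidden units, 5 inputs
W_output = np.random.uniform(-1, 1, (1, 3))  # 1 output unit, 3 hidden inputs

h = layer(x, W_hidden, step)                 # hidden-layer outputs O_i
y = layer(h, W_output, step)                 # network output y
print("hidden:", h, "output:", y)
```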

Learning Power of an ANN. The perceptron is guaranteed to converge if the data is linearly separable; it will learn a hyperplane that separates the classes. A multilayer ANN has no such guarantee of convergence, but it can learn functions that are not linearly separable.

Multilayer Network Example (figure). The decision surface is highly nonlinear.

Sigmoid Threshold Unit. A sigmoid unit is a unit whose output is a nonlinear, but also differentiable, function of its inputs. We can therefore derive gradient descent rules to train a single sigmoid unit, and multilayer networks of sigmoid units can be trained with backpropagation.
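A short sketch of the sigmoid and its derivative, the differentiability that makes gradient descent possible. The slide does not give the exact form, so this assumes the standard logistic sigmoid $\sigma(z) = 1/(1 + e^{-z})$.

```python
import numpy as np

# The usual logistic sigmoid and its derivative (assumed standard form,
# sigma(z) = 1 / (1 + exp(-z)); the slide does not specify it).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_derivative(z):
    s = sigmoid(z)
    return s * (1.0 - s)   # convenient closed form: sigma(z) * (1 - sigma(z))

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z))             # ~ [0.12, 0.5, 0.88]
print(sigmoid_derivative(z))  # ~ [0.10, 0.25, 0.10]
```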

Multi-Layer Networks. Multi-layer networks can represent arbitrary functions, but an effective learning algorithm for such networks was long thought to be difficult to find. A typical multi-layer network consists of an input, a hidden, and an output layer, each fully connected to the next, with activation feeding forward (figure: input layer at the bottom, hidden layer in the middle, output layer at the top). The weights determine the function computed. Given an arbitrary number of hidden units, any boolean function can be computed with a single hidden layer.
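To illustrate the last point, here is XOR, which no single perceptron can represent, computed by a two-layer network of threshold units with hand-chosen weights. The construction (OR and AND hidden units combined at the output) is our own illustrative example, not one given on the slides.

```python
# Sketch: XOR computed by one hidden layer of threshold units.

def unit(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor_net(a, b):
    h_or  = unit([a, b], [1, 1], 0.5)         # hidden unit computing a OR b
    h_and = unit([a, b], [1, 1], 1.5)         # hidden unit computing a AND b
    return unit([h_or, h_and], [1, -1], 0.5)  # output: (a OR b) AND NOT (a AND b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))      # 0 0->0, 0 1->1, 1 0->1, 1 1->0
```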

Logistic Function. (Figure: a single logistic unit. The inputs Age = 34, Gender = 1, and Stage = 4 are the independent variables; each is multiplied by a coefficient, the weighted sum S is passed through the sigmoid, and the output 0.6 is the probability of being alive.)

Neural Network Model. (Figure: the same inputs (Age, Gender, Stage) now feed a hidden layer of sigmoid units via a first set of weights; the hidden-unit outputs feed a final sigmoid unit via a second set of weights, producing 0.6, the probability of being alive. Independent variables, weights, hidden layer, weights, dependent variable (prediction).)

Getting an Answer from a NN. (Figures, three slides: the answer is obtained by propagating the inputs forward. First, each hidden unit computes the weighted sum of the input values (Age, Gender, Stage) and applies the sigmoid; then the output unit computes the weighted sum of the hidden-unit outputs and applies the sigmoid, giving 0.6, the probability of being alive.)
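A sketch of this forward pass for the Age/Gender/Stage example. All weights below are made-up values, so the resulting probability will not be exactly the 0.6 shown on the slides.

```python
import numpy as np

# Sketch of "getting an answer": forward pass through the slide's
# Age/Gender/Stage network, with made-up weights.

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = np.array([34.0, 1.0, 4.0])            # Age, Gender, Stage
W1 = np.array([[0.02, 0.3, -0.1],         # weights into hidden unit 1
               [-0.01, 0.2, 0.25]])       # weights into hidden unit 2
w2 = np.array([0.6, 0.8])                 # weights from hidden units to output

hidden = sigmoid(W1 @ x)                  # hidden-unit activations
p_alive = sigmoid(w2 @ hidden)            # predicted probability of being alive
print(round(float(p_alive), 3))
```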

Comments on the Training Algorithm. Backpropagation is not guaranteed to converge to zero training error; it may converge to a local optimum or oscillate indefinitely. In practice, it does converge to low error for many large networks on real data. Many epochs (thousands) may be required, meaning hours or days of training for large networks. To avoid local-minima problems, run several trials starting with different random weights (random restarts), then either take the result of the trial with the lowest training-set error or build a committee of the results from multiple trials (possibly weighting votes by training-set accuracy).
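A sketch of the random-restart strategy just described. The `train_network` and `training_error` arguments are hypothetical stand-ins for whatever training and evaluation routines are used; only the restart-and-keep-the-best logic is the point here.

```python
import random

# Sketch of random restarts: train from several random initializations and
# keep the network with the lowest training-set error.

def random_restarts(train_network, training_error, n_trials=5):
    best_net, best_err = None, float("inf")
    for trial in range(n_trials):
        random.seed(trial)                 # different random initial weights each trial
        net = train_network()              # one full training run (hypothetical helper)
        err = training_error(net)          # training-set error (hypothetical helper)
        if err < best_err:
            best_net, best_err = net, err  # keep the best trial
    return best_net, best_err
```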

Hidden Unit Representations. Trained hidden units can be seen as newly constructed features that make the target concept linearly separable in the transformed space; this is a key feature of ANNs. In many real domains, hidden units can be interpreted as representing meaningful features such as vowel detectors or edge detectors. However, the hidden layer can also become a distributed representation of the input in which no individual unit is easily interpretable as a meaningful feature.

Expressive Power of ANNs. Boolean functions: any boolean function can be represented by a two-layer network with sufficient hidden units. Continuous functions: any bounded continuous function can be approximated with arbitrarily small error by a two-layer network. Arbitrary functions: any function can be approximated to arbitrary accuracy by a three-layer network. Why not just use millions of hidden units? Efficiency (neural network training is slow) and overfitting.

Determining the Best Number of Hidden Units. Too few hidden units prevents the network from adequately fitting the data; too many hidden units can result in over-fitting. (Figure: error versus number of hidden units, with error on the training data decreasing while error on the test data eventually rises.) Use internal cross-validation to empirically determine an optimal number of hidden units.
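A sketch of choosing the number of hidden units by internal cross-validation. The slides do not prescribe a library; this uses scikit-learn's MLPClassifier as a stand-in network, a synthetic dataset, and an arbitrary list of candidate sizes.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic two-class data (stand-in for a real training set).
X, y = make_moons(n_samples=200, noise=0.2, random_state=0)

best_h, best_score = None, -np.inf
for h in (1, 2, 4, 8, 16, 32):                       # candidate hidden-layer sizes
    net = MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=0)
    score = cross_val_score(net, X, y, cv=5).mean()  # internal 5-fold CV accuracy
    if score > best_score:
        best_h, best_score = h, score
print("best number of hidden units:", best_h, "CV accuracy:", round(best_score, 3))
```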

Over-Training Prevention. Running too many epochs can result in over-fitting. (Figure: error versus number of training epochs, with error on the training data decreasing while error on the test data eventually rises.) Keep a hold-out validation set and test accuracy on it after every epoch; stop training when additional epochs actually increase validation error. To avoid losing training data to validation, use internal 10-fold CV on the training set to compute the average number of epochs that maximizes generalization accuracy, then train the final network on the complete training set for this many epochs.
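A sketch of the early-stopping loop described above. The `train_one_epoch` and `validation_error` arguments are hypothetical stand-ins for the real training and evaluation routines, and the `patience` parameter (waiting a few epochs before stopping) is our own small generalization of "stop when validation error increases".

```python
# Sketch of early stopping with a hold-out validation set.

def train_with_early_stopping(net, train_one_epoch, validation_error,
                              max_epochs=1000, patience=5):
    """Stop once validation error has not improved for `patience` epochs."""
    best_err, best_epoch, epochs_since_best = float("inf"), 0, 0
    for epoch in range(1, max_epochs + 1):
        train_one_epoch(net)                   # one pass over the training set
        err = validation_error(net)            # error on the hold-out validation set
        if err < best_err:
            best_err, best_epoch, epochs_since_best = err, epoch, 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:  # validation error keeps rising: stop
                break
    return best_epoch, best_err
```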

Summary of Neural Networks. When are neural networks useful? When instances are represented by attribute-value pairs (particularly when the attributes are real-valued); when the target function is discrete-valued, real-valued, or vector-valued; when the training examples may contain errors; and when fast evaluation times are necessary but slow training is acceptable. When not? When fast training times are necessary, or when understandability of the learned function is required.

Successful Applications: text to speech (NETtalk); fraud detection; financial applications (HNC, eventually bought by Fair Isaac); chemical plant control (Pavilion Technologies); automated vehicles; game playing (Neurogammon); handwriting recognition.

Issues in Neural Nets. Learning the proper network architecture involves many choices and options, and there are alternative architectures we did not discuss, such as recurrent networks that use feedback.

Deep Learning. Deep learning is currently a very hot topic. It attempts to model high-level abstractions in data by using multiple processing layers, with complex structures or otherwise, composed of multiple non-linear transformations. "Deep" probably requires more than 2 hidden layers, and more than 10 for very deep learning. It replaces the need for handcrafted features with automatic feature learning at multiple levels of abstraction, where higher-level features are built from lower-level features. Autoencoder: a network with one hidden layer that has fewer nodes than the input, required to regenerate its input as its output. This forces the hidden layer to learn more general features that can reconstruct the input, and it does not require labeled training data.
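A small sketch of the autoencoder idea: a network with a narrow hidden layer trained to reproduce its own input. The slides do not specify a library or dataset; this uses scikit-learn's MLPRegressor as a convenient stand-in and synthetic data that genuinely lies near a 2-dimensional subspace.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# 200 examples of 8-dimensional data that really only varies along 2 directions.
latent = rng.normal(size=(200, 2))
X = latent @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(200, 8))

# Bottleneck of 2 hidden units, trained with the input as its own target.
autoencoder = MLPRegressor(hidden_layer_sizes=(2,), activation="tanh",
                           max_iter=5000, random_state=0)
autoencoder.fit(X, X)

reconstruction = autoencoder.predict(X)
print("mean reconstruction error:", round(float(np.mean((X - reconstruction) ** 2)), 4))
```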

Some Examples. More about ANNs, re-representation, and deep learning (5 min, Andrew Ng, interesting): https://www.youtube.com/watch?v=he4t7zekob0 Deep learning example, quadcopter navigation (5 min, somewhat interesting): https://www.youtube.com/watch?v=umrdt3zggpu