CS 4510/9010 Applied Machine Learning
Neural Nets
Paula Matuszek, Fall 2016
Neural Nets, the very short version
- A neural net consists of layers of nodes, or neurons, each of which has an activation level.
- Nodes in each layer receive inputs from the previous layer; these are combined according to a set of weights.
- If the activation level is reached, the node fires and sends input to the next layer.
- The initial layer is data from cases; the final layer is the expected outcomes.
- Learning is accomplished by modifying the weights to reduce the prediction error.
Connectionist Systems
- A neural net is an example of a connectionist system; we are looking at the connections among the neurons.
- Neurons are also known as perceptrons; the Weka book calls these MultiLayer Perceptron systems.
- The origin of NN systems is modeling human neurons.
- A recent research topic is deep learning systems, which are layered NNs; earlier NNs are the inputs to later ones. They are being explored as an approach to modeling a richer, more trainable knowledge space or model.
How the Human Brain Learns
- In the human brain, a typical neuron collects signals from others through a host of fine structures called dendrites.
- The neuron sends out spikes of electrical activity through a long, thin strand known as an axon, which splits into thousands of branches.
- At the end of each branch, a structure called a synapse converts the activity from the axon into electrical effects that inhibit or excite activity in the connected neurons.
https://eclass.teicrete.gr/modules/document/file.php/tpold101/Neural%20Networks/NeuralNets_ch1-2_intro_Eng.ppt
A Typical Neuron
- ANNs incorporate the two fundamental components of biological neural nets: neurons -> nodes, synapses -> weights.
- [Diagram: inputs P1, P2, P3 with weights w1, w2, w3 feeding a summation Σ and an activation function f, producing the output.]
- Each neuron within the network is usually a simple processing unit which takes one or more inputs and produces an output.
- At each neuron, every input has an associated weight which modifies the strength of that input.
- The neuron simply adds together all the weighted inputs and calculates an output to be passed on.
https://eclass.teicrete.gr/modules/document/file.php/tpold101/Neural%20Networks/NeuralNets_ch1-2_intro_Eng.ppt
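The diagram above can be sketched in a few lines of Python. This is a minimal illustration, not the applet's implementation; the sigmoid activation and the specific input/weight values are assumptions chosen for the example.

```python
import math

def neuron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs, passed through a sigmoid activation."""
    total = sum(p * w for p, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # squashes the sum into (0, 1)

# Three inputs P1..P3 with weights w1..w3, as in the diagram
out = neuron([1.0, 0.5, -1.0], [0.8, 0.2, 0.4])
```

Here the weighted sum is 0.8 + 0.1 - 0.4 = 0.5, and the sigmoid of 0.5 is about 0.62, which is the value passed on to the next layer.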
A Typical Neural Network
- Neural computing requires a number of neurons to be connected together into a neural network. Neurons are arranged in layers.
- [Diagram: inputs feeding weighted connections between layers of nodes, producing outputs.]
- There are always an input layer and an output layer; there may also be one or more hidden layers.
https://eclass.teicrete.gr/modules/document/file.php/tpold101/Neural%20Networks/NeuralNets_ch1-2_intro_Eng.ppt
Network Layers
- Input layer: the activity of the input units represents the raw information that is fed into the network.
- Hidden layer: the activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and hidden units.
- Output layer: the behavior of the output units depends on the activity of the hidden units and the weights between the hidden and output units.
- Weights between the input and hidden units determine when each hidden unit is active, so by modifying these weights, a hidden unit can choose what it represents.
- Each layer can have a different number of nodes.
www.d.umn.edu/~alam0026/neuralnetwork.ppt
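A forward pass through such a layered network can be sketched as follows. This is an illustrative implementation, assuming sigmoid units and fully-connected layers; the weight values are made up for the example.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """One fully-connected layer: each row of weights drives one unit."""
    return [sigmoid(sum(i * w for i, w in zip(inputs, row)))
            for row in weights]

def forward(x, hidden_w, output_w):
    """Input layer -> hidden layer -> output layer."""
    return layer(layer(x, hidden_w), output_w)

# 2 inputs -> 3 hidden units -> 1 output
hidden_w = [[0.5, -0.6], [0.3, 0.8], [-0.2, 0.4]]
output_w = [[1.0, -1.0, 0.5]]
y = forward([1.0, 0.0], hidden_w, output_w)
```

Note that each layer can have a different number of nodes: the shape of each weight matrix (rows = units in this layer, columns = units in the previous layer) is all that ties the layers together.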
Training Basics
- So now we have a network. How do we learn with it?
- The most basic method of training a neural network is trial and error:
  - Set the initial weights randomly.
  - If the network isn't matching the outputs in the training instances, change the weight of a random link by a random amount.
  - If the accuracy of the network declines, undo the change and make a different one.
- Time-consuming, but it does learn.
www.d.umn.edu/~alam0026/neuralnetwork.ppt
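The trial-and-error loop above can be sketched for a single perceptron-style unit. This is a toy illustration under assumed details (a threshold unit, squared-error count, a fixed random seed); the target concept here is logical AND, with a constant 1 appended to each input to act as a bias.

```python
import random

def predict(x, w):
    """Threshold unit: fire (1) if the weighted sum is positive."""
    return 1 if sum(a * b for a, b in zip(x, w)) > 0 else 0

def error(data, w):
    return sum((predict(x, w) - t) ** 2 for x, t in data)

def train_by_trial(data, n_weights, steps=2000, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-1, 1) for _ in range(n_weights)]  # random start
    best = error(data, w)
    for _ in range(steps):
        i = rng.randrange(n_weights)        # pick a random link...
        old = w[i]
        w[i] += rng.uniform(-0.5, 0.5)      # ...change it by a random amount
        e = error(data, w)
        if e > best:                        # got worse? undo the change
            w[i] = old
        else:
            best = e
    return w, best

# Learn logical AND; the trailing 1 in each input acts as a bias term
data = [([0, 0, 1], 0), ([0, 1, 1], 0), ([1, 0, 1], 0), ([1, 1, 1], 1)]
w, err = train_by_trial(data, 3)
```

As the slide says: time-consuming, but it does learn, because every accepted change leaves the error no worse than before.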
Training, Better
- The typical method of modifying the weights is backpropagation: success or failure at the output node is propagated back through the nodes which contributed to that output.
- Backprop consists of the repeated application of the following two passes:
  - Forward pass: the network is activated on one example and the error of (each neuron of) the output layer is computed.
  - Backward pass: the network error is used for updating the weights. Starting at the output layer, the error is propagated backwards through the network, layer by layer.
www.d.umn.edu/~alam0026/neuralnetwork.ppt
Back Propagation
- Back-propagation training algorithm: network activation (forward step), then error propagation (backward step).
- Backprop adjusts the weights of the NN in order to minimize the network's total mean squared error.
www.d.umn.edu/~alam0026/neuralnetwork.ppt
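The two passes can be put together in a minimal backprop sketch for a one-hidden-layer network. This is an illustration, not the applet's code; sigmoid units, squared error, the learning rate, and the AND training set are all assumptions chosen to keep the example small.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_backprop(data, n_hidden=2, lr=0.5, epochs=5000, seed=1):
    """Minimal backprop: one hidden layer, sigmoid units, squared error."""
    rng = random.Random(seed)
    n_in = len(data[0][0])
    W1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    W2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    b2 = rng.uniform(-1, 1)
    for _ in range(epochs):
        for x, t in data:
            # forward pass: activate the network on one example
            h = [sigmoid(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
                 for j in range(n_hidden)]
            y = sigmoid(sum(w * hj for w, hj in zip(W2, h)) + b2)
            # backward pass: propagate the error back, layer by layer
            dy = (y - t) * y * (1 - y)                    # output-layer error
            dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(n_hidden)]
            for j in range(n_hidden):
                W2[j] -= lr * dy * h[j]
                b1[j] -= lr * dh[j]
                for i in range(n_in):
                    W1[j][i] -= lr * dh[j] * x[i]
            b2 -= lr * dy
    return W1, b1, W2, b2

def mse(data, W1, b1, W2, b2):
    """The quantity backprop is driving down: mean squared error."""
    total = 0.0
    for x, t in data:
        h = [sigmoid(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
             for j in range(len(W1))]
        y = sigmoid(sum(w * hj for w, hj in zip(W2, h)) + b2)
        total += (y - t) ** 2
    return total / len(data)

# Learn logical AND from its four cases
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
net = train_backprop(data)
```

Each backward pass nudges every weight in the direction that reduces the squared error for the current example, so repeated forward/backward passes drive the network's mean squared error down.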
More
- The number of hidden nodes and layers is complicated:
  - Too many = overfitting.
  - Typical practice is to try several and evaluate.
  - More than about 2 hidden layers has not, in practice, generally been useful (vanishing gradient).
- The number of connections can also be tweaked; we have been showing fully-connected networks. There is no really good algorithm for this either.
- These are feed-forward networks: there are no loops or cycles.
Some NN Advantages and Disadvantages
Advantages:
- Can learn complex patterns
- Works well for large problems involving pattern recognition
- Good for multiple classes
- Relatively insensitive to irrelevant attributes
Disadvantages:
- Can be very slow
- Needs a lot of examples to work well
- Very black-box
- A lot of heuristics; results are not identical every time
Example from AISpace
- We are using the NN applet at aispace.org.
- Mail example: find the sample file and load it; set properties; initialize the parameters; solve.
- How do we use it? Calculate output.
Example: Which Class to Take?
- Inputs?
- Outputs?
- Sample data
Some Examples
- Example 1: 3 inputs, 1 output, all binary.
- Example 2: same inputs, output inverted.
Getting the Right Inputs
- Example 3: same inputs as 1 and 2, same output as 1, outcomes reversed for half the cases.
Getting the Right Inputs (continued)
- Example 3: same inputs as 1 and 2, same output as 1, outcomes reversed for half the cases.
- The network is not converging.
- The output here cannot be predicted from these inputs. Whatever is determining whether to take the class, we haven't captured it.
Unordered Values
- Example 4: input variables here include professor.
- Non-numeric, can't be ordered, but we still need numeric values.
- The solution is to treat n possible values as n separate binary values.
- The applet does this for us.
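This n-values-to-n-binary-inputs trick (often called one-hot encoding) is simple to sketch. The professor names below are hypothetical; the applet does the equivalent of this automatically.

```python
def one_hot(value, categories):
    """Encode one unordered value as n separate 0/1 inputs."""
    return [1 if value == c else 0 for c in categories]

# Hypothetical values for the "professor" variable
professors = ["Smith", "Jones", "Lee"]
encoded = one_hot("Jones", professors)  # -> [0, 1, 0]
```

Each category gets its own yes/no input, so the network never has to treat unordered values as if one were "bigger" than another.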
Variables with More Values
- Example 5: GPA and number of classes taken are integer values.
- Takes considerably longer to solve; looks for a while like it's not converging, then it gets it.
And Reals
- Example 6: GPA is a real.
- Examples 5 and 6, without the "is it a prereq" attribute and with interval data, depend more on the number of hidden nodes.
And Multiple Outputs
- Small Car database from AIspace.
- For any given input case, you will get a value for each possible outcome.
- Typical for, for instance, character recognition.
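With multiple output nodes, the usual interpretation is to take the outcome whose node has the highest activation. A sketch, with illustrative labels and activations (not taken from the actual Car database output):

```python
def interpret(outputs, labels):
    """Pick the label whose output unit has the highest activation."""
    best = max(range(len(outputs)), key=lambda i: outputs[i])
    return labels[best]

# Hypothetical activations for four car-acceptability classes
labels = ["unacceptable", "acceptable", "good", "very good"]
decision = interpret([0.10, 0.70, 0.15, 0.05], labels)
```

For character recognition the idea is the same: one output node per character, and the recognized character is the one whose node fires most strongly.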
Training and Test Cases
- The basic training approach will fit the training data as closely as possible, but we really want something that will generalize to other cases. This is why we have test cases.
- The training cases are used to compute the weights; the test cases tell us how well they generalize.
- Both training and test cases should represent the overall population as well as possible.
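A common way to get both sets is to shuffle the cases and hold some fraction out for testing; shuffling before splitting helps both sets represent the population. The 30% fraction and fixed seed below are illustrative choices, not a prescribed setting.

```python
import random

def split_cases(cases, test_fraction=0.3, seed=42):
    """Shuffle the cases, then hold out a fraction of them for testing."""
    cases = list(cases)
    random.Random(seed).shuffle(cases)   # mix the population first
    n_test = int(len(cases) * test_fraction)
    return cases[n_test:], cases[:n_test]  # (training cases, test cases)

train, test = split_cases(range(10))
```

The training cases go to the learner to compute the weights; the held-out test cases are only ever used to measure how well those weights generalize.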
So:
- As for any classifier, getting a good NN involves:
  - understanding your domain and capturing knowledge about it
  - choosing the right inputs and outputs
  - choosing representative training and test sets
- You can represent any kind of variable: numeric or not, ordered or not. Non-binary attributes become multiple yes/no attributes.
- Not every set of variables and training cases will produce a net that can be trained.
Once It's Trained...
- When your NN is trained, you can feed it a specific set of inputs and get one or more outputs.
- These outputs are typically interpreted as some decision: take the class; this is probably a 5; this car is most likely acceptable.
- The network itself is a black box.
- If the situation changes, the NN should be retrained: new variables, new values for some variables, new patterns of cases.
One Last Note
- These have all been simple cases, as examples.
- Most of my examples could in fact be predicted much more easily and cleanly with a decision tree, or even a couple of IF statements.
- A more typical use for any connectionist system has many more inputs and many more training cases.