Neural Networks. Prof. Dr. Rudolf Kruse. Computational Intelligence Group, Faculty of Computer Science
1 Neural Networks. Prof. Dr. Rudolf Kruse, Computational Intelligence Group, Faculty of Computer Science
2 Radial Basis Function Networks
3 Radial Basis Function Networks
Properties of radial basis function networks (RBF networks): RBF networks are strictly layered feed-forward neural networks consisting of exactly one hidden layer. Radial basis functions are used as the network input and activation functions of the hidden neurons. In this way a catchment area is assigned to every neuron: the weights of the connections from the input layer to a hidden neuron indicate the center of this catchment area.
4 Radial Basis Function Networks
A radial basis function network (RBFN) is a neural network with a graph $G = (U, C)$ that satisfies the following conditions:
(i) $U_{\text{in}} \cap U_{\text{out}} = \emptyset$,
(ii) $C = (U_{\text{in}} \times U_{\text{hidden}}) \cup C'$, $\quad C' \subseteq (U_{\text{hidden}} \times U_{\text{out}})$.
The network input function of each hidden neuron is a distance function of the input vector and the weight vector, i.e.
$\forall u \in U_{\text{hidden}}: \quad f_{\text{net}}^{(u)}(\vec{w}_u, \vec{\text{in}}_u) = d(\vec{w}_u, \vec{\text{in}}_u)$,
where $d : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}_0^+$ is a function satisfying $\forall \vec{x}, \vec{y}, \vec{z} \in \mathbb{R}^n$:
(i) $d(\vec{x}, \vec{y}) = 0 \Leftrightarrow \vec{x} = \vec{y}$,
(ii) $d(\vec{x}, \vec{y}) = d(\vec{y}, \vec{x})$ (symmetry),
(iii) $d(\vec{x}, \vec{z}) \le d(\vec{x}, \vec{y}) + d(\vec{y}, \vec{z})$ (triangle inequality).
5 Distance Functions
Illustration of distance functions (Minkowski family):
$d_k(\vec{x}, \vec{y}) = \left( \sum_{i=1}^{n} |x_i - y_i|^k \right)^{1/k}$
Well-known special cases from this family are:
$k = 1$: Manhattan or city-block distance,
$k = 2$: Euclidean distance,
$k \to \infty$: maximum distance, i.e. $d_\infty(\vec{x}, \vec{y}) = \max_{i=1}^{n} |x_i - y_i|$.
[Figure: unit circles for $k = 1$, $k = 2$ and $k \to \infty$.]
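A minimal NumPy sketch of this distance family (the function names are illustrative, not from the lecture):

```python
import numpy as np

def minkowski_distance(x, y, k):
    """d_k(x, y) = (sum_i |x_i - y_i|^k)^(1/k) for finite k >= 1."""
    return np.sum(np.abs(x - y) ** k) ** (1.0 / k)

def maximum_distance(x, y):
    """Limit case k -> infinity: d_inf(x, y) = max_i |x_i - y_i|."""
    return np.max(np.abs(x - y))

x, y = np.array([1.0, 2.0]), np.array([4.0, 6.0])
print(minkowski_distance(x, y, 1))  # Manhattan distance: 7.0
print(minkowski_distance(x, y, 2))  # Euclidean distance: 5.0
print(maximum_distance(x, y))       # maximum distance:   4.0
```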
6 Radial Basis Function Networks
The network input function of the output neurons is the weighted sum of their inputs, i.e.
$\forall u \in U_{\text{out}}: \quad f_{\text{net}}^{(u)}(\vec{w}_u, \vec{\text{in}}_u) = \vec{w}_u \vec{\text{in}}_u = \sum_{v \in \text{pred}(u)} w_{uv} \, \text{out}_v$.
The activation function of each hidden neuron is a so-called radial function, i.e. a monotonically decreasing function
$f : \mathbb{R}_0^+ \to [0, 1]$ with $f(0) = 1$ and $\lim_{x \to \infty} f(x) = 0$.
The activation function of each output neuron is a linear function, namely
$f_{\text{act}}^{(u)}(\text{net}_u, \theta_u) = \text{net}_u - \theta_u$.
(The linear activation function is important for the initialization.)
7 Radial Activation Functions
rectangle function: $f_{\text{act}}(\text{net}, \sigma) = \begin{cases} 0 & \text{if } \text{net} > \sigma, \\ 1 & \text{otherwise.} \end{cases}$
triangle function: $f_{\text{act}}(\text{net}, \sigma) = \begin{cases} 0 & \text{if } \text{net} > \sigma, \\ 1 - \frac{\text{net}}{\sigma} & \text{otherwise.} \end{cases}$
cosine until zero: $f_{\text{act}}(\text{net}, \sigma) = \begin{cases} 0 & \text{if } \text{net} > 2\sigma, \\ \frac{\cos\left(\frac{\pi}{2\sigma}\,\text{net}\right) + 1}{2} & \text{otherwise.} \end{cases}$
Gaussian function: $f_{\text{act}}(\text{net}, \sigma) = e^{-\frac{\text{net}^2}{2\sigma^2}}$
[Figure: graphs of the four functions over net, with markers at $\sigma$ and $2\sigma$.]
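The four radial functions translate directly into NumPy; a short sketch (vectorized over net, names are my own):

```python
import numpy as np

def rectangle(net, sigma):
    """1 inside the reference radius sigma, 0 outside."""
    return np.where(net > sigma, 0.0, 1.0)

def triangle(net, sigma):
    """Falls linearly from 1 at the center to 0 at distance sigma."""
    return np.where(net > sigma, 0.0, 1.0 - net / sigma)

def cosine_until_zero(net, sigma):
    """Half cosine wave that reaches 0 at distance 2*sigma."""
    return np.where(net > 2 * sigma, 0.0, (np.cos(np.pi / (2 * sigma) * net) + 1) / 2)

def gaussian(net, sigma):
    """exp(-net^2 / (2 sigma^2)); positive everywhere, so every neuron
    contributes to every input (unlike the three truncated functions)."""
    return np.exp(-net ** 2 / (2 * sigma ** 2))
```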
8 Radial Basis Function Networks: Examples
Radial basis function network for the conjunction $x_1 \wedge x_2$:
(1, 1) is the center, 1/2 is the reference radius; Euclidean distance, rectangle function as the activation function, bias of 0 in the output neuron.
9 Radial Basis Function Networks: Examples
Radial basis function network for the conjunction $x_1 \wedge x_2$:
(0, 0) is the center, 6/5 is the reference radius; Euclidean distance, rectangle function as the activation function, bias of $-1$ in the output neuron.
10 Radial Basis Function Networks: Examples
Radial basis function network for the biimplication $x_1 \leftrightarrow x_2$.
Idea: logical decomposition $x_1 \leftrightarrow x_2 \;\equiv\; (x_1 \wedge x_2) \vee \neg(x_1 \vee x_2)$
[Network diagram: two hidden neurons, one per term of the decomposition, both connected to the output neuron y.]
11 Radial Basis Function Networks: Function Approximation
Approximation of the original function by a step function, with each step represented by one of the RBF network's neurons (compare MLPs).
[Figure: original function sampled at points $x_1, \dots, x_4$ with values $y_1, \dots, y_4$, and the resulting step function.]
12 Radial Basis Function Networks: Function Approximation
[Network diagram: input x, four hidden neurons with centers $x_1, \dots, x_4$ and common radius $\sigma$, output weights $y_1, \dots, y_4$, output y.]
$\sigma = \frac{1}{2}\Delta x = \frac{1}{2}(x_{i+1} - x_i)$
RBF network calculating the step function shown on the previous slide and the piecewise linear function on the following slide. The only change is made to the activation function of the hidden neurons.
13 Radial Basis Function Networks: Function Approximation
Representation of a piecewise linear function by a weighted sum of triangular functions with centers $x_i$.
[Figure: original function and its piecewise linear approximation through the points $(x_i, y_i)$.]
14 Radial Basis Function Networks: Function Approximation
Approximation of a function by a sum of Gaussians with radius $\sigma = 1$; the weights are $w_1 = 1$, $w_2 = 3$ and $w_3 = -2$.
[Figure: the three weighted Gaussians and their sum.]
15 Radial Basis Function Networks: Function Approximation
Radial basis function network for a sum of three Gaussian functions.
[Network diagram: input x feeds three Gaussian hidden neurons whose weighted outputs are summed by the output neuron y.]
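A sketch of the forward pass of such a one-dimensional Gaussian RBF network; the weights follow the previous slide, while the centers are assumed values for illustration only:

```python
import numpy as np

def rbf_forward(x, centers, weights, sigma):
    """Output of a 1-D RBF network: the network input of hidden neuron k is
    the distance |x - c_k|, its activation is Gaussian, the output is linear."""
    net = np.abs(x[:, None] - centers[None, :])    # distances, shape (len(x), k)
    hidden = np.exp(-net ** 2 / (2 * sigma ** 2))  # Gaussian hidden activations
    return hidden @ weights                        # weighted sum, bias theta = 0

centers = np.array([2.0, 5.0, 6.0])                # assumed centers
weights = np.array([1.0, 3.0, -2.0])               # w_1, w_2, w_3 from the slide
print(rbf_forward(np.linspace(0.0, 8.0, 5), centers, weights, sigma=1.0))
```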
16 Training Radial Basis Function Networks
17 Radial Basis Function Networks: Initialization
Let $L_{\text{fixed}} = \{l_1, \dots, l_m\}$ be a fixed learning task, consisting of $m$ training patterns $l = (\vec{\imath}^{(l)}, \vec{o}^{(l)})$.
Simple radial basis function network: one hidden neuron $v_k$, $k = 1, \dots, m$, for each training pattern:
$\forall k \in \{1, \dots, m\}: \quad \vec{w}_{v_k} = \vec{\imath}^{(l_k)}$.
If the activation function is the Gaussian function, the radii $\sigma_k$ are chosen heuristically as
$\forall k \in \{1, \dots, m\}: \quad \sigma_k = \frac{d_{\max}}{\sqrt{2m}}$, where $d_{\max} = \max_{l_j, l_k \in L_{\text{fixed}}} d\left(\vec{\imath}^{(l_j)}, \vec{\imath}^{(l_k)}\right)$.
18 Radial Basis Function Networks: Initialization
Initializing the connections from the hidden to the output neurons:
$\forall u: \quad \sum_{k=1}^{m} w_{u v_k} \, \text{out}_{v_k}^{(l)} - \theta_u = o_u^{(l)}$ for every training pattern $l$, or abbreviated $\mathbf{A} \cdot \vec{w}_u = \vec{o}_u$,
where $\vec{o}_u = \left(o_u^{(l_1)}, \dots, o_u^{(l_m)}\right)^\top$ is the vector of desired outputs, $\theta_u = 0$, and
$\mathbf{A} = \begin{pmatrix} \text{out}_{v_1}^{(l_1)} & \text{out}_{v_2}^{(l_1)} & \dots & \text{out}_{v_m}^{(l_1)} \\ \text{out}_{v_1}^{(l_2)} & \text{out}_{v_2}^{(l_2)} & \dots & \text{out}_{v_m}^{(l_2)} \\ \vdots & & & \vdots \\ \text{out}_{v_1}^{(l_m)} & \text{out}_{v_2}^{(l_m)} & \dots & \text{out}_{v_m}^{(l_m)} \end{pmatrix}$
This is a linear equation system that can be solved by inverting the matrix $\mathbf{A}$: $\vec{w}_u = \mathbf{A}^{-1} \cdot \vec{o}_u$.
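Put together, the initialization of a simple RBF network is a few lines of NumPy. A sketch under the assumptions above (Gaussian activations, one hidden neuron per pattern); it uses `np.linalg.solve` rather than an explicit inverse, which is numerically preferable but equivalent to $\vec{w}_u = \mathbf{A}^{-1}\vec{o}_u$:

```python
import numpy as np

def init_simple_rbfn(X, o):
    """X: (m, n) training inputs, o: (m,) desired outputs.
    Returns centers, the heuristic radius sigma, and output weights w."""
    m = len(X)
    centers = X.copy()                               # one hidden neuron per pattern
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    sigma = d.max() / np.sqrt(2 * m)                 # sigma = d_max / sqrt(2m)
    A = np.exp(-d ** 2 / (2 * sigma ** 2))           # A[j, k] = out of neuron k on pattern j
    w = np.linalg.solve(A, o)                        # solve A w = o
    return centers, sigma, w
```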
19 RBFN Initialization: Example
Simple radial basis function network for the biimplication $x_1 \leftrightarrow x_2$.
[Truth table: $y = 1$ for $(x_1, x_2) \in \{(0,0), (1,1)\}$, $y = 0$ otherwise. Network diagram: four hidden neurons, one per training pattern, with output weights $w_1, \dots, w_4$ feeding y.]
20 RBFN Initialization: Example
Simple radial basis function network for the biimplication $x_1 \leftrightarrow x_2$, where
$\mathbf{A} = \begin{pmatrix} 1 & e^{-2} & e^{-2} & e^{-4} \\ e^{-2} & 1 & e^{-4} & e^{-2} \\ e^{-2} & e^{-4} & 1 & e^{-2} \\ e^{-4} & e^{-2} & e^{-2} & 1 \end{pmatrix} \qquad \mathbf{A}^{-1} = \frac{1}{D}\begin{pmatrix} a & b & b & c \\ b & a & c & b \\ b & c & a & b \\ c & b & b & a \end{pmatrix}$
with
$D = 1 - 4e^{-4} + 6e^{-8} - 4e^{-12} + e^{-16}$
$a = 1 - 2e^{-4} + e^{-8}$
$b = -e^{-2} + 2e^{-6} - e^{-10} \approx -0.1304$
$c = e^{-4} - 2e^{-8} + e^{-12} \approx 0.0177$
$\vec{w}_u = \mathbf{A}^{-1} \cdot \vec{o}_u = \frac{1}{D}\begin{pmatrix} a + c \\ 2b \\ 2b \\ a + c \end{pmatrix}$
21 RBFN Initialization: Example
Simple radial basis function network for the biimplication $x_1 \leftrightarrow x_2$:
[Figure: activation of a single basis function, activation of all basis functions, and the network output over the $(x_1, x_2)$ plane.]
The initialization already leads to a perfect solution of the learning task; subsequent training is not necessary.
22 Radial Basis Function Networks: Initialization
Normal radial basis function networks: select a subset of $k$ training patterns as centers.
$\mathbf{A} = \begin{pmatrix} \text{out}_{v_1}^{(l_1)} & \text{out}_{v_2}^{(l_1)} & \dots & \text{out}_{v_k}^{(l_1)} \\ \text{out}_{v_1}^{(l_2)} & \text{out}_{v_2}^{(l_2)} & \dots & \text{out}_{v_k}^{(l_2)} \\ \vdots & & & \vdots \\ \text{out}_{v_1}^{(l_m)} & \text{out}_{v_2}^{(l_m)} & \dots & \text{out}_{v_k}^{(l_m)} \end{pmatrix} \qquad \mathbf{A} \cdot \vec{w}_u = \vec{o}_u$
Compute the (Moore-Penrose) pseudo-inverse: $\mathbf{A}^+ = (\mathbf{A}^\top \mathbf{A})^{-1} \mathbf{A}^\top$.
The weights can then be computed by $\vec{w}_u = \mathbf{A}^+ \cdot \vec{o}_u = (\mathbf{A}^\top \mathbf{A})^{-1} \mathbf{A}^\top \cdot \vec{o}_u$.
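A corresponding sketch for the normal RBF network; `np.linalg.pinv` computes the Moore-Penrose pseudo-inverse via SVD, which coincides with $(\mathbf{A}^\top\mathbf{A})^{-1}\mathbf{A}^\top$ whenever $\mathbf{A}$ has full column rank, but remains robust when it does not:

```python
import numpy as np

def init_output_weights(X, o, centers, sigma):
    """Least-squares initialization of the hidden-to-output weights for
    k < m centers: w = A^+ o minimizes ||A w - o||."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    A = np.exp(-d ** 2 / (2 * sigma ** 2))           # m x k matrix of hidden outputs
    return np.linalg.pinv(A) @ o
```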
23 RBFN Initialization: Example
Normal radial basis function network for the biimplication $x_1 \leftrightarrow x_2$. Select two training patterns:
$l_1 = (\vec{\imath}^{(l_1)}, \vec{o}^{(l_1)}) = ((0, 0), (1))$
$l_4 = (\vec{\imath}^{(l_4)}, \vec{o}^{(l_4)}) = ((1, 1), (1))$
[Network diagram: two hidden neurons with output weights $w_1, w_2$ and bias $\theta$ in the output neuron y.]
24 RBFN Initialization: Example
Normal radial basis function network for the biimplication $x_1 \leftrightarrow x_2$:
$\mathbf{A} = \begin{pmatrix} 1 & 1 & e^{-4} \\ 1 & e^{-2} & e^{-2} \\ 1 & e^{-2} & e^{-2} \\ 1 & e^{-4} & 1 \end{pmatrix} \qquad \mathbf{A}^+ = (\mathbf{A}^\top \mathbf{A})^{-1} \mathbf{A}^\top = \begin{pmatrix} a & b & b & a \\ c & d & d & e \\ e & d & d & c \end{pmatrix}$
where $a \approx -0.1810$, $b \approx 0.6810$, $c \approx 1.1781$, $d \approx -0.6688$, $e \approx 0.1594$.
Resulting weights: $\vec{w}_u = \begin{pmatrix} -\theta \\ w_1 \\ w_2 \end{pmatrix} = \mathbf{A}^+ \cdot \vec{o}_u$
25 RBFN Initialization: Example
Normal radial basis function network for the biimplication $x_1 \leftrightarrow x_2$:
[Figure: activations of the basis functions centered at (0, 0) and (1, 1), and the network output over the $(x_1, x_2)$ plane.]
The initialization already leads to a perfect solution of the learning task. This is an accident, because the linear equation system is not over-determined, due to linearly dependent equations.
26 Radial Basis Function Networks: Initialization
Finding appropriate centers for the radial basis functions.
One approach: k-means clustering (a sketch follows below).
- Select randomly $k$ training patterns as centers.
- Assign to each center those training patterns that are closest to it.
- Compute the new centers as the centers of gravity of the assigned training patterns.
- Repeat the previous two steps until convergence, i.e., until the centers do not change anymore.
- Use the resulting centers for the weight vectors of the hidden neurons.
Alternative approach: learning vector quantization.
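A plain k-means sketch along these lines (seeding and convergence test kept deliberately simple):

```python
import numpy as np

def kmeans_centers(X, k, iterations=100, seed=0):
    """Returns k centers that can serve as the hidden-neuron weight vectors."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]  # random patterns as seeds
    for _ in range(iterations):
        # assign each training pattern to its closest center
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        # recompute each center as the center of gravity of its assigned patterns
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):               # centers stable: converged
            return new_centers
        centers = new_centers
    return centers
```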
27 Radial Basis Function Networks: Training
Training radial basis function networks: the derivation of the update rules is analogous to that of multilayer perceptrons.
Weights from the hidden to the output neurons. Gradient:
$\vec{\nabla}_{\vec{w}_u} e_u^{(l)} = \frac{\partial e_u^{(l)}}{\partial \vec{w}_u} = -2\left(o_u^{(l)} - \text{out}_u^{(l)}\right) \vec{\text{in}}_u^{(l)}$,
Weight update rule:
$\Delta \vec{w}_u^{(l)} = -\frac{\eta_3}{2} \vec{\nabla}_{\vec{w}_u} e_u^{(l)} = \eta_3 \left(o_u^{(l)} - \text{out}_u^{(l)}\right) \vec{\text{in}}_u^{(l)}$
(Two more learning rates are needed for the center coordinates and the radii.)
28 Radial Basis Function Networks: Training
Training radial basis function networks: center coordinates (weights from the input to the hidden neurons).
Gradient:
$\vec{\nabla}_{\vec{w}_v} e^{(l)} = \frac{\partial e^{(l)}}{\partial \vec{w}_v} = -2 \sum_{s \in \text{succ}(v)} \left(o_s^{(l)} - \text{out}_s^{(l)}\right) w_{sv} \, \frac{\partial \text{out}_v^{(l)}}{\partial \text{net}_v^{(l)}} \, \frac{\partial \text{net}_v^{(l)}}{\partial \vec{w}_v}$
Weight update rule:
$\Delta \vec{w}_v^{(l)} = -\frac{\eta_1}{2} \vec{\nabla}_{\vec{w}_v} e^{(l)} = \eta_1 \sum_{s \in \text{succ}(v)} \left(o_s^{(l)} - \text{out}_s^{(l)}\right) w_{sv} \, \frac{\partial \text{out}_v^{(l)}}{\partial \text{net}_v^{(l)}} \, \frac{\partial \text{net}_v^{(l)}}{\partial \vec{w}_v}$
29 Radial Basis Function Networks: Training
Training radial basis function networks: center coordinates (weights from the input to the hidden neurons).
Special case: Euclidean distance
$\frac{\partial \text{net}_v^{(l)}}{\partial \vec{w}_v} = \left( \sum_{i=1}^{n} \left( w_{v p_i} - \text{out}_{p_i}^{(l)} \right)^2 \right)^{-\frac{1}{2}} \left( \vec{w}_v - \vec{\text{in}}_v^{(l)} \right)$.
Special case: Gaussian activation function
$\frac{\partial \text{out}_v^{(l)}}{\partial \text{net}_v^{(l)}} = \frac{\partial f_{\text{act}}(\text{net}_v^{(l)}, \sigma_v)}{\partial \text{net}_v^{(l)}} = \frac{\partial}{\partial \text{net}_v^{(l)}} e^{-\frac{(\text{net}_v^{(l)})^2}{2\sigma_v^2}} = -\frac{\text{net}_v^{(l)}}{\sigma_v^2} \, e^{-\frac{(\text{net}_v^{(l)})^2}{2\sigma_v^2}}$.
30 Radial Basis Function Networks: Training
Training radial basis function networks: radii of the radial basis functions.
Gradient:
$\frac{\partial e^{(l)}}{\partial \sigma_v} = -2 \sum_{s \in \text{succ}(v)} \left(o_s^{(l)} - \text{out}_s^{(l)}\right) w_{sv} \, \frac{\partial \text{out}_v^{(l)}}{\partial \sigma_v}$.
Weight update rule:
$\Delta \sigma_v^{(l)} = -\frac{\eta_2}{2} \frac{\partial e^{(l)}}{\partial \sigma_v} = \eta_2 \sum_{s \in \text{succ}(v)} \left(o_s^{(l)} - \text{out}_s^{(l)}\right) w_{sv} \, \frac{\partial \text{out}_v^{(l)}}{\partial \sigma_v}$.
Special case: Gaussian activation function
$\frac{\partial \text{out}_v^{(l)}}{\partial \sigma_v} = \frac{\partial}{\partial \sigma_v} e^{-\frac{(\text{net}_v^{(l)})^2}{2\sigma_v^2}} = \frac{(\text{net}_v^{(l)})^2}{\sigma_v^3} \, e^{-\frac{(\text{net}_v^{(l)})^2}{2\sigma_v^2}}$.
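Collecting the three update rules, one online training step for a Gaussian RBF network with a single linear output neuron might look as follows; this is my own mapping of the formulas above into NumPy, not code from the lecture:

```python
import numpy as np

def train_step(x, o, centers, sigmas, w, theta, eta1, eta2, eta3):
    """One online gradient step (Euclidean distance, Gaussian activations)."""
    net = np.linalg.norm(centers - x, axis=1)        # hidden net inputs (distances)
    out = np.exp(-net ** 2 / (2 * sigmas ** 2))      # hidden activations
    err = o - (w @ out - theta)                      # (o_s - out_s) of the output neuron

    dw = eta3 * err * out                            # hidden-to-output weights
    dtheta = -eta3 * err                             # output bias

    dout_dnet = -(net / sigmas ** 2) * out           # Gaussian derivative w.r.t. net
    safe_net = np.where(net == 0.0, 1.0, net)        # avoid division by zero
    dnet_dc = (centers - x) / safe_net[:, None]      # derivative of the distance
    dc = eta1 * (err * w * dout_dnet)[:, None] * dnet_dc  # center update
    ds = eta2 * err * w * (net ** 2 / sigmas ** 3) * out  # radius update
    return centers + dc, sigmas + ds, w + dw, theta + dtheta
```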
31 Radial Basis Function Networks: Generalization
Generalization of the distance function. Idea: use an anisotropic distance function.
Example: Mahalanobis distance
$d(\vec{x}, \vec{y}) = \sqrt{(\vec{x} - \vec{y})^\top \mathbf{\Sigma}^{-1} (\vec{x} - \vec{y})}$.
Example: biimplication $x_1 \leftrightarrow x_2$, solved with a single hidden neuron whose covariance matrix $\mathbf{\Sigma}$ stretches the catchment area along the $x_1 = x_2$ diagonal.
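A short sketch of the Mahalanobis distance; the covariance matrix below is an assumed example, chosen so that the catchment area stretches along the $x_1 = x_2$ diagonal:

```python
import numpy as np

def mahalanobis(x, y, cov):
    """d(x, y) = sqrt((x - y)^T Sigma^{-1} (x - y))."""
    diff = x - y
    return np.sqrt(diff @ np.linalg.inv(cov) @ diff)

Sigma = np.array([[1.0, 0.9], [0.9, 1.0]])           # assumed, strongly correlated
center = np.array([0.5, 0.5])
for p in [(0, 0), (1, 1), (0, 1), (1, 0)]:           # the four biimplication inputs
    print(p, round(mahalanobis(np.array(p, dtype=float), center, Sigma), 3))
# (0,0) and (1,1) come out close to the center (~0.51), while (0,1) and
# (1,0) are far (~2.24), so a single hidden neuron separates the two classes.
```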
32 Radial Basis Function Networks: Application
Advantages: simple feed-forward architecture; easy adaptation; quick optimization and calculation.
Applications: continuous processes that adapt to changes rapidly; approximation; pattern recognition; control engineering.
Below: examples from [Schwenker et al. 2001].
33 Example: Handwriting Recognition of Digits
dataset: 20,000 handwritten digits (2,000 samples of each class)
normalized in height and width
represented by 16 x 16 greyscale values $G_{ij} \in \{0, \dots, 255\}$
data source: EU project StatLog [Michie et al. 1994]
34 Example: Results of RBF Networks
10,000 training samples and 10,000 validation samples (1,000 for each class)
training data used for learning the classifiers, validation data for evaluation
initialization by k-means clustering (the same number of prototypes for each class)
Method | Explanation | Test accuracy
C4.5 | decision tree | 9.%
RBF | k-means for RBF centers | 96.94%
MLP | one hidden layer | 97.59%
here: median of the test accuracy over three training and validation runs
35 Example: k-means Clustering for RBF Initialization
shown: 60 cluster centers of the digits after k-means clustering
separate run of the algorithm for every digit, with $k = 6$
centers were initialized by samples chosen randomly from the training data set
36 Example: Found RBF Centers after 3 Runs
shown: 60 RBF centers of the digits after 3 backpropagation runs of the RBF network
hardly any difference from the k-means centers can be seen, but...
37 Example: Comparison of k-means and RBF Network
here: Euclidean distances of the 60 RBF centers before and after training
Centers are sorted by class, with the first 6 columns/rows representing the digit 0, the next 6 columns/rows representing the digit 1, etc.
Distances are coded as greyscale values: the smaller, the darker.
left: many small distances between centers of different classes (e.g. 3 and 8/9); these small distances cause misclassifications
right: after training there are no small distances between classes anymore.