Yuki Osada, Andrew Cannon

Transcription
1 Yuki Osada, Andrew Cannon
2 Humans are an intelligent species. One feature is the ability to learn, and the ability to learn comes down to the brain. The brain learns from experience. Research shows that the brain stores information as patterns, and this information is stored in neurons.
3 Neurons do not regenerate, suggesting that these cells are what provide us with the abilities to remember, think, and apply previous experiences. Humans generally have between 80 and 120 billion neurons, and each neuron typically connects with 1,000 to 10,000 other neurons. The human brain is a huge network of neurons: a neural network.
4 The power of the human mind comes from the sheer number of these neurons and their connections. The individual neurons act as a function of their incoming signals. Although neurons themselves are complicated, they don't exhibit complex behaviour on their own. This is the key feature that makes the neural network a viable computational intelligence approach.
5 Artificial neural networks are a computational model inspired by the neural structure of the human brain, a biological neural network. They attempt to replicate only the basic elements of this complicated, versatile, and powerful organ. An artificial neural network consists of an interconnected group of artificial neurons, and it learns by changing its structure based on information that flows through the network. Such networks are used to model complex relationships between inputs and outputs, or to find patterns in data.
6 Neurons are the fundamental processing elements of a neural network. [Jarosz, Q. 2009, 'Neuron Hand-tuned.svg'. Retrieved 10 September 2012, from Wikipedia, Neuron.]
7 A biological neuron basically: 1. receives inputs from other sources (dendrites), 2. merges them in some way (soma), 3. performs an operation on the result (axon), and then 4. outputs the result, possibly to other neurons (axon terminals). Artificial neurons follow this basic approach.
8 The basic structure of an artificial neuron consists of: 1. input connections (dendrites) with weights, 2. a summation function or input function (soma), 3. a transfer function or activation function (axon), and 4. output connections (axon terminals). It has no learning process as such.
9 The function: 1. Input values enter the neuron via the connections. 2. The inputs are multiplied by the weighting factor of their respective connection; there is often a separate bias connection, which can act as a threshold for the neuron to produce some useful output. 3. The modified inputs are fed into a summation function, which usually just sums the products. 4. The result from the summation function is sent to a transfer function, usually a step function or a sigmoid function. 5. The neuron outputs the result of the transfer function to other neurons, or to an outside connection.
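The five steps above can be sketched as a single artificial neuron (an illustrative sketch, not from the slides; the sigmoid transfer function and the example weights and bias are arbitrary choices):

```python
import math

def neuron(inputs, weights, bias):
    # 1-2. weight each incoming input, 3. sum the products plus the bias,
    # 4. apply a transfer function (here a sigmoid), 5. return the output
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

# Two inputs weighted 1.0 each; the bias of -1.5 acts as a threshold
out = neuron([1.0, 1.0], [1.0, 1.0], -1.5)   # s = 0.5, output ≈ 0.62
```

Swapping the sigmoid for a step function turns this into the classic threshold neuron used by the perceptron later in the deck.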
10 How can the neurons be clustered together? The structure used in these networks is a layering approach. These layers are connected to each other in a linear fashion, and it's possible that a neuron may have an output connection to itself. How the layers are connected is generally problem-dependent.
11 Single-layer networks are the simplest: multiple input sources are fed into a set of neurons, which produce the outputs of the neural network. These are called perceptrons. Perceptrons can only represent linearly-separable functions; we can make the system represent more complex functions by adding more layers.
12 Multi-layered neural networks are more powerful than single-layered neural networks. The cost is that the hidden layers increase the complexity and training time of these networks. Networks with a single hidden layer can approximate any continuous function with arbitrary accuracy, and networks with two hidden layers can represent discontinuous functions.
13 [JokerXtreme 2011, 'Artificial_neural_network.svg'. Retrieved 10 September 2012, from Wikipedia, Artificial neural network.]
14 There are two main types of multi-layered neural networks: 1. Feed-forward: a simple acyclic structure. Information always moves in one direction; it never goes backwards. Stateless encoding; no information is accumulated.
15 2. Recurrent: a structure with cyclic feedback loops. Information may be sent to any layer, so the network can process arbitrary sequences of input and produce more complex results. Stateful encoding; introduces short-term memory into the system and allows dynamic temporal behaviour.
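The stateless/stateful distinction can be sketched with one-neuron "networks" (illustrative only; the weights here are arbitrary):

```python
def feedforward_step(x, w):
    # Stateless: the output depends only on the current input
    return w * x

def recurrent_step(x, h, w, u):
    # Stateful: the previous output h is fed back in, giving short-term memory
    return w * x + u * h

h = 0.0
outputs = []
for x in [1.0, 0.0, 0.0]:                  # an input sequence
    h = recurrent_step(x, h, w=1.0, u=0.5)
    outputs.append(h)
# The first input's influence persists, decaying through the feedback loop:
# outputs == [1.0, 0.5, 0.25]
```

A feed-forward step on the same zero inputs would output zero every time; the recurrence is what lets earlier inputs shape later outputs.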
16 Artificial neural networks are used to model complex systems that are not understood by the programmer. We usually don't know how to construct a perfect neural network for a problem, so we must train it to produce better results. We can only train certain aspects of a neural network.
17 Training is the adjusting of parameters with the aim of minimising a measure of error, the cost function. What parameters in the artificial neural network do we want to adjust? The weighting factors: the link weights influence the function represented by the neural network. When we have no idea what the link weights should be, they might be randomly generated at initialisation.
18 There are two main approaches to training. Supervised: the user provides sample input and output data, and the network adjusts its weights to match the expected results. Unsupervised: only input data is supplied, and the neural network must find patterns on its own.
19 McNeil, G. & Anderson, D. 1992, 'Artificial Neural Networks Technology', The Data & Analysis Center for Software Technical Report. Smith, L.S. 2008, 'An Introduction to Neural Networks', Centre for Cognitive and Computational Neuroscience, Department of Computing and Mathematics, University of Stirling. Retrieved 10 September 2012. Jarosz, Q. 2009, 'Neuron Hand-tuned.svg'. Retrieved 10 September 2012, from Wikipedia, Neuron. JokerXtreme 2011, 'Artificial_neural_network.svg'. Retrieved 10 September 2012, from Wikipedia, Artificial neural network.
20 Language processing, character recognition, pattern recognition, signal processing, prediction.
21 Supervised learning: perceptron; feed-forward, back-propagation. Unsupervised learning: self-organising maps.
22 The perceptron is the simplest type of neural network, introduced by Rosenblatt (1958). [Figure: inputs 1, 2, ..., n feeding a single output neuron. Adapted from Haykin, S.S. 2009, p. 48.]
25 The input is a real vector i = (i_1, i_2, ..., i_n). Calculate a weighted scalar s from the inputs: s = Σ_j w_j i_j + b. Calculate the output r = sgn(s).
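In code, the whole perceptron computation is a few lines (a sketch; the example inputs, weights, and bias are arbitrary):

```python
def perceptron_output(i, w, b):
    # s = Σ_j w_j i_j + b, then r = sgn(s), taking sgn(s) = +1 when s >= 0
    s = sum(wj * ij for wj, ij in zip(w, i)) + b
    return 1 if s >= 0 else -1

r = perceptron_output([1.0, 2.0], [0.5, -0.25], 0.1)   # s = 0.1, so r = +1
```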
26 The perceptron categorises input vectors as being in one of two categories. A single perceptron can be trained to separate inputs into two linearly separable categories. [Figure: two linearly separable sets of points, Category 1 and Category 2.]
27 We need a training set of input/output pairs. Initialise the weights and bias (randomly, or to zero). Calculate the output, then adjust the weights and bias in proportion to the difference between the actual and expected values.
28 Repeat until a termination criterion is reached. Rosenblatt (1962) showed that the weights and bias will converge to fixed values after a finite number of iterations, provided the categories are linearly separable.
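The training loop described on these slides might look like the following sketch (assumptions: sgn returns +1 at zero, and the learning rate, epoch limit, and the logical-AND training set with ±1 inputs are arbitrary choices, not from the slides):

```python
def sgn(s):
    # Step transfer function: +1 if the sum is non-negative, else -1
    return 1 if s >= 0 else -1

def train(samples, n, rate=0.25, max_epochs=100):
    """Perceptron learning: samples is a list of (inputs, expected) pairs,
    with expected in {-1, +1}. Returns the learned weights and bias."""
    w, b = [0.0] * n, 0.0
    for _ in range(max_epochs):
        errors = 0
        for x, expected in samples:
            s = sum(wj * xj for wj, xj in zip(w, x)) + b
            error = expected - sgn(s)          # 0 if correct, otherwise +/-2
            if error:
                errors += 1
                w = [wj + rate * error * xj for wj, xj in zip(w, x)]
                b += rate * error
        if errors == 0:                        # termination criterion
            break
    return w, b

# Learn logical AND, a linearly separable function
data = [((-1, -1), -1), ((-1, 1), -1), ((1, -1), -1), ((1, 1), 1)]
w, b = train(data, 2)
```

Because AND is linearly separable, Rosenblatt's result guarantees this loop terminates with weights that classify all four samples correctly.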
29 We want to classify points in R^2 into those for which y ≥ x+1 and those for which y < x+1. [Figure: the line y = x+1 in the plane.]
30 Initialise the bias/weight vector to (0,0,0). Input is the point (1,1) (below the line), expressed as (1,1,1). s = 0×1 + 0×1 + 0×1 = 0. Actual output is sgn(0) = +1. Expected output is -1 (below the line).
31 Error (expected - actual) is -2. With a constant learning rate of 0.25, the new weight vector is (0,0,0) + 0.25(-2)(1,1,1) = (-0.5,-0.5,-0.5).
36 The new bias/weight vector is (-0.5,-0.5,-0.5). Input is the point (0,2) (above the line), expressed as (1,0,2). s = -0.5×1 + -0.5×0 + -0.5×2 = -1.5. Actual output is sgn(-1.5) = -1. Expected output is +1 (above the line), so the weights are adjusted again.
37 Eventually, this will converge to the correct answer of (-a,-a,a) for some a > 0, classifying points above the line as +1. Generally, we won't know the correct answer!
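The first iteration of this worked example can be reproduced in a few lines (a sketch consistent with the slides' update rule, taking the expected output for a point below the line to be -1):

```python
rate = 0.25
w = [0.0, 0.0, 0.0]          # (bias weight, w_x, w_y), initialised to zero
x = [1.0, 1.0, 1.0]          # the point (1, 1), with a leading 1 for the bias
expected = -1                # (1, 1) lies below the line y = x + 1

s = sum(wj * xj for wj, xj in zip(w, x))        # 0.0
actual = 1 if s >= 0 else -1                    # sgn(0) = +1
error = expected - actual                       # -2
w = [wj + rate * error * xj for wj, xj in zip(w, x)]
# w is now [-0.5, -0.5, -0.5], matching the slide's first update
```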
38 A feed-forward network has no connections looping backwards. The back-propagation algorithm allows such a network to learn.
39 It operates similarly to perceptron learning. [Figure: a multi-layered network between an input layer and an output layer.]
40 Inputs are fed forward through the network.
44 Inputs are fed forward through the network, and the output is compared to the expected output.
45 Errors are propagated back through the network.
49 Adjust the weights based on the errors.
50 Weights might be updated after each pass, or after multiple passes.
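The forward/backward cycle on these slides can be sketched end to end (not the slides' code: the 2-2-1 network shape, the hand-picked starting weights, the learning rate, and the logical-OR training set are all arbitrary choices for illustration; weights here are updated after each sample):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A 2-2-1 network; the last entry of each weight row is the bias
w_hid = [[0.5, -0.5, 0.1], [-0.3, 0.8, -0.1]]
w_out = [0.7, -0.4, 0.2]

def forward(x):
    # Feed the inputs forward through the hidden layer to the output
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hid]
    o = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    return h, o

def backward(x, target, rate=0.5):
    h, o = forward(x)
    # Output-layer error term, then propagate it back to the hidden layer
    d_out = (target - o) * o * (1 - o)
    d_hid = [d_out * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]
    for j in range(2):                       # adjust hidden-to-output weights
        w_out[j] += rate * d_out * h[j]
    w_out[2] += rate * d_out                 # output bias
    for j in range(2):                       # adjust input-to-hidden weights
        w_hid[j][0] += rate * d_hid[j] * x[0]
        w_hid[j][1] += rate * d_hid[j] * x[1]
        w_hid[j][2] += rate * d_hid[j]       # hidden biases

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR
before = sum((t - forward(x)[1]) ** 2 for x, t in data)
for _ in range(2000):
    for x, t in data:
        backward(x, t)
after = sum((t - forward(x)[1]) ** 2 for x, t in data)
# Training drives the total squared error down from its starting value
```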
51 A comprehensive training set is needed, and the network cannot be too large for the training set. There are no guarantees the network will learn. Network design and learning strategies impact the speed and effectiveness of learning.
52 Unsupervised learning is more powerful (if you can make it work). There is no external notion of correct or incorrect output; the network uses internal rules to adjust its output in response to inputs.
53 One or more inputs are connected to a set of outputs.
54 The output neurons form a lattice in (usually) two-dimensional space.
55 There is a measurable distance d between output neurons.
56 Based on the network weights, each output neuron is excited to a different degree by each input.
57 Select the best matching unit (BMU).
58 Identify a neighbourhood around the BMU.
59 Based on their levels of excitation, adjust the weights of each output neuron in this neighbourhood to more closely match the input. The hope is that the output neurons diverge into stable (and distinct) categories, allowing the input data to be classified.
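The BMU/neighbourhood procedure on slides 53-59 can be sketched as a tiny one-row self-organising map (illustrative only: inputs are presented cyclically rather than at random, the starting weights are spread evenly instead of randomised, and the neighbourhood is a hard radius that shrinks over time rather than the usual Gaussian):

```python
def train_som(data, width, steps=200, rate=0.5):
    """Sketch of a one-row Kohonen self-organising map."""
    dim = len(data[0])
    # Spread the output neurons' weight vectors across [0, 1]
    weights = [[(i + 1) / (width + 1)] * dim for i in range(width)]
    for step in range(steps):
        x = data[step % len(data)]             # present inputs cyclically
        t = 1 - step / steps                   # decay factor for rate and radius
        # 1. Select the best matching unit (closest weight vector to the input)
        bmu = min(range(width),
                  key=lambda i: sum((wj - vj) ** 2
                                    for wj, vj in zip(weights[i], x)))
        # 2. Move the BMU and its neighbours towards the input
        for i in range(width):
            if abs(i - bmu) <= t:              # radius starts at 1 and shrinks
                weights[i] = [wj + rate * t * (vj - wj)
                              for wj, vj in zip(weights[i], x)]
    return weights

w = train_som([(0.0, 0.0), (1.0, 1.0)], width=2)
# The two output neurons diverge into distinct categories, one per input cluster
```

With two well-separated inputs, each output neuron drifts towards one cluster, which is the classification behaviour the slide describes.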
60 [Adapted from: AI-Junkie n.d., Kohonen's Self Organizing Feature Maps. Retrieved 11 September 2012.]
61 AI-Junkie n.d., Kohonen's Self Organizing Feature Maps. Retrieved 11 September 2012. Bose, N.K. & Liang, P. 1996, Neural Network Fundamentals with Graphs, Algorithms, and Applications, McGraw-Hill, New York (chapters 4, 5 and 9). Fausett, L.V. 1994, Fundamentals of Neural Networks: Architectures, Algorithms, and Applications, Prentice-Hall, Englewood Cliffs, NJ (chapter 6). Haykin, S.S. 2009, Neural Networks and Learning Machines, 3rd edn, Prentice Hall, New York (chapters 1, 4 and 9). Kartalopoulos, S.V. 1996, Understanding Neural Networks and Fuzzy Logic: Basic Concepts and Applications, IEEE Press, New York (sections 3.2, 3.5 and 3.14). McNeil, G. & Anderson, D. 1992, 'Artificial Neural Networks Technology', The Data & Analysis Center for Software Technical Report.
More informationUsing neural nets to recognize handwritten digits. Srikumar Ramalingam School of Computing University of Utah
Using neural nets to recognize handwritten digits Srikumar Ramalingam School of Computing University of Utah Reference Most of the slides are taken from the first chapter of the online book by Michael
More informationDynamic Memory Allocation for CMAC using Binary Search Trees
Proceedings of the 8th WSEAS International Conference on Neural Networks, Vancouver, British Columbia, Canada, June 1921, 2007 61 Dynamic Memory Allocation for CMAC using Binary Search Trees PETER SCARFE
More informationNeural Networks Laboratory EE 329 A
Neural Networks Laboratory EE 329 A Introduction: Artificial Neural Networks (ANN) are widely used to approximate complex systems that are difficult to model using conventional modeling techniques such
More informationImplementation Feasibility of Convex Recursive Deletion Regions Using MultiLayer Perceptrons
Implementation Feasibility of Convex Recursive Deletion Regions Using MultiLayer Perceptrons CHECHERN LIN National Kaohsiung Normal University Department of Industrial Technology Education 116 HePing
More informationJournal of Engineering Technology Volume 6, Special Issue on Technology Innovations and Applications Oct. 2017, PP
Oct. 07, PP. 0005 Implementation of a digital neuron using system verilog Azhar Syed and Vilas H Gaidhane Department of Electrical and Electronics Engineering, BITS Pilani Dubai Campus, DIAC Dubai345055,
More informationβrelease Multi Layer Perceptron Trained by Quasi Newton Rule MLPQNA User Manual
βrelease Multi Layer Perceptron Trained by Quasi Newton Rule MLPQNA User Manual DAMEMANNA0015 Issue: 1.0 Date: July 28, 2011 Author: M. Brescia, S. Riccardi Doc. : BetaRelease_Model_MLPQNA_UserManual_DAMEMANNA0015Rel1.0
More informationA Class of Instantaneously Trained Neural Networks
A Class of Instantaneously Trained Neural Networks Subhash Kak Department of Electrical & Computer Engineering, Louisiana State University, Baton Rouge, LA 708035901 May 7, 2002 Abstract This paper presents
More informationA SelfAdaptive Insert Strategy for ContentBased Multidimensional Database Storage
A SelfAdaptive Insert Strategy for ContentBased Multidimensional Database Storage Sebastian Leuoth, Wolfgang Benn Department of Computer Science Chemnitz University of Technology 09107 Chemnitz, Germany
More informationFace Detection Using Radial Basis Function Neural Networks With Fixed Spread Value
Detection Using Radial Basis Function Neural Networks With Fixed Value Khairul Azha A. Aziz Faculty of Electronics and Computer Engineering, Universiti Teknikal Malaysia Melaka, Ayer Keroh, Melaka, Malaysia.
More informationTanagra Tutorial. Determining the right number of neurons and layers in a multilayer perceptron.
1 Introduction Determining the right number of neurons and layers in a multilayer perceptron. At first glance, artificial neural networks seem mysterious. The references I read often spoke about biological
More informationArtificial Neural Network Model of Traffic Operations at Signalized Junction in Johor Bahru, Malaysia
Artificial Neural Network Model of Traffic Operations at Signalized Junction in Johor Bahru, Malaysia ARASH MORADKHANI ROSHANDEH 1, OTHMAN CHE PUAN 1 and MAJID JOSHANI 2 1 Department of Geotechnics and
More informationRadial Basis Function (RBF) Neural Networks Based on the Triple Modular Redundancy Technology (TMR)
Radial Basis Function (RBF) Neural Networks Based on the Triple Modular Redundancy Technology (TMR) Yaobin Qin qinxx143@umn.edu Supervisor: Pro.lilja Department of Electrical and Computer Engineering Abstract
More informationArtificial Neural Networks (Feedforward Nets)
Artificial Neural Networks (Feedforward Nets) y w 031 w 13 y 1 w 23 y 2 w 01 w 21 w 22 w 021 w 11 w 121 x 1 x 2 6.034  Spring 1 Single Perceptron Unit y w 0 w 1 w n w 2 w 3 x 0 =1 x 1 x 2 x 3... x
More informationRapid Simultaneous Learning of Multiple Behaviours with a Mobile Robot
Rapid Simultaneous Learning of Multiple Behaviours with a Mobile Robot Koren Ward School of Information Technology and Computer Science University of Wollongong koren@uow.edu.au www.uow.edu.au/~koren Abstract
More informationAnalytical model A structure and process for analyzing a dataset. For example, a decision tree is a model for the classification of a dataset.
Glossary of data mining terms: Accuracy Accuracy is an important factor in assessing the success of data mining. When applied to data, accuracy refers to the rate of correct values in the data. When applied
More informationEfficiency of kmeans and KMedoids Algorithms for Clustering Arbitrary Data Points
Efficiency of kmeans and KMedoids Algorithms for Clustering Arbitrary Data Points Dr. T. VELMURUGAN Associate professor, PG and Research Department of Computer Science, D.G.Vaishnav College, Chennai600106,
More informationCHAPTER VI BACK PROPAGATION ALGORITHM
6.1 Introduction CHAPTER VI BACK PROPAGATION ALGORITHM In the previous chapter, we analysed that multiple layer perceptrons are effectively applied to handle tricky problems if trained with a vastly accepted
More informationSmart Sort and its Analysis
Smart Sort and its Analysis Varun Jain and Suneeta Agarwal Department of Computer Science and Engineering, Motilal Nehru National Institute of Technology, Allahabad211004, Uttar Pradesh, India. varun_jain22@yahoo.com,
More informationImplementation of a Library for Artificial Neural Networks in C
Implementation of a Library for Artificial Neural Networks in C Jack Breese TJHSST Computer Systems Lab 20072008 June 10, 2008 1 Abstract In modern computing, there are several approaches to pattern recognition
More information