Artificial Neural Network based Curve Prediction


Lecture course: Ausgewählte Optimierungsverfahren für Ingenieure
Supervisor: Prof. Christian Hafner
Students: Anthony Hsiao, Michael Boesch

Abstract

We use artificial neural networks to perform curve prediction. For this, we have created a class of neural networks (feed-forward multilayer perceptron networks with backpropagation) whose topology is determined by their genetic makeup. Using a simple evolutionary strategy on their genes, we optimise the networks' topologies to solve the problems at hand. With this approach, we could generate networks that are able to predict simple functions such as sin(x), or linear combinations thereof, with moderate computational overhead. However, it was not possible to generate networks that predict more complex functions, such as sinc(x) or the NASDAQ composite index, satisfactorily within the allowed network sizes. In general, though, generating neural networks with this form of evolutionary strategy appears to be a useful approach, as it substitutes for experience in neural network design.

Introduction

Curve prediction is one of the most popular applications of artificial neural networks. However, the success of using a neural network to solve a given problem is inherently linked to the designer's ability to apply an appropriate network to the task. Even relatively simple artificial neural networks such as the multilayer perceptron, or variants thereof, have several degrees of freedom to which the network is very sensitive, e.g. the number of neurons, the number of hidden layers and the type of transfer functions employed. For most tasks there is no design methodology for neural networks that guarantees success. Instead, we try to evolve a neural network topology that is suitable for the curve prediction task at hand.

Aim

- To develop an evolvable artificial neural network representation
- To optimise such a neural network to solve a number of curve prediction tasks
- To evaluate the ability of an evolutionary approach to evolve suitable neural networks for a given task

Neural Networks

In order to perform the prediction tasks described above, we use multilayer perceptron networks and a simple backpropagation learning rule. We then use an evolutionary strategy to change the following parameters of the network:

- the number of hidden layers
- the number of neurons in each layer
- the transfer function employed by the neurons in each layer (neurons in the same layer employ the same transfer function)

To do this, we define a genetic code for the class of neural networks comprising an N-digit binary bit string. In order to limit the optimisation search space, we arbitrarily limit the number of hidden layers to 10 and the number of neurons per layer to 15. The allowed transfer functions are linear, linear with bounds, and hyperbolic tangent. Thus, the number of neurons in each hidden layer can be represented by a four-bit number, and the transfer function for the neurons in each layer by a two-bit number, requiring six bits per layer in total. As there are up to 10 hidden layers, the total bit string is 60 bits long and is laid out, from bit 0 to bit 59, as ten consecutive layer blocks:

    Layer 0 | Layer 1 | Layer 2 | ... | Layer 9

where each six-bit layer block consists of

    Neuron bit 3 | Neuron bit 2 | Neuron bit 1 | Neuron bit 0 | Transfer bit 1 | Transfer bit 0

Each layer can have up to 15 neurons, as given by the binary number [Neuron bit 3 : Neuron bit 0]. If the bit string encodes zero neurons for a layer, that layer is interpreted as non-existent. Also, while the bit string can encode four transfer functions per layer, as given by the binary number [Transfer bit 1 : Transfer bit 0], only three are employed; a bias is given towards the linear y = x transfer function by encoding it in two of the four possible states.
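To make this encoding concrete, the following minimal Java sketch decodes such a 60-bit genome into a list of layer specifications. The class and method names (GenomeDecoder, LayerSpec, decode) are illustrative rather than taken from our application, and which two of the four transfer states map to the linear function is an arbitrary choice here.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative decoder for the 60-bit genome described above.
    public class GenomeDecoder {

        enum Transfer { LINEAR, BOUNDED_LINEAR, TANH }

        static class LayerSpec {
            final int neurons;
            final Transfer transfer;
            LayerSpec(int neurons, Transfer transfer) {
                this.neurons = neurons;
                this.transfer = transfer;
            }
        }

        // genome: a string of 60 '0'/'1' characters,
        // 10 layers x (4 neuron bits + 2 transfer bits)
        static List<LayerSpec> decode(String genome) {
            List<LayerSpec> layers = new ArrayList<>();
            for (int layer = 0; layer < 10; layer++) {
                int offset = layer * 6;
                int neurons = Integer.parseInt(genome.substring(offset, offset + 4), 2);
                if (neurons == 0) continue; // zero neurons: layer is non-existent
                int t = Integer.parseInt(genome.substring(offset + 4, offset + 6), 2);
                // two of the four states map to plain linear, biasing evolution towards y = x
                Transfer transfer = (t <= 1) ? Transfer.LINEAR
                                  : (t == 2) ? Transfer.BOUNDED_LINEAR
                                  : Transfer.TANH;
                layers.add(new LayerSpec(neurons, transfer));
            }
            return layers;
        }
    }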

Thus, by changing the genes of a network, it will take on a different topology, some more and others less suitable for the tasks at hand.

Optimisation

In order to find an optimal network topology for the given tasks, we use an evolutionary strategy to evolve the genetic makeup of the networks, which could also be regarded as a genetic algorithm without cross-breeding.

The Evolutionary Strategy

The algorithm employed works as follows:

1. Produce a first generation, of population size seven, of random bit strings.
2. Generate randomly initialised networks from the population of bit strings.
3. Train the networks on a training set using backpropagation.
4. Run the networks on the test data.
5. A fitness function evaluates the fitness of each network; the fittest network is kept for the next iteration, while the others are discarded.
6. The fittest network is cloned six times to refill the generation, and each of these clones is mutated by inverting one of the 60 bits at random.
7. The process repeats at step 3 until a maximum number of iterations has been performed, or until marginal or no improvement in fitness is achieved over several iterations.

A table containing all bit strings that have already been evaluated is kept, so as to avoid computing the same network topology multiple times.
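The loop below is a minimal Java sketch of this (1 + 6) strategy, including the memoisation table. All names are illustrative, and fitnessOf() merely stands in for the decode-train-test-evaluate chain; here it returns a random value so the sketch is runnable on its own.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Random;

    // Illustrative (1 + 6) evolutionary strategy over 60-bit genomes.
    public class EvolutionLoop {

        static final int BITS = 60;
        static final int POPULATION = 7;
        static final Random RNG = new Random();

        // memoisation table so identical topologies are not evaluated twice
        static final Map<String, Double> evaluated = new HashMap<>();

        static double evaluate(String genome) {
            return evaluated.computeIfAbsent(genome, g -> fitnessOf(g));
        }

        static double fitnessOf(String genome) {
            // placeholder for: decode genome, train network with backpropagation,
            // run on test data, return 1 / cumulative error
            return RNG.nextDouble();
        }

        static String randomGenome() {
            StringBuilder sb = new StringBuilder(BITS);
            for (int i = 0; i < BITS; i++) sb.append(RNG.nextBoolean() ? '1' : '0');
            return sb.toString();
        }

        static String mutate(String genome) {
            int bit = RNG.nextInt(BITS); // invert one randomly chosen bit
            char flipped = genome.charAt(bit) == '0' ? '1' : '0';
            return genome.substring(0, bit) + flipped + genome.substring(bit + 1);
        }

        public static void main(String[] args) {
            // first generation: seven random individuals, keep the fittest
            String best = randomGenome();
            double bestFitness = evaluate(best);
            for (int i = 1; i < POPULATION; i++) {
                String g = randomGenome();
                double f = evaluate(g);
                if (f > bestFitness) { best = g; bestFitness = f; }
            }
            // subsequent generations: clone the fittest six times, mutate each
            // clone by one bit (could also stop early on stagnating fitness)
            for (int generation = 0; generation < 100; generation++) {
                for (int i = 0; i < POPULATION - 1; i++) {
                    String clone = mutate(best);
                    double f = evaluate(clone);
                    if (f > bestFitness) { best = clone; bestFitness = f; }
                }
            }
            System.out.println("best fitness: " + bestFitness + " genome: " + best);
        }
    }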

Here, we evaluate the fitness of the networks in one of two ways, depending on the task at hand. For short term prediction, a part of the test signal series is used as input to the network, and the first value the network predicts is compared with the actual value of the series at that point. The cumulative error is found by summing the absolute differences between the predicted and the actual values over all shifted versions of the actual signal used as input to the network. The fitness of the network is the reciprocal of the cumulative error. This method evaluates the network's ability to make short term predictions for a given pattern and number of inputs.

Figure 1: Short term prediction method (a network with N inputs and one output slides over the actual signal and predicts the next value)

For long term prediction, a part of the signal is used as input to the network, the first predicted value is fed back and used as the next input, and this is repeated for a given number of points to be predicted. The fitness is then the reciprocal of the cumulative absolute difference between the actual signal and the recurrently predicted signal. This method evaluates the network's ability to make long term predictions (forecasts) for a given starting pattern and number of points to predict.

Figure 2: Long term prediction method (the predictions of the N-input network are fed back as its inputs)

In both cases, a minimum error (and thus a maximum fitness) is imposed on each network, in order to avoid division-by-zero errors and infinite fitness. Furthermore, smaller networks, i.e. networks with fewer hidden layers, are preferred over larger ones, as are networks with a small number of neurons.
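Both fitness measures can be sketched in a few lines of Java. The Network interface and the MIN_ERROR cap below are illustrative placeholders (the cap value used in our application may differ); the sliding window and the feedback of predictions follow the two methods just described.

    import java.util.Arrays;

    // Illustrative fitness computation for the two prediction modes.
    public class FitnessEval {

        static final double MIN_ERROR = 1e-6; // caps fitness, avoids division by zero

        interface Network { double predict(double[] inputs); }

        // Short term: slide an N-wide window over the signal, compare the one-step
        // prediction with the actual next value, and sum the absolute differences.
        static double shortTermFitness(Network net, double[] signal, int n) {
            double error = 0.0;
            for (int i = 0; i + n < signal.length; i++) {
                double[] window = Arrays.copyOfRange(signal, i, i + n);
                error += Math.abs(net.predict(window) - signal[i + n]);
            }
            return 1.0 / Math.max(error, MIN_ERROR);
        }

        // Long term: feed each prediction back as the newest input, forecast
        // 'horizon' points, and sum the absolute differences to the actual signal.
        static double longTermFitness(Network net, double[] signal, int n, int horizon) {
            double[] window = Arrays.copyOfRange(signal, 0, n);
            double error = 0.0;
            for (int k = 0; k < horizon && n + k < signal.length; k++) {
                double predicted = net.predict(window);
                error += Math.abs(predicted - signal[n + k]);
                System.arraycopy(window, 1, window, 0, n - 1); // shift window left
                window[n - 1] = predicted;                      // feed prediction back
            }
            return 1.0 / Math.max(error, MIN_ERROR);
        }
    }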

The Search Space

The size of the search space follows from the length of the bit string: with 10 layers, each encoded by 16 possible neuron counts and 4 possible transfer states, there are (16 x 4)^10 = 2^60, i.e. roughly 1.15 x 10^18, possible bit strings. In fact, the search space is slightly smaller than this, because some of the topologies in which one or more layers have zero neurons are equivalent. Still, it can be appreciated that the search space is large enough to justify this optimisation approach.

Evaluation and Discussion

We developed a software application in Java that implements this evolvable artificial neural network representation and allows our evolutionary strategy to evolve the topology of the networks. To evaluate our approach, we adopted the following testing strategy.

Testing Strategy

There are two tasks, short and long term (function) prediction, that our neural networks have to perform. We qualitatively assess the ability of the evolved networks to perform each task, using the following representative test signals (a sketch of how such signals can be generated follows the list):

- Sinusoidal function: an arbitrary sinusoidal function such as sin(x) with a given amplitude, frequency and phase. This is probably the simplest test signal, and the networks can be expected to manage the tasks successfully.
- Aperiodic function: the aperiodic function sinc(x) = sin(x)/x. This is a challenging function to predict, as it is neither periodic nor monotonic. It would come as a positive surprise if the evolved networks managed this task successfully.
- Noisy aperiodic or pseudo-random function: an excerpt of the NASDAQ composite index historical weekly values is used as an interesting and challenging test signal. An ability of the networks to predict this stock market index would be highly surprising and unexpected.
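For illustration, the following Java sketch generates the two analytic test signals. Amplitude, frequency, phase and sampling step are free parameters here, and centring sinc(x) around x = 0 is our choice for the sketch; the values used in our actual tests may differ. The NASDAQ signal is simply read from historical weekly index data rather than generated.

    // Illustrative generation of the sin(x) and sinc(x) test signals.
    public class TestSignals {

        static double[] sine(int points, double amplitude, double freq,
                             double phase, double dx) {
            double[] s = new double[points];
            for (int i = 0; i < points; i++) {
                s[i] = amplitude * Math.sin(freq * i * dx + phase);
            }
            return s;
        }

        static double[] sinc(int points, double dx) {
            double[] s = new double[points];
            for (int i = 0; i < points; i++) {
                double x = (i - points / 2.0) * dx; // centre the signal around x = 0
                s[i] = (Math.abs(x) < 1e-12) ? 1.0  // sinc(0) = 1 by its limit
                                             : Math.sin(x) / x;
            }
            return s;
        }
    }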

Apart from running both tasks on the three test signals, the following questions are addressed:

- Do we always arrive at the same network for the same problem? To answer this, several trials of the same test are run. If the same network topologies are arrived at most of the time, it implies that the evolutionary strategy converges to a local or global optimum, which is desirable.
- How fit are the networks? As mentioned above, there exists a maximum fitness that a network may achieve. How fit, relative to this maximum achievable fitness, are the evolved networks?

Tabular Summary

The section below describes important aspects of the individual tests in detail. In addition, the tables below summarise the results.

    Criteria (5 inputs)       Sin(x)   Sinc(x)   NASDAQ
    Short term prediction
    Long term prediction
    Same networks
    Fitness                   17%      15%       2%
    %-Error                   36%      86%       48%
    Convergence

Table 1: Summary of the tests for 5 inputs

    Criteria (10 inputs)      Sin(x)   Sinc(x)   NASDAQ
    Short term prediction
    Long term prediction
    Same networks
    Fitness                   95%      15%       2%
    %-Error                   0%       113%      44%
    Convergence

Table 2: Summary of the tests for 10 inputs

    Criteria (20 inputs)      Sin(x)   Sinc(x)   NASDAQ
    Short term prediction
    Long term prediction
    Same networks
    Fitness                   99%      15%       3%
    %-Error                   0%       95%       27%
    Convergence

Table 3: Summary of the tests for 20 inputs

Test Details

1. Sinusoidal

The neural networks evolved are able to predict the sinusoidal signals with acceptable accuracy, provided they receive enough inputs. Figure 3 below illustrates the evolution process over several trials. Each point on the graph represents a network topology that improved on the previous one.

Figure 3: Fitness evolution for different trials for sin(x) - clear fitness improvement

As with the fitness evolution, the error performance of the evolved networks improves. Figure 4 below illustrates how the %-error in the long term prediction generally decreases with each generation.

Figure 4: Error evolution for different trials for sin(x) - clear performance improvement

Comparing the size of a network (the number of neurons in its hidden layers) to its performance, it appears that there exists a certain range of right sizes which allows a network to achieve high fitness. Another way of looking at this is that a network needs a certain minimum complexity (in terms of number of neurons) to be adequate for the task at hand.

Below that critical size, it is unlikely that a network can achieve high fitness.

Figure 5: Size does matter - a network needs a certain minimum size or complexity to achieve high fitness

2. Sinc(x)

Unlike in the previous case with sin(x), the evolutionary approach does not generate sufficiently fit networks to perform long term prediction on a sinc(x) function. Figure 6 and Figure 7 outline the evolutionary performance over several trials. They show clearly that the evolutionary approach works in principle, i.e. networks are evolving and improving; however, predicting sinc(x) seems to be too difficult a task for the simple feed-forward perceptrons employed here. The network evolution appears to hit a performance limit at about 15% fitness and 90% error, respectively.

Figure 6: Fitness evolution for different trials for sinc(x) - no clear fitness improvement

Figure 7: Error evolution for different trials for sinc(x) - the error decreases, but is still unacceptably high

3. NASDAQ

Attempting to perform long term prediction on the NASDAQ is ambitious. Here, the evolutionary approach again works to some extent, as it is able to generate networks whose performance improves over several generations; however, the network model and its complexity are again not able to cope with the challenge posed by the NASDAQ. Figure 8 and Figure 9 summarise the network evolution over several trials.

Figure 8: Fitness evolution for different trials for the NASDAQ - too difficult for the networks

Figure 9: Error evolution for different trials for the NASDAQ - the error performance improves, but is still too high

General Comments

The networks' performance is closely tied to some randomness in the initialisation and in the success of the training. The training method employed, backpropagation, is not guaranteed to achieve a satisfactory level of training, and does not necessarily find the globally optimal parameters for a network. To overcome this, we repeated the training sequence of each network several times, to increase the likelihood of obtaining a well trained network. This, however, increased the computational load many times over, to an impractical degree, and is therefore not an adequate remedy to decouple the success of the evolutionary approach from its sensitivity to randomness and initial conditions.
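As a sketch, this repeated training amounts to a simple restart loop. The Trainable interface and the idea of returning only the best error are illustrative placeholders, not our application's actual API; in practice, the weights of the best run would be stored as well.

    // Illustrative multi-restart training: re-initialise and retrain several
    // times, keeping the best result. train() stands in for backpropagation
    // on the training set.
    public class RestartTraining {

        interface Trainable {
            void randomiseWeights();
            double train(); // returns the training error after backpropagation
        }

        static double bestOf(Trainable net, int restarts) {
            double best = Double.MAX_VALUE;
            // each restart multiplies the computational load accordingly
            for (int i = 0; i < restarts; i++) {
                net.randomiseWeights();
                best = Math.min(best, net.train());
            }
            return best;
        }
    }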

Conclusion

Our aims were:

- To develop an evolvable artificial neural network representation
- To optimise such a neural network to solve a number of curve prediction tasks
- To evaluate the ability of an evolutionary approach to evolve suitable neural networks for a given task

The performance of an artificial neural network on a given problem is inherently tied to the topology of the network, and it may prove difficult to produce satisfactory results. We have therefore set out to create a neural network design framework that replaces the designer's experience with an evolutionary strategy.

We have developed a bit string representation that corresponds to the genetic makeup of a multilayer perceptron network, allowing an optimisation of the following network parameters: the number of hidden layers, the number of neurons in the hidden layers, and the transfer function employed by the neurons in each layer. Using a simple evolutionary strategy, we tried to optimise the network topology for the tasks at hand by optimising the genetic makeup of a generation of networks. In principle, this approach has proven valid, and we have demonstrated the evolution of networks that predict a sine function. For more complex functions, such as the sinc(x) function or the NASDAQ, the evolutionary approach still worked, although it was limited by the network model's inherent ability to predict complex patterns.

We are optimistic about the approach of evolving neural network topologies for given tasks, and there are several aspects that could be improved or investigated further. In particular, we suggest the following:

- Within our simulations, we had to limit the search space by limiting the allowed size of the networks and their transfer functions. A more extensive investigation into the evolutionary approach could include other non-linear transfer functions, larger networks, and more interconnected or feedback networks.
- A major limiting computational factor was the learning rule employed (backpropagation). It would be worthwhile to consider other learning rules.
- In our approach, we allow evolution to uniformly develop networks of all sizes and shapes. It might be more fruitful to instead constrain the evolution to organic growth, i.e. to start with small networks and grow them by evolution.
- We have used our evolutionary strategy on function prediction only. It would be worthwhile to apply this method to other common ANN tasks, such as image recognition.
