An Empirical Study of Software Metrics in Artificial Neural Networks

WING KAI LEUNG
School of Computing, Faculty of Computing, Information and English
University of Central England
Birmingham B42 2SU
UNITED KINGDOM

Abstract: - In previous studies by Leung et al. [1,2,3,4], a set of software metrics, named Neural Metrics, was proposed. These metrics are applicable to supervised feedforward Artificial Neural Networks (ANNs) and provide measures of network quality and training complexity. This study extends that work by empirically evaluating the neural metrics against the standard Backpropagation Algorithm (BPA) across several types of benchmark application problems. The result of the evaluation is, for each type of problem, a specification of the values of all neural metrics and network parameters that can be used to solve the same or a similar type of problem successfully. With such a specification, neural users can reduce the uncertainty, and hence the time, involved in choosing reliable network details for solving the same or a similar type of problem. In addition, users gain a better understanding of the algorithmic complexity of the problem by referring to the values of the neural metrics in the specification. They may also use the specified neural metric values as reference points for similar experiments with a view to obtaining a better or sub-optimal solution for the problem. Thus, if the values of neural metrics obtained in a further experiment are less than those reported in the specification, the experiment may be considered an improved or more efficient approach than the standard BPA.

Keywords: - Neural Networks, Backpropagation, Software Metrics, Algorithmic Complexity.

1 Introduction

Due to the lack of applicable measurements, no comprehensive analysis has been carried out on the quality characteristics (e.g. efficiency and complexity) of any ANN system simulated on a conventional computer. Hence, users of such systems with little or no specialist expertise in this area generally do not know how efficiently the network system performs, nor how complicated the training process is, for solving a specific application problem. It is also difficult to acquire and compare reliable implementation details based on the published work of each researcher. Furthermore, researchers sometimes use minor variations on the published algorithms (such as the standard BPA) without making it clear what the variations are and why they were necessary. The results obtained by individual researchers do not normally specify the values of all variables and parameters (e.g. the number of network layers and the number of hidden units) used in the training process. Without a complete specification, users are often uncertain about the choice of reliable values with which to solve the same or a similar type of problem successfully. They are also uncertain whether their experiments have resulted in a better or sub-optimal solution for the problem, since no reference points for such values are available. To overcome these difficulties, Leung et al. [1] proposed a set of software metrics, named Neural Metrics, which provide indicative measures of the quality characteristics of a neural network system. This study extends that work by empirically evaluating these metrics across several types of benchmark problems. It has been reported that 82% of 113 papers published in Neural Computation and Neural Networks over the 1993/94 period do not use two or more realistic benchmarking tests [5].
This has generally remained the case over the last five years. To make the proposed neural metrics applicable to ANNs, it is essential that they are evaluated across more than two benchmark problems. The results obtained in the evaluation were the average values of neural metrics that may be used to solve a specific type of problem. They were also used in the calculation of the generalised algorithmic complexity of each type of problem.

2 Evaluation Approach

In this study, the evaluation of neural metrics was done iteratively for each problem (see Figure 1). Arbitrary initial values of the neural metrics were supplied by the user to the network system. The network system was trained with a view to improving its quality characteristics. If the network showed improvement (e.g. faster convergence), the values of the neural metrics were recorded; otherwise they were discarded and new values were used. This process continues, with new values of the neural metrics replacing the old ones after each improvement.
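This record-if-improved cycle can be summarised as a simple search loop. The sketch below is only illustrative: the functions train_and_measure and perturb are placeholders for the actual training run and for the user's choice of revised metric values, neither of which is prescribed by this paper.

```python
def evaluate_neural_metrics(train_and_measure, initial_metrics, perturb, trials=100):
    """Record-if-improved search over neural metric values (cf. Figure 1).

    train_and_measure(metrics) is assumed to train the network with the given
    metric and parameter values and return a quality score where lower is
    better (e.g. epochs to convergence); perturb(metrics) proposes new values.
    """
    best_metrics = dict(initial_metrics)
    best_score = train_and_measure(best_metrics)
    for _ in range(trials):
        candidate = perturb(best_metrics)      # revised neural metrics
        score = train_and_measure(candidate)
        if score < best_score:                 # network improved: record the values
            best_metrics, best_score = candidate, score
        # otherwise the candidate values are discarded
    return best_metrics, best_score
```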

The values of the neural metrics at the end of the evaluation process should ideally provide the optimal solution achievable for the given application problem. Despite physical limitations such as machine precision and architecture, the optimal solution means the attainment of the best quality characteristics, e.g. the highest efficiency and lowest complexity of the network system. However, such an optimal solution has hardly been obtained (or even identified) by any neural network researcher. What most researchers can do is improve the results obtained in similar studies by advocating or applying one or more optimisation techniques.

Figure 1. Evaluation of Neural Metrics (the user supplies initial neural metric values to the network; revised metrics are fed back after each improvement until optimal metrics are reached)

3 Testing Strategy

This section describes the strategy that was used in this research to test the results obtained from the evaluation of neural metrics for a specific problem. This strategy, which was applied throughout the evaluation process, includes the choice of training data, network structure, training algorithm, optimisation techniques, initial values of neural metrics and network parameters, training completion criteria, and the calculation of the complexity of the chosen problem. The training data sets were chosen such that each consisted of all input-output pairs that constituted the problem domain. Once training was completed, the same training sets were used to test the final networks. The values 0.1 and 0.9 were used to represent the binary values 0 and 1 respectively in order to prevent any of the delta weights from attaining the value of 0 [6]. Such a representation also helped to speed up the training process by preventing any of the output values from approaching the asymptotic values of 0 or 1. Since networks with more than one hidden layer are more prone to falling into local minima [7,8], single hidden layer networks were used for all the problems. In addition, no short-cut connection weights were included in the networks, as these violate the assumptions made on the standard BPA and hence the resulting complexity (and the validity of the neural metrics) of a problem [1]. Bias weights were, however, allowed as they behave in the same way as standard connection weights and do not violate any of the assumptions. Standard BPA with periodic weight update was chosen as the training algorithm because of the nonlinear nature of most application problems [9]. The initial weights were randomly initialised between -0.5 and 0.5 for periodic update and between -0.1 and 0.1 for continuous update [10]. The initial value of the learning coefficient was chosen from a range bounded above by 1.00 [11]. The value of the steepness coefficient was fixed in the standard BPA at the value of 1.0 [12]. Further, the initial value of the momentum term was chosen between 0.1 and 1.0 [13].
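A minimal sketch of the network and parameter initialisation implied by this testing strategy is given below, assuming a standard logistic single-hidden-layer network with bias weights; the helper names are illustrative and are not taken from this paper.

```python
import numpy as np

def sigmoid(x, steepness=1.0):
    """Logistic activation; the steepness coefficient is fixed at 1.0 for standard BPA."""
    return 1.0 / (1.0 + np.exp(-steepness * x))

def init_network(n_in, n_hidden, n_out, seed=0):
    """Single hidden layer, bias weights included, no short-cut connections.
    Weights are drawn uniformly from [-0.5, 0.5] for periodic weight update."""
    rng = np.random.default_rng(seed)
    w_hidden = rng.uniform(-0.5, 0.5, size=(n_hidden, n_in + 1))   # +1 for bias
    w_output = rng.uniform(-0.5, 0.5, size=(n_out, n_hidden + 1))  # +1 for bias
    return w_hidden, w_output

def encode_targets(bits):
    """Represent binary 0/1 targets as 0.1/0.9 to keep outputs off the asymptotes."""
    return np.where(np.asarray(bits) == 1, 0.9, 0.1)
```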
There are five commonly used completion criteria [14] in BP based training. The sharp threshold criterion is widely used in binary problems: any output over 0.5 is accepted as 1, while any output below 0.5 is accepted as 0. If the small individual error criterion is used, each output value must be very close to the desired value. The small composite error criterion, on the other hand, requires that the sum of squared errors over all the outputs falls below some fixed value. In the winner-take-all criterion, the value of the correct output unit must be larger than that of any other output unit. The threshold with margin criterion chooses a fixed threshold (e.g. 0.5) and treats as incorrect any value that is too close to this threshold. For problems that use this criterion (e.g. [14,15]), any output value over 0.6 is accepted as 1, any under 0.4 as 0, and any between 0.4 and 0.6 as indeterminate or incorrect.
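The five criteria can be read as simple predicates over the network outputs. The sketch below is an illustrative rendering of the descriptions above; the tolerance values used for the individual and composite error criteria are assumptions, not values given in this paper.

```python
import numpy as np

def sharp_threshold(outputs, targets):
    """Sharp threshold: output > 0.5 counts as 1, otherwise as 0."""
    return np.array_equal(outputs > 0.5, targets > 0.5)

def small_individual_error(outputs, targets, tol=0.1):
    """Each output must lie within tol of its desired value (tol is illustrative)."""
    return np.all(np.abs(outputs - targets) < tol)

def small_composite_error(outputs, targets, limit=0.04):
    """Sum of squared errors over all outputs must fall below a fixed limit."""
    return np.sum((outputs - targets) ** 2) < limit

def winner_take_all(outputs, correct_index):
    """The correct output unit must be strictly larger than every other output unit."""
    return np.argmax(outputs) == correct_index and np.sum(outputs == outputs.max()) == 1

def threshold_with_margin(outputs, targets, low=0.4, high=0.6):
    """Values above 0.6 count as 1, below 0.4 as 0; anything in between is incorrect."""
    decided = (outputs > high) | (outputs < low)
    return np.all(decided) and np.array_equal(outputs > high, targets > 0.5)
```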

3 output value over 0.6 is accepted as 1, any under 0.4 as 0, and any between 0.4 and 0.6 as indeterminate or incorrect. The sharp threshold criterion was used in this research for all the problems studied. This is because it does not require additional computations (e.g. calculating the composite errors in the small composite error criterion and comparisons (e.g. selecting the correct output in the winner-take-all criterion. There are also no indeterminate values (e.g. those in the threshold with margin criterion to be catered for. The evaluation process was repeated a number of times for each application problem until the attainment of a set of values of neural metrics and network parameters that can be used to successfully solve the problem. The average algorithmic complexity required in solving the problem was calculated using neural metric function TOT which is defined in terms of primitive metrics such as the average number of epochs taken, the number of layers, the number of units on each layer, and the number of training pairs in the training data set [1,2]. 4 Benchmark Problems Four types of benchmark application problems were chosen for the evaluation process. They were chosen as the benchmark in this research because they have been widely studied or used by a number of neural researchers. For example, they have been frequently chosen to illustrate the techniques being proposed for BP optimisation and compare the results obtained amongst the various optimisation techniques. The Encoding problem, which has been chosen in a number of studies (e.g. [14,16,17], aims to develop a network which can produce the same output as the given input. Each input or output consists of an n-bit binary number containing a single binary 1. The most common choice for n is 4, 8 or 10 and so the problem is normally referred to as 4-bit, 8-bit or 10- bit Encoding. Another benchmark that is studied or used by a number of neural scientists is the Parity problem which determines if a given binary string contains an odd number of 1 s. The 2-bit, 3-bit and 4-bit Parity problems are most commonly studied (e.g. [17,18,19]. The 2-bit Parity problem is the same as the XOR problem. The other two benchmarks are the Binary Addition and Symmetric problems. The former deals with the addition of two equal length binary numbers and the latter is to determine if a given binary string is symmetric about its centre. The 2-bit Binary Addition and 4-bit Symmetry problems are widely being studied. It must be emphasised that the evaluation results obtained in this study for each application problem are based on standard BPA whilst those obtained by other researchers may be based on non-standard BPA where specific optimisation techniques are used. Thus the results of this study may show a better or poorer efficiency than the others in terms of training complexity and convergence. They are presented in this study in order to show the average values of neural metrics involved in the successful resolution of an application problem via standard BPA. These values can be used to compute the average algorithmic complexity of the problem or be evaluated further in finding a sub-optimal solution for the problem. In other words, the results of this study may be used as a reference point for optimisation. For instance, if the number of training epochs or iterations (i.e. 
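The reference-point idea can be made concrete with a small comparison routine. In the sketch below, the specification values are the 4-bit Encoding figures reported later in Table 1; the candidate experiment values and the choice of metrics compared are illustrative assumptions only.

```python
def is_improvement(specification, experiment, keys=("M", "TOT")):
    """Compare a new experiment against the published specification values.

    Both arguments are dicts of neural metric values.  The experiment counts as
    an improvement over the standard BPA result if it needs fewer epochs and
    fewer total operations; the choice of keys is illustrative, not mandated
    by the paper.
    """
    return all(experiment[k] < specification[k] for k in keys)

spec_4bit_encoding = {"M": 15, "TOT": 12_180}                        # from Table 1
print(is_improvement(spec_4bit_encoding, {"M": 12, "TOT": 10_500}))  # hypothetical run: True
```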
4.1 Encoding Problem

The 4-bit, 8-bit and 10-bit Encoding problems were addressed. Wang et al. [16] conducted simulations on the 4-bit Encoding problem using several BP variant approaches. The average number of epochs they obtained varied between 10 and 200. Fahlman [14] solved the same problem with an average of 16 epochs. The average number of epochs (i.e. neural metric M) obtained in this study (using the standard BPA) is 15, as shown in Table 1. The network was trained by evaluating the values of the network parameters and neural metrics over a number of simulations. A set of these values that can be used to solve the problem is specified in Table 1.
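For reference, the 4-bit Encoding training set can be enumerated directly from the problem definition above. The sketch below is only an illustration; the 0.1/0.9 target representation follows the testing strategy.

```python
import numpy as np

def encoding_dataset(n=4):
    """n-bit Encoding problem: the network must reproduce its one-hot input.

    Returns P = n input-output pairs; inputs are 0/1 vectors with a single 1,
    targets use the 0.1/0.9 representation from the testing strategy.
    """
    inputs = np.eye(n)                           # all one-hot binary strings
    targets = np.where(inputs == 1, 0.9, 0.1)    # same pattern as the input
    return inputs, targets

inputs, targets = encoding_dataset(4)   # n[1] = n[3] = 4, P = 4
```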

As mentioned in the testing strategy, the initial values of the connection weights W were chosen at random between -0.5 and 0.5. The initial value of the learning coefficient ε was chosen within the range given in the testing strategy. The steepness coefficient λ was kept constant at the value of 1.0. The value of the momentum factor α was initially chosen between 0.1 and 1.0. Moreover, the value of m chosen in all training cycles was 3, i.e. the network consisted of 3 layers (1 input layer, 1 hidden layer and 1 output layer). The values of some of the neural metrics were determined from the training data set. These metrics are the number of input units, n[1], the number of output units, n[3], and the number of input patterns per epoch, P. They were the length of the input binary string (i.e. 4), the length of the output binary string (i.e. 4), and the number of input-output pairs in the training set (i.e. 4), respectively. These values were kept unchanged in all training cycles. No scaling of the input and output values was carried out since the training data set contained only binary data. Thus the number of scaling operations, S, was 0. The number of hidden units, n[2], and the average number of epochs required for convergence, M, were determined experimentally. The initial number of hidden units was chosen to be that of the input units (i.e. 4).

Neural Metrics  Description                                  4-bit        8-bit        10-bit
n[1]            Number of input units                        4            8            10
n[2]            Number of hidden units                       3
n[3]            Number of output units                       4            8            10
m               Number of layers                             3            3            3
M               Average number of epochs required            15
P               Number of input patterns per epoch           4            8            10
N[2]            Number of hidden weights                     12
N[3]            Number of output weights                     12
N               Number of weights                            24
S               Number of scaling operations                 0            0            0
ACT             Number of activation function invocations    420          7,280        19,200
ADD             Number of additions and subtractions
MUL             Number of multiplications and divisions
TOT             Total number of operations                   12,180                    1,104,000
Network Parameters
ε               Learning coefficient
λ               Steepness coefficient                        1.0          1.0          1.0
α               Momentum factor
W               Initial hidden and output weights            [-0.5, 0.5]  [-0.5, 0.5]  [-0.5, 0.5]

Table 1. Encoding Problem Specification

The initial average number of epochs was set to 10. As the network was fully connected with no short-cut connections, the number of hidden weights, N[2], was the product of n[1] and n[2] (i.e. 4*3 = 12). The number of output weights, N[3], was the product of n[2] and n[3] (i.e. 3*4 = 12). The total number of weights, N, was therefore 24. The values of the neural metrics n[2] and M and of the network parameters ε and α were then altered after each training cycle until the network converged. It can be seen from Table 1 that M ∈ O(N) and P ∈ O(N), i.e. the average number of epochs required for convergence and the number of input patterns per epoch are both proportional to the total number of weights in the network. These are consistent with the theoretical conditions specified in [1]. The values of the neural metrics ACT, ADD, MUL and TOT shown in Table 1 were computed using the formulae defined in [1]. Since 12,180 < 24^3, the values show that TOT ∈ O(N^3), i.e. the average algorithmic complexity required to solve the 4-bit Encoding problem is proportional to the 3rd power of the number of connection weights in the network. Similarly, the results for the 8-bit and 10-bit Encoding problems shown in Table 1 show that M ∈ O(N), P ∈ O(N) and TOT ∈ O(N^3).
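The weight counts above follow directly from the network shape. The exact ADD, MUL and TOT formulae are those defined in [1] and are not reproduced here, so the sketch below only checks the quantities that the text makes explicit; counting one activation-function call per hidden and output unit per pattern per epoch is an assumption, although it does reproduce the 4-bit ACT figure of 420.

```python
def encoding_counts(n_in=4, n_hidden=3, n_out=4, patterns=4, epochs=15):
    """Weight and activation counts for the 4-bit Encoding network of Table 1."""
    n_hidden_weights = n_in * n_hidden                  # N[2] = 4*3 = 12
    n_output_weights = n_hidden * n_out                 # N[3] = 3*4 = 12
    n_weights = n_hidden_weights + n_output_weights     # N = 24
    # Assumption: one activation call per hidden and output unit, per pattern,
    # per epoch; this gives ACT = 15*4*(3+4) = 420.
    act = epochs * patterns * (n_hidden + n_out)
    return n_weights, act

n_weights, act = encoding_counts()
print(n_weights, act)              # 24 420
print(12_180 < n_weights ** 3)     # TOT = 12,180 < 24^3 = 13,824, hence TOT in O(N^3)
```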
4.2 Parity Problem

The second type of problem studied is that of Parity. Three cases were conducted, namely 2-bit, 3-bit and 4-bit Parity. The 2-bit Parity problem is the same as the XOR problem and has been widely used as a benchmark problem by variant versions of the standard BPA. Jacobs [19] reported that averages of 530 and 250 epochs respectively are required to solve this problem, based on the standard method and the delta-bar-delta rule. The problem was solved by Deleone et al. [17] in an average of 121 epochs. The Quick-Prop algorithm (Fahlman et al. [18]), on the other hand, achieves convergence in 24 epochs on average. The average number of epochs obtained in this study is 60, as shown in Table 2. The variations amongst these results are due to the different network architectures and training algorithms used by different researchers.

Neural Metrics  Description                                  2-bit        3-bit        4-bit
n[1]            Number of input units                        2            3            4
n[2]            Number of hidden units
n[3]            Number of output units                       1            1            1
m               Number of layers                             3            3            3
M               Average number of epochs required            60
P               Number of input patterns per epoch           4            8            16
N[2]            Number of hidden weights
N[3]            Number of output weights
N               Number of weights
S               Number of scaling operations                 0            0            0
ACT             Number of activation function invocations    1,440
ADD             Number of additions and subtractions                                   2,007,780
MUL             Number of multiplications and divisions                                4,046,080
TOT             Total number of operations                                             6,245,700
Network Parameters
ε               Learning coefficient
λ               Steepness coefficient                        1.0          1.0          1.0
α               Momentum factor
W               Initial hidden and output weights            [-0.5, 0.5]  [-0.5, 0.5]  [-0.5, 0.5]

Table 2. Parity Problem Specification

With similar reasoning as for the Encoding problems, the results obtained in this study show that M ∈ O(N^2), P ∈ O(N) and TOT ∈ O(N^4). These are the same for the 3-bit and 4-bit Parity problems.
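As with the Encoding data, the Parity training sets can be enumerated exhaustively. The sketch below is illustrative only and uses the 0.1/0.9 target representation from the testing strategy.

```python
import itertools
import numpy as np

def parity_dataset(n=2):
    """n-bit Parity problem: target is 1 iff the input string has an odd number of 1s.

    The 2-bit case is the XOR problem.  All 2**n input-output pairs are used.
    """
    inputs = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    odd_ones = inputs.sum(axis=1) % 2 == 1
    targets = np.where(odd_ones, 0.9, 0.1).reshape(-1, 1)
    return inputs, targets

inputs, targets = parity_dataset(2)   # P = 4 patterns, n[1] = 2, n[3] = 1
```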

4.3 Binary Addition & Symmetry Problems

The Binary Addition problem deals with the addition of two binary numbers of the same length. The results of the simulations showed that M ∈ O(N^2), P ∈ O(N) and TOT ∈ O(N^4), i.e. this type of problem has an average algorithmic complexity of O(N^4). The Symmetry problem is to tell whether a given binary string is symmetric about its centre. The four-bit Symmetry problem has 16 input-output pairs and has been solved by Fukuoka et al. [20] in 500 to 1,200 epochs, using their network. In this study, the network converged to a solution at the 140th epoch. The results show that M ∈ O(N^2), P ∈ O(N) and TOT ∈ O(N^5), i.e. this type of problem has an average algorithmic complexity of O(N^5).
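Both remaining benchmarks can likewise be enumerated exhaustively. The following sketch is illustrative, with the same 0.1/0.9 target convention; representing the sum as a (bits+1)-bit string, including the carry, is an assumption rather than a detail given in this paper.

```python
import itertools
import numpy as np

def binary_addition_dataset(bits=2):
    """2-bit Binary Addition: inputs are two bit-strings of equal length,
    the target is their (bits+1)-bit sum.  All operand pairs are enumerated."""
    inputs, targets = [], []
    for a, b in itertools.product(range(2 ** bits), repeat=2):
        a_bits = [int(c) for c in format(a, f"0{bits}b")]
        b_bits = [int(c) for c in format(b, f"0{bits}b")]
        sum_bits = [int(c) for c in format(a + b, f"0{bits + 1}b")]
        inputs.append(a_bits + b_bits)
        targets.append(sum_bits)
    return np.array(inputs, float), np.where(np.array(targets) == 1, 0.9, 0.1)

def symmetry_dataset(n=4):
    """n-bit Symmetry: target is 1 iff the bit-string equals its own reverse."""
    inputs = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    symmetric = np.all(inputs == inputs[:, ::-1], axis=1)
    targets = np.where(symmetric, 0.9, 0.1).reshape(-1, 1)
    return inputs, targets

x, t = symmetry_dataset(4)   # 16 input-output pairs, as noted above
```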

5 Generalised Results

The simulations in this paper showed that each type of benchmark problem can be solved by the standard BPA within an average algorithmic complexity that is bounded by a specific power of the total number of connection weights in the network, i.e. the computed metric TOT of each problem is such that TOT ∈ O(N^k) for N connection weights in the network and an integer k such that 3 <= k <= 5. This is the same as what Leung et al. have claimed [1]. The results from each problem are summarised in Table 3, where N is the total number of connection weights in the network.

Problem          Simulations                 Number of Input Patterns   Number of Epochs   Average Algorithmic Complexity
Encoding         4-Bit, 8-Bit, 10-Bit        O(N)                       O(N)               O(N^3)
Parity           2-Bit (XOR), 3-Bit, 4-Bit   O(N)                       O(N^2)             O(N^4)
Binary Addition  2-Bit                       O(N)                       O(N^2)             O(N^4)
Symmetry         4-Bit                       O(N)                       O(N^2)             O(N^5)

Table 3. Specification of Algorithmic Complexity for some Benchmark Problems

It shows that each type of problem has a polynomial-bound solution and thus belongs to the class of feasible problems. It can be seen that each type of problem has the same order of algorithmic complexity regardless of its size. For example, the Encoding problem is O(N^3) whether it is 4-bit, 8-bit or 10-bit. Table 3 can serve as a general reference for the average algorithmic complexity involved in solving a particular type of problem. It also provides a categorisation of the average algorithmic complexity for all types of problems addressed in this study. For instance, both the Parity and Binary Addition problems belong to the same category since they have the same order of magnitude of algorithmic complexity.

6 Conclusion

By evaluating the neural metrics across several types of benchmark problems, it is believed that the results of this research will provide neural users with a reliable suite of problem specifications which detail the values of neural metrics and parameters that may be used to solve the problems successfully. In addition, the results were generalised to provide the average algorithmic complexity of the problems. It is also believed that the research results can be further extended to cover other network paradigms (e.g. unsupervised ANNs) and application problems. However, it must be emphasised that the problem specification proposed in this study may not give the optimal values of the neural metrics and network parameters. All it shows are values that will lead to reliable and successful training of the network for the given problem. Researchers who are interested in the optimal condition may use the values in the specification to further their experiments. They may need to change the values of some of the neural metrics and/or parameters, while keeping the others unchanged, until the specified optimal criteria (e.g. attaining the threshold number of epochs or hidden units) are satisfied.

References:
[1] Leung W.K., and Simpson R., Neural Metrics - Software Metrics in Artificial Neural Networks, Proceedings of the 2000 International Conference of the Knowledge Based Engineering Systems, University of Brighton, Brighton, UK, Aug 2000.
[2] Leung W.K., and Winfield M., A Complexity Analysis of the Backpropagation Algorithm, Proceedings of the WSES 2000 International Conference of Applied and Theoretical Mathematics, Athens, Greece, Dec 2000.
[3] Leung W.K., and Winfield M., Implementing Backpropagation with Momentum, Periodic Weight Update and Gradient Descent on Steepness, Proceedings of the WSES 2000 International Conference of Applied and Theoretical Mathematics, Athens, Greece, Dec 2000.
[4] Leung W.K., On the Complexity of Backpropagation with Momentum and Gradient Descent on Sigmoidal Steepness, Proceedings of the WSES 2001 International Conference of Neural Network and Applications, Tenerife, Spain, Feb 2001.
[5] Prechelt L., A Quantitative Study of Experimental Evaluations of Neural Network Learning Algorithms: Current Research Practice, Neural Networks, Vol 9, 1996.
[6] Freeman J.A., Simulating Neural Networks with Mathematica, Addison-Wesley.
[7] Alder M., Lim S.G., Hadingham P., and Attikiouzel Y., Improving Three Layer Neural Convergence, The University of Western Australia, Australia.
[8] De Villiers J., and Barnard E., Backpropagation Neural Nets with One and Two Hidden Layers, IEEE Trans. on Neural Networks, Vol 4, No. 1, Jan.
[9] Gori M., and Maggini M., Optimal Convergence of On-Line Backpropagation, IEEE Transactions on Neural Networks, Vol 7, No. 1, Jan.
[10] Bartlett P.L., and Downs T., Using Random Weights to Train Multilayer Networks of Hard-Limiting Units, IEEE Trans. on Neural Networks, Vol 3, No. 2.
[11] Salomon R., and van Hemmen J.L., Accelerating Backpropagation through Dynamic Self-Adaptation, Neural Networks, Vol 9, No. 4, 1996.
[12] Moerland P., Thimm G., and Fiesler E., Results on the Steepness in Backpropagation Neural Networks, IDIAP.
[13] Yu X.-H., and Chen G.-A., Efficient Backpropagation Learning Using Optimal Learning Rate and Momentum, Neural Networks, Vol. 10, No. 3, 1997.
[14] Fahlman S.E., An Empirical Study of Learning Speed in Backpropagation Networks, CMU-CS technical report, Carnegie Mellon University.
[15] Lang K.J., and Witbrock M.J., Learning to Tell Two Spirals Apart, Proceedings of the 1988 Connectionist Models Summer School, Carnegie Mellon University.
[16] Wang G.J., and Chen C.C., A Fast Multilayer Neural-Network Training Algorithm Based on the Layer-by-Layer Optimizing Procedures, IEEE Transactions on Neural Networks, Vol 7, No. 3, May.
[17] Deleone R., Capparuccia R., and Merellie E., A Successive Overrelaxation Backpropagation Algorithm for Neural-Network Training, IEEE Trans. on Neural Networks, Vol. 9, No. 3, May 1998.
[18] Fahlman S.E., and Lebiere C., The Cascade-Correlation Learning Architecture, Advances in Neural Information Processing Systems 2, 1990.
[19] Jacobs R.A., Increased Rates of Convergence through Learning Rate Adaptation, Neural Networks, Vol 1, 1988.
[20] Fukuoka Y., Matsuki H., Minamitani H., and Ishida A., A Modified Back-propagation Method to Avoid False Local Minima, Neural Networks, Vol 11, 1998.
