A Compensatory Wavelet Neuron Model


Sinha, M., Gupta, M. M., and Nikiforuk, P. N.
Intelligent Systems Research Laboratory, College of Engineering, University of Saskatchewan, Saskatoon, SK, S7N 5A9, CANADA
sask.usask.ca

Abstract

This paper proposes a compensatory wavelet neuron model based on a wavelet activation function. Here, the basis function comprises both summation and multiplicative functions. It is shown in [1] that, for a spectrum of functional mapping and classification problems, a neural network based on compensatory neurons performs better than one based on ordinary neurons, in terms of both prediction accuracy and computational time. The wavelet neuron, on the other hand, is obtained by modifying an ordinary neuron with non-orthogonal wavelet bases [2]. The performances of neural networks based on the different neuron models are also analyzed in this paper.

1 Introduction

Robust performance and quick convergence of a neural network (NN) of small complexity are vital for its wide application. The architectural complexity, which governs the size of a NN, depends on the number of neurons and connections [3]: the larger the number of neurons and connections, the more complex the architecture. Similarly, the learning complexity depends on the learning algorithm. Any NN designed for real-life applications must not be complex, and it must have adequate functional mapping, classification and generalization capabilities. The present investigation explores the feasibility of constructing higher-order neuron models which may serve as the basis for the formulation of some powerful neural network architectures. Some benchmark classification and functional mapping problems are addressed to validate the neuron and neural network models developed and reported in this paper. It will be shown that even a simple feedforward neural network can predict a chaotic nonlinear time series, contrary to the conclusion drawn by Yamakawa et al. [2].

2 Neuron Models

The neuron model affects the classification and functional mapping power of a neural network. In the following sections we investigate different existing neuron models and formulate some new neuron models to improve upon the capability of the existing ones.

Basic Neuron Model: The neuron model due to McCulloch and Pitts is given by Equations (1), (2) and (3):

u = Σ_{i=0}^{N} w_i x_i    (1)

y = φ(u)    (2)

φ(u) = γ (e^{λu/2} - e^{-λu/2}) / (e^{λu/2} + e^{-λu/2}) = γ tanh(λu/2)    (3)

where λ is a steepness factor and γ is a multiplication factor.

Compensatory Neuron Model: Sinha et al. [1] proposed a compensatory neuron model in which each neuron contains two nonlinearities. Here, we propose a compensatory neuron model with a single nonlinearity, as shown in Fig. 1. This forms the basis for formulating the compensatory neural network architecture (CNNA) shown in Fig. 4. It not only reduces the number of neurons required to solve some of the benchmark classification and mapping problems, but also improves the convergence speed and reduces the computational burden. The compensatory neuron model can be expressed as y = φ(u), where the net input u combines a summation term and a multiplicative (product) term over the N inputs and M product components, and φ(u) is defined in Equation (3).
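As a minimal sketch of these two neuron models: the exact compensatory aggregation (Equations (4)-(5) of the original) is garbled in this transcription, so the product term below is an assumed form, a weighted sum plus a product of weighted inputs, passed through the single nonlinearity φ of Equation (3). All names are illustrative, not the authors' notation.

```python
import numpy as np

def phi(u, lam=1.0, gamma=1.0):
    """Bipolar sigmoid of Eq. (3): gamma * tanh(lam * u / 2)."""
    return gamma * np.tanh(lam * u / 2.0)

def basic_neuron(x, w, lam=1.0, gamma=1.0):
    """McCulloch-Pitts-style neuron, Eqs. (1)-(3); x[0] = 1 carries the bias."""
    u = np.dot(w, x)                  # Eq. (1)
    return phi(u, lam, gamma)         # Eqs. (2)-(3)

def compensatory_neuron(x, w_sum, w_prod, lam=1.0, gamma=1.0):
    """Hypothetical compensatory aggregation: a weighted sum plus a
    multiplicative (product) term, through the single nonlinearity phi."""
    u = np.dot(w_sum, x) + np.prod(w_prod * x)
    return phi(u, lam, gamma)

x = np.array([1.0, 0.5, -0.3])        # leading 1.0 is the bias input
w = np.array([0.1, 0.4, -0.2])
print(basic_neuron(x, w))
print(compensatory_neuron(x, w, np.array([0.2, 0.3, 0.5])))
```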

Figure 1. Compensatory neuron model.

Figure 3. Compensatory wavelet neuron model.

Wavelet Neuron Model: Yamakawa et al. [2] proposed an over-complete system of non-orthogonal smooth wavelet bases in order to approximate a nonlinear function by a smooth function. The shape of each basis is set by a scaling parameter a and a shifting parameter b, the maximum value of b being equal to the corresponding scaling parameter a; the net input of the neuron is

u = Σ_{i=0}^{N} w_i x_i    (4)

where each input x_i has been passed through its wavelet basis. Figure 2 depicts the wavelet neuron model.

Figure 2. Wavelet neuron model.

Compensatory Wavelet Neuron Model: If, in the compensatory neuron model, the sigmoid function is replaced by a wavelet function, the result is a compensatory wavelet neuron model. Figure 3 presents the schematic of this model, which is defined by Equations (6) and (7), where u is defined by Equation (4).

3 Formulation of Various Architectures

The neuron models described in the previous section can be arranged to form neural network architectures for solving different problems. The architecture based on the basic neuron model is referred to as the standard feedforward neural network (STD). The architectures based on compensatory, wavelet, and compensatory wavelet neurons are termed the compensatory neural network architecture (CNNA) (Fig. 4), the wavelet neural network architecture (WNNA) (Fig. 5), and the compensatory wavelet neural network architecture (CWNNA), respectively. A modified form of the STD in which only a summation function is used in the output layer is referred to as the modified standard neural network (MSTD).

Figure 4. Compensatory neural network architecture.
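Since the wavelet basis equations of [2] are lost in this transcription, the following sketch only illustrates the wavelet neuron idea under stated assumptions: a Mexican hat mother wavelet stands in for the compactly supported non-orthogonal bases, and each input is shifted by b and scaled by a before the weighted summation of Equation (4). Names and parameter values are illustrative.

```python
import numpy as np

def mexican_hat(t):
    """A smooth, non-orthogonal mother wavelet (Mexican hat), standing in
    for the compactly supported bases of [2], whose exact form is not
    recoverable from this transcription."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def wavelet_neuron(x, w, a, b):
    """Wavelet neuron sketch: each input passes through a scaled and
    shifted wavelet psi((x_i - b_i) / a_i) before the weighted sum, Eq. (4)."""
    return np.dot(w, mexican_hat((x - b) / a))

x = np.array([0.4, -0.1, 0.7])
w = np.array([0.5, -0.3, 0.2])
a = np.array([1.0, 2.0, 1.5])    # scaling parameters
b = np.array([0.0, 0.5, -0.5])   # shifting parameters (|b| <= a, per the text)
print(wavelet_neuron(x, w, a, b))
```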

4 Learning Algorithm

The learning rules correlate the input and output values of the nodes by adjusting the weights in a neural network. The steepest descent algorithm requires a selection of user-defined parameters, sorted out by trial and error, and is slow in convergence. The problem of poor convergence is combated using various acceleration techniques mentioned in the literature [4][5], but most of these techniques are ad hoc patches. We adopt here a method termed scaled conjugate gradient (SCG) learning [6] and self-scaling scaled conjugate gradient (SSCG) learning [1] to train the various neural network models. If the output layer of the network has a summation function only, then the output-layer weights can be updated using a linear scheme such as matrix inversion, using a singular value decomposition approach, or using the usual backpropagation scheme. If the weights are updated using a backpropagation scheme in conjunction with SCG, this constitutes SSCG learning [1]. It has been shown that this method gives better accuracy for functional mapping and classification problems [1].

Figure 5. Wavelet neural network architecture.

Below we give the equations for calculating the error gradient for the STD and the CNNA. The expressions are for the error function defined by Equation (9),

E(w(n)) = 0.5 Σ_{k=1}^{K} (d_k - o_k)^2    (9)

All the computations are done in off-line mode; the error gradients for all the patterns are obtained by summing and averaging the error gradients for the individual patterns. For the output layer of either architecture, the local gradient of the kth output neuron with the activation of Equation (3) is

δ_k = -(d_k - o_k) λ(γ^2 - o_k^2) / (2γ)

and the hidden-layer and input-layer weight updates follow by backpropagating δ_k through the corresponding layers.
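As a minimal sketch of the batch (off-line) gradient computation just described, assuming a single layer of output neurons with the activation of Equation (3); the local gradient follows from dφ/du = λ(γ² - o²)/(2γ) for o = γ tanh(λu/2). Variable names are illustrative.

```python
import numpy as np

def phi(u, lam=1.0, gamma=1.0):
    """Activation of Eq. (3): gamma * tanh(lam * u / 2)."""
    return gamma * np.tanh(lam * u / 2.0)

def output_delta(d, o, lam=1.0, gamma=1.0):
    """Local gradient of Eq. (9) w.r.t. an output neuron's net input:
    delta_k = -(d_k - o_k) * lam * (gamma^2 - o_k^2) / (2 * gamma)."""
    return -(d - o) * lam * (gamma**2 - o**2) / (2.0 * gamma)

def batch_gradient(X, D, W, lam=1.0, gamma=1.0):
    """Average error gradient over all P patterns (off-line mode) for one
    layer of K output neurons: X is (P, N) inputs, D is (P, K) targets,
    W is (K, N) weights, and O = phi(X W^T)."""
    U = X @ W.T                       # net inputs, shape (P, K)
    O = phi(U, lam, gamma)            # outputs, shape (P, K)
    delta = output_delta(D, O, lam, gamma)
    return delta.T @ X / X.shape[0]   # dE/dW averaged over patterns, (K, N)
```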

Nomenclature

E(n): error at the nth iteration
K: number of neurons in the output layer
N: number of inputs
M: number of neurons in a layer
d_k: kth desired output
o_k: kth actual output
x: input
y: output of a neuron
w: weights
λ: steepness coefficient
γ: multiplication factor
n: iteration number
x_0: bias

5 Simulation Studies

The essential components defining a NN are topology, size, functionality, learning algorithms, training/validation, and implementation/realization. The performance measure involves selecting these features and quantifying, in some form, the success of the selection. The outcome of a performance evaluation depends significantly on the application. The main factors that decide the superiority of neuron models under supervised learning are the computational burden per iteration/epoch, the number of epochs to convergence, the NN size, generalization/test performance, and the benchmark problems used. Here we analyze first some functional mapping problems and then some classification problems.

5.1 Functional Mapping

This may involve mapping from a lower-dimensional to a higher-dimensional system or vice versa. The capability of mapping a function depends upon the neuron model and the architecture of the NN, together with the learning scheme used. In the following sections we first test on z(x, y) = sin(x)·sin(y) and then on a nonlinear time series problem. In all the figures depicting the convergence of a NN, the mean square error (the mean of the error function defined by Equation (9)) is plotted against the number of iterations.

The sin(x)·sin(y) Problem: General function mapping problems have been used by different researchers to test a NN's capabilities, a learning algorithm's efficiency, etc. A popular mapping function is z(x, y) = sin(x)·sin(y). This function becomes more complex as the norm of the input vector (x, y) grows. We generated a training set of 2500 training patterns by varying the values of x and y in the range [0, 5π].

A Chaotic Nonlinear Time Series Problem: Here the training and test sets are generated using the following nonlinear time series equation:

x_{n+1} = 5x_n / (1 + x_n^2) - 0.5 x_n - 0.5 x_{n-1} + 0.5 x_{n-2}    (22)

with the initial values x_0 = 0.2, x_1 = 0.3, and x_2 = 1.0. The data set consists of 3 inputs and 1 output; the 3 inputs comprise 2 delayed values and the present value of the independent variable. Each new pattern is constructed by deleting the oldest past value and adding the newly predicted value. A time series of 101 points was used to construct the training data set, consisting of 99 patterns, as explained above.

5.2 Classification

Any newly developed neuron model and learning algorithm must be tested for its classification capability on benchmark problems. Therefore, to verify the efficacy of the proposed neuron models we examined them on a few classification problems, such as parity and XOR.

XOR Problem: The exclusive-or (XOR) problem is the classic problem requiring hidden units; unlike the other logic operations, XOR is not linearly separable. The NN models were trained on the XOR problem and their performance was analyzed in terms of the number of epochs required and the degree of accuracy achieved.

Parity Problem: The N-input parity problem has been a popular benchmark among NN researchers such as Minsky and Papert [7]. The problem consists in mapping an N-bit binary number to its parity: if the input pattern contains an odd number of 1s, the parity is 1, else it is 0. We used it to determine the properties of the neurons. The training data for these problems can be generated as in the sketch below.
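The following sketch generates the three data sets described above. The input range of the sin(x)·sin(y) grid and the exact form of Equation (22) are partly garbled in the source, so the values here follow the reconstruction given in the text and should be treated as assumptions.

```python
import numpy as np

# sin(x)*sin(y) training grid: 2500 patterns (50 x 50); the input
# range [0, 5*pi] is an assumption reconstructed from the text.
xs = np.linspace(0.0, 5.0 * np.pi, 50)
X1, X2 = np.meshgrid(xs, xs)
sin_inputs = np.column_stack([X1.ravel(), X2.ravel()])          # (2500, 2)
sin_targets = np.sin(sin_inputs[:, 0]) * np.sin(sin_inputs[:, 1])

# Chaotic time series of Eq. (22), as reconstructed above.
def make_series(n_points=101, x0=0.2, x1=0.3, x2=1.0):
    x = [x0, x1, x2]
    for n in range(2, n_points - 1):
        x.append(5.0 * x[n] / (1.0 + x[n] ** 2)
                 - 0.5 * x[n] - 0.5 * x[n - 1] + 0.5 * x[n - 2])
    return np.array(x)

ts = make_series()
# Sliding window: 2 delayed values and the present value predict the next.
ts_inputs = np.array([ts[i:i + 3] for i in range(len(ts) - 3)])
ts_targets = ts[3:]

# N-bit parity data (here N = 4): target is 1 for an odd number of 1s.
N = 4
par_inputs = np.array([[(i >> j) & 1 for j in range(N)] for i in range(2 ** N)])
par_targets = par_inputs.sum(axis=1) % 2
```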
6 Results and Discussion

The simulation results for the NNs based on the different neuron models are presented in Figs. 6 to 10. Fig. 6 shows the error decay during training for the sin(x)·sin(y) problem. Here CNNA-4, STD-3-5-1, MSTD-3-5-1, WNNA-36 and CWNNA-15 refer, respectively, to a CNNA with 4 neurons, an STD with 3, 5 and 1 neurons in the input, hidden and output layers, an MSTD with 3, 5 and 1 neurons in the input, hidden and output layers, a WNNA with 36 neurons (generated out of 8 complete bases), and a CWNNA with 15 neurons (generated out of 5 complete bases).

Figure 6. M.S. error decay during training for the sin(x)·sin(y) problem.

Figure 8. Prediction error for the different NN architectures.

It may be observed that the convergence of the CWNNA and the CNNA was the best, yet the CNNA required only 4 neurons while the CWNNA required 15. This resulted in a computational saving; moreover, fewer parameters (weights) were used to approximate the mapping. The STD and the MSTD have equal numbers of neurons, but the convergence of the MSTD is better, except over a small interval where it was slower. Figs. 7 and 8 present the results for the chaotic time series problem discussed earlier.

Figure 7. M.S. error decay during training of the STD, MSTD, CNNA, WNNA and CWNNA for the time series problem.

Here the legends have the same meaning as explained earlier. The CWNNA-21 was generated out of 6 complete wavelet bases, while the WNNA-28 was generated out of 7 complete wavelet bases. It can be observed that the convergence of the STD was better than that of both wavelet models. This conclusion is contrary to the one drawn earlier by Yamakawa et al. [2], because the STD model does not require as many neurons as were used to solve this problem in [2]. The wavelet models showed early convergence, but the ultimate convergence was better for the other models. It should be noted that the error decay of the wavelet models can be made faster and better, but only at the cost of increased computation. It is obvious from Fig. 7 that the wavelet models were computationally costly, owing to the large number of weights and neurons compared with the other models; a further increase in the number of neurons would make these wavelet models even costlier. The prediction was best for the CNNA, which was of course computationally cheaper than the wavelet models, the STD and the MSTD. Similarly, it can be observed that for the classification problems (XOR and parity) the performance of the compensatory wavelet model was on a par with that of the wavelet model, while the amount of computation involved in the former was less than that in the latter (Figs. 9 and 10). The compensatory neural network performed best while involving the least computation.

Figure 9. M.S. error decay during training for the XOR problem.

Figure 10. M.S. error decay during training for the parity problem.

7 Conclusion

A compensatory and a compensatory wavelet neuron model were proposed in this paper. These models serve as the basis for the formulation of the compensatory neural network and the compensatory wavelet neural network architectures. It is concluded that the compensatory models are much superior to the other models. Moreover, the modified standard neural network (MSTD) is also much superior to the wavelet model.

References

[1] Sinha, M., Kumar, K., and Kalra, P. K., "Some New Neural Network Architectures with Improved Learning Schemes," to appear in Soft Computing, Springer-Verlag.

[2] Yamakawa, T., Uchino, E., and Samatsu, T., "Wavelet Neural Networks Employing Over-Complete Number of Compactly Supported Non-orthogonal Wavelets and Their Applications," in Proceedings of the IEEE International Conference on Neural Networks, June 28-July 2, 1994.

[3] Hassoun, M. H., Fundamentals of Artificial Neural Networks, MIT Press, Cambridge, Massachusetts, 1995.

[4] Jacobs, R. A., "Increased Rates of Convergence Through Learning Rate Adaptation," Neural Networks, Vol. 1, 1988, pp. 295-307.

[5] Hush, D. R., and Salas, J. M., "Improving the Learning Rate of Back-Propagation with the Gradient Reuse Algorithm," in Proceedings of the IEEE International Conference on Neural Networks, Vol. I, 1988.

[6] Møller, M. F., "A Scaled Conjugate Gradient Algorithm for Fast Supervised Learning," Neural Networks, Vol. 6, 1993, pp. 525-533.

[7] Minsky, M. L., and Papert, S., Perceptrons: An Introduction to Computational Geometry, MIT Press, Cambridge, MA.
