Application of neural networks to model catamaran type powerboats


Garron Fish    Mike Dempsey
Claytex Services Ltd
Edmund House, Rugby Road, Leamington Spa, UK

Abstract

Powerboats in operation represent a system consisting of a number of complex components, such as surface propellers, aerodynamics and hydrodynamics, which interact with each other and with the wind and water surface conditions. By measuring the behaviour of the powerboat it is possible to create a mathematical model using system identification methods. A neural network model has been generated which can be used to predict how the powerboat will perform under different driver inputs, for the purpose of optimizing performance.

Keywords: neural networks; system identification; powerboats

1 Introduction

There are many different approaches to mathematical modelling, and the decision about the most appropriate method to use is based on what a priori knowledge is known about the system. Modelica is typically used for white box modelling, which is based on the application of universal laws and principles.

This paper discusses the use of black box modelling techniques that are entirely based on the use of measurement data to generate the mathematical model [1]. In black box modelling, the inputs and outputs of an unknown system are used to create a model that produces an output close to that of the actual system when supplied with the same inputs. Neural network system identification is one method that can be used to create black box models.

In the case of a powerboat, it is convenient to model the system as a black box, as it is not feasible to model its behaviour as a white box model. Figure 1 shows a Class 1 powerboat under race conditions. To create a white box model we would need to create models of the aerodynamic and hydrodynamic effects, and their interaction with the surface propellers and environmental conditions such as the water surface and wind speed and direction.

Figure 1. View of a powerboat during operation. Image courtesy of Victory Team

The aim of generating a mathematical model of the system was to be able to investigate the effect on the boat performance of variations in driver input and boat setup. A neural network system identification method was selected as the most appropriate way to model the system. Neural network techniques can be very effective at identifying complex nonlinear systems when complete model information cannot be obtained [2]. A Modelica library called ANN_SID has been developed to facilitate system identification using neural networks. The library contains different types of neural network and several training methods, and has been applied to study the powerboat system.

2 Neural networks

2.1 An artificial neural network

An artificial neural network is a network of functions called neurons, which are connected by weighted signals (see Figure 2). This architecture is loosely based on a biological neural network. Neural networks can be used for a variety of tasks such as system identification and classification. The ANN_SID library provides neural networks appropriate for system identification tasks.

The Modelica Association, Modelica 2008, March 3rd-4th, 2008
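As a minimal illustration of the neuron just described (a Python sketch, not part of the ANN_SID library): each neuron applies an activation function to the weighted sum of its inputs plus a bias.

```python
import math

def neuron(inputs, weights, bias, activation=math.tanh):
    """One artificial neuron: activation(weighted sum of inputs + bias)."""
    s = sum(u * w for u, w in zip(inputs, weights)) + bias
    return activation(s)

# A neuron with two inputs and illustrative weights
y = neuron([0.5, -1.0], weights=[0.8, 0.3], bias=0.1)  # tanh(0.5*0.8 - 1.0*0.3 + 0.1)
```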

Figure 2. Feedforward neural network: input layer, hidden layer and output layer, with neurons and bias weights

Figure 2 shows a simple diagram of a typical neural network, commonly called a feedforward neural network, which consists of an input layer, a hidden layer and an output layer. In both the hidden and output layers, the weighted sum of the inputs to the layer and the bias are applied to neuron functions. The formulae that describe the feedforward neural network in Figure 2 are shown below. Equation (1) calculates the outputs of the hidden layer and equation (2) calculates the output of the neural network:

o_j(t) = O( Σ_{i=1..n} u_i(t) v_{i,j} + v_{n+1,j} )    (1)

y(t) = Y( Σ_{j=1..m} o_j(t) w_j + w_{m+1} )    (2)

where:
- o_j is the output of hidden neuron j and O is the hidden neuron function
- u_i is the input and v_{i,j} is the weight from input i to hidden neuron j; v_{n+1,j} is the bias weight for the neuron (there are n inputs)
- y is the output of the neural network and Y is the output neuron function
- w_j is the weight from hidden neuron j to the output; w_{m+1} is the bias weight for the neuron (there are m hidden neurons)
- t is the current sample

Within the ANN_SID library the most common neuron functions, such as linear, sigmoid and tanh, are available. The user can also easily add their own neuron function by extending from the neuron function base class and implementing the required function.

2.2 Types of neural networks implemented in the ANN_SID library

The ANN_SID library provides pre-defined neural network models for feedforward neural networks and a form of dynamic recursive neural networks called Neural Network Output Error. Within each type of neural network there can be any number of inputs, hidden neurons, neuron layers and output neurons.

In the dynamic recursive neural network, the output of the neural network can be fed back through delay blocks and used as an input to the neural network, as shown in Figure 3. This type of recursive network is used for modelling dynamic systems where the next output is affected by the previous output values and previous input values.

Figure 3. Dynamic recursive neural network

2.3 Training of the neural network

The weights and biases in a neural network have to be trained so that the output of the neural network approximates the actual system well. The mean square error between the actual output and the predicted output is the cost function determining the measure of the closeness of the approximation of the neural network to the actual system, as in equation (3):

MSE = (1/2N) Σ_{t=1..N} (z(t) - y(t))²    (3)

where:
- MSE is the mean square error
- z is the target value (output from the actual system)
- y is the output of the neural network
- N is the total number of target values

The process of minimising the cost function of the neural network is called training. In the ANN_SID library both backpropagation and Levenberg-Marquardt training methods are available. These methods have been implemented in Modelica in both continuous and discrete forms.

The choice of method to train a neural network is influenced by the size of the neural network and the amount of data being used to train the network. The continuous training methods have the advantage that the gradient, which is the rate of change of the weights, is accurate everywhere, not only at the linearization points as with discrete methods. This can result in the search method travelling along the bottom of valleys of the cost function and not oscillating along valley walls. The continuous method interacts with the variable step solvers to determine the step size: if the gradient changes suddenly then the solver will reduce the step size to deal with this efficiently. The disadvantage of the continuous method is that it generates huge numbers of equations due to the way that Dymola expands the for loops used in the model. By using Modelica functions and external C functions these problems can be minimized through the reuse of code sections. Data storage and the manipulation of large matrices in Dymola can also generate problems with large neural networks if the continuous training methods are used. The discrete methods have been implemented to overcome these issues.

Backpropagation

It is possible to train a neural network by calculating the gradient of the cost function with respect to the weights, and to then adjust the weights in the appropriate direction to reduce the cost function. This method is called backpropagation and can be slow to converge to a solution. Appendix A has further information about how the gradient is calculated.

Levenberg-Marquardt

The Levenberg-Marquardt training method generally requires fewer iterations than the backpropagation method to train a neural network. However the LM method is more complex and requires more computation and memory to perform each iteration.
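The feedforward equations (1)-(2) and a plain batch backpropagation update can be sketched as follows. This is an illustrative Python fragment under simplifying assumptions (one output, tanh hidden neurons, linear output neuron so that Y' = 1), not the ANN_SID implementation:

```python
import math, random

def forward(u, V, w):
    """Feedforward net of equations (1)-(2): tanh hidden neurons, linear output.
    V[i][j]: weight input i -> hidden j (last row of V holds the biases v_{n+1,j});
    w[j]: weight hidden j -> output (w[-1] is the bias w_{m+1})."""
    m = len(w) - 1
    o = [math.tanh(sum(ui * V[i][j] for i, ui in enumerate(u)) + V[-1][j])
         for j in range(m)]
    y = sum(o[j] * w[j] for j in range(m)) + w[-1]
    return o, y

def mse(samples, V, w):
    return sum((z - forward(u, V, w)[1]) ** 2 for u, z in samples) / (2 * len(samples))

def backprop_step(samples, V, w, eta=0.2):
    """One batch gradient-descent step on MSE = 1/(2N) * sum (z - y)^2."""
    N, m = len(samples), len(w) - 1
    gV = [[0.0] * m for _ in V]
    gw = [0.0] * len(w)
    for u, z in samples:
        o, y = forward(u, V, w)
        eps = z - y                                # epsilon = z - y
        for j in range(m):
            gw[j] += -eps * o[j] / N               # dMSE/dw_j
            d = -eps * w[j] * (1 - o[j] ** 2) / N  # chain rule through tanh'(b_j)
            for i, ui in enumerate(u):
                gV[i][j] += d * ui                 # dMSE/dv_{i,j}
            gV[-1][j] += d                         # bias weight v_{n+1,j}
        gw[-1] += -eps / N                         # bias weight w_{m+1}
    for j in range(m):
        for i in range(len(V)):
            V[i][j] -= eta * gV[i][j]
        w[j] -= eta * gw[j]
    w[-1] -= eta * gw[-1]

# Fit a toy static system z = sin(u): 1 input, 5 hidden neurons
random.seed(0)
data = [([x / 10.0], math.sin(x / 10.0)) for x in range(-20, 21)]
V = [[random.uniform(-0.5, 0.5) for _ in range(5)] for _ in range(2)]  # input row + bias row
w = [random.uniform(-0.5, 0.5) for _ in range(6)]                      # 5 hidden + bias
before = mse(data, V, w)
for _ in range(300):
    backprop_step(data, V, w)
after = mse(data, V, w)
```

As the text notes, plain gradient descent of this kind can need many iterations; the Levenberg-Marquardt method trades memory and per-iteration cost for faster convergence.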
The rules used to calculate the weights are described in Appendix B.

Recursive method

In this method the partial derivative of the neural network with respect to the weights is required. From this partial derivative the gradient and Hessian matrices can be calculated. Once we have determined these matrices, either the backpropagation or Levenberg-Marquardt training methods can be used to minimise the cost function.

Modelica provides semantics to define partial derivatives and Dymola is able to utilise these semantics to generate the symbolic derivative of functions. Example 1 shows how partial derivatives are defined in Modelica. This method was used to help define a function to calculate the partial derivatives of the neural network with respect to the weights.

[Example: The specific enthalpy can be computed from a Gibbs-function as follows:

function Gibbs
  input Real p, T;
  output Real g;
algorithm
  ...
end Gibbs;

function Gibbs_T = der(Gibbs, T);

function specificEnthalpy
  input Real p, T;
  output Real h;
algorithm
  h := Gibbs(p, T) - T*Gibbs_T(p, T);
end specificEnthalpy;
]

Example 1. An example of Modelica code for the generation of the partial derivative of a function. Quoted from the Modelica 3.0 Specification [4]

ANN_SID implementation

An example of training a neural network using the ANN_SID library is shown in Figure 4. The training methods are implemented in the replaceable training component and the user simply selects the required method.

Figure 4. ANN_SID training performed in a model
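The Levenberg-Marquardt iteration described above (and derived in Appendix B) can be sketched numerically. This is an illustrative Python fragment, not ANN_SID code: it uses the Gauss-Newton approximation R = JᵀJ of the Hessian and the damped update [R + λI] Δθ = -G on a toy two-parameter model; the model and parameter names are our assumptions.

```python
import math

def residuals(theta, data):
    """Residuals r_t = z_t - a*exp(b*u_t) of a toy model with parameters theta = (a, b)."""
    a, b = theta
    return [z - a * math.exp(b * u) for u, z in data]

def cost(theta, data):
    return 0.5 * sum(r * r for r in residuals(theta, data))

def lm_step(theta, data, lam):
    """One Levenberg-Marquardt iteration: solve [R + lambda*I] dtheta = -G,
    with R = J^T J (Gauss-Newton Hessian approximation) and G = J^T r."""
    a, b = theta
    r = residuals(theta, data)
    # Jacobian of the residuals w.r.t. (a, b)
    J = [(-math.exp(b * u), -a * u * math.exp(b * u)) for u, _ in data]
    G = [sum(Ji[k] * ri for Ji, ri in zip(J, r)) for k in range(2)]
    R = [[sum(Ji[p] * Ji[q] for Ji in J) for q in range(2)] for p in range(2)]
    R[0][0] += lam
    R[1][1] += lam
    # Solve the damped 2x2 system by Cramer's rule
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    d0 = (-G[0] * R[1][1] - R[0][1] * (-G[1])) / det
    d1 = (R[0][0] * (-G[1]) - R[1][0] * (-G[0])) / det
    return (a + d0, b + d1)

# Identify a*exp(b*u) from noise-free samples of 2*exp(0.5*u)
data = [(u / 4.0, 2.0 * math.exp(0.5 * u / 4.0)) for u in range(9)]
theta, lam = (1.0, 1.0), 1.0
for _ in range(50):
    trial = lm_step(theta, data, lam)
    if cost(trial, data) < cost(theta, data):
        theta, lam = trial, lam / 2   # good step: accept and grow the search region
    else:
        lam *= 2                      # poor step: reject and shrink the search region
```

The accept/reject rule for λ mirrors the trust-region update described in Appendix B.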

2.4 Improving the neural network training

When training a neural network it is important to have confidence that the neural network will approximate well with inputs that are not part of the training data. This ability is known as generalisation [5]. One way to investigate this is to divide the data into two sets: one that is used to train the neural network (training data), and one that is used to test the generalisation of the neural network (test data).

Generalisation is likely to be improved by reducing the number of weights used in the neural network [3]. The ANN_SID library supports both weight decay and pruning methods to reduce the number of weights and improve generalisation. Two pruning methods are available in the library, known as Optimal Brain Surgery and Optimal Brain Damage. These algorithms determine which weights to remove from the neural network. The remaining weights are then updated to reduce the errors introduced by removing the weights; for further details refer to [3].

Weight decay is another approach to removing weights from a neural network. In this method a penalty proportional to the magnitude of the weights is added to the cost function (see [3] for further details). All cost functions should contain a measure of the closeness of the neural network outputs to the desired output. Adding a weight penalty to the cost function generates a trade-off between reducing the magnitude of the weights and reducing the closeness measure. As a result, the weights that have little effect on improving the closeness measure will now be reduced in magnitude.

3 Powerboat operation

The type of powerboat that has been modelled using the ANN_SID library is a Victory Team Class 1 offshore powerboat, as shown in Figures 1 and 5. These boats have a catamaran hull with two engines and a central rudder. Each engine drives a height-adjustable, steerable propeller. The boats are operated by two crewmembers: a throttle man and a driver.
Between them they have 5 controls, which are:

- A steering wheel that directly controls the rudder angle. The steered angle for the propellers is also controlled by the steering wheel angle.
- The propeller heights, which are set using two rocker switches. These control the trim pistons that move the propellers vertically.
- The throttle positions, which are set using the throttle levers for the left and right engine.

Figure 5. Rear view of a powerboat, showing a trim piston and the rudder. Image courtesy of Victory Team

As the boat accelerates it begins to plane and travels higher above the water, i.e. less of the hull is below the water line. As the boat lifts out of the water, the propellers are lowered (trimmed down) to control their depth in the water. The trim height, or propeller depth, also affects the pitch angle at which the boat travels. In general, the lower the depth of the propellers, the lower the pitch. If the boat is travelling at a pitch angle that is too high for the speed it is doing, it will flip over (a blowover). If the pitch angle of the boat is too low, the result will be a larger surface area of the boat in the water and thus an increase in drag.

When cornering, the catamaran powerboat rolls to the inside of the corner due to the asymmetrical hull design. If the cornering is too severe for the current speed, the boat will begin to roll to the outside of the corner, and will roll over if the drivers do not take correcting action. A typical cornering manoeuvre requires the throttle man to reduce the throttle to slow the boat to a controllable speed before the corner, and the driver to steer the boat along the course, ensuring that the steering angle is not too steep for the current speed.

Figure 6. A racecourse. The powerboats must travel along a course defined by buoys. There are three different types of laps: the start lap (in red), the short lap (in black) and the long lap (in green). Supplied courtesy of IOTA.

A race involves the boats travelling around a course defined by buoys laid out in the water (see Figure 6). The drivers try to select a good trim height to maximize acceleration while maintaining stability. The drivers also try to find good throttle, rudder and trim positions for cornering that result in fast and stable cornering. The neural network can investigate different possible driver inputs and predict their effects on boat performance over a lap.

4 Powerboat model

4.1 Defining the neural network

To model the powerboat using neural network techniques, a significant amount of data is used to characterise the system. During races and testing sessions the boats are fitted with a data logger that records the data required to train the neural network.

The input data required is:
- The engine throttle positions
- The rudder angle
- The trim height of both propellers

The target output data required is:
- The engine speed of both engines
- The boat speed
- The yaw rate of the boat

Using this input and output data we can train a neural network to represent the powerboat system, and then use the neural network to investigate the system performance with different inputs. As the boat is an example of a dynamic system, the dynamic recursive neural network was chosen.

The model has been generated from data recorded by Victory Team from their racing boat number 77 during the 2007 Arendal race. During this race the boat completed 12 laps of the course. The measured data was filtered using Bessel and Chebyshev filters to reduce the amount of noise and high-frequency components in the data.
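For a dynamic recursive (output-error) model of this kind, the logged channels are typically arranged into regressor vectors of past outputs and past inputs, as in Figure 3. The following sketch shows how such training pairs could be assembled; the channel names and lag counts are our illustrative assumptions, not taken from the paper:

```python
def build_regressors(inputs, outputs, n_in_lags=2, n_out_lags=2):
    """Arrange logged input/output series into (regressor, target) pairs.
    inputs[t] and outputs[t] are lists of channel values at sample t.
    Each regressor holds past outputs (fed back) and past inputs."""
    pairs = []
    start = max(n_in_lags, n_out_lags)
    for t in range(start, len(outputs)):
        regressor = []
        for lag in range(1, n_out_lags + 1):
            regressor.extend(outputs[t - lag])   # past outputs
        for lag in range(1, n_in_lags + 1):
            regressor.extend(inputs[t - lag])    # past inputs
        pairs.append((regressor, outputs[t]))
    return pairs

# Hypothetical input channels: [throttle_L, throttle_R, rudder, trim_L, trim_R]
u = [[0.1 * t, 0.1 * t, 0.0, 0.5, 0.5] for t in range(6)]
# Hypothetical target channels: [rpm_L, rpm_R, speed, yaw_rate]
z = [[100 + t, 100 + t, 10 + t, 0.0] for t in range(6)]
pairs = build_regressors(u, z)
```

During training the past-output slots are filled with the network's own predictions (the output-error structure); during the one-step-ahead pre-training described below they hold the measured outputs.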
The filtered data was then re-sampled from 100 Hz to 1.7 Hz to reduce the number of duplicate data points and to decrease the amount of time required to train the neural network. Finally the data was divided into training and test data sets.

4.2 Training the neural network

Training a neural network for such a complex system is done in a number of steps. When first training a dynamic recursive neural network it is not known how many past outputs and inputs will result in the model giving a good representation of the powerboat system. It is also not known how many neurons will be required, or which neuron functions should be used. These can only be determined by trying different configurations to find the best setup.

The first step in training this type of neural network is to train it to only predict the next output value from the previous data values. The weights from this training are then used as the initial weights for the recursive training. The recursive training algorithm was then used with the Levenberg-Marquardt method to train the neural network. To improve the generalization, weight decay was used.

4.3 Correlation results

After training, the MSE was low for both the training and the test data sets. This means that the neural network has been trained successfully and is able to accurately predict the performance of the powerboat, as shown in Figure 7. In Figure 7, the recorded driver inputs have been fed in to the trained neural network and the outputs for boat speed, engine speed and the yaw rate of the boat are compared to the measured data.

Figure 7. Comparison of the simulated neural network with recorded data for a single lap of the course.

Overall the results show that there is very good correlation between the neural network and the real powerboat. There are some small deviations, which could be due to a number of different factors, such as swell and wind conditions along the course, that are not accounted for in the neural network.

4.4 Optimisation of the trim strategy

Section 3 describes how the propeller height (trim height) affects the performance of the powerboat. By using the neural network it is possible to determine what the optimum trimming strategy is for the powerboat. The model shown in Figure 8 uses the trained neural network to simulate the powerboat accelerating from an initial speed up to its maximum speed.

Figure 8. Acceleration model test. The throttle position is set to 100% and the boat is travelling in a straight line.

Figure 9 compares an example trimming strategy extracted from the race data and the optimized trim strategy that has been determined with the use of the neural network. Using the optimised trimming strategy the powerboat would take 1 s less to travel along a 2 km straight than using the example trim strategy.

Figure 9. Comparison of a simulated trim strategy with a real trim strategy. The simulated optimal results (Simulated Speed and Simulated Trim) assume the boat is travelling perfectly straight. The Example Trim strategy was taken from the race data and applied to the simulator.

The neural network used in the model can only be expected to accurately model an operating region if this region was sufficiently excited during the data recording stage. In Figure 10 the histogram data identifies what trim position data is available for the operating region of the simulated result. The optimum trim strategy is limited by the availability of data (see Figure 10). The upper bound on the trim data is probably due to driver caution because of the risk of a blowover in this operating region.

Figure 10. Histogram plot of recorded trim position at the operating state. The optimised trim position is plotted over the histogram as white circles.
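The trim-strategy search in Section 4.4 amounts to choosing, at each speed, the trim input for which the trained model predicts the greatest acceleration. A heavily simplified sketch of that idea follows; the function `predicted_accel` is a toy stand-in of our own invention, not the trained powerboat model:

```python
def predicted_accel(speed, trim):
    """Toy stand-in for the trained neural network: predicted acceleration
    as a function of boat speed and trim position (illustrative only)."""
    best_trim = 0.2 + 0.006 * speed          # pretend the optimum deepens with speed
    return max(0.0, 5.0 - 0.04 * speed) - 8.0 * (trim - best_trim) ** 2

def optimise_trim(speed, candidates):
    """Pick the candidate trim with the highest predicted acceleration."""
    return max(candidates, key=lambda trim: predicted_accel(speed, trim))

# Sweep speeds over a trim grid; the chosen trim rises as the boat accelerates
candidates = [round(0.05 * i, 2) for i in range(21)]   # trim grid 0.0 .. 1.0
strategy = {speed: optimise_trim(speed, candidates) for speed in (20, 60, 100)}
```

In practice such a search would only be trusted inside the operating region covered by the recorded data, as Figure 10 illustrates.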

5 Conclusions

A library called ANN_SID has been developed for the development and training of neural networks for system identification. This library was used to generate a black box model of a powerboat, and this model was then used to determine an improved trimming strategy that should deliver improvements in boat performance.

Acknowledgments

Victory Team have kindly provided the photographs used in this paper and the powerboat data used to carry out this study. IOTA have kindly provided the course map used in Figure 6.

References

[1] Estrada-Flores S., et al. Development and validation of grey-box models for refrigeration applications: a review of key concepts. International Journal of Refrigeration.
[2] Wen Yu. Nonlinear system identification using discrete-time recurrent neural networks with stable learning algorithms. Information Sciences.
[3] M. Nørgaard, O. Ravn, N.K. Poulsen and L.K. Hansen. Neural Networks for Modelling and Control of Dynamic Systems.
[4] Modelica 3.0 Specification, www.modelica.org
[5] Neural Network FAQ, part 3 of 7, ftp://ftp.sas.com/pub/neural/FAQ3.html

APPENDIX A: Backpropagation algorithm

By calculating the gradient of the cost function (see Section 2.3) it is possible to update the weights in a way that will reduce the cost function. The example below shows how backpropagation would be used to update a feedforward neural network using the MSE as the cost function.

The following equations describe the neural network:

o_j(t) = O( Σ_{i=1..n} u_i(t) v_{i,j} + v_{n+1,j} )    (1)

y(t) = Y( Σ_{j=1..m} o_j(t) w_j + w_{m+1} )    (2)

The cost function is the mean square error, i.e. the MSE:

V = (1/2N) Σ_{t=1..N} (z(t) - y(t))²    (3)

The calculation of the partial derivative of the MSE with respect to an output weight w_j follows. Substituting (2) into (3) and letting ε = z - y:

∂V/∂w_j = -(1/N) Σ_t ε(t) Y'(a(t)) o_j(t)

where a = Σ_{j=1..m} o_j w_j + w_{m+1}.

The calculation of the partial derivative of the cost function with respect to a hidden weight v_{i,j} follows (note that ε and y are vectors over the samples):

∂V/∂v_{i,j} = -(1/N) Σ_t ε(t) Y'(a(t)) w_j O'(b_j(t)) u_i(t)

where b_j = Σ_{i=1..n} u_i v_{i,j} + v_{n+1,j}.

The discrete weight update method is:

w_j := w_j - η ∂V/∂w_j
v_{i,j} := v_{i,j} - η ∂V/∂v_{i,j}

By choosing η sufficiently small, the cost function can be decreased at each iterate. The continuous method uses the gradient calculated above to update the existing weights continuously.

APPENDIX B: Levenberg-Marquardt algorithm

In the backpropagation algorithm the search direction is calculated from the first order Taylor approximation of the cost function. The Levenberg-Marquardt algorithm makes use of the second order Taylor approximation of the cost function to update the weights. The second order approximation of the cost function follows:

V(θ) ≈ V(θ*) + (θ - θ*)ᵀ V'(θ*) + ½ (θ - θ*)ᵀ V''(θ*) (θ - θ*)

V(θ) ≈ V(θ*) + (θ - θ*)ᵀ G + ½ (θ - θ*)ᵀ H (θ - θ*)

where:
- θ represents all the weights in the neural network
- θ* are the weights at which the Taylor approximation is made
- V' is dV/dθ and equal to the gradient G
- V'' is d²V/dθ² and equal to the Hessian H

In the Levenberg-Marquardt method a further approximation is made; the Hessian is approximated by the following equation:

R(θ) = (dy/dθ) (dy/dθ)ᵀ

This is valid when the MSE is the cost function. Let the approximation of the cost function be:

L(θ) = V(θ*) + (θ - θ*)ᵀ G + ½ (θ - θ*)ᵀ R (θ - θ*)    (4)

This cost function is minimised using an iterative process, where the next weights are limited to a region around the current weights, see (5). Limiting the range of the search is often effective: if the minimum of L is far from the current iterate θ_i, a poor search direction may be obtained [3].

θ_{i+1} = arg min_θ L(θ)  subject to  |θ_{i+1} - θ_i| ≤ δ_i    (5)

λ_i has a monotonic relationship with δ_i [3], where increasing λ_i decreases δ_i and vice versa. The weights are updated using the following rule:

[R(θ_i) + λ_i I] Δθ = -G(θ_i)

where Δθ = θ_{i+1} - θ_i. The update rule for the λ value follows:

1. If the value of L_i approximates the MSE well, then λ_{i+1} = λ_i / 2, thus increasing the search region.
2. If the value of L_i does not approximate the MSE well, then λ_{i+1} = 2 λ_i, thus decreasing the search region.
3. Leave λ unchanged if neither threshold 1 nor 2 is met.

For a more detailed explanation of the update rule for λ, refer to [3].
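The gradient expressions in Appendix A can be verified numerically: the analytic ∂V/∂w computed by the chain rule should match a finite-difference estimate of the same derivative. A small self-contained check of this (our illustration, for a single sample with tanh hidden neurons and a linear output neuron so that Y' = 1):

```python
import math

def cost_and_grad(u, z, v, w):
    """V = 1/2 * (z - y)^2 for a single sample; returns V, dV/dw_j and dV/dv_{i,j}.
    Hidden function O = tanh (so O' = 1 - o^2), output function Y = identity (Y' = 1)."""
    n, m = len(u), len(w) - 1
    b = [sum(u[i] * v[i][j] for i in range(n)) + v[n][j] for j in range(m)]
    o = [math.tanh(bj) for bj in b]
    y = sum(o[j] * w[j] for j in range(m)) + w[m]
    eps = z - y
    V = 0.5 * eps ** 2
    dw = [-eps * o[j] for j in range(m)] + [-eps]              # dV/dw_j, dV/dw_{m+1}
    dv = [[-eps * w[j] * (1 - o[j] ** 2) * (u[i] if i < n else 1.0)
           for j in range(m)] for i in range(n + 1)]           # last row: bias v_{n+1,j}
    return V, dw, dv

# Illustrative weights: 2 inputs, 2 hidden neurons
u, z = [0.3, -0.7], 0.5
v = [[0.2, -0.1], [0.4, 0.3], [0.1, -0.2]]   # two input rows + one bias row
w = [0.5, -0.4, 0.1]
V0, dw, dv = cost_and_grad(u, z, v, w)

# Finite-difference estimate of dV/dw_0 for comparison
h = 1e-6
V1, _, _ = cost_and_grad(u, z, v, [w[0] + h] + w[1:])
numeric = (V1 - V0) / h
```

Agreement between `numeric` and `dw[0]` to several decimal places confirms the analytic formula; the same perturbation test can be applied to any entry of `dv`.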


More information

A Simplified Vehicle and Driver Model for Vehicle Systems Development

A Simplified Vehicle and Driver Model for Vehicle Systems Development A Simplified Vehicle and Driver Model for Vehicle Systems Development Martin Bayliss Cranfield University School of Engineering Bedfordshire MK43 0AL UK Abstract For the purposes of vehicle systems controller

More information

Gradient Compared l p -LMS Algorithms for Sparse System Identification

Gradient Compared l p -LMS Algorithms for Sparse System Identification Gradient Comared l -LMS Algorithms for Sarse System Identification Yong Feng,, Jiasong Wu, Rui Zeng, Limin Luo, Huazhong Shu. School of Biological Science and Medical Engineering, Southeast University,

More information

Linear models. Subhransu Maji. CMPSCI 689: Machine Learning. 24 February February 2015

Linear models. Subhransu Maji. CMPSCI 689: Machine Learning. 24 February February 2015 Linear models Subhransu Maji CMPSCI 689: Machine Learning 24 February 2015 26 February 2015 Overvie Linear models Perceptron: model and learning algorithm combined as one Is there a better ay to learn

More information

Box-Cox Transformation for Simple Linear Regression

Box-Cox Transformation for Simple Linear Regression Chapter 192 Box-Cox Transformation for Simple Linear Regression Introduction This procedure finds the appropriate Box-Cox power transformation (1964) for a dataset containing a pair of variables that are

More information

CHAPTER VI BACK PROPAGATION ALGORITHM

CHAPTER VI BACK PROPAGATION ALGORITHM 6.1 Introduction CHAPTER VI BACK PROPAGATION ALGORITHM In the previous chapter, we analysed that multiple layer perceptrons are effectively applied to handle tricky problems if trained with a vastly accepted

More information

23 Single-Slit Diffraction

23 Single-Slit Diffraction 23 Single-Slit Diffraction Single-slit diffraction is another interference phenomenon. If, instead of creating a mask ith to slits, e create a mask ith one slit, and then illuminate it, e find, under certain

More information

Supervised Learning in Neural Networks (Part 2)

Supervised Learning in Neural Networks (Part 2) Supervised Learning in Neural Networks (Part 2) Multilayer neural networks (back-propagation training algorithm) The input signals are propagated in a forward direction on a layer-bylayer basis. Learning

More information

Multi-Backpropagation Network In Medical Diagnosis

Multi-Backpropagation Network In Medical Diagnosis Multi-Bacpropagation Netor In Medical Diagnosis Wan Hussain Wan Isha, Fadilah Sira, Abu Talib Othman School of Information Technology, Uniersiti Utara Malaysia, 0600 Sinto, Kedah, MALAYSIA Email: {hussain;

More information

An Analysis of Interference as a Source for Diffraction

An Analysis of Interference as a Source for Diffraction J. Electromagnetic Analysis & Applications, 00,, 60-606 doi:0.436/jemaa.00.0079 Published Online October 00 (http://.scirp.org/journal/jemaa) 60 An Analysis of Interference as a Source for Diffraction

More information

In this assignment, we investigated the use of neural networks for supervised classification

In this assignment, we investigated the use of neural networks for supervised classification Paul Couchman Fabien Imbault Ronan Tigreat Gorka Urchegui Tellechea Classification assignment (group 6) Image processing MSc Embedded Systems March 2003 Classification includes a broad range of decision-theoric

More information

Project 1: Creating and Using Multiple Artboards

Project 1: Creating and Using Multiple Artboards E00ILCS.qxp 3/19/2010 1:0 AM Page 7 Workshops Introduction The Workshop is all about being creative and thinking outside of the box. These orkshops ill help your right-brain soar, hile making your left-brain

More information

Neural Networks. CE-725: Statistical Pattern Recognition Sharif University of Technology Spring Soleymani

Neural Networks. CE-725: Statistical Pattern Recognition Sharif University of Technology Spring Soleymani Neural Networks CE-725: Statistical Pattern Recognition Sharif University of Technology Spring 2013 Soleymani Outline Biological and artificial neural networks Feed-forward neural networks Single layer

More information

Gdansk University of Technology Faculty of Electrical and Control Engineering Department of Control Systems Engineering

Gdansk University of Technology Faculty of Electrical and Control Engineering Department of Control Systems Engineering Gdansk University of Technology Faculty of Electrical and Control Engineering Department of Control Systems Engineering Artificial Intelligence Methods Neuron, neural layer, neural netorks - surface of

More information

Development of Redundant Robot Simulator for Avoiding Arbitrary Obstacles Based on Semi-Analytical Method of Solving Inverse Kinematics

Development of Redundant Robot Simulator for Avoiding Arbitrary Obstacles Based on Semi-Analytical Method of Solving Inverse Kinematics Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems San Diego, CA, USA, Oct 29 - Nov 2, 2007 ThC2.1 Development of Redundant Robot Simulator for Avoiding Arbitrary

More information

Plug-in Board Editor for PLG150-DR/PLG150-PC

Plug-in Board Editor for PLG150-DR/PLG150-PC Plug-in Board Editor for PLG150-DR/PLG150-PC Oner s Manual Contents Introduction.........................................2 Starting Up.........................................3 Assigning the PLG150-DR/PLG150-PC

More information

Experimental Data and Training

Experimental Data and Training Modeling and Control of Dynamic Systems Experimental Data and Training Mihkel Pajusalu Alo Peets Tartu, 2008 1 Overview Experimental data Designing input signal Preparing data for modeling Training Criterion

More information

THE BENEFIT OF ANSA TOOLS IN THE DALLARA CFD PROCESS. Simona Invernizzi, Dallara Engineering, Italy,

THE BENEFIT OF ANSA TOOLS IN THE DALLARA CFD PROCESS. Simona Invernizzi, Dallara Engineering, Italy, THE BENEFIT OF ANSA TOOLS IN THE DALLARA CFD PROCESS Simona Invernizzi, Dallara Engineering, Italy, KEYWORDS automatic tools, batch mesh, DFM, morphing, ride height maps ABSTRACT In the last few years,

More information

Keywords: Real-Life Images, Cartoon Images, HSV and RGB Features, K-Means Applications, Image Classification. Features

Keywords: Real-Life Images, Cartoon Images, HSV and RGB Features, K-Means Applications, Image Classification. Features Volume 5, Issue 4, 2015 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Softare Engineering Research Paper Available online at:.ijarcsse.com Distinguishing Images from

More information

WAN_0218. Implementing ReTune with Wolfson Audio CODECs INTRODUCTION

WAN_0218. Implementing ReTune with Wolfson Audio CODECs INTRODUCTION Implementing ReTune ith Wolfson Audio CODECs INTRODUCTION ReTune TM is a technology for compensating for deficiencies in the frequency responses of loudspeakers and microphones and the housing they are

More information

A REAL-TIME REGISTRATION METHOD OF AUGMENTED REALITY BASED ON SURF AND OPTICAL FLOW

A REAL-TIME REGISTRATION METHOD OF AUGMENTED REALITY BASED ON SURF AND OPTICAL FLOW A REAL-TIME REGISTRATION METHOD OF AUGMENTED REALITY BASED ON SURF AND OPTICAL FLOW HONGBO LI, MING QI AND 3 YU WU,, 3 Institute of Web Intelligence, Chongqing University of Posts and Telecommunications,

More information

Machine Learning Classifiers and Boosting

Machine Learning Classifiers and Boosting Machine Learning Classifiers and Boosting Reading Ch 18.6-18.12, 20.1-20.3.2 Outline Different types of learning problems Different types of learning algorithms Supervised learning Decision trees Naïve

More information

WAN_0214. External Component Requirements for Ground Referenced Outputs INTRODUCTION RECOMMENDED EXTERNAL COMPONENTS

WAN_0214. External Component Requirements for Ground Referenced Outputs INTRODUCTION RECOMMENDED EXTERNAL COMPONENTS External Component Requirements for Ground Referenced Outputs WAN_0214 INTRODUCTION Many Wolfson audio devices no have ground referenced headphone and line outputs, rather than outputs referenced to VMID

More information

Neural Network Neurons

Neural Network Neurons Neural Networks Neural Network Neurons 1 Receives n inputs (plus a bias term) Multiplies each input by its weight Applies activation function to the sum of results Outputs result Activation Functions Given

More information

Indexing Methods for Moving Object Databases: Games and Other Applications

Indexing Methods for Moving Object Databases: Games and Other Applications Indexing Methods for Moving Object Databases: Games and Other Applications Hanan Samet Jagan Sankaranarayanan Michael Auerbach {hjs,jagan,mikea}@cs.umd.edu Department of Computer Science Center for Automation

More information

Practice Exam Sample Solutions

Practice Exam Sample Solutions CS 675 Computer Vision Instructor: Marc Pomplun Practice Exam Sample Solutions Note that in the actual exam, no calculators, no books, and no notes allowed. Question 1: out of points Question 2: out of

More information

Optimization Methods for Machine Learning (OMML)

Optimization Methods for Machine Learning (OMML) Optimization Methods for Machine Learning (OMML) 2nd lecture Prof. L. Palagi References: 1. Bishop Pattern Recognition and Machine Learning, Springer, 2006 (Chap 1) 2. V. Cherlassky, F. Mulier - Learning

More information

Lecture 1: Turtle Graphics. the turtle and the crane and the swallow observe the time of their coming; Jeremiah 8:7

Lecture 1: Turtle Graphics. the turtle and the crane and the swallow observe the time of their coming; Jeremiah 8:7 Lecture 1: Turtle Graphics the turtle and the crane and the sallo observe the time of their coming; Jeremiah 8:7 1. Turtle Graphics Motion generates geometry. The turtle is a handy paradigm for investigating

More information

Data Mining. Covering algorithms. Covering approach At each stage you identify a rule that covers some of instances. Fig. 4.

Data Mining. Covering algorithms. Covering approach At each stage you identify a rule that covers some of instances. Fig. 4. Data Mining Chapter 4. Algorithms: The Basic Methods (Covering algorithm, Association rule, Linear models, Instance-based learning, Clustering) 1 Covering approach At each stage you identify a rule that

More information

Accelerating the convergence speed of neural networks learning methods using least squares

Accelerating the convergence speed of neural networks learning methods using least squares Bruges (Belgium), 23-25 April 2003, d-side publi, ISBN 2-930307-03-X, pp 255-260 Accelerating the convergence speed of neural networks learning methods using least squares Oscar Fontenla-Romero 1, Deniz

More information

Lecture #11: The Perceptron

Lecture #11: The Perceptron Lecture #11: The Perceptron Mat Kallada STAT2450 - Introduction to Data Mining Outline for Today Welcome back! Assignment 3 The Perceptron Learning Method Perceptron Learning Rule Assignment 3 Will be

More information

A Practical Print-and-Scan Resilient Watermarking for High Resolution Images

A Practical Print-and-Scan Resilient Watermarking for High Resolution Images A Practical Print-and-Scan Resilient Watermaring for High Resolution Images Yongping Zhang, Xiangui Kang 2, and Philipp Zhang 3 Research Department, Hisilicon Technologies Co., Ltd, 00094 Beijing, China.

More information

Machine Learning 13. week

Machine Learning 13. week Machine Learning 13. week Deep Learning Convolutional Neural Network Recurrent Neural Network 1 Why Deep Learning is so Popular? 1. Increase in the amount of data Thanks to the Internet, huge amount of

More information

BIG Data How to handle it. Mark Holton, College of Engineering, Swansea University,

BIG Data How to handle it. Mark Holton, College of Engineering, Swansea University, BIG Data How to handle it Mark Holton, College of Engineering, Swansea University, m.d.holton@swansea.ac.uk The usual What I m going to talk about The source of the data some tag (data loggers) history

More information

Box-Cox Transformation

Box-Cox Transformation Chapter 190 Box-Cox Transformation Introduction This procedure finds the appropriate Box-Cox power transformation (1964) for a single batch of data. It is used to modify the distributional shape of a set

More information

A FRACTAL WATERMARKING SCHEME FOR IMAGE IN DWT DOMAIN

A FRACTAL WATERMARKING SCHEME FOR IMAGE IN DWT DOMAIN A FRACTAL WATERMARKING SCHEME FOR IMAGE IN DWT DOMAIN ABSTRACT A ne digital approach based on the fractal technolog in the Disperse Wavelet Transform domain is proposed in this paper. First e constructed

More information

291 Programming Assignment #3

291 Programming Assignment #3 000 001 002 003 004 005 006 007 008 009 010 011 012 013 014 015 016 017 018 019 020 021 022 023 024 025 026 027 028 029 030 031 032 033 034 035 036 037 038 039 040 041 042 043 044 045 046 047 048 049 050

More information

Research on Evaluation Method of Product Style Semantics Based on Neural Network

Research on Evaluation Method of Product Style Semantics Based on Neural Network Research Journal of Applied Sciences, Engineering and Technology 6(23): 4330-4335, 2013 ISSN: 2040-7459; e-issn: 2040-7467 Maxwell Scientific Organization, 2013 Submitted: September 28, 2012 Accepted:

More information

A Method to Eliminate Wrongly Matched Points for Image Matching

A Method to Eliminate Wrongly Matched Points for Image Matching 2017 2nd International Seminar on Applied Physics, Optoelectronics and Photonics (APOP 2017) ISBN: 978-1-60595-522-3 A Method to Eliminate Wrongly Matched Points for Image Matching Xiao-fei AI * ABSTRACT

More information

STAT, GRAPH, TA- BLE, RECUR

STAT, GRAPH, TA- BLE, RECUR Chapter Sketch Function The sketch function lets you dra lines and graphs on an existing graph. Note that Sketch function operation in the STAT, GRAPH, TA- BLE, RECUR and CONICS Modes is different from

More information

Pattern Classification Algorithms for Face Recognition

Pattern Classification Algorithms for Face Recognition Chapter 7 Pattern Classification Algorithms for Face Recognition 7.1 Introduction The best pattern recognizers in most instances are human beings. Yet we do not completely understand how the brain recognize

More information

Optimization and Probabilistic Analysis Using LS-DYNA

Optimization and Probabilistic Analysis Using LS-DYNA LS-OPT Optimization and Probabilistic Analysis Using LS-DYNA Nielen Stander Willem Roux Tushar Goel Livermore Software Technology Corporation Overview Introduction: LS-OPT features overview Design improvement

More information

Review on Methods of Selecting Number of Hidden Nodes in Artificial Neural Network

Review on Methods of Selecting Number of Hidden Nodes in Artificial Neural Network Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 11, November 2014,

More information

Artificial Neural Network-Based Prediction of Human Posture

Artificial Neural Network-Based Prediction of Human Posture Artificial Neural Network-Based Prediction of Human Posture Abstract The use of an artificial neural network (ANN) in many practical complicated problems encourages its implementation in the digital human

More information

Bottom Line. Data Science: From System Identification to (Deep) Learning and Big Data. Data Science? The core

Bottom Line. Data Science: From System Identification to (Deep) Learning and Big Data. Data Science? The core ................ Bottom Line : From System Identification to (Deep) Learning and Big Data Reglerteknik, ISY, Linköpings Universitet Many Buzzwords:, Machine Learning, Deep Learning, Big Data... One well-defined

More information

Computational Fluid Dynamics Simulation of a Rim Driven Thruster

Computational Fluid Dynamics Simulation of a Rim Driven Thruster Computational Fluid Dynamics Simulation of a Rim Driven Thruster Aleksander J Dubas, N. W. Bressloff, H. Fangohr, S. M. Sharkh (University of Southampton) Abstract An electric rim driven thruster is a

More information

The exam is closed book, closed notes except your one-page (two-sided) cheat sheet.

The exam is closed book, closed notes except your one-page (two-sided) cheat sheet. CS 189 Spring 2015 Introduction to Machine Learning Final You have 2 hours 50 minutes for the exam. The exam is closed book, closed notes except your one-page (two-sided) cheat sheet. No calculators or

More information

Contour Error Decoupling Compensation for Non-circular Grinding Qi-guang LI 1,a, Qiu-shi HAN 1, Wei-hua LI 2 and Bao-ying PENG 1

Contour Error Decoupling Compensation for Non-circular Grinding Qi-guang LI 1,a, Qiu-shi HAN 1, Wei-hua LI 2 and Bao-ying PENG 1 07 nd International Conference on Applied Mechanics, Electronics and Mechatronics Engineering (AMEME 07) ISBN: 978--60595-497-4 Contour Error Decoupling Compensation for Non-circular Grinding Qi-guang

More information

System identification from multiple short time duration signals

System identification from multiple short time duration signals System identification from multiple short time duration signals Sean Anderson Dept of Psychology, University of Sheffield www.eye-robot.org 25th October, 2007 Presentation Content Context Motor Control

More information

MLPQNA-LEMON Multi Layer Perceptron neural network trained by Quasi Newton or Levenberg-Marquardt optimization algorithms

MLPQNA-LEMON Multi Layer Perceptron neural network trained by Quasi Newton or Levenberg-Marquardt optimization algorithms MLPQNA-LEMON Multi Layer Perceptron neural network trained by Quasi Newton or Levenberg-Marquardt optimization algorithms 1 Introduction In supervised Machine Learning (ML) we have a set of data points

More information

Fitting Ellipses and Predicting Confidence Envelopes using a Bias Corrected Kalman Filter

Fitting Ellipses and Predicting Confidence Envelopes using a Bias Corrected Kalman Filter Fitting Ellipses and Predicting Confidence Envelopes using a Bias Corrected Kalman Filter John Porrill AI Vision Research Unit University of Sheffield Sheffield S10 2TN England We describe the use of the

More information

COMPARISON OF SPILL PLUME EQUATIONS

COMPARISON OF SPILL PLUME EQUATIONS COMPARISON OF SPILL PLUME EQUATIONS P Bluett, T Clemence and J Francis * University of Central Lancashire, Preston, Lancashire, UK. ABSTRACT Various methods have been proposed for calculating smoke production

More information

Sequence Modeling: Recurrent and Recursive Nets. By Pyry Takala 14 Oct 2015

Sequence Modeling: Recurrent and Recursive Nets. By Pyry Takala 14 Oct 2015 Sequence Modeling: Recurrent and Recursive Nets By Pyry Takala 14 Oct 2015 Agenda Why Recurrent neural networks? Anatomy and basic training of an RNN (10.2, 10.2.1) Properties of RNNs (10.2.2, 8.2.6) Using

More information

FEL-H Robust Control Real-Time Scheduling

FEL-H Robust Control Real-Time Scheduling J. Softare Engineering & Applications, 009, :60-65 Published Online April 009 in SciRes (.SciRP.org/journal/jsea) FEL-H Robust Control Real-Time Scheduling Bing Du 1, Chun Ruan 1 School of Electrical and

More information

FRACTAL 3D MODELING OF ASTEROIDS USING WAVELETS ON ARBITRARY MESHES 1 INTRODUCTION: FRACTAL SURFACES IN NATURE

FRACTAL 3D MODELING OF ASTEROIDS USING WAVELETS ON ARBITRARY MESHES 1 INTRODUCTION: FRACTAL SURFACES IN NATURE 1 FRACTAL D MODELING OF ASTEROIDS USING WAVELETS ON ARBITRARY MESHES André Jalobeanu Automated Learning Group, USRA/RIACS NASA Ames Research Center MS 269-4, Moffett Field CA 9405-1000, USA ajalobea@email.arc.nasa.gov

More information

Logical Rhythm - Class 3. August 27, 2018

Logical Rhythm - Class 3. August 27, 2018 Logical Rhythm - Class 3 August 27, 2018 In this Class Neural Networks (Intro To Deep Learning) Decision Trees Ensemble Methods(Random Forest) Hyperparameter Optimisation and Bias Variance Tradeoff Biological

More information

CS6220: DATA MINING TECHNIQUES

CS6220: DATA MINING TECHNIQUES CS6220: DATA MINING TECHNIQUES Image Data: Classification via Neural Networks Instructor: Yizhou Sun yzsun@ccs.neu.edu November 19, 2015 Methods to Learn Classification Clustering Frequent Pattern Mining

More information

Neural networks in the re-engineering process based on construction drawings

Neural networks in the re-engineering process based on construction drawings Neural netorks in the re-engineering process based on construction draings S. Komoroski HOCHTIEF Construction AG, Bauen im Βestand NRW, Essen, Germany V. Berkhahn Institute of Computer Science in Civil

More information

An Algorithm For Training Multilayer Perceptron (MLP) For Image Reconstruction Using Neural Network Without Overfitting.

An Algorithm For Training Multilayer Perceptron (MLP) For Image Reconstruction Using Neural Network Without Overfitting. An Algorithm For Training Multilayer Perceptron (MLP) For Image Reconstruction Using Neural Network Without Overfitting. Mohammad Mahmudul Alam Mia, Shovasis Kumar Biswas, Monalisa Chowdhury Urmi, Abubakar

More information

REAL-TIME 3D GRAPHICS STREAMING USING MPEG-4

REAL-TIME 3D GRAPHICS STREAMING USING MPEG-4 REAL-TIME 3D GRAPHICS STREAMING USING MPEG-4 Liang Cheng, Anusheel Bhushan, Renato Pajarola, and Magda El Zarki School of Information and Computer Science University of California, Irvine, CA 92697 {lcheng61,

More information

Automatic Deployment and Formation Control of Decentralized Multi-Agent Networks

Automatic Deployment and Formation Control of Decentralized Multi-Agent Networks Automatic Deployment and Formation Control of Decentralized Multi-Agent Netorks Brian S. Smith, Magnus Egerstedt, and Ayanna Hoard Abstract Novel tools are needed to deploy multi-agent netorks in applications

More information

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used.

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used. 1 4.12 Generalization In back-propagation learning, as many training examples as possible are typically used. It is hoped that the network so designed generalizes well. A network generalizes well when

More information

Applying Neural Network Architecture for Inverse Kinematics Problem in Robotics

Applying Neural Network Architecture for Inverse Kinematics Problem in Robotics J. Software Engineering & Applications, 2010, 3: 230-239 doi:10.4236/jsea.2010.33028 Published Online March 2010 (http://www.scirp.org/journal/jsea) Applying Neural Network Architecture for Inverse Kinematics

More information

Classification Lecture Notes cse352. Neural Networks. Professor Anita Wasilewska

Classification Lecture Notes cse352. Neural Networks. Professor Anita Wasilewska Classification Lecture Notes cse352 Neural Networks Professor Anita Wasilewska Neural Networks Classification Introduction INPUT: classification data, i.e. it contains an classification (class) attribute

More information

FPGA IMPLEMENTATION OF ADAPTIVE TEMPORAL KALMAN FILTER FOR REAL TIME VIDEO FILTERING March 15, 1999

FPGA IMPLEMENTATION OF ADAPTIVE TEMPORAL KALMAN FILTER FOR REAL TIME VIDEO FILTERING March 15, 1999 FPGA IMPLEMENTATION OF ADAPTIVE TEMPORAL KALMAN FILTER FOR REAL TIME VIDEO FILTERING March 15, 1999 Robert D. Turney +, Ali M. Reza, and Justin G. R. Dela + CORE Solutions Group, Xilinx San Jose, CA 9514-3450,

More information

Chapter 5 Components for Evolution of Modular Artificial Neural Networks

Chapter 5 Components for Evolution of Modular Artificial Neural Networks Chapter 5 Components for Evolution of Modular Artificial Neural Networks 5.1 Introduction In this chapter, the methods and components used for modular evolution of Artificial Neural Networks (ANNs) are

More information

15.4 Constrained Maxima and Minima

15.4 Constrained Maxima and Minima 15.4 Constrained Maxima and Minima Question 1: Ho do ou find the relative extrema of a surface hen the values of the variables are constrained? Question : Ho do ou model an optimization problem ith several

More information

Levenberg-Marquardt minimisation in ROPP

Levenberg-Marquardt minimisation in ROPP Ref: SAF/GRAS/METO/REP/GSR/006 Web: www.grassaf.org Date: 4 February 2008 GRAS SAF Report 06 Levenberg-Marquardt minimisation in ROPP Huw Lewis Met Office, UK Lewis:Levenberg-Marquardt in ROPP GRAS SAF

More information

Visual Tracking (1) Feature Point Tracking and Block Matching

Visual Tracking (1) Feature Point Tracking and Block Matching Intelligent Control Systems Visual Tracking (1) Feature Point Tracking and Block Matching Shingo Kagami Graduate School of Information Sciences, Tohoku University swk(at)ic.is.tohoku.ac.jp http://www.ic.is.tohoku.ac.jp/ja/swk/

More information

THE NEURAL NETWORKS: APPLICATION AND OPTIMIZATION APPLICATION OF LEVENBERG-MARQUARDT ALGORITHM FOR TIFINAGH CHARACTER RECOGNITION

THE NEURAL NETWORKS: APPLICATION AND OPTIMIZATION APPLICATION OF LEVENBERG-MARQUARDT ALGORITHM FOR TIFINAGH CHARACTER RECOGNITION International Journal of Science, Environment and Technology, Vol. 2, No 5, 2013, 779 786 ISSN 2278-3687 (O) THE NEURAL NETWORKS: APPLICATION AND OPTIMIZATION APPLICATION OF LEVENBERG-MARQUARDT ALGORITHM

More information

The optimisation of shot peen forming processes

The optimisation of shot peen forming processes The optimisation of shot peen forming processes T. Wang a, M.J. Platts b, J. Wu c a School of Engineering and Design, Brunel University, Uxbridge UB8 3PH, U. b Department of Engineering, University of

More information