CHAPTER 7 MASS LOSS PREDICTION USING ARTIFICIAL NEURAL NETWORK (ANN)


Various mathematical techniques, such as regression analysis, and software tools have been used to develop models in equation form that explain the input-output relation with minimum error. Depending on the complexity of the problem, either a mathematical technique or a software tool can be selected. Neural Networks (NN) can handle highly complicated functions because of their sophisticated structure, and the technique provides convenient solutions to a wide range of problems in almost every technological field (Laurene 1994 and Galushkin 2010). Owing to the self-learning nature of NN, their behavior can at times be unpredictable and unexpected.

7.1 INTRODUCTION TO ANN

ANN is a mathematical or computational model based on biological neural networks. The first model of basic neurons was presented by McCulloch and Pitts in 1943. The basic model of the artificial neuron was derived from the functionality of a biological neuron in the human brain. The human brain contains more than ten billion interconnected neurons, and each neuron is a cell that uses biochemical reactions to receive, process and transmit information. The dendrites (tree-like networks of nerve fibers) are linked to the soma or cell body, where the cell nucleus is positioned. A single long fiber called the axon, extending from the cell body, is connected to

other neurons through synapses or synaptic terminals. Figure 7.1(a-b) shows a simplified biological and artificial neuron. When a neuron is excited, the soma of the cell body receives inputs from the other neurons through adaptive synaptic connections to the dendrites. The nerve impulses from the soma are then transmitted along the axon to the synapses of other neurons. Artificial neurons are similar to their biological counterparts. The basic building block of an ANN is the artificial neuron, and such a model follows three simple rules, namely multiplication, summation and activation.

Figure 7.1 Model of a) Biological neuron b) Artificial neuron

Every input value is multiplied by an individual weight at the entrance to the neuron, and the weighted input values are summed up to determine the strength of the output. The sum of the weighted inputs and the bias, passed through an activation function, constitutes the transfer function. The activation function gives an output varying between 0 (for low input values) and 1 (for high input values), or between -1 and 1. Mathematically, this process is described in Figure 7.1b. The result of this function is then passed on as input to the next layer, and the weights determine the behavior of the network. One of the advantages of ANN is its ability to learn from its environment, which is useful in applications where the complexity of the environment makes other types of solutions impractical. As such, ANNs can be used for a variety of tasks such as data processing, regulation, decision making, classification, clustering, robotics, compression and function approximation. Taskin & Caligulu (2006) and Mustafa et al (2008) have modeled the adhesive wear resistance of Al-Si-Mg/SiCp composites using a Back Propagation Neural Network (BPNN). At the end of the training and testing process, the results were compared with the experimental test results; the overall performance of the model was found to be quite satisfactory and low error fraction values were obtained. John & Kingsly (2008) have attempted to predict the wear loss of A390 aluminium alloy. The results showed a satisfactory agreement between the experimental and ANN results, and it was suggested that ANN can be an efficient prediction tool in the area of material characterization and tribology. Ahmet et al (2009) have determined the wear loss of Al2024 and Al6063 alloys at different temperatures, aging times and applied loads. It was suggested that the overall performance of the model was quite satisfactory and that this prediction technique may be applied to other manufacturing processes.

Dobrzanski et al (2007) have attempted to predict the mechanical properties of an Al-Si-Cu alloy using the NN approach. The predicted results showed good compatibility with the experimental data and better accuracy. Dobrzanski et al (2008) have also investigated the prediction of the hardness of various magnesium alloys at different temperatures, solution heat treatments, aging times and percentages of aluminium content using ANN. The optimal heat treatment working conditions and times were obtained from a well-trained model, which helped to attain the best mechanical properties. Tang et al (2009) have developed a NN model with smaller errors, which helped to improve the accuracy of the prediction results; the predicted results were found to be in good agreement with the experimental data. Zhang et al (2006) have developed an ANN model for the tribological behavior of SiC-filled PEEK coatings based on the influence of sliding velocity and applied load, and found the developed models to be relatively satisfactory. Singh et al (2006) have predicted the tool flank wear of High Speed Steel (HSS) drill bits over copper. The models were trained using BPNN and the results were compared with the experimental values; it was observed that the NN was able to learn the wear model efficiently. Palanisamy et al (2008) have attempted to predict the flank wear of the cutting tool used in end-milling operations using regression and ANN models. The results revealed that the NN model was a better prediction tool than the regression method. Mustafa et al (2008) have tested the accuracy of an ANN model of SiC-reinforced Al-alloy Metal Matrix Composite (MMC). The obtained results exhibited low error fraction values, which confirmed the performance of the model. Ugur et al (2008) have studied the effects of various burnishing parameters such as burnishing force, number of passes, feed rate and burnishing speed on the surface roughness of AA7075 Al-alloy. The

prediction of the ANN model coincided with the test results, which helped to determine the average surface roughness value in a short time. Prediction of the friction and wear properties of the developed alloys is significant, since it can save not only cost but also time. ANNs have the capacity to eliminate the need for expensive and difficult experimental investigations in testing and manufacturing processes. In recent years, neural network models have been widely used in different metallurgical applications.

Statistical Analysis of ANN Model

From the artificial neuron model, the internal activity of the neuron can be shown to be

v_k = \sum_{j=1}^{p} W_{kj} x_j   (7.1)

In general, three types of activation functions are used in ANN, namely the threshold function, the piecewise-linear function and the sigmoid function. The sigmoid function is most commonly used as the activation function; the hyperbolic tangent form of the sigmoid function is given by Equation (7.2). Similarly, the bipolar sigmoid activation function, which is used to estimate the output of a neuron that receives input from other neurons (except neurons in the input layer), is given by Equation (7.3) (Laurene Fausett 1994).

\varphi(v) = \tanh\left(\frac{v}{2}\right) = \frac{1 - e^{-v}}{1 + e^{-v}}   (7.2)

g(x) = \frac{2}{1 + e^{-x}} - 1 = \frac{1 - e^{-x}}{1 + e^{-x}}   (7.3)
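As an illustration of Equations (7.1) to (7.3), the computation of a single artificial neuron can be sketched as follows. Python/NumPy is used here purely for illustration (the modelling in this work was carried out in MATLAB), and the input and weight values below are arbitrary; a bias term, when used, is simply added to v before the activation function is applied.

import numpy as np

def neuron_activity(x, w):
    # Equation (7.1): v_k = sum over j of W_kj * x_j
    return np.dot(w, x)

def tanh_sigmoid(v):
    # Equation (7.2): (1 - e^-v) / (1 + e^-v), i.e. tanh(v/2)
    return (1.0 - np.exp(-v)) / (1.0 + np.exp(-v))

def bipolar_sigmoid(x):
    # Equation (7.3): 2 / (1 + e^-x) - 1, output in (-1, 1)
    return 2.0 / (1.0 + np.exp(-x)) - 1.0

# Illustrative inputs and weights only (not values from this study)
x = np.array([0.4, 0.7, 0.1])
w = np.array([0.25, -0.60, 0.90])
v = neuron_activity(x, w)
print(tanh_sigmoid(v), bipolar_sigmoid(v))  # the two forms coincide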

The outputs of the hidden and output layers can be determined as shown in Equations (7.4) and (7.5).

h = F(Mx)   (7.4)

o = F(Nh)   (7.5)

The internal error of the output layer is calculated and then back-propagated to the hidden layer. The weights of the links are adjusted to minimize this error, and the error is then recalculated with the new weights. One such cycle is called an epoch, and training of the net stops when any one of the stopping criteria is met. The prediction error of the ANN system is calculated using Equation (7.6) (Pendse & Joshi 2004).

\%e_{pr} = \frac{\text{actual} - \text{computed}}{\text{actual}} \times 100   (7.6)

The layers used in the NN are the input layer (first layer), the output layer (final layer) and the hidden layers (intermediate layers). The input value is weighted and compared against a threshold value; if this value exceeds the threshold, the unit produces an output. The amount of error is calculated from the difference between the desired output of the net for a given input pattern and the actual value; this error is for that particular pattern, and the pattern errors are combined to find the total error for the network. The weights are repeatedly adjusted in order to minimize the error between the actual and required outputs. The feed-forward back propagation technique has a two-stage learning process involving two passes: a forward one and a backward one. In the forward pass, the information moves in only one direction, forward, from the input nodes, through the hidden nodes and to the output nodes. There are no cycles or loops in the network.
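A hedged sketch of the forward-pass computations of Equations (7.4) and (7.5) and of the prediction error of Equation (7.6) is given below, assuming M and N denote the input-to-hidden and hidden-to-output weight matrices and F the activation function (Python/NumPy, illustrative values only).

import numpy as np

def F(v):
    # Bipolar sigmoid of Equation (7.3), applied element-wise
    return 2.0 / (1.0 + np.exp(-v)) - 1.0

def forward(x, M, N):
    # Equations (7.4) and (7.5): h = F(Mx), o = F(Nh)
    h = F(M @ x)
    o = F(N @ h)
    return o

def prediction_error_percent(actual, computed):
    # Equation (7.6): %e_pr = (actual - computed) / actual * 100
    return (actual - computed) / actual * 100.0

# Illustrative weight matrices for a 3-input, 15-hidden, 1-output layout
rng = np.random.default_rng(0)
M = rng.uniform(-0.5, 0.5, size=(15, 3))
N = rng.uniform(-0.5, 0.5, size=(1, 15))
x = np.array([0.3, 0.5, 0.2])
o = forward(x, M, N)
print(o, prediction_error_percent(0.42, o[0]))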

In the backward pass, the direction is reversed: starting at the output layer, and armed with the actual and required output patterns, an error value can be found for each output unit. The procedure works backwards through the layers, and the error is used to apply the appropriate weight changes to each unit in the network.

7.2 MODELING USING ANN

The performance of the ANN model is evaluated by separating the data into two sets: the training set and the testing set. During training, the parameters of the network are calculated; the learning process is stopped when the error goal is reached, and finally the network is evaluated with the data from the testing set. The network consists of a large number of simple synchronous processing elements called neurons, assembled in different layers such as an input layer, an output layer and hidden layers, as shown in Figure 7.2.

Figure 7.2 Feed-forward neural network architecture

The ANN is built with a systematic approach to optimize a performance criterion or to follow some implicit internal constraint, which is commonly referred to as the learning rule. In supervised learning, an input

is presented to the neural network and a corresponding target is set at the output. The difference between the desired response and the system output is calculated as the error. This error information is fed back to the system and, based on the learning rule, systematic adjustments of the system parameters are made. The process is repeated until the performance of the net is acceptable. The effectiveness of the ANN is ensured by normalizing the data, which confines the values between certain limits and also makes all the input parameters equally important in the training of the neural network. This is done by mapping each term to a value between -1 and +1 (or 0 and 1) using Equation (7.7). The normalized value of a parameter is

y_{norm} = \frac{2(y - y_{min})}{y_{max} - y_{min}} - 1   (7.7)

The input layer receives input from the external environment, and the output layer communicates the output of the system to the user or external environment; there are usually a number of hidden layers between these two layers. The training process continues until the network outputs fit the targets. Once the network is trained, it may be used to calculate the output for any arbitrary set of input data through the fixed weight factors, and the corresponding errors can also be calculated. ANN has the potential to minimize the need for expensive experimental investigation and/or inspection of aluminium alloys used in various applications, resulting in large economic benefits for organizations. The training phase can be finished in a few minutes, whereas the experimental study lasts for a number of days. The number of neurons in the input and output layers is decided based on the number of input parameters and output responses, respectively.
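The normalization of Equation (7.7), together with the 0.1 to 0.9 scaling applied to the data later in this chapter, can be sketched as follows (Python/NumPy, for illustration only).

import numpy as np

def normalize_bipolar(y):
    # Equation (7.7): linear mapping of y into the interval [-1, +1]
    y = np.asarray(y, dtype=float)
    return 2.0 * (y - y.min()) / (y.max() - y.min()) - 1.0

def normalize_range(y, lo=0.1, hi=0.9):
    # General linear scaling into [lo, hi]; lo = 0.1, hi = 0.9 is the
    # scaling applied to the data before training later in this chapter
    y = np.asarray(y, dtype=float)
    return lo + (hi - lo) * (y - y.min()) / (y.max() - y.min())

loads = [50.0, 60.0, 70.0]        # applied loads (N) from the wear tests
print(normalize_bipolar(loads))   # approximately [-1. 0. 1.]
print(normalize_range(loads))     # approximately [0.1 0.5 0.9]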

There is no perfect theory for guiding the choice of the number of neurons in the hidden layer (Shang and Sun 2008). The initial number of neurons in the hidden layer and the modeling error, measured by the Root Mean Square Error (RMSE), Mean Percentage Error (MPE), Absolute Percentage Error (APE) and Absolute Fraction of Variance values, have been used for making comparisons; these can be evaluated by the following Equations (7.8) to (7.12).

n_h = \sqrt{n_i + n_o} + m   (7.8)

APE\,(\%) = \frac{|\text{Model prediction value} - \text{Experimental value}|}{\text{Experimental value}} \times 100   (7.9)

RMSE = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2}   (7.10)

MPE = \frac{1}{n}\sum_{j}\frac{(a_j - p_j)}{a_j} \times 100   (7.11)

R^2 = 1 - \frac{\sum_j (a_j - p_j)^2}{\sum_j p_j^2}   (7.12)

7.3 RESULTS AND DISCUSSIONS

In ANN, the designer chooses the network topology, the performance function, the learning rule and the criterion to stop the training phase, and the system then automatically adjusts the parameters. For every set of input data, an error between the result at the output layer and the actual value is calculated, and the weights between the nodes are adjusted to

minimize this error. This is done by correcting the weights from the output to the input layer via the hidden layers, hence the name back propagation. Out of 36 experimental data sets, 28 training data sets are considered for both networks to compare their performances, and 8 testing sets outside the training data are selected for testing the neural networks. In this work, two hidden layers are used, each containing 15 nodes. The neural network models are designed and trained using the MATLAB package, and the back propagation algorithm is used for predicting the mass loss under lubricated conditions. Input selection is a very important aspect of NN modelling (Nalbant et al 2008). In this work, the network has three neurons in the input layer (load, sliding distance and alloying element), one neuron in the output layer (mass loss) and 15 neurons in each hidden layer, so the architecture of the ANN is 3:15:15:1, as shown in Figure 7.3.

Figure 7.3 ANN architecture for this study

Wear Curves of Developed Alloys

The mass loss of the developed alloys for three different applied loads of 50 N, 60 N and 70 N is determined through the tribological wear test analysis. The test is conducted under lubricated conditions with an oil temperature of 80 °C. The total sliding distance is 54 km with a constant sliding speed of 1 m/s. Before and after the wear test, the weight of

the discs of the alloys is measured using an electronic balance with an accuracy of 10⁻⁴ g, and the mass loss of the developed alloys is determined. The results clearly show that at the maximum applied load the mass loss is significantly higher when compared with the other loads. The mass loss also increases with increasing sliding distance. The reasons for the higher wear and the testing procedures are explained in detail in Sections 6.4 and 6.5 of Chapter 6. The relations between sliding distance and mass loss of the AlTSi and AlTSiH alloys are presented in Figures 7.4 and 7.5.

Figure 7.4 Wear graph for AlTSi alloy

Figure 7.5 Wear graph for AlTSiH alloy

All the input and output values are normalized between 0.1 and 0.9 using linear scaling. After selecting the final network structure of 3:15:15:1, the sigmoid activation function is selected as the transfer function, and the learning rate and momentum are both set to 0.8. After fixing the momentum and learning rate, the trials are continued to find the optimal number of epochs, and the training process is ended after that number of epochs. Figure 7.6 shows the Normalized Standard Error (NSE) against the training cycles; the error decreases with increasing number of iterations and reaches a value of the order of 10⁻⁵. The testing process is then carried out in order to verify whether the ANN is making good predictions.
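A minimal sketch of this training configuration is given below in Python/NumPy: plain back propagation with momentum for a 3:15:15:1 network with log-sigmoid units, using the learning rate and momentum constant quoted above. It is only a stand-in for the MATLAB neural network implementation actually used in this work, and the training data below are random placeholders rather than the experimental records.

import numpy as np

def logsig(v):
    # Log-sigmoid transfer function
    return 1.0 / (1.0 + np.exp(-v))

def train_mlp(X, y, n_hidden=15, lr=0.8, mc=0.8, epochs=200, seed=1):
    # 3:15:15:1 feed-forward network trained by back propagation with momentum
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.uniform(-0.5, 0.5, (n_hidden, n_in + 1))      # input -> hidden 1
    W2 = rng.uniform(-0.5, 0.5, (n_hidden, n_hidden + 1))  # hidden 1 -> hidden 2
    W3 = rng.uniform(-0.5, 0.5, (1, n_hidden + 1))         # hidden 2 -> output
    dW1, dW2, dW3 = np.zeros_like(W1), np.zeros_like(W2), np.zeros_like(W3)
    bias = lambda a: np.concatenate([a, [1.0]])
    for _ in range(epochs):
        for x, t in zip(X, y):
            # forward pass
            a0 = bias(x)
            h1 = logsig(W1 @ a0); a1 = bias(h1)
            h2 = logsig(W2 @ a1); a2 = bias(h2)
            o = logsig(W3 @ a2)
            # backward pass: deltas of the squared error 0.5 * (o - t)^2
            d3 = (o - t) * o * (1.0 - o)
            d2 = (W3[:, :-1].T @ d3) * h2 * (1.0 - h2)
            d1 = (W2[:, :-1].T @ d2) * h1 * (1.0 - h1)
            # weight changes with momentum
            dW3 = mc * dW3 - lr * np.outer(d3, a2)
            dW2 = mc * dW2 - lr * np.outer(d2, a1)
            dW1 = mc * dW1 - lr * np.outer(d1, a0)
            W3 += dW3; W2 += dW2; W1 += dW1

    def predict(x):
        a1 = bias(logsig(W1 @ bias(x)))
        a2 = bias(logsig(W2 @ a1))
        return logsig(W3 @ a2)[0]
    return predict

# Placeholder data standing in for the 28 normalized training records
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 0.9, (28, 3))   # load, sliding distance, alloy code
y = 0.2 + 0.4 * X[:, 0] * X[:, 1]    # synthetic mass-loss target, not thesis data
predict = train_mlp(X, y)
print(predict(X[0]), y[0])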

Figure 7.6 ANN training performance graph

Using the error measures defined above, the statistical values relating the network predictions to the experimental values have been calculated for both the training and testing data, and the results are presented in Table 7.1.

Table 7.1 Statistical values (RMS error %, R and MPE for the training and testing performance) of the mass loss of the developed AlTSi and AlTSiH alloys
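A short sketch of how the statistical measures of Equations (7.9) to (7.12) can be evaluated from a set of experimental and predicted values is given below (Python/NumPy; the numbers used are arbitrary illustrations, not the data behind Table 7.1).

import numpy as np

def ape_percent(actual, predicted):
    # Equation (7.9): absolute percentage error for a single value
    return abs(predicted - actual) / actual * 100.0

def rms_error(actual, predicted):
    # Equation (7.10): root mean square error
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.sqrt(np.mean((actual - predicted) ** 2))

def mean_percentage_error(actual, predicted):
    # Equation (7.11): mean of (a_j - p_j) / a_j, expressed in percent
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return np.mean((actual - predicted) / actual) * 100.0

def fraction_of_variance(actual, predicted):
    # Equation (7.12): absolute fraction of variance R^2
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return 1.0 - np.sum((actual - predicted) ** 2) / np.sum(predicted ** 2)

# Arbitrary illustrative values only
experimental = [0.102, 0.154, 0.216, 0.275]
predicted = [0.104, 0.151, 0.214, 0.279]
print(rms_error(experimental, predicted),
      mean_percentage_error(experimental, predicted),
      fraction_of_variance(experimental, predicted))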

Prediction by ANN Model

The experimental values are compared with the predicted values to test the performance of the trained network, and the results are shown in Figures 7.7 and 7.8; it is evident that the mass loss values derived from the trained ANN system closely match the experimental values.

Figure 7.7 Comparison of mass loss at training stage

From this comparison, the prediction accuracies of the network are calculated. In the learning stage, the mean errors obtained are 0.175% for the AlTSi alloy and 6.911% for the AlTSiH alloy. This can still be improved by training the ANN system with a larger number of experimental results.

Figure 7.8 Comparison of mass loss at testing stage

The mean errors in the testing stage are found to be 0.151% for the AlTSi alloy and 7.498% for the AlTSiH alloy. By using this trained network, one can predict the mass loss of the alloys at any combination of the chosen parameters within the range of tested values. The errors are within acceptable limits, which confirms the reliability of the ANN training and testing stages, and a summary of the proposed model is given in Table 7.2. Very good performance of the trained neural network is attained, and the predicted mass loss of the alloys is in good agreement with the experimental values.
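A brief sketch of how such a prediction query could be carried out with the trained network is given below: the inputs are scaled into the 0.1 to 0.9 range used during training and the network output is scaled back to physical units (Python; the alloy coding, the mass-loss limits and the dummy model are assumptions made only for illustration, not values from this work).

import numpy as np

def scale(value, vmin, vmax, lo=0.1, hi=0.9):
    # Map a physical value into the 0.1-0.9 range used during training
    return lo + (hi - lo) * (value - vmin) / (vmax - vmin)

def unscale(norm, vmin, vmax, lo=0.1, hi=0.9):
    # Map a normalized network output back to physical units
    return vmin + (norm - lo) * (vmax - vmin) / (hi - lo)

# Input ranges taken from the experiments in this chapter; the alloy coding
# and the mass-loss limits are assumed here purely for illustration
load_range = (50.0, 70.0)        # applied load, N
distance_range = (0.0, 54.0)     # sliding distance, km
alloy_range = (1.0, 2.0)         # e.g. AlTSi = 1, AlTSiH = 2 (assumed coding)
mass_loss_range = (0.05, 0.40)   # g (placeholder limits, not thesis data)

trained_net = lambda q: 0.55     # stand-in for the trained 3:15:15:1 network

query = np.array([scale(65.0, *load_range),
                  scale(30.0, *distance_range),
                  scale(1.0, *alloy_range)])
print("predicted mass loss: %.3f g" % unscale(trained_net(query), *mass_loss_range))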

Table 7.2 Summary of ANN model

Object of model: Mass loss prediction
Total number of layers: 4
Number of hidden layers: 2
Number of neurons in the layers: Input: 3; hidden 1: 15; hidden 2: 15; output: 1
Network type: Feed-forward back propagation
Transfer function: Log-sigmoid
Training function: Trainlm
Learning function: Learngdm
Learning rate, lr: 0.8
Momentum constant, mc: 0.8
Acceptable mean square error (MSE) at the end of training: of the order of 10⁻⁵

7.4 CONCLUSIONS

Speed, the ability to learn from experimental results and ease of use are the advantages of ANN compared with classical methods, and it can also reduce the extent of the experimental study required; for these reasons, ANN is chosen here. This approach has emerged as a dominant tool in materials engineering and can be used efficiently as a prediction technique in the area of material characterization and tribology. In this work, a feed-forward BPNN is developed and used to calculate the mass loss of the developed alloys. For both training and testing, the experimental mass loss values of the alloys are used. The error between the predicted and experimental values is small, i.e., there is good compatibility with the experimental values, and the network also saves much time. The overall performance of the model is quite satisfactory and it can be used to predict the mass loss with high accuracy.
