Department of Applied Mathematics. Mat-2.108 Individual Research Projects in Applied Mathematics course
Use of neural networks for imitating and improving heuristic system performance

Jens Wilke, 41946R Tf - N
Table of contents

1 Introduction
2 Overview
  2.1 Setup overview
  2.2 Study phases
    2.2.1 Creation of heuristic system
    2.2.2 Creation of imitating neural network system
    2.2.3 Fine-tuning of neural network weights
  2.3 Neural networks overview
3 Study setup
  3.1 Simulation environment
  3.2 Seeker inputs and outputs
    3.2.1 Seeker inputs
    3.2.2 Seeker outputs
  3.3 Heuristic seeker
    3.3.1 Turning
    3.3.2 Acceleration and deceleration
    3.3.3 Performance of the heuristic seeker
  3.4 Selected neural network architecture
  3.5 Network performance evaluation
  3.6 Tuning of network weights
    3.6.1 Searching the variable to modify
    3.6.2 Modifying the selected variable
4 Results
  4.1 Performance of the initial neural network
  4.2 Performance after iterative training
    4.2.1 Quantitative performance
    4.2.2 Qualitative performance
5 Conclusions
6 References
  6.1 Literary references
  6.2 WWW-references
1 Introduction

This document reports the work done for the Individual Research Projects course. The aim of this work is to study the use of neural networks for improving the performance of heuristic rule-based systems. The work builds on a problem in which a seeker system has to track a target object. A rule-based heuristic system is first created as a reference. A neural network is then trained to imitate that system. After successful training, the weights of the neural network are altered iteratively in order to improve the performance of the neural network system. The objective is to create a neural network based system that performs better than the original heuristic system.

This work was inspired by a system of pursuing and evading units whose performance was gradually improved: starting from an initial population of random network designs, the successful designs in each generation were selected for reproduction with recombination, mutation, and gene duplication (Cliff et al. 1996). The underlying approach of this work and that of the referenced system are nevertheless fundamentally different, as this work uses neural networks to imitate the behavior of a heuristic system. This idea was inspired by a master's thesis done at the HUT TML laboratory (Teirilä 1999).

2 Overview

This section takes a look at the study setup and the underlying fundamentals of neural networks.
2.1 Setup overview

The setup of this study consists of a system, called the seeker, trying to trace another item, called the target. The seeker makes several attempts in different scenarios, and after going through all the scenarios the performance of the given seeker is evaluated.

The seeker and the target share some general characteristics. Both move on a two-dimensional plane. The target has a constant heading and speed. The seeker can turn and accelerate or decelerate; turning is limited by a maximum turning angle per iteration, and the seeker also has a limiting top speed.

During the study, different seeker control solutions are tested, i.e., heuristic and neural network based systems. The fundamental aim of this work is to test whether the performance of a neural network can be improved by gradually modifying its parameters. This can be a useful approach when it is difficult to understand the functionality of a system but easy to evaluate the performance of individual, slightly altered systems.

The tracing simulations were done using MathWorks Matlab (MathWorks 2002). The Matlab files created are available for download and testing from the author's homepage (Wilke 2002).

2.2 Study phases

The following three sections list the activities in the different phases of this work.

2.2.1 Creation of heuristic system

First, a heuristic system was generated. Its tracing performance was tracked, and the inputs and outputs of the heuristic system were stored: during the simulated evaluation of the heuristic system, in every situation the inputs and outputs of the system were stored into a file for later retrieval.
2.2.2 Creation of imitating neural network system

After assessing the performance of the heuristic system, a neural network based system was created and trained to imitate the actions of the heuristic system. The training used the stored input/output data generated while the heuristic system's performance was assessed.

2.2.3 Fine-tuning of neural network weights

After creating the initial neural network and assessing its performance, fine-tuning began. Fine-tuning was done by modifying the weights and threshold values of the neural network. The work consisted of iterative rounds gradually improving the performance of the neural network based system.

2.3 Neural networks overview

A neural network is a usually highly interconnected network of multi-input nonlinear processing units called neurons. The name comes from the fact that the neurons used in artificial neural networks resemble, to a certain extent, human neural cells. Three entities characterize artificial neural networks: the interconnection of the neurons, the characteristics of the individual neurons, and the strategy for pattern training. Many network architectures exist, including multilayer perceptron, radial basis, recurrent, and self-organizing networks, to name but a few. The manner in which the neurons of a neural network are structured is intimately linked with the learning algorithm used in training the network (Haykin 1994, 1-22).

Schalkoff discusses the use of neural networks in pattern recognition and pattern association using multilayer feedforward networks (Schalkoff 1992, 237). Accordingly, multilayer feedforward networks were chosen for this work. The architecture of such a network consists of several layers of neurons. Each neuron has a number of inputs x_i and respective weights w_i. If the weighted sum of the inputs exceeds a certain threshold limit θ, an output is produced.
An individual neuron is depicted in figure 1.
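The weighted-sum-and-threshold behavior described above can be sketched in a few lines. The sketch below is illustrative Python (the study's own code was written in Matlab), showing the simplest hard-threshold variant of the neuron:

```python
def neuron_output(inputs, weights, threshold):
    """Hard-threshold perceptron: fire (output 1) if the weighted
    sum of the inputs exceeds the threshold, otherwise output 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0
```

For example, with inputs (1, 0, 1), weights (0.5, 0.2, 0.4), and threshold 0.8, the weighted sum 0.9 exceeds the threshold and the neuron fires.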
[Figure 1: An individual perceptron, with inputs x_1 ... x_p, weights w_1 ... w_p, threshold θ, and output y_k]

The neurons are arranged into layers, where each layer's outputs function as the inputs of the following layer. A two-layer network is pictured in figure 2. The behavior of the network is altered by changing the parameters of the individual neurons.

[Figure 2: A network of interconnected perceptrons, with an input layer, a hidden layer, and an output layer]

Before its use as a black-box element, the neural network has to be trained for its task. The design procedure for a neural network pattern associator involves the following phases (Schalkoff 1992, 221):
1) Defining suitable inputs/outputs and the network structure
2) Choosing the training method
3) Training the network
4) Assessing the performance

3 Study setup

3.1 Simulation environment

The testing scenario should be rich in the sense that it contains a large set of tests, so that the system behavior would remain consistent in other, somewhat different scenarios. In the test scenarios, the seeker always starts from the origin with the same initial speed as the target. Altogether 24 test cases are executed in order to assess the performance of each individual system: the target starts from 8 different initial locations, and from each initial location it traverses in 3 different initial directions. Figure 3 visualizes the 24 test cases: the seeker is marked with a triangle and its initial speed vector, and the 24 target setups are marked with 8 initial target locations and 3 initial speed vectors for each initial target location.

[Figure 3: Initial scenarios]
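The 8 x 3 = 24 initial setups can be generated systematically. The sketch below is illustrative Python (the study itself used Matlab); the circle radius and the three heading offsets are assumed values, since the report does not state them:

```python
import math

def initial_scenarios(radius=10.0, heading_offsets=(-45.0, 0.0, 45.0)):
    """Generate 24 (location, heading) test cases: 8 target start
    locations evenly spaced on a circle around the origin, each
    combined with 3 initial heading directions (in degrees)."""
    scenarios = []
    for i in range(8):
        angle = 2 * math.pi * i / 8          # location angle on the circle
        location = (radius * math.cos(angle), radius * math.sin(angle))
        for offset in heading_offsets:       # 3 headings per location
            heading = math.degrees(angle) + offset
            scenarios.append((location, heading))
    return scenarios
```

Running every candidate system through the same fixed list keeps the comparison between systems fair, as argued in the next section.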
The motivation for identical initial setups when comparing system performances is that the performance differences between the systems are often minimal. If the starting setup varied, it would be possible that the best performing system is not the one selected for later iterations: a more advantageous setup could allow a weaker system to outperform a superior one. When the setup remains the same, it is assured that the selected system is the best performing one for the given environment.

3.2 Seeker inputs and outputs

The inputs and outputs of the heuristic and neural network systems were selected to be identical. Consequently, direct comparison and evaluation of system performance is possible. The following sections define the inputs and outputs of the seeker system.

3.2.1 Seeker inputs

The seeker should receive information about the location and movement of the target. The number of inputs has to be limited but sufficient, so that the seeker has all the information needed for directing itself efficiently towards the target. Intuitive selections for the seeker inputs are the relative location and the relative heading of the target. Figure 4 below represents the relative positions and headings of the seeker and the target. The seeker is located at the origin heading in the direction of the Y-axis, and the location and heading of the target are given relative to the seeker. On the right side of figure 4, the velocity vectors of the seeker and the target are broken into components.
[Figure 4: Relative locations and speeds of the seeker and the target, showing the heading vectors h_seeker and h_perpendicular and the x and y components of v_seeker and v_target]

If the location of the target is marked X_target and the location of the seeker X_seeker, the difference vector and the seeker's unit heading vector are

d = X_target - X_seeker
h_seeker = v_seeker / |v_seeker|

and the two location inputs are the projections

d_directional = d · h_seeker
d_perpendicular = d · h_perpendicular

where h_perpendicular is the unit vector perpendicular to h_seeker, pointing to the seeker's right.

The two values d_directional and d_perpendicular give information about the location of the target relative to the seeker. The first value, d_directional, is the dot product of the seeker's heading and the vector defining the location of the target relative to the seeker. This scalar value ranges from minus infinity to plus infinity; d_directional is positive if the target is located ahead of the seeker, and negative otherwise. The other value, d_perpendicular, is formed similarly but measures the deviation perpendicular to the seeker's heading: d_perpendicular is positive if the target is located on the right side of the seeker, and negative otherwise.
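The location inputs defined above can be computed directly from the vectors. The following Python sketch is illustrative (the function and variable names are mine, not from the study code); the perpendicular unit vector is chosen to point to the seeker's right, so that d_perpendicular is positive when the target is on the right:

```python
import math

def location_inputs(seeker_pos, seeker_vel, target_pos):
    """Compute d_directional and d_perpendicular: the target's offset
    from the seeker, projected onto the seeker's unit heading and onto
    the perpendicular of the heading (pointing to the seeker's right)."""
    dx = target_pos[0] - seeker_pos[0]
    dy = target_pos[1] - seeker_pos[1]
    speed = math.hypot(seeker_vel[0], seeker_vel[1])
    hx, hy = seeker_vel[0] / speed, seeker_vel[1] / speed  # unit heading
    px, py = hy, -hx                 # perpendicular, to the seeker's right
    d_directional = dx * hx + dy * hy
    d_perpendicular = dx * px + dy * py
    return d_directional, d_perpendicular
```

With the seeker at the origin heading along the Y-axis and the target at (3, 4), the target is 4 units ahead and 3 units to the right, matching the sign conventions in the text.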
v_directional = v_seeker · v_target
v_perpendicular = h_perpendicular · v_target

The values v_directional and v_perpendicular give information about the speed of the target relative to the seeker. The first value, v_directional, is the dot product of the seeker's and the target's velocity vectors; it describes their relative speeds along the axis of the seeker's heading. The second value, v_perpendicular, gives the speed of the target perpendicular to the seeker's heading: it is positive if the target is heading rightwards relative to the seeker, and negative otherwise.

The inputs should be scaled so that the input of the neural network is limited to a specific range; this is done in order to improve the performance of the neural network. A sigmoid function is used to scale the inputs. The equations below define the hyperbolic tangent sigmoid scaling of the inputs of the neural network:

d_dirscaled = 2 / (1 + e^(-2 · d_directional)) - 1
d_perpscaled = 2 / (1 + e^(-2 · d_perpendicular)) - 1
v_dirscaled = 2 / (1 + e^(-2 · v_directional)) - 1
v_perpscaled = 2 / (1 + e^(-2 · v_perpendicular)) - 1

Figure 5 below shows how the sigmoid function scales the input space to lie between the values -1 and 1. It should be noted that the scaling preserves the sign of the input.
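The hyperbolic tangent sigmoid above maps any raw input into the open interval (-1, 1) while preserving its sign. A minimal Python sketch (illustrative; in Matlab this is the `tansig` transfer function):

```python
import math

def tansig(x):
    """Hyperbolic tangent sigmoid: 2 / (1 + e^(-2x)) - 1.
    Algebraically identical to tanh(x); squashes any real input
    into the open interval (-1, 1) and preserves its sign."""
    return 2.0 / (1.0 + math.exp(-2.0 * x)) - 1.0
```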
[Figure 5: Sigmoid function]

3.2.2 Seeker outputs

The number of outputs should be kept as low as possible for computational reasons. A car, for example, has basically two controls:

1) Turning
2) Acceleration and deceleration

Similarly, the number of seeker system outputs was set to two.

3.3 Heuristic seeker

The heuristic seeker is guided by fixed rules when tracing the target. As defined in section 3.2.2 Seeker outputs, the heuristic logic has to make the following decisions when tracing the target:

1) How much to turn its course
2) How much to accelerate or decelerate

3.3.1 Turning

Turning is based on the location of the target relative to the seeker. Figure 6 visualizes the division of relative locations into four distinct zones.
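In code, a zone-based turning rule of this kind might look like the following Python sketch. It is a hypothetical simplification that reduces the zones to three lateral bands based on d_perpendicular; the lateral limit and the constant turn step are assumed values, as the report leaves them to the implementation:

```python
def turn_decision(d_perpendicular, lateral_limit=1.0, turn_step=10.0):
    """Zone-based turning: if the target lies more than `lateral_limit`
    to the right of the seeker's heading, turn right by a constant step;
    more than `lateral_limit` to the left, turn left; otherwise hold
    the current course. Positive return value means a right turn."""
    if d_perpendicular > lateral_limit:
        return turn_step        # target clearly to the right: turn right
    if d_perpendicular < -lateral_limit:
        return -turn_step       # target clearly to the left: turn left
    return 0.0                  # target roughly ahead: don't turn
```

The constant magnitude of the returned step mirrors the text's statement that the heuristic seeker's turns always have the same absolute magnitude.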
[Figure 6: Heuristic turning logic, showing zones 1-4 around the seeker and the lateral limits ±X]

The actions of the seeker are summarized in table 1 below. In figure 6, the seeker is presented as a triangle at the origin. The target is represented as a circle located more than X to the right of the seeker; in the represented situation, the seeker would turn right. When the heuristic seeker turns, the absolute magnitude of the turn is always constant.

Table 1: Zones and corresponding actions

Target location    | Action
Zone 1 and Zone 3  | Turn left
Zone 2             | Don't turn
Zone 4             | Turn right

3.3.2 Acceleration and deceleration

As with turning, the acceleration and deceleration of the seeker is based on the location of the target relative to the seeker. Additionally, the heading of the target is observed. The target can be either in front of the seeker, i.e., in zone 1, or behind the seeker, i.e., in zone 2. Secondly, the target can be heading in the
same or opposite direction as the seeker. The situations are visualized in figure 7.

[Figure 7: Heuristic acceleration and deceleration logic, showing targets 1-4 in zones 1 and 2]

Four decision rules are constructed; they are summarized in table 2 below.

Table 2: Acceleration and deceleration rules

Target location | Target heading relative to seeker | Seeker action | Example in figure 7
Zone 1          | Same                              | Accelerate    | Target 1
Zone 1          | Opposite                          | Accelerate    | Target 2
Zone 2          | Same                              | Decelerate    | Target 3
Zone 2          | Opposite                          | Decelerate    | Target 4

3.3.3 Performance of the heuristic seeker

After creating the heuristic seeker, its performance was analyzed qualitatively. The seeker always managed to catch the target if enough time was given. Generally, the heuristic seeker was very efficient. The images in figure 8
illustrate the seeker's performance. In the figure, the seeker starts tracing from the origin; its tracing path is curved and marked with a darker blue color. The target starts from varied locations; its path is straight and marked with a brighter green color.

[Figure 8 a-d: Heuristic seeker behavior]

3.4 Selected neural network architecture

For computational efficiency, it was decided that the neural network should be as simple as possible. The evaluation of network architectures started with a multilayer perceptron network having 4 inputs and 2 outputs, like the seeker. Testing started with two perceptron layers, each having 2 neurons. Consequently, each neuron of the first perceptron layer has 4 weighted inputs and each neuron of the second layer has 2 weighted inputs. Each of the 4 perceptrons also has a threshold value, as described in section 2.3 Neural networks overview. Consequently, the
network had 4·2 + 2·2 + 4 = 16 parameters that could be varied. After initial tests, it became apparent that the neural network could imitate the performance of the heuristic seeker very well. The training results vary from case to case, as the initial values of the network are set somewhat randomly. After several rounds of training and testing, the best performing neural network based seeker was less than 1% weaker in performance than the heuristic seeker. The performance was measured using the evaluation measure described in section 3.5 Network performance evaluation. Visually, no difference in tracing performance between the systems could be noted.

3.5 Network performance evaluation

In order to compare the systems, their performance is evaluated quantitatively: in each scenario, a score for the system's performance is calculated. The aim of the seeker is to catch the target; consequently, the score is based on two factors:

1) The number of steps needed to reach the target
2) A penalty for not catching the target

If the target was reached, the score is the number of iterative steps needed to reach it:

Score = Steps needed

If the target was not reached, a penalty is added. The penalty multiplier is selected so that the penalty for missing the target is 10 times higher than the penalty for additional steps needed to reach it. The high penalty for missing the target was selected because reaching the target is the primary goal of the seeker:

Score = Steps needed + 10 · distance from the target

After all 24 test cases, the individual scores are added up to form a cumulative score for the tested system. The selection of the best performing
system is then based purely on the cumulative score. Put shortly, a smaller score is better, and the system with the smallest cumulative score is selected for the later iterative rounds.

3.6 Tuning of network weights

A multilayer perceptron network is defined by the transfer functions of its layers, the network weights, and the biases. The sigmoid function described in section 3.2.1 Seeker inputs was used as the transfer function in both layers, and it was not modified during the iterations. The network weights and threshold values were the only varied quantities.

In the beginning of this work, different approaches to network tuning were tried. The first brute-force attempts used random network weight tuning. As no significant results were achieved, more sophisticated techniques had to be developed.

The goal is to minimize the score of the system, i.e., to maximize the efficiency of the seeker. The score of each system is evaluated after each round of simulations through all the test scenarios. As the system is otherwise fixed, its performance depends purely on the network weights and perceptron thresholds. Tuning can be divided into two phases: finding the variable to modify, and modifying that variable.

3.6.1 Searching the variable to modify

Altogether there are 16 parameters to vary, as noted in section 3.4 Selected neural network architecture. Consequently, the system performance can be viewed as a function of those 16 values:

f(x_1, x_2, ..., x_16)

When searching for a multidimensional optimum, the method of steepest descent is a fundamental procedure for minimizing a differentiable function of several variables: -∇f(x) / |∇f(x)| is the direction of steepest descent, minimizing the
value of the function (Bazaraa et al. 1993, 300). Unfortunately, in this case the neural network system is not differentiable analytically, so it has to be differentiated numerically. Each network variable is varied by a small amount Δx_i, and a simulation round is conducted in order to measure the difference to the previously best performing system. The differentiation is conducted for all 16 variables:

Δf(x_1, x_2, ..., x_16) / Δx_1, ..., Δf(x_1, x_2, ..., x_16) / Δx_16

It was decided, conservatively, to vary only one variable at a time, experimenting with different amounts of variation. The variable giving the deepest descent, i.e., the largest improvement, is selected as the variable to be modified; the modification of this variable is described in the next section.

After each simulation round it becomes clear in which direction (if any) each network variable should be modified in order to decrease the performance measure. Unfortunately, it is not clear how much those values should be varied. On the first round of simulations, the value Δx_i was defined to be 1% of each value x_i. Due to the somewhat long simulation times, on the second round of simulations this value was increased, so that the network variables could be altered considerably already during the first phase, when seeking the network variable to modify. On this second simulation round, the value Δx_i was defined to be 50% of each value x_i. This could lead to situations where none of the systems created using the varied weights and thresholds performs better than the original system, because too heavy modifications take the system further from the optimum than it was in the first place. In such cases, the problem was addressed by halving the relative variation, e.g., decreasing it from 50% to 25%. Significant savings in simulation time were reached, as seen in section 4.2.1 Quantitative performance.
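The search described above amounts to numerically probing each of the 16 parameters and picking the one whose perturbation gives the deepest descent. The Python sketch below is illustrative: the `evaluate` callback stands for a full 24-scenario simulation returning the cumulative score (smaller is better), and probing both directions explicitly is a slight generalization of the procedure in the text:

```python
def best_variable(params, evaluate, rel_step=0.01):
    """Numerically probe each parameter: perturb it by rel_step of its
    magnitude in both directions, re-evaluate the cumulative score, and
    return (index, direction) giving the largest score improvement.
    Returns None if no single perturbation improves the score."""
    base_score = evaluate(params)
    best = None
    best_drop = 0.0
    for i, value in enumerate(params):
        # Relative step; fall back to rel_step itself near zero.
        step = rel_step * abs(value) if value != 0 else rel_step
        for direction in (+1, -1):
            trial = list(params)
            trial[i] = value + direction * step
            drop = base_score - evaluate(trial)  # positive = improvement
            if drop > best_drop:
                best_drop = drop
                best = (i, direction)
    return best
```

For a toy score such as the sum of squared parameters, the probe correctly identifies the largest parameter and the direction toward zero as the deepest descent.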
3.6.2 Modifying the selected variable

Once the variable to be varied has been selected as described in the previous section, different amounts of variation are tested in order to improve the system performance. Taking into account the scope of this work, no overly sophisticated scheme was generated for this purpose. The variations can be divided into relative and absolute ones. Relative variations are fractions of the variable's own value, and absolute variations are multiples of the value Δx used to differentiate the system in section 3.6.1 Searching the variable to modify. The motivation for relative changes is to allow large changes when a variable has a large absolute value; the motivation for absolute changes is to allow a variable to change sign when its value is close to zero. For example, if the value Δx were 1, the relative variations were defined as 10% and 20%, and the direction of variation were positive, varying the value 100 would yield relative candidates such as 100 + 0.10 · 100 = 110 and 100 + 0.20 · 100 = 120, together with absolute candidates offset by multiples of Δx.

After modifying the given variable, each resulting system (differing from the others by only one variable) is evaluated through simulation, and the best performing system is selected. The iterations then continue by searching again for the variable giving the deepest descent, as defined in section 3.6.1 Searching the variable to modify.
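Generating the candidate variations for the selected variable can be sketched as follows; the particular relative fractions and absolute multiples are illustrative assumptions, as are the function and parameter names:

```python
def candidate_values(value, direction, delta,
                     rel_fractions=(0.10, 0.20),
                     abs_multiples=(1, 5, 10)):
    """Build candidate new values for the selected variable.
    Relative variations scale with the variable's own magnitude;
    absolute variations are multiples of the differentiation step
    `delta`, which lets a near-zero variable change sign."""
    candidates = [value + direction * f * abs(value) for f in rel_fractions]
    candidates += [value + direction * m * delta for m in abs_multiples]
    return candidates
```

Each candidate value defines one trial system; all trials are then scored through the 24 scenarios and the best one is kept.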
[Figure 9: Iterative change of network variables. The loop searches for the variable to modify using small differentiations Δx_i; if a better performing system is found, the selected variable is modified by larger amounts and the new best performing system is selected; otherwise the differentiations Δx_i are decreased.]

If even minimal changes to the network weights produce no improvement in the system performance, it can be assumed that the system is close to a local optimum of its performance, and further simulation with similarly small changes will not improve the situation. This state remains hypothetical, however, as reaching it would take a very long time computationally.

4 Results

This section takes a look at the results of this study. The quantitative estimates are based on the scoring algorithm; the qualitative analysis is based on visual inspection of the system performance.

4.1 Performance of the initial neural network

As the initial neural network was trained to imitate the actions of the heuristic algorithm, it was assumed that the performance of the initial neural network would be lower than the performance of the heuristic algorithm. This is because during the neural network training the weights were adjusted so that the sum squared error would be minimized. As such
an imitator is created, it is possible that its performance comes close to that of the initial system, but it is unlikely that the imitating system's performance would exceed the performance of the initial system. This initial assumption proved true in all executed test cases, in which a performance score was computed for the heuristic system.

The imitating neural networks were trained using the trainlm algorithm and the training data generated by the heuristic system. trainlm is a network training function that updates the weight and bias values according to Levenberg-Marquardt optimization (Demuth 1994); the training method was readily available in Matlab's Neural Network Toolbox. The performance of a trained network varied due to the randomized network initialization. Each time a neural network was trained to imitate the heuristic system, its performance was compared with the previously trained networks; if the newly trained network's performance exceeded that of the previous networks, it was selected as the new best performing network. The best imitating neural network system emerged after some 20 attempts.

In practice, the performance of the heuristic system and the imitating neural network system were identical, as the performance score of the heuristic system was less than 1% smaller than the performance score of the imitating neural network system.

4.2 Performance after iterative training

This section takes a look at the quantitative and qualitative performance of the system that emerged after consecutive rounds of network variable adjustment.

4.2.1 Quantitative performance

The neural network based system was initially weaker in performance than the heuristic algorithm that it imitated. The initial neural network based system
outperformed the heuristic system in 3 out of 24 test cases where it was set against the seeker based on the heuristic algorithm; the cumulative total score of the neural network based system was slightly worse than that of the heuristic system.

The simulations were done twice. On the first round, a differentiation of 1% was used when searching for the variable to modify. On the second round, in order to accelerate the search, a differentiation of 50% was used. The procedure is described more precisely in section 3.6.1 Searching the variable to modify.

On the first round, with the smaller 1% differentiation, after 50 evolutionary rounds the neural network based system could outperform the heuristic system in 14 out of 24 test scenarios. The final cumulative total score of the neural network based system after iterative training outperformed the score of the heuristic system. Figure 10 illustrates the development of the system performance.

[Figure 10: System performance progress]

On the second simulation round, the larger differentiation was used in order to accelerate the search. After 50 evolutionary rounds, the neural network based system could outperform the heuristic system in 15 out of 24 test scenarios. The final
cumulative total score of the neural network based system after iterative training again outperformed the score of the heuristic system. Thus the performance increase over the initial neural network based system was 6.2%, i.e., a 6.2% decrease in the system performance measure. Figure 11 illustrates the development of the system performance measure.

[Figure 11: System performance progress]

4.2.2 Qualitative performance

The behavior of the seeker did not become dramatically different, but it became refined and somewhat smoother than the behavior of the heuristic seeker. As the neural network based system evolved, it started to anticipate the movement of the target more smoothly, incrementally improving its performance.

5 Conclusions

This work demonstrated that evolutionary variation of neural network variables can be used for improving the performance of heuristic rule-based systems. The performance increase was somewhat slight, as indicated in section 4.2.1 Quantitative performance. On the other hand, the scope of this work is very limited, and further study would be needed to determine the feasibility of this approach.

Possible applications could be found in situations where one has a conception of the needed system functionality, but profound knowledge about the needed
system functionality is missing. In such a case the system behavior could be imitated and then refined using the approach described in this work.

Problems would probably be faced as the complexity of the modeled systems increased. A more complex neural network architecture would be needed to model the functionality of the system, which would in turn increase the number of network parameters considerably. The increased number of network weights would certainly increase the time needed for incrementally seeking better performing systems. On the other hand, when training more complex networks, more sophisticated algorithms could be used to accelerate the iterative search for better performing systems.

Another limitation is that this approach is not capable of introducing major changes in system behavior. It would be difficult for the seeker to radically change its actions when the weights of the network evolve slowly; bigger simultaneous changes of several variables could be needed when looking for radical changes in the seeker behavior. An example is a situation in which the seeker should anticipate the movement of a target passing the seeker from behind: instead of turning left all the way and then starting to track the target, it would be better if the seeker could anticipate the target's movement, turn right, and cut in front of the target. This situation is visualized in figure 12.

[Figure 12: Major change in seeker behavior, comparing an early catch (anticipating the target) with a late catch (following it); the seeker and the initial target location are marked]
6 References

6.1 Literary references

Bazaraa, Mokhtar, Sherali, Hanif and Shetty, C.M. 1993. Nonlinear Programming: Theory and Algorithms. John Wiley & Sons, Inc., New York.

Cliff, Dave and Miller, Geoffrey F. 1996. Co-evolution of Pursuit and Evasion II: Simulation Methods and Results. In: From Animals to Animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior.

Demuth, Howard and Beale, Mark 1994. Neural Network Toolbox User's Guide. Third printing. The MathWorks, Inc., Massachusetts.

Haykin, Simon 1994. Neural Networks: A Comprehensive Foundation. Macmillan College Publishing Company, Inc., New York.

Schalkoff, Robert J. 1992. Pattern Recognition: Statistical, Structural and Neural Approaches. John Wiley & Sons, Inc., New York.

Teirilä, Juha 1999. Ihmishahmon käänteiskinemaattisen ongelman ratkaiseminen reaaliajassa (Solving the inverse kinematics problem of a human figure in real time). Master's thesis, Helsinki University of Technology.

6.2 WWW-references

The MathWorks Inc. 2002. Matlab overview.

Wilke, Jens 2002. Homepage of Jens Wilke.
More informationWeek 3: Perceptron and Multi-layer Perceptron
Week 3: Perceptron and Multi-layer Perceptron Phong Le, Willem Zuidema November 12, 2013 Last week we studied two famous biological neuron models, Fitzhugh-Nagumo model and Izhikevich model. This week,
More informationParticle Swarm Optimization applied to Pattern Recognition
Particle Swarm Optimization applied to Pattern Recognition by Abel Mengistu Advisor: Dr. Raheel Ahmad CS Senior Research 2011 Manchester College May, 2011-1 - Table of Contents Introduction... - 3 - Objectives...
More informationResearch on Evaluation Method of Product Style Semantics Based on Neural Network
Research Journal of Applied Sciences, Engineering and Technology 6(23): 4330-4335, 2013 ISSN: 2040-7459; e-issn: 2040-7467 Maxwell Scientific Organization, 2013 Submitted: September 28, 2012 Accepted:
More informationRandom Search Report An objective look at random search performance for 4 problem sets
Random Search Report An objective look at random search performance for 4 problem sets Dudon Wai Georgia Institute of Technology CS 7641: Machine Learning Atlanta, GA dwai3@gatech.edu Abstract: This report
More informationNeural Network Learning. Today s Lecture. Continuation of Neural Networks. Artificial Neural Networks. Lecture 24: Learning 3. Victor R.
Lecture 24: Learning 3 Victor R. Lesser CMPSCI 683 Fall 2010 Today s Lecture Continuation of Neural Networks Artificial Neural Networks Compose of nodes/units connected by links Each link has a numeric
More information11/14/2010 Intelligent Systems and Soft Computing 1
Lecture 7 Artificial neural networks: Supervised learning Introduction, or how the brain works The neuron as a simple computing element The perceptron Multilayer neural networks Accelerated learning in
More informationReification of Boolean Logic
Chapter Reification of Boolean Logic Exercises. (a) Design a feedforward network to divide the black dots from other corners with fewest neurons and layers. Please specify the values of weights and thresholds.
More informationCS6220: DATA MINING TECHNIQUES
CS6220: DATA MINING TECHNIQUES Image Data: Classification via Neural Networks Instructor: Yizhou Sun yzsun@ccs.neu.edu November 19, 2015 Methods to Learn Classification Clustering Frequent Pattern Mining
More informationA Neural Network Model Of Insurance Customer Ratings
A Neural Network Model Of Insurance Customer Ratings Jan Jantzen 1 Abstract Given a set of data on customers the engineering problem in this study is to model the data and classify customers
More informationNetwork Routing Protocol using Genetic Algorithms
International Journal of Electrical & Computer Sciences IJECS-IJENS Vol:0 No:02 40 Network Routing Protocol using Genetic Algorithms Gihan Nagib and Wahied G. Ali Abstract This paper aims to develop a
More informationArtificial Neural Network-Based Prediction of Human Posture
Artificial Neural Network-Based Prediction of Human Posture Abstract The use of an artificial neural network (ANN) in many practical complicated problems encourages its implementation in the digital human
More informationPERFORMANCE COMPARISON OF BACK PROPAGATION AND RADIAL BASIS FUNCTION WITH MOVING AVERAGE FILTERING AND WAVELET DENOISING ON FETAL ECG EXTRACTION
I J C T A, 9(28) 2016, pp. 431-437 International Science Press PERFORMANCE COMPARISON OF BACK PROPAGATION AND RADIAL BASIS FUNCTION WITH MOVING AVERAGE FILTERING AND WAVELET DENOISING ON FETAL ECG EXTRACTION
More informationTHE preceding chapters were all devoted to the analysis of images and signals which
Chapter 5 Segmentation of Color, Texture, and Orientation Images THE preceding chapters were all devoted to the analysis of images and signals which take values in IR. It is often necessary, however, to
More informationOptimization Methods for Machine Learning (OMML)
Optimization Methods for Machine Learning (OMML) 2nd lecture Prof. L. Palagi References: 1. Bishop Pattern Recognition and Machine Learning, Springer, 2006 (Chap 1) 2. V. Cherlassky, F. Mulier - Learning
More informationFunction approximation using RBF network. 10 basis functions and 25 data points.
1 Function approximation using RBF network F (x j ) = m 1 w i ϕ( x j t i ) i=1 j = 1... N, m 1 = 10, N = 25 10 basis functions and 25 data points. Basis function centers are plotted with circles and data
More informationResidual Advantage Learning Applied to a Differential Game
Presented at the International Conference on Neural Networks (ICNN 96), Washington DC, 2-6 June 1996. Residual Advantage Learning Applied to a Differential Game Mance E. Harmon Wright Laboratory WL/AAAT
More informationReview on Methods of Selecting Number of Hidden Nodes in Artificial Neural Network
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 11, November 2014,
More informationAdaptive Robotics - Final Report Extending Q-Learning to Infinite Spaces
Adaptive Robotics - Final Report Extending Q-Learning to Infinite Spaces Eric Christiansen Michael Gorbach May 13, 2008 Abstract One of the drawbacks of standard reinforcement learning techniques is that
More informationInternational Research Journal of Computer Science (IRJCS) ISSN: Issue 09, Volume 4 (September 2017)
APPLICATION OF LRN AND BPNN USING TEMPORAL BACKPROPAGATION LEARNING FOR PREDICTION OF DISPLACEMENT Talvinder Singh, Munish Kumar C-DAC, Noida, India talvinder.grewaal@gmail.com,munishkumar@cdac.in Manuscript
More informationNeural Networks. CE-725: Statistical Pattern Recognition Sharif University of Technology Spring Soleymani
Neural Networks CE-725: Statistical Pattern Recognition Sharif University of Technology Spring 2013 Soleymani Outline Biological and artificial neural networks Feed-forward neural networks Single layer
More informationDeep Learning. Practical introduction with Keras JORDI TORRES 27/05/2018. Chapter 3 JORDI TORRES
Deep Learning Practical introduction with Keras Chapter 3 27/05/2018 Neuron A neural network is formed by neurons connected to each other; in turn, each connection of one neural network is associated
More information2. Neural network basics
2. Neural network basics Next commonalities among different neural networks are discussed in order to get started and show which structural parts or concepts appear in almost all networks. It is presented
More informationModule 4 : Solving Linear Algebraic Equations Section 11 Appendix C: Steepest Descent / Gradient Search Method
Module 4 : Solving Linear Algebraic Equations Section 11 Appendix C: Steepest Descent / Gradient Search Method 11 Appendix C: Steepest Descent / Gradient Search Method In the module on Problem Discretization
More informationArtificial Intelligence Introduction Handwriting Recognition Kadir Eren Unal ( ), Jakob Heyder ( )
Structure: 1. Introduction 2. Problem 3. Neural network approach a. Architecture b. Phases of CNN c. Results 4. HTM approach a. Architecture b. Setup c. Results 5. Conclusion 1.) Introduction Artificial
More informationALGORITHMS FOR INITIALIZATION OF NEURAL NETWORK WEIGHTS
ALGORITHMS FOR INITIALIZATION OF NEURAL NETWORK WEIGHTS A. Pavelka and A. Procházka Institute of Chemical Technology, Department of Computing and Control Engineering Abstract The paper is devoted to the
More informationIn this assignment, we investigated the use of neural networks for supervised classification
Paul Couchman Fabien Imbault Ronan Tigreat Gorka Urchegui Tellechea Classification assignment (group 6) Image processing MSc Embedded Systems March 2003 Classification includes a broad range of decision-theoric
More informationQuery Learning Based on Boundary Search and Gradient Computation of Trained Multilayer Perceptrons*
J.N. Hwang, J.J. Choi, S. Oh, R.J. Marks II, "Query learning based on boundary search and gradient computation of trained multilayer perceptrons", Proceedings of the International Joint Conference on Neural
More informationThe Fly & Anti-Fly Missile
The Fly & Anti-Fly Missile Rick Tilley Florida State University (USA) rt05c@my.fsu.edu Abstract Linear Regression with Gradient Descent are used in many machine learning applications. The algorithms are
More informationAn Algorithm For Training Multilayer Perceptron (MLP) For Image Reconstruction Using Neural Network Without Overfitting.
An Algorithm For Training Multilayer Perceptron (MLP) For Image Reconstruction Using Neural Network Without Overfitting. Mohammad Mahmudul Alam Mia, Shovasis Kumar Biswas, Monalisa Chowdhury Urmi, Abubakar
More informationLinear Separability. Linear Separability. Capabilities of Threshold Neurons. Capabilities of Threshold Neurons. Capabilities of Threshold Neurons
Linear Separability Input space in the two-dimensional case (n = ): - - - - - - w =, w =, = - - - - - - w = -, w =, = - - - - - - w = -, w =, = Linear Separability So by varying the weights and the threshold,
More informationCHAPTER 8 COMPOUND CHARACTER RECOGNITION USING VARIOUS MODELS
CHAPTER 8 COMPOUND CHARACTER RECOGNITION USING VARIOUS MODELS 8.1 Introduction The recognition systems developed so far were for simple characters comprising of consonants and vowels. But there is one
More informationChapter 5 Components for Evolution of Modular Artificial Neural Networks
Chapter 5 Components for Evolution of Modular Artificial Neural Networks 5.1 Introduction In this chapter, the methods and components used for modular evolution of Artificial Neural Networks (ANNs) are
More informationFundamentals of Operations Research. Prof. G. Srinivasan. Department of Management Studies. Indian Institute of Technology, Madras. Lecture No.
Fundamentals of Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture No. # 13 Transportation Problem, Methods for Initial Basic Feasible
More informationADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL
ADAPTIVE TILE CODING METHODS FOR THE GENERALIZATION OF VALUE FUNCTIONS IN THE RL STATE SPACE A THESIS SUBMITTED TO THE FACULTY OF THE GRADUATE SCHOOL OF THE UNIVERSITY OF MINNESOTA BY BHARAT SIGINAM IN
More informationData Mining. Neural Networks
Data Mining Neural Networks Goals for this Unit Basic understanding of Neural Networks and how they work Ability to use Neural Networks to solve real problems Understand when neural networks may be most
More informationData Mining Chapter 8: Search and Optimization Methods Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University
Data Mining Chapter 8: Search and Optimization Methods Fall 2011 Ming Li Department of Computer Science and Technology Nanjing University Search & Optimization Search and Optimization method deals with
More informationAssignment 2. Classification and Regression using Linear Networks, Multilayer Perceptron Networks, and Radial Basis Functions
ENEE 739Q: STATISTICAL AND NEURAL PATTERN RECOGNITION Spring 2002 Assignment 2 Classification and Regression using Linear Networks, Multilayer Perceptron Networks, and Radial Basis Functions Aravind Sundaresan
More informationA Shattered Perfection: Crafting a Virtual Sculpture
A Shattered Perfection: Crafting a Virtual Sculpture Robert J. Krawczyk College of Architecture, Illinois Institute of Technology, USA krawczyk@iit.edu Abstract In the development of a digital sculpture
More informationAero-engine PID parameters Optimization based on Adaptive Genetic Algorithm. Yinling Wang, Huacong Li
International Conference on Applied Science and Engineering Innovation (ASEI 215) Aero-engine PID parameters Optimization based on Adaptive Genetic Algorithm Yinling Wang, Huacong Li School of Power and
More informationMetaheuristic Optimization with Evolver, Genocop and OptQuest
Metaheuristic Optimization with Evolver, Genocop and OptQuest MANUEL LAGUNA Graduate School of Business Administration University of Colorado, Boulder, CO 80309-0419 Manuel.Laguna@Colorado.EDU Last revision:
More informationCOMPARISION OF REGRESSION WITH NEURAL NETWORK MODEL FOR THE VARIATION OF VANISHING POINT WITH VIEW ANGLE IN DEPTH ESTIMATION WITH VARYING BRIGHTNESS
International Journal of Advanced Trends in Computer Science and Engineering, Vol.2, No.1, Pages : 171-177 (2013) COMPARISION OF REGRESSION WITH NEURAL NETWORK MODEL FOR THE VARIATION OF VANISHING POINT
More informationAPPLICATIONS OF INTELLIGENT HYBRID SYSTEMS IN MATLAB
APPLICATIONS OF INTELLIGENT HYBRID SYSTEMS IN MATLAB Z. Dideková, S. Kajan Institute of Control and Industrial Informatics, Faculty of Electrical Engineering and Information Technology, Slovak University
More informationLECTURE NOTES Professor Anita Wasilewska NEURAL NETWORKS
LECTURE NOTES Professor Anita Wasilewska NEURAL NETWORKS Neural Networks Classifier Introduction INPUT: classification data, i.e. it contains an classification (class) attribute. WE also say that the class
More informationMeta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic Algorithm and Particle Swarm Optimization
2017 2 nd International Electrical Engineering Conference (IEEC 2017) May. 19 th -20 th, 2017 at IEP Centre, Karachi, Pakistan Meta- Heuristic based Optimization Algorithms: A Comparative Study of Genetic
More information.. Spring 2017 CSC 566 Advanced Data Mining Alexander Dekhtyar..
.. Spring 2017 CSC 566 Advanced Data Mining Alexander Dekhtyar.. Machine Learning: Support Vector Machines: Linear Kernel Support Vector Machines Extending Perceptron Classifiers. There are two ways to
More informationNeural Network Weight Selection Using Genetic Algorithms
Neural Network Weight Selection Using Genetic Algorithms David Montana presented by: Carl Fink, Hongyi Chen, Jack Cheng, Xinglong Li, Bruce Lin, Chongjie Zhang April 12, 2005 1 Neural Networks Neural networks
More informationTime Series prediction with Feed-Forward Neural Networks -A Beginners Guide and Tutorial for Neuroph. Laura E. Carter-Greaves
http://neuroph.sourceforge.net 1 Introduction Time Series prediction with Feed-Forward Neural Networks -A Beginners Guide and Tutorial for Neuroph Laura E. Carter-Greaves Neural networks have been applied
More informationKINEMATIC ANALYSIS OF ADEPT VIPER USING NEURAL NETWORK
Proceedings of the National Conference on Trends and Advances in Mechanical Engineering, YMCA Institute of Engineering, Faridabad, Haryana., Dec 9-10, 2006. KINEMATIC ANALYSIS OF ADEPT VIPER USING NEURAL
More informationDesign Automation MAE 2250
Design Automation MAE 2250 Team not functioning? 1. Ask the TA to call a counselling meeting Include head TAs Jeff/Katie 2. Assign clear goals and responsibilities Deliverables and dates for each member
More informationNEURAL NETWORK VISUALIZATION
Neural Network Visualization 465 NEURAL NETWORK VISUALIZATION Jakub Wejchert Gerald Tesauro IB M Research T.J. Watson Research Center Yorktown Heights NY 10598 ABSTRACT We have developed graphics to visualize
More informationImproving Trajectory Tracking Performance of Robotic Manipulator Using Neural Online Torque Compensator
JOURNAL OF ENGINEERING RESEARCH AND TECHNOLOGY, VOLUME 1, ISSUE 2, JUNE 2014 Improving Trajectory Tracking Performance of Robotic Manipulator Using Neural Online Torque Compensator Mahmoud M. Al Ashi 1,
More informationMachine Learning 13. week
Machine Learning 13. week Deep Learning Convolutional Neural Network Recurrent Neural Network 1 Why Deep Learning is so Popular? 1. Increase in the amount of data Thanks to the Internet, huge amount of
More informationInternational Journal of Electrical and Computer Engineering 4: Application of Neural Network in User Authentication for Smart Home System
Application of Neural Network in User Authentication for Smart Home System A. Joseph, D.B.L. Bong, and D.A.A. Mat Abstract Security has been an important issue and concern in the smart home systems. Smart
More informationYuki Osada Andrew Cannon
Yuki Osada Andrew Cannon 1 Humans are an intelligent species One feature is the ability to learn The ability to learn comes down to the brain The brain learns from experience Research shows that the brain
More informationA Genetic Algorithm for Graph Matching using Graph Node Characteristics 1 2
Chapter 5 A Genetic Algorithm for Graph Matching using Graph Node Characteristics 1 2 Graph Matching has attracted the exploration of applying new computing paradigms because of the large number of applications
More informationGenetic Algorithm for Seismic Velocity Picking
Proceedings of International Joint Conference on Neural Networks, Dallas, Texas, USA, August 4-9, 2013 Genetic Algorithm for Seismic Velocity Picking Kou-Yuan Huang, Kai-Ju Chen, and Jia-Rong Yang Abstract
More informationRecall: Basic Ray Tracer
1 Recall: Ray Tracing Generate an image by backwards tracing the path of light through pixels on an image plane Simulate the interaction of light with objects Recall: Basic Ray Tracer Trace a primary ray
More informationThe Chase Problem (Part 1) David C. Arney
The Chase Problem (Part 1) David C. Arney We build systems like the Wright brothers built airplanes build the whole thing, push it off a cliff, let it crash, and start all over again. --- R. M. Graham
More informationAccelerating the convergence speed of neural networks learning methods using least squares
Bruges (Belgium), 23-25 April 2003, d-side publi, ISBN 2-930307-03-X, pp 255-260 Accelerating the convergence speed of neural networks learning methods using least squares Oscar Fontenla-Romero 1, Deniz
More information6. NEURAL NETWORK BASED PATH PLANNING ALGORITHM 6.1 INTRODUCTION
6 NEURAL NETWORK BASED PATH PLANNING ALGORITHM 61 INTRODUCTION In previous chapters path planning algorithms such as trigonometry based path planning algorithm and direction based path planning algorithm
More informationThis blog addresses the question: how do we determine the intersection of two circles in the Cartesian plane?
Intersecting Circles This blog addresses the question: how do we determine the intersection of two circles in the Cartesian plane? This is a problem that a programmer might have to solve, for example,
More informationInternational Journal of Digital Application & Contemporary research Website: (Volume 1, Issue 7, February 2013)
Performance Analysis of GA and PSO over Economic Load Dispatch Problem Sakshi Rajpoot sakshirajpoot1988@gmail.com Dr. Sandeep Bhongade sandeepbhongade@rediffmail.com Abstract Economic Load dispatch problem
More informationADAPTATION OF REPRESENTATION IN GP
1 ADAPTATION OF REPRESENTATION IN GP CEZARY Z. JANIKOW University of Missouri St. Louis Department of Mathematics and Computer Science St Louis, Missouri RAHUL A DESHPANDE University of Missouri St. Louis
More informationA neural network that classifies glass either as window or non-window depending on the glass chemistry.
A neural network that classifies glass either as window or non-window depending on the glass chemistry. Djaber Maouche Department of Electrical Electronic Engineering Cukurova University Adana, Turkey
More informationNeural Networks (Overview) Prof. Richard Zanibbi
Neural Networks (Overview) Prof. Richard Zanibbi Inspired by Biology Introduction But as used in pattern recognition research, have little relation with real neural systems (studied in neurology and neuroscience)
More informationCHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION
CHAPTER 6 MODIFIED FUZZY TECHNIQUES BASED IMAGE SEGMENTATION 6.1 INTRODUCTION Fuzzy logic based computational techniques are becoming increasingly important in the medical image analysis arena. The significant
More informationUse of multilayer perceptrons as Inverse Kinematics solvers
Use of multilayer perceptrons as Inverse Kinematics solvers Nathan Mitchell University of Wisconsin, Madison December 14, 2010 1 of 12 Introduction 1. Scope 2. Background 3. Methodology 4. Expected Results
More informationGauss-Sigmoid Neural Network
Gauss-Sigmoid Neural Network Katsunari SHIBATA and Koji ITO Tokyo Institute of Technology, Yokohama, JAPAN shibata@ito.dis.titech.ac.jp Abstract- Recently RBF(Radial Basis Function)-based networks have
More informationCS229 Final Project: Predicting Expected Response Times
CS229 Final Project: Predicting Expected Email Response Times Laura Cruz-Albrecht (lcruzalb), Kevin Khieu (kkhieu) December 15, 2017 1 Introduction Each day, countless emails are sent out, yet the time
More informationCHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM
20 CHAPTER 2 CONVENTIONAL AND NON-CONVENTIONAL TECHNIQUES TO SOLVE ORPD PROBLEM 2.1 CLASSIFICATION OF CONVENTIONAL TECHNIQUES Classical optimization methods can be classified into two distinct groups:
More informationMorphogenesis. Simulation Results
Morphogenesis Simulation Results This document contains the results of the simulations designed to investigate the regeneration strength of the computational model of the planarium. Specific portions of
More informationSurfaces and Partial Derivatives
Surfaces and James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 15, 2017 Outline 1 2 Tangent Planes Let s go back to our simple surface
More informationLearning and Generalization in Single Layer Perceptrons
Learning and Generalization in Single Layer Perceptrons Neural Computation : Lecture 4 John A. Bullinaria, 2015 1. What Can Perceptrons do? 2. Decision Boundaries The Two Dimensional Case 3. Decision Boundaries
More informationA GENTLE INTRODUCTION TO THE BASIC CONCEPTS OF SHAPE SPACE AND SHAPE STATISTICS
A GENTLE INTRODUCTION TO THE BASIC CONCEPTS OF SHAPE SPACE AND SHAPE STATISTICS HEMANT D. TAGARE. Introduction. Shape is a prominent visual feature in many images. Unfortunately, the mathematical theory
More informationMachine Learning Classifiers and Boosting
Machine Learning Classifiers and Boosting Reading Ch 18.6-18.12, 20.1-20.3.2 Outline Different types of learning problems Different types of learning algorithms Supervised learning Decision trees Naïve
More informationThree-Dimensional Off-Line Path Planning for Unmanned Aerial Vehicle Using Modified Particle Swarm Optimization
Three-Dimensional Off-Line Path Planning for Unmanned Aerial Vehicle Using Modified Particle Swarm Optimization Lana Dalawr Jalal Abstract This paper addresses the problem of offline path planning for
More informationThe Mathematics Behind Neural Networks
The Mathematics Behind Neural Networks Pattern Recognition and Machine Learning by Christopher M. Bishop Student: Shivam Agrawal Mentor: Nathaniel Monson Courtesy of xkcd.com The Black Box Training the
More informationOpening the Black Box Data Driven Visualizaion of Neural N
Opening the Black Box Data Driven Visualizaion of Neural Networks September 20, 2006 Aritificial Neural Networks Limitations of ANNs Use of Visualization (ANNs) mimic the processes found in biological
More informationProceedings of the 2016 International Conference on Industrial Engineering and Operations Management Detroit, Michigan, USA, September 23-25, 2016
Neural Network Viscosity Models for Multi-Component Liquid Mixtures Adel Elneihoum, Hesham Alhumade, Ibrahim Alhajri, Walid El Garwi, Ali Elkamel Department of Chemical Engineering, University of Waterloo
More informationGraphical Approach to Solve the Transcendental Equations Salim Akhtar 1 Ms. Manisha Dawra 2
Graphical Approach to Solve the Transcendental Equations Salim Akhtar 1 Ms. Manisha Dawra 2 1 M.Tech. Scholar 2 Assistant Professor 1,2 Department of Computer Science & Engineering, 1,2 Al-Falah School
More informationAutomatic basis selection for RBF networks using Stein s unbiased risk estimator
Automatic basis selection for RBF networks using Stein s unbiased risk estimator Ali Ghodsi School of omputer Science University of Waterloo University Avenue West NL G anada Email: aghodsib@cs.uwaterloo.ca
More informationResearch on time optimal trajectory planning of 7-DOF manipulator based on genetic algorithm
Acta Technica 61, No. 4A/2016, 189 200 c 2017 Institute of Thermomechanics CAS, v.v.i. Research on time optimal trajectory planning of 7-DOF manipulator based on genetic algorithm Jianrong Bu 1, Junyan
More informationSuppose you have a problem You don t know how to solve it What can you do? Can you use a computer to somehow find a solution for you?
Gurjit Randhawa Suppose you have a problem You don t know how to solve it What can you do? Can you use a computer to somehow find a solution for you? This would be nice! Can it be done? A blind generate
More informationLofting 3D Shapes. Abstract
Lofting 3D Shapes Robby Prescott Department of Computer Science University of Wisconsin Eau Claire Eau Claire, Wisconsin 54701 robprescott715@gmail.com Chris Johnson Department of Computer Science University
More informationPredicting User Ratings Using Status Models on Amazon.com
Predicting User Ratings Using Status Models on Amazon.com Borui Wang Stanford University borui@stanford.edu Guan (Bell) Wang Stanford University guanw@stanford.edu Group 19 Zhemin Li Stanford University
More informationSEMANTIC COMPUTING. Lecture 8: Introduction to Deep Learning. TU Dresden, 7 December Dagmar Gromann International Center For Computational Logic
SEMANTIC COMPUTING Lecture 8: Introduction to Deep Learning Dagmar Gromann International Center For Computational Logic TU Dresden, 7 December 2018 Overview Introduction Deep Learning General Neural Networks
More information