A Hybrid Maximum Error Algorithm with Neighborhood Training for CMAC

Selahattin Sayil, Department of Electrical Engineering, Pamukkale University, Kinikli-Denizli, TURKEY
Kwang Y. Lee, Fellow IEEE, Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16803, U.S.A.

Abstract: In this paper, several possible algorithms and training methods for the CMAC network are analyzed thoroughly. Improvements are then examined, and a hybrid approach is developed for the Maximum Error Algorithm by using Neighborhood Training for the initial training period. The technique yields faster initial convergence, which is very important for many control applications. The proposed hybrid approach is demonstrated on an inverse kinematics problem of a two-link robot arm.

Index Terms: CMAC algorithms, Maximum Error Algorithm, Neighborhood Training.

I. INTRODUCTION

The CMAC (Cerebellar Model Articulation Controller) is a mathematical formalism developed by Albus to model the information processing characteristics of the cerebellum [1],[2]. As a controller, the CMAC computes control values by referring to a memory look-up table in which those control values are stored. The memory table stores the relationship between input and output, i.e., the control function. In comparison to other neural networks, the CMAC has the advantage of very fast learning, and it has the unique property of quickly training certain areas of memory without affecting the whole memory structure. The convergence of a CMAC depends on how the input training points are chosen. It has been shown that CMAC learning always converges to some arbitrary accuracy on any set of training data [3],[4]. The CMAC's advantage in training speed is very important in real-time applications such as load forecasting [5]. The same benefits may be gained in adaptive control systems [6].

The performance of a CMAC neural network depends on the training algorithm and the selection of input points. Papers have been published that explain CMAC algorithms; however, little work has been done to improve the existing algorithms [16]. In this paper, using computational results and algorithm properties, the existing algorithms for CMAC are first compared. Improvements are then examined and a "Hybrid Algorithm" approach is developed. In this method, a CMAC network is first trained with the Neighborhood Training Algorithm and then trained with the Maximum Error Algorithm for fine-tuning. With the hybrid approach, faster initial convergence is achieved. The proposed "Hybrid Algorithm" approach is also applied to solve the inverse kinematics problem of a two-link robot arm, and successful results are obtained. The new algorithm may reduce the training time and increase the initial learning rate, which is very important for control applications.

The paper is organized as follows: a brief description of CMAC is given in Section II. Various CMAC algorithms are reviewed and their performances compared in Section III, and a hybrid of the Maximum Error Algorithm and Neighborhood Training is developed in Section IV. The proposed hybrid algorithm is demonstrated in Section V on an inverse kinematics problem of a two-link robot arm, and conclusions are drawn in Section VI.

II. CMAC

In CMAC, learning is iterative: a 'teacher' supplies information about the desired function to be learned, and the CMAC adapts itself using this feedback information (Figure 1).
This is called supervised learning. The input selects the active weights of the CMAC depending on its value. The active weights are then summed and averaged to calculate an output. This output is compared with the desired output, and the CMAC uses the difference to adjust its weights to obtain the desired response.

Figure 1. Supervised learning in the CMAC structure.

Albus based the CMAC on the perceptron by Rosenblatt and on the mappings [7]:

S -> A -> P    (1)

where S is the sensory input vector, A is the binary association matrix, and P is the output response. The first mapping is a non-linear transformation that maps the input variable s into a binary vector that represents the location of the weights chosen by the corresponding input. The generalization factor, ρ, is a design parameter and determines the number of active cells with "ones" in the association vector. In the second mapping of the CMAC, the output value is formed by first summing the weights of all the cells in the association vector which have been excited by a particular input and then taking their average.

In the training phase, the weights are adjusted in such a way that the output of the CMAC approaches the desired output. Training of the weights takes place in a supervised learning environment. Given the k-th training sample, the error between the desired output d(k) and the calculated output r(k) is e(k) = d(k) - r(k). This error is then used as a correction to each of the memory cells excited by the k-th input. For each excited memory element, the weight correction is ΔW = γ e(k), where γ is the learning factor. Miller identifies the CMAC training algorithm as the well-known Least Mean Squares (LMS) algorithm [8]:

W(k) = W(k-1) + (γ/ρ) {d(k) - a^T(k) W(k-1)} a(k)    (2)

where a(k) is the binary association vector and ρ is the generalization parameter.

III. COMPARISON OF CMAC TRAINING ALGORITHMS

A. Selection of Training Input Points

Parks and Militzer (1992) discuss Cyclic and Random Training methods for the CMAC algorithms. In Cyclic (Sequential) Training, a cycle is defined over the input training points, and during training this cycle is repeated until a desired performance is reached. If the training points are in the same neighborhood as the previous input points, good output convergence may not be obtained. The Random Training method can be used to prevent cross-talk (interference) between the input training points. A pseudo-random number generator can be used to generate uniformly distributed random numbers at each training session. In this manner, learning interference is greatly reduced because repetitive learning in the same neighborhood is avoided.

B. Comparison of Training Algorithms

The following algorithms were investigated:

1. Albus Instantaneous Learning (Nonbatch) Algorithm
2. Batch Learning Algorithm
3. Maximum Error Algorithm
4. Gram-Schmidt Procedure
5. Partially Optimized Step Length Algorithm
6. Moving Average Algorithm

In CMAC, weight updates can be made after each presentation (Non-batch Algorithm) or after a batch of presentations (Batch Algorithm). The Instantaneous Error Correction Algorithm developed by Albus updates the network weight vector after each training pair is presented to the network [2]; a minimal sketch of this update and of the CMAC mapping is given below. The rate of convergence of the algorithm depends directly on the orthogonality (no interference between patterns) of the training patterns [9]. The Gram-Schmidt Procedure is a learning algorithm that produces weight changes perpendicular to the previous L weight changes (L is a user-defined parameter), so that learning is orthogonal over the last L training examples [9],[10]. The Moving Average Algorithm uses a counter vector c to indicate how often a weight has been updated, and the update is directed toward the less frequently changed weight elements for a particular input [10].
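The following Python sketch illustrates one reading of the S -> A -> P mapping and of the non-batch Albus/LMS update in (2) for a one-dimensional input. It is only an illustration under stated assumptions, not the authors' implementation: the class name, the offset-tiling address map, and the default parameters are assumptions introduced here.

import numpy as np

class BinaryCMAC:
    """Minimal 1-D binary CMAC: rho overlapping tilings of width rho over the input range."""

    def __init__(self, x_min, x_max, n_bins, rho, gamma=1.0):
        self.x_min = x_min
        self.res = (x_max - x_min) / (n_bins - 1)    # input resolution r
        self.rho = rho                               # generalization factor
        self.gamma = gamma                           # learning factor
        self.w = np.zeros((rho, n_bins // rho + 2))  # one weight row per tiling layer

    def _active_cells(self, x):
        """S -> A mapping: the rho cells excited by input x (offset tilings)."""
        q = int(round((x - self.x_min) / self.res))  # quantized input
        return [(layer, (q + layer) // self.rho) for layer in range(self.rho)]

    def output(self, x):
        """A -> P mapping: average of the rho active weights, r(k)."""
        return sum(self.w[l, c] for l, c in self._active_cells(x)) / self.rho

    def train_albus(self, x, d):
        """Non-batch Albus update: move every excited weight by gamma * e(k).
        With the averaged output above, this matches the LMS form of Eq. (2)."""
        e = d - self.output(x)
        for l, c in self._active_cells(x):
            self.w[l, c] += self.gamma * e
        return e

For Case A below, a network covering -180 <= x <= 180 with 361 bins and ρ = 10 would be created as BinaryCMAC(-180, 180, 361, rho=10). With gamma = 1, a single presentation drives the output error at the presented point to zero, which is the setting used later for Neighborhood Training.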
The Maximum Error Training Algorithm, developed by Parks and Militzer [10], is based on the algorithm proposed by Albus [2]. The algorithm considers L training input points, consisting of the current training input point and the L-1 past input points, and selects the one with the largest output error (a code sketch of this selection step is given below). The Partially Optimized Step Length Algorithm tries to minimize the sum of errors over the last L training patterns rather than minimizing the squared error for one input. The details of these algorithms can be found in [1]-[2] and [9]-[11].

In the comparison, the computational results and algorithm properties have been utilized. The training is performed either in a cyclic or in a random manner. The following two functions are used as target functions for generating the training data:

Case A) An arbitrary mathematical function f(x), defined for -180 <= x <= 180:

f(x) = 2 sin(x) - (1/2) sin(2x) + (1/3) sin(3x) - cos(x) - cos(2x) + cos(3x)    (3)

Case B) A composite exponential function g(x) of 3x/(N-1), defined for 0 <= x <= 360, which has a discontinuous first derivative at x = (N-1)/3, where N = 361.    (4)

Desirable properties of learning algorithms are fast initial learning and good long-term convergence. The computational time of each algorithm is also taken into consideration. These criteria formed the basis for comparison.
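As a hedged illustration of the selection step of the Maximum Error Algorithm, the sketch below keeps a buffer of the current and past presented points and, at every step, retrains the stored point whose current output error is largest. It reuses the hypothetical BinaryCMAC sketch above; the function name and buffer handling are assumptions, not the implementation of Parks and Militzer.

from collections import deque

def max_error_step(cmac, buffer, x_new, d_new):
    """One step of the Maximum Error Algorithm (sketch).

    `buffer` is a deque(maxlen=L) holding the current point and up to L-1
    past points; the stored point with the largest current output error is
    selected and trained with the Albus update.
    """
    buffer.append((x_new, d_new))
    x_sel, d_sel = max(buffer, key=lambda p: abs(p[1] - cmac.output(p[0])))
    return cmac.train_albus(x_sel, d_sel)

buffer = deque(maxlen=10)   # L = 10
# for x, d in training_stream: max_error_step(cmac, buffer, x, d)

A larger L widens the pool of candidate points at the cost of evaluating more outputs per step, which mirrors the accuracy/computation trade-off discussed for this algorithm.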

The following abbreviations are used:

ALC: Albus (nonbatch) Cyclic Training
ALR: Albus (nonbatch) Random Training
AVC: Moving Average Cyclic Training
AVR: Moving Average Random Training
GRC: Gram-Schmidt Proc. Cyclic Training
GRR: Gram-Schmidt Proc. Random Training
MAC: Maximum Error Cyclic Training
MAR: Maximum Error Random Training
OSC: Optimized Step Cyclic Training
OSR: Optimized Step Random Training
BATC: Batch Alg. with Cyclic Training
BATR: Batch Alg. with Random Training

In both cases, the target functions are those given in (3) and (4), and the training parameters are ρ = 10 and B = 361 (a resolution of 1 degree). Comparing the two cases, the second function (Case B) is actually harder than the first in terms of CMAC learning because of its discontinuous first derivative.

From the results it can easily be seen that the Albus Nonbatch Algorithm requires the least training time and is convenient for on-line learning because of its easy implementation. But there is a risk if modelling error or measurement noise exists in the network. In that case of mismatch, the weight vector never converges to its optimal value; instead it converges in a domain which surrounds this optimal value, called the minimal capture zone [9].

As can be seen in Tables I and II, when Cyclic Training is applied, slow initial convergence occurs. This is due to the interference which is inherent in the nature of Cyclic Training. The employment of Random Training may improve initial convergence. The drawback of Random Training is that it may sometimes get stuck, since the random process can choose points at which the network is already reasonably well trained; at some point the CMAC may not learn anything. The work done by Ellison also explains this fact [12].

The initial convergence of the Moving Average Algorithm is better than that of the Albus Algorithm when Cyclic Training is used, but later on the convergence rate gets slower and slower. Its computational effort is the second lowest after the Albus Nonbatch Algorithm. The Optimized Step Algorithm with Cyclic Training shows slow convergence because the transformed input patterns are parallel to each other. Since the weight vectors are optimized with respect to the last L pieces of data, the inputs are still correlated. With Random Training the initial convergence improves, but the long-term convergence remains the same. The best result is recorded with a small L value; large values may slow down the algorithm. The Gram-Schmidt procedure gives the fastest convergence, but it takes almost a quarter of a second just to store one single point. Larger L values give more accuracy and faster convergence but also require a tremendous amount of computation time.

TABLE I. COMPARISON OF TRAINING ALGORITHMS FOR CASE A.

Algorithm | L | 361 pts. (1 epoch) | 3610 pts. (10 epochs) | 50 epochs | 100 epochs | Exec. time / point (msec)
ALC  |  |  | 10.20% | 0.139% | 0.076% | 3.30
ALR  |  |  | 1.130% | 0.198% | 0.073% | 3.18
AVC  |  |  | 0.910% | 0.533% | 0.460% | 5.50
AVR  |  |  | 0.920% | 0.543% | 0.492% | 3.18
GRC  |  |  |  |  |  |
GRC  |  |  |  |  |  |
GRR  |  |  |  |  |  |
GRR  |  |  |  |  |  |
MAC  |  |  |  |  |  |
MAC  |  |  |  |  |  |
MAR  |  |  |  |  |  |
MAR  |  |  |  |  |  |
OSC  |  |  |  |  |  |
OSC  |  |  |  |  |  |
OSR  |  |  |  |  |  |
OSR  |  |  |  |  |  |

Batch Alg. | epochs | 3610 epochs | epochs
BATC |  | 0.176% | 0.060%
BATR |  | 0.220% | 0.067%

TABLE II. COMPARISON OF TRAINING ALGORITHMS FOR CASE B.

Algorithm | L | 361 pts. (1 epoch) | 3610 pts. (10 epochs) | 50 epochs | 100 epochs | Exec. time / point (msec)
ALC  |  |  | 13.61% | 0.323% | 0.165% | 3.30
ALR  |  |  | 1.23%  | 0.207% | 0.110% | 3.18
AVC  |  |  | 1.10%  | 0.669% | 0.566% | 7.90
AVR  |  |  | 1.22%  | 0.685% | 0.592% | 7.91
GRC  |  |  | 0.12%  |  |  |
GRC  |  |  | 0.08%  |  |  |
GRR  |  |  | 1.19%  |  |  |
GRR  |  |  | 0.41%  |  |  |
MAC  |  |  | 0.48%  |  |  |
MAC  |  |  | 0.46%  |  |  |
MAR  |  |  | 0.98%  |  |  |
MAR  |  |  | 0.62%  |  |  |
OSC  |  |  | 2.21%  |  |  |
OSC  |  |  | 0.97%  |  |  |
OSR  |  |  | 1.29%  |  |  |
OSR  |  |  | 1.23%  |  |  |

Batch Alg. | epochs | 3610 epochs | epochs
BATC |  | 0.303% | 0.069%
BATR |  | 0.490% | 0.077%

The Maximum Error Algorithm has good convergence properties. Its initial learning rate is especially high and can be made higher still using the Neighborhood Training technique. Its long-term convergence is the best after the Gram-Schmidt procedure, and it has a moderate computation time compared to the others. Choosing a larger L increases the accuracy but requires much more computation time and effort, so a trade-off may be made between accuracy and the time requirement.

Batch Training does not look appropriate for our purposes. The requirement that all the information about the training data points must be known in one pass limits the usage of this technique.

The recommended algorithm in this paper is therefore the Maximum Error Algorithm, because of its moderate computation time and its fast initial and long-term convergence. The works of Ellison and of Parks and Militzer agree with this result [10],[12].

IV. HYBRID APPROACH: THE MAXIMUM ERROR ALGORITHM WITH NEIGHBORHOOD TRAINING

The Maximum Error Algorithm has been the best algorithm in terms of its moderate computation time and good convergence properties. But can the Maximum Error Algorithm be improved further? From Tables I and II and the discussion above, we conclude that the initial convergence may be improved by "Neighborhood Training". Thompson and Kwon suggested the Neighborhood Training method to avoid the interference resulting from the CMAC's generalization property [13]. In this method, input training points are selected to lie outside the neighborhood of previous input training points, so that no interference occurs between training sessions. The total number of input points to be trained depends on the resolution of the bins and on ρ. For the arbitrary function in (3) with -180 <= x <= 180, a resolution of 1 degree, and ρ = 10, the total number of Neighborhood Training points is N_t = 360/10 + 1 = 37. The input values are computed by:

s_n = s_0 + nρr,   n = 1, 2, ..., N_t    (5)

where s_0 is the smallest input value and r is the input resolution. After selecting the input values for training, the Albus Nonbatch Algorithm can be applied for training at the neighborhoods:

W_n = W_{n-1} + μ_n a(n),   μ_n = ρ {d(n) - a^T(n) W_{n-1}},   n = 1, 2, ..., N_t    (6)

where a(n) is the normalized form of the n-th association vector and d(n) is the desired output. The data for each memory element are adjusted only once (with gain 1) during this training. After applying the Albus Training Algorithm, an appreciable result is obtained quickly because of the speed advantage of this method. Since Neighborhood Training minimizes learning interference, Lin and Fischer later used this form of training in an arc-welding process control system [14].

In the proposed hybrid algorithm approach, the CMAC is pre-trained with "Neighborhood Training" before it is fine-tuned with the recommended Maximum Error Algorithm. In pre-training, instead of using only one pass over the 37 neighborhood points, a second pass of Neighborhood Training is also carried out at the midpoints.
These midpoints lie halfway between the neighborhood points of the first pass, and, as in the first session, there is no interference among them.
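As a rough sketch of the hybrid procedure, the code below generates the neighborhood points of (5), runs the two passes of single-presentation Albus training (gain 1), and then switches to Maximum Error fine-tuning. It reuses the hypothetical BinaryCMAC and max_error_step sketches introduced earlier; the point spacing and midpoint offset follow the description above, while the function names and the training stream are assumptions.

import numpy as np
from collections import deque

def neighborhood_points(s0, s_max, rho, r):
    """Eq. (5): inputs spaced rho*r apart, so consecutive points share no active cells."""
    return np.arange(s0, s_max + 0.5 * r, rho * r)

def hybrid_train(cmac, f, s0, s_max, stream, L=10):
    """Two-pass Neighborhood pre-training followed by Maximum Error fine-tuning."""
    rho, r = cmac.rho, cmac.res
    first_pass = neighborhood_points(s0, s_max, rho, r)   # 37 points for Case A
    second_pass = first_pass[:-1] + 0.5 * rho * r         # 36 midpoints of the first pass
    cmac.gamma = 1.0                                      # gain 1: one presentation per point
    for x in np.concatenate([first_pass, second_pass]):
        cmac.train_albus(x, f(x))
    buffer = deque(maxlen=L)                              # Maximum Error fine-tuning
    for x in stream:
        max_error_step(cmac, buffer, x, f(x))

For Case A this pre-training visits 37 + 36 = 73 extra points, matching the count quoted below, before the Maximum Error epochs begin.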

After the initial Neighborhood Training, the rms error for Case A was 1.45% and the rms error for Case B was only 0.11% (Figures 2 and 3). For both functions, the L value is chosen to be 10. From Table III it can be concluded that the initial convergence improves appreciably. For Case A, after the first epoch of the Maximum Error Algorithm in the hybrid approach, the rms error becomes 1.18%; compare this with 21.24% for the Cyclic Trained Maximum Error Algorithm and 5.7% for the Randomly Trained Maximum Error Algorithm after 1 epoch (cf. Table I). The hybrid approach requires 73 extra training points, but since the Albus Nonbatch Algorithm is used in pre-training, the computational cost is quite low (cf. Tables I and II). After 10 epochs, the rms error further drops to 0.23%, instead of the 0.375% and 1.159% corresponding to the Cyclic and Random Trainings, respectively.

The initial convergence rate was even better for the second function. After the first epoch of the Maximum Error Algorithm, an rms error of 0.065% is obtained, compared with the Cyclic Trained Maximum Error Algorithm and with 6.71% for the Randomly Trained Maximum Error Algorithm (cf. Tables II and III). Long-term convergence also improves appreciably, from 0.035% to 0.017%, for Case B (cf. Tables II and III). In conclusion, the combination of these two algorithms improves CMAC neural network convergence considerably.

Figure 2. f(x) after the 2-pass Neighborhood Training.
Figure 3. g(x) after the 2-pass Neighborhood Training.

TABLE III. RECOMMENDED ALGORITHM (L = 10): rms error after the two Neighborhood Training passes and after 1, 10, 50 and 100 epochs of the Maximum Error Algorithm with Neighborhood Training.

Case | 1st pass | 2nd pass | 361 pts. (1 epoch) | 3610 pts. (10 epochs) | 50 epochs | 100 epochs | Mean exec. time (msec)
A | 3.20% | 1.45% | 1.180% | 0.230% | 0.040% | 0.016% | 30
B | 0.14% | 0.11% | 0.065% | 0.034% | 0.024% | 0.017% | 30

V. AN INVERSE KINEMATICS PROBLEM

The Hybrid Maximum Error Algorithm with Neighborhood Training is applied to solve an inverse kinematics problem of a two-link robot arm to demonstrate the effectiveness of the hybrid method. The two-link robot has rotary joints, and the tip position is commanded to follow a straight-line trajectory. Inverse kinematics deals with determining the joint angles of the robot that are needed to force the robot tip to follow the line joining specified end points. The joint angles are given by Yao and Zhang [15]:

θ2 = arccos( (x^2 + y^2 - L1^2 - L2^2) / (2 L1 L2) )
θ1 = arctan(y, x) - arctan( L2 sin(θ2) / (L1 + L2 cos(θ2)) )    (7)

where θ1 and θ2 are the first and the second joint angles, respectively, with 0 <= θ1, θ2 <= π; (x, y) are the coordinates of the arm tip, and L1 and L2 are the link lengths. The initial and final positions of the straight-line trajectory were chosen as (X1, Y1) = (2, 3) and (X2, Y2) = (-2, 3), respectively, with link lengths L1 = L2 = 2. The CMAC network can learn the inverse kinematics of the two-link robot arm and then predict the joint angles.
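To make the training setup concrete, the sketch below samples the commanded straight-line tip trajectory and evaluates the closed-form inverse kinematics of (7) to produce target joint angles. The number of trajectory samples and the use of numpy's arctan2 and clip are assumptions introduced here, not details taken from the paper.

import numpy as np

L1 = L2 = 2.0   # link lengths

def joint_angles(x, y):
    """Eq. (7): closed-form inverse kinematics of the two-link arm."""
    c2 = (x**2 + y**2 - L1**2 - L2**2) / (2.0 * L1 * L2)
    theta2 = np.arccos(np.clip(c2, -1.0, 1.0))             # clip guards against rounding
    theta1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(theta2),
                                           L1 + L2 * np.cos(theta2))
    return theta1, theta2

# straight-line tip trajectory from (X1, Y1) = (2, 3) to (X2, Y2) = (-2, 3)
xs = np.linspace(2.0, -2.0, 181)                           # 181 samples is an assumed choice
ys = np.full_like(xs, 3.0)
targets = [joint_angles(x, y) for x, y in zip(xs, ys)]     # one (θ1, θ2) pair per tip position

Each (x, y) sample and its angle pair then serve as one training example for the two networks described next.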

Two different CMAC networks have been utilized for learning θ1 and θ2 in this problem. The generalization factor ρ was chosen as 4 for both networks. The initial training with the Neighborhood method already gave low rms error values for θ1 and θ2. The Maximum Error Algorithm was then employed to reduce these rms errors further. After the 100th epoch of the Maximum Error Algorithm, the rms error for θ1 was only 0.025%, and the error for θ2 was likewise very small. These results can also be seen in Figures 4 and 5. The figures also compare the "Hybrid Algorithm" approach with the original Maximum Error Algorithm applied alone, without Neighborhood Training, to predict the joint angles. From these figures it can be concluded that, by employing the proposed "Hybrid Algorithm" approach, the initial convergence rates for both joint angles were made considerably higher than with the original Maximum Error Algorithm.

VI. CONCLUSION

Several possible algorithms and training methods for the CMAC network have been analyzed thoroughly. The selection of the input training points was found to be important, and for that reason Random Training was considered besides Cyclic Training in comparing the different algorithms. Among the algorithms, the Maximum Error Algorithm has been recommended because of its moderate computation time and its fast initial and long-term convergence. An improvement has been made to the recommended Maximum Error Algorithm by using the Neighborhood Training technique, and this resulted in faster initial convergence. The hybrid of Neighborhood Training and the Maximum Error Algorithm was able to reach an appreciable accuracy in a short period of time. The acceleration in convergence may depend on the shape and complexity of the function as well as on other network parameters. The hybrid Neighborhood Training and Maximum Error Algorithm was also applied to an inverse kinematics problem of a two-link robot arm, and successful results were obtained for the given trajectory.

Figure 5. The training results for θ2.

REFERENCES
[1] J.S. Albus, "A New Approach to Manipulator Control: The Cerebellar Model Articulation Controller (CMAC)", Transactions of the ASME, Vol. 97.
[2] J.S. Albus, "Data Storage in the Cerebellar Model Articulation Controller (CMAC)", Transactions of the ASME, Vol. 97.
[3] Y. F. Wong and A. Sideridis, "Learning Convergence in the Cerebellar Model Articulation Controller", IEEE Transactions on Neural Networks, Vol. 3.
[4] C.-S. Lin and C.-T. Chiang, "Learning Convergence of CMAC Technique", IEEE Transactions on Neural Networks, Vol. 8.
[5] T. H. Lee and K. Y. Lee, "Application of CMAC for Short-Term Load Forecasting", Proc. International Conference on Intelligent System Applications to Power Systems, 1997.
[6] G. Cembrano, G. Wells, J. Sarda, and A. Ruggeri, "Dynamic Control of a Robot Arm Using CMAC Neural Networks", Control Engineering Practice, Vol. 5, 1999.
[7] F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms. Washington, DC: Spartan Books.
[8] W. T. Miller, H. G. Filson, and L. G. Craft, "Application of a General Learning Algorithm to the Control of Robotic Manipulators", The International Journal of Robotics Research, Vol. 6.
[9] M. Brown and C. J. Harris, Neurofuzzy Adaptive Modelling and Control, Prentice Hall.
[10] P. C. Parks and J. Militzer, "A Comparison of Five Algorithms for the Training of CMAC Memories for Learning Control Systems", Automatica, Vol. 28.
Harris, "Least Mean Square Learning In Associative Memory Networks", Proc. IEEE Int. Symposium on Intelligent Control, Glasgow,UK, [12] D. Ellison, On The Convergence Of The Albus Perceptron, IMA J. Math. Cont. and Information, Vol.5, ,1988. [13] D. E. Thompson and S. Kwon, Neighborhood And Random Training Techniques For CMAC IEEE Transactions on Neural Networks, Vol.6, , [14] R. -H.Lin and G. W. Fischer, "An Online Arc Welding and Quality Monitor and Process Control System", Proc. Int. IEEE/IAS Conf. on Industrial Automation and Control, pp ,1995. [15] S. Yao and B. Zhang, The Learning Convergence Of CMAC In Cyclic Learning", Proc. International Joint Conference on Neural Networks, , 1993 [16] T.Szabo and G.Horvath, "Improving The Generalization Capability Of The Binary CMAC", Proc. International Joint Conference on Neural Networks, Vol.3, 85-90, Figure 4. The training results for θ 1.
