Proceedings of International Joint Conference on Neural Networks, Dallas, Texas, USA, August 4-9, 2013

Multi-Valued Neuron with New Learning Schemes

Shin-Fu Wu and Shie-Jue Lee
Department of Electrical Engineering
National Sun Yat-Sen University
Kaohsiung 80424, Taiwan

Abstract: The multi-valued neuron (MVN) is an efficient technique for classification and regression. It is a neuron with complex-valued weights and inputs/output, and the output of its activation function moves along the unit circle on the complex plane. MVN may therefore have more functionality than sigmoidal or radial basis function neurons. In some cases, however, a pair of weighted sums oscillates between two sectors and the learning process can hardly converge. Moreover, many weighted sums may end up near the borders of the sectors, which can hurt classification accuracy. In this paper, we propose two modifications of the multi-valued neuron: one involves moving sector boundaries, the other places the learning targets at the centers of the sectors. Experimental results show that the proposed modifications improve the performance of MVN and help it converge more efficiently.

Index Terms: Multi-Valued Neuron (MVN), Complex-Valued Neural Network (CVNN), classification, activation function, learning process.

I. INTRODUCTION

The discrete multi-valued neuron was presented by N. Aizenberg and I. Aizenberg in [1]. This neuron operates with complex-valued weights. Its inputs and output are mapped onto the complex plane: they are located on the unit circle and are exactly the k-th roots of unity. The activation function of MVN is a function of k-valued logic that maps the set of the k-th roots of unity onto itself. Two discrete-valued MVN learning algorithms are presented in [2]. They are based on an error-correcting learning rule and are derivative-free. This gives MVN higher functionality than sigmoidal or radial basis function neurons.

Although the MVN has higher functionality than other neurons, it has difficulty learning highly non-linear functions. There are various methods to address this problem, for example the multilayer MVN (MLMVN) and the MVN with a periodic activation function (MVN-P). An MVN-based feedforward neural network was proposed in [3] for solving benchmark and real-world problems. Rather than building a multilayer structure, MVN-P was introduced in [4] to enhance the ability of a single neuron through a periodic activation function. It is therefore appealing to modify a single neuron in order to increase its functionality.

In this paper, we first consider the discrete and continuous multi-valued neuron whose activation function is modified with moving boundaries. When training the basic multi-valued neuron for classification, the weighted sum moves toward the targeted sector on the complex plane. The locations of the k-th roots of unity are fixed on the unit circle, dividing the complex plane into k equal sectors. The idea of the multi-valued neuron with moving boundaries is to change the size of each sector dynamically, so that the desired output and the weighted sum move cooperatively toward each other. The MVN learning algorithm with this modification improves the convergence of the learning process. Secondly, we consider improving the classification accuracy of the multi-valued neuron by setting the learning target to the center of the targeted sector rather than to a k-th root of unity.
The learning rule of the basic MVN drives a weighted sum toward a border of the targeted sector, so after training many weighted sums may be located around the sector boundaries, and the classification accuracy suffers from this placement. Since the revised algorithm separates the learning instances from the sector boundaries, it can significantly improve the classification accuracy.

II. MULTI-VALUED NEURON

A. Discrete MVN

A discrete-valued MVN is a function mapping an n-feature input onto a single output. This mapping is described by a multiple-valued (k-valued) function of n variables, f(x_1, ..., x_n), implemented with n+1 complex-valued weights:

f(x_1, ..., x_n) = P(ω_0 + ω_1 x_1 + ... + ω_n x_n)   (1)

where x_1, ..., x_n are the features of an instance, on which the performed function depends, and ω_0, ω_1, ..., ω_n are the weights. The values of the function and of the features are complex: they are the k-th roots of unity ε^j = exp(i 2πj/k), j ∈ {0, 1, ..., k−1}, where i is the imaginary unit. P is the activation function of the neuron:

P(z) = exp(i 2πj/k), if 2πj/k ≤ arg(z) < 2π(j+1)/k   (2)

where j = 0, 1, ..., k−1 are the values of the k-valued logic, z = ω_0 + ω_1 x_1 + ... + ω_n x_n is the weighted sum, and arg(z) is the argument of the complex number z. Eq. (2) is illustrated in Fig. 1. It divides the complex plane into k equal sectors and maps the whole complex plane onto the subset of points of the unit circle that corresponds exactly to the set of the k-th roots of unity. MVN learning reduces to movement along the unit circle and is derivative-free. The movement is determined by the error, which is the difference between the desired and actual outputs.
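For concreteness, the forward pass of Eqs. (1)-(2) can be sketched in a few lines of Python/NumPy. This is only an illustration (the authors' simulator is written in Matlab); the function and variable names are ours.

import numpy as np

def discrete_mvn_output(x, w, k):
    """Discrete MVN: weighted sum of Eq. (1), then the k-sector
    activation of Eq. (2).  x holds the n complex features on the
    unit circle; w holds the n+1 complex weights, w[0] being ω_0."""
    z = w[0] + np.dot(w[1:], x)              # weighted sum, Eq. (1)
    # np.angle returns a value in (-π, π]; reduce it to [0, 2π) and
    # take the index of the sector containing arg(z).
    j = int((np.angle(z) % (2 * np.pi)) // (2 * np.pi / k))
    return np.exp(1j * 2 * np.pi * j / k)    # exp(i2πj/k), the j-th root of unity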

Fig. 1: Geometrical interpretation of the discrete-valued MVN activation function.

The error-correcting learning rule and the corresponding learning algorithm for the discrete-valued MVN were described in [2] and modified by I. Aizenberg and C. Moraga [3]:

ω_i^{r+1} = ω_i^r + (C_r / ((n+1) |z_r|)) (ε^q − ε^s) x̄_i,   (3)

for i = 0, 1, ..., n, where x̄_i is the input of the i-th feature with its components complex-conjugated, n is the number of input features, ε^q is the desired output of the neuron, ε^s = P(z) is the actual output of the neuron (see Fig. 2a), r is the number of the learning epoch, ω_i^r is the current weight of the i-th feature, ω_i^{r+1} is the weight of the i-th feature after correction, C_r is the constant part of the learning rate (it may always be set equal to 1), and |z_r| is the absolute value of the weighted sum obtained in the r-th epoch. The 1/|z_r| factor is useful when learning non-linear functions with a number of highly irregular jumps. Eq. (3) ensures that the corrected weighted sum moves from sector s to sector q (see Fig. 2a). The direction of this movement is determined by the error δ = ε^q − ε^s. The convergence of the learning algorithm was proven in [6].

Fig. 2: Geometrical interpretation of the MVN learning rule: (a) discrete-valued MVN, and (b) continuous-valued MVN.

B. Continuous MVN

The activation function Eq. (2) is piecewise discontinuous. It can be modified and generalized to the continuous case in the following way. When k → ∞ in Eq. (2), the angular size of each sector (see Fig. 1) approaches zero, and the activation function becomes

P(z) = exp(i arg(z)) = z / |z|   (4)

where z is the weighted sum, arg(z) is the argument of the complex number z, and |z| is the modulus of z. The activation function Eq. (4) maps the weighted sum onto the whole unit circle, whereas Eq. (2) maps it only onto a discrete subset of the points of the unit circle (see Fig. 2b). Neither Eq. (2) nor Eq. (4) is differentiable, but differentiability is not required for MVN learning. The learning rule of the continuous-valued MVN is as follows:

ω_i^{r+1} = ω_i^r + (C_r / ((n+1) |z_r|)) (ε^q − ε^s) x̄_i = ω_i^r + (C_r / ((n+1) |z_r|)) (ε^q − z/|z|) x̄_i,

for i = 0, 1, ..., n.
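The update of Eq. (3), shared by the discrete and continuous neurons, can be sketched as follows (again a Python/NumPy illustration with names of our own choosing, not the authors' code):

import numpy as np

def mvn_update(w, x, eps_q, eps_s, C=1.0):
    """One error-correcting step of Eq. (3).  eps_q is the desired
    output and eps_s the actual output P(z); for the continuous
    neuron of Eq. (4), pass eps_s = z/|z| instead."""
    n = len(x)
    z = w[0] + np.dot(w[1:], x)
    step = C / ((n + 1) * abs(z)) * (eps_q - eps_s)  # learning rate times error
    w = w.copy()
    w[0] += step                   # the bias weight sees the constant input 1
    w[1:] += step * np.conj(x)     # inputs are taken complex-conjugated
    return w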

III. PROPOSED MODIFICATIONS

There are some problems associated with MVN:

Weighted sums oscillating between two sectors. The speed of convergence depends on the initial weights and the error. In some cases, a pair of weighted sums will oscillate between two sectors. Take Fig. 3 as an example: p1 and p2 are two weighted sums to be corrected, with p1 belonging to sector 2 and p2 belonging to sector 1. Due to their geometrical positions and the correcting error, they repeatedly oscillate between sector 1 and sector 2 (see Fig. 3a and Fig. 3b). In such a case, the learning algorithm can hardly converge.

Fig. 3: Oscillation of a pair of weighted sums.

Weighted sums crowded around the sector boundaries. Classification accuracy can be improved if the learning instances are separated from the sector boundaries. But the learning targets of MVN are the k-th roots of unity, which are exactly the boundaries of the k sectors. There may be many weighted sums located around the sector boundaries after numerous learning iterations; hence, the classification accuracy suffers under this learning algorithm.

We propose a new MVN learning approach with moving boundaries to solve the first problem. In basic MVN learning, the k-th roots of unity are fixed on the unit circle and divide it into k equal sectors, so the correcting error toward a given sector is static. The idea of MVN learning with moving boundaries is to change the size of each sector dynamically: the new learning rule iteratively trains not only the weights but also the locations of the k-valued logic thresholds. We expect this modification to improve the convergence of MVN.

For the second problem, inspired by the learning method proposed in [5], we modify the learning rule so that the target of each learning iteration is the center of the targeted sector instead of its border. We expect that under this modified rule the weighted sums will be distributed around the centers of the sectors, improving the testing accuracy.

A. MVN Learning with Moving Boundaries

The activation function of MVN places the roots of unity so that they divide the unit circle equally. Let E_k be the set of the k-th roots of unity:

E_k = {ε_k^0, ε_k^1, ε_k^2, ..., ε_k^{k−1}},  boundary ⊆ E_k

where ε_k = exp(i 2π/k) is the primitive k-th root of unity and boundary is the set of boundaries of the k sectors, to be described later. Let K be the set of values of the k-valued logic:

K = {0, 1, ..., k−1}

Let O be the set of points located on the unit circle and let f(x_1, ..., x_n) be a function with the mapping f : O^n → K. In order to obtain such a function, we normalize the domain of each feature into the bounded sub-domain D^n ⊆ R^n:

f(y_1, ..., y_n), y_j ∈ [a_j, b_j), a_j, b_j ∈ R, j = 1, ..., n
φ_j = ((y_j − a_j) / (b_j − a_j)) α,  α ∈ [0, 2π)   (5)

Each feature is transformed using the simple linear transformation in Eq. (5). Note that x_j = exp(i φ_j) ∈ O, j = 1, 2, ..., n, is a complex number located on the unit circle. Hence we obtain the function f(x_1, ..., x_n) : O^n → K.

Now let the activation function develop dynamically with the boundaries of the sectors. We can rewrite the activation function as follows:

P_d(z, boundary) = boundary(j), if arg(boundary(j)) ≤ arg(z) < arg(boundary(j+1))

for j = 0, 1, ..., k−1, where boundary(j) is the j-th boundary (root of unity) on the unit circle. The boundaries can therefore change during the learning process. It is important to note that the boundaries move along the unit circle while training, and boundary should be sorted by argument and normalized in each learning step. The idea of MVN learning is the movement of the weighted sums; the idea of MVN learning with moving boundaries is the movement of both the weighted sums and the targeted boundaries (see Fig. 4).

Fig. 4: Geometrical interpretation of MVN learning with moving boundaries: (a) discrete-valued MVN, (b) continuous-valued MVN.

In order to obtain the dynamic boundaries, we introduce a parameter p ∈ [0, 1] which represents the proportion of the correcting error used for the boundary movement. With ε_r^q = boundary(i)_r denoting the currently targeted boundary, the learning rule of the boundary movement can be expressed as follows:

ε_{r+1}^q = (ε_r^q − p (ε_r^q − ε_r^s)) / |ε_r^q − p (ε_r^q − ε_r^s)|,  boundary(i)_{r+1} = ε_{r+1}^q   (6)

where boundary(i)_{r+1} is the updated boundary. The denominator of the correction term normalizes the trained boundary back onto the unit circle. Because error-correcting learning is very similar to competitive learning, the two may share some properties during the learning process.
Competitive learning has a serious stability problem when clusters are close together: in certain cases, the weighted sum may invade other sectors and thereby upset the current classification scheme. It is therefore reasonable to let the targeted boundary and the weighted sum both contribute to stability.
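The feature encoding of Eq. (5) and the boundary movement of Eq. (6) might be sketched as follows. Note that the transcription of Eq. (6) does not preserve the sign of the correction explicitly; we take the sign that moves the target toward the actual output, consistent with the (1−p) factor in the weight rules of Eqs. (7)-(8) below.

import numpy as np

def encode_feature(y, a, b, alpha):
    """Eq. (5): map y in [a, b) linearly to an angle φ in [0, alpha),
    with alpha in [0, 2π), and place it on the unit circle."""
    phi = (y - a) / (b - a) * alpha
    return np.exp(1j * phi)

def move_boundary(eps_q, eps_s, p):
    """Eq. (6): the targeted boundary eps_q absorbs a fraction p of the
    error toward the actual output eps_s and is renormalized so that it
    stays on the unit circle."""
    moved = eps_q - p * (eps_q - eps_s)
    return moved / abs(moved)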

The learning of the weights may be based on the same learning rule, Eq. (3), by incorporating the parameter p and applying P_d as the dynamic activation function. The learning rules of the discrete and continuous MVN with moving boundaries can be written as follows:

ω_i^{r+1} = ω_i^r + (C_r / ((n+1) |z_r|)) (1−p) (ε_r^q − P_d(z, boundary)) x̄_i   (7)
ω_i^{r+1} = ω_i^r + (C_r / ((n+1) |z_r|)) (1−p) (ε_r^q − z/|z|) x̄_i   (8)

Since the learning algorithm still adopts the k-valued function and the error-correcting learning rule, the convergence of the learning algorithm can be proven based on the convergence of the learning rule Eq. (3) [6]. The implementation of the proposed learning algorithm in one iteration consists of the following steps:

procedure MVN-MB(boundary, p)          / one learning epoch with N learning instances /
  j = 1
  while j ≤ N do
    let z be the current value of the weighted sum
    ε^s = P_d(z, boundary)
    δ = ε_r^q − ε^s
    for i = 0 to n do                  / n-feature instance /
      ω_i^{r+1} = ω_i^r + (C_r / ((n+1) |z_r|)) (1−p) δ x̄_i
    end for
    ε_{r+1}^q = (ε_r^q − p δ) / |ε_r^q − p δ|    / ε_r^q is the targeted boundary in each learning step, Eq. (6) /
    update boundary with ε_{r+1}^q
    z̃ = ω_0^{r+1} + ω_1^{r+1} x_1 + ... + ω_n^{r+1} x_n
    δ = ε_{r+1}^q − P_d(z̃, boundary)   / complex sign function using dynamic boundaries /
    j = j + 1
  end while
end procedure

B. MVN Learning with Targets at the Centers of Sectors

This MVN learning rule is based on the basic discrete/continuous-valued MVN described in Section II. Let E_k = {ε_k^0, ε_k^1, ..., ε_k^{k−1}} be the set of the k-th roots of unity, where ε_k = exp(i 2π/k) is the primitive k-th root of unity, and let K = {0, 1, ..., k−1} be the set of values of the k-valued logic. Let O be the set of points located on the unit circle and f(x_1, ..., x_n) a function with the mapping f : O^n → K; as before, each feature is normalized into a bounded sub-domain D^n ⊆ R^n and transformed onto the unit circle by the simple linear transformation in Eq. (5), so that x_j = exp(i φ_j) ∈ O, j = 1, 2, ..., n. The activation function of this learning algorithm is Eq. (2).

The idea of MVN learning is to move the weighted sums toward targets that are exactly the k-th roots of unity. In order to increase the classification accuracy, it is reasonable to set the learning targets at the centers of the sectors instead (see Fig. 5). We therefore introduce a new set of targets τ = {τ^0, τ^1, ..., τ^{k−1}} for the weighted sums to move toward:

τ^q = μ^q / |μ^q|,  μ^q = (1/2)(ε^q + ε^{q+1})   (9)

where τ^q is the target of the q-th sector and ε^q is the q-th root of unity. There is an exception for the binary classification problem, because in that case μ^0 = μ^1 = 0. In this special case we manually set μ^0 = exp(i π/2) and μ^1 = exp(−i π/2). The learning rule is based on the same learning rule, Eq. (3), incorporating the modified new targets τ.

Fig. 5: Geometrical interpretation of MVN learning with targets at the centers of sectors: (a) discrete-valued MVN, (b) continuous-valued MVN.
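A sketch of the target construction of Eq. (9), including the binary special case (Python/NumPy, names ours):

import numpy as np

def center_targets(k):
    """Targets τ of Eq. (9): the normalized midpoints of adjacent
    k-th roots of unity, i.e. the sector centers.  For k = 2 the
    midpoints vanish, so the targets are set manually as in the paper."""
    if k == 2:
        return np.array([np.exp(1j * np.pi / 2), np.exp(-1j * np.pi / 2)])
    eps = np.exp(1j * 2 * np.pi * np.arange(k) / k)   # the k-th roots of unity
    mu = 0.5 * (eps + np.roll(eps, -1))               # μ^q = (ε^q + ε^{q+1}) / 2
    return mu / np.abs(mu)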

The learning rules of the discrete and continuous MVN with targets at the centers of sectors can be written as follows:

ω_i^{r+1} = ω_i^r + (C_r / ((n+1) |z_r|)) (τ^q − P(z)) x̄_i   (10)
ω_i^{r+1} = ω_i^r + (C_r / ((n+1) |z_r|)) (τ^q − z/|z|) x̄_i   (11)

Since the learning algorithm still adopts the k-valued function and the error-correcting learning rule, the convergence of the learning algorithm can be proven based on the convergence of the learning rule Eq. (3) [6]. The implementation of the proposed learning algorithm in one iteration consists of the following steps:

procedure MVN-CS                       / one learning epoch with N learning instances /
  calculate τ using equation (9)
  j = 1
  while j ≤ N do
    let z be the current value of the weighted sum
    ε^s = P(z)                         / activation function of equation (2) /
    δ = τ^q − ε^s
    for i = 0 to n do                  / n-feature instance /
      ω_i^{r+1} = ω_i^r + (C_r / ((n+1) |z_r|)) δ x̄_i
    end for
    z̃ = ω_0^{r+1} + ω_1^{r+1} x_1 + ... + ω_n^{r+1} x_n
    δ = τ^q − P(z̃)
    j = j + 1
  end while
end procedure
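One epoch of MVN-CS might look as follows in Python/NumPy, using the center_targets helper sketched above. The transcription does not state explicitly whether already-correct instances are skipped; we skip them here, which is an assumption on our part.

import numpy as np

def mvn_cs_epoch(w, X, labels, k, C=1.0):
    """One MVN-CS epoch over N instances, following Eq. (10).  X is an
    (N, n) complex array of unit-circle features; labels holds the
    desired sector indices q."""
    tau = center_targets(k)                              # targets of Eq. (9)
    n = X.shape[1]
    for x, q in zip(X, labels):
        z = w[0] + np.dot(w[1:], x)
        s = int((np.angle(z) % (2 * np.pi)) // (2 * np.pi / k))
        if s == q:                                       # assumed: no update when correct
            continue
        delta = tau[q] - np.exp(1j * 2 * np.pi * s / k)  # τ^q − P(z)
        step = C / ((n + 1) * abs(z)) * delta
        w = w.copy()
        w[0] += step
        w[1:] += step * np.conj(x)
    return w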
IV. SIMULATION RESULTS

The proposed strategies and learning algorithms are implemented and evaluated on two benchmark datasets, Glass and Wine [9]. The software simulator is written in Matlab R2011b (64-bit), running on a computer with an Intel Core i5 2.5 GHz CPU and 8 GB of RAM.

A. Glass Dataset

The glass identification dataset is taken from the UC Irvine Machine Learning Repository [9]. It contains 214 instances with nine real-valued features each. Each instance belongs to one of 6 classes: float-processed building windows, non-float-processed building windows, float-processed vehicle windows, non-float-processed vehicle windows, containers, and tableware and headlamps. We merge these 6 classes into 3: building windows, vehicle windows, and the others.

To solve this classification problem, we use 5-fold cross-validation. The first fold includes 168 instances in the training set and 46 in the testing set; the other 4 folds include 172 training instances and 42 testing instances each. After learning the training set completely with no training error, i.e., a training accuracy of 100%, we use the testing set to compute the average classification performance. Table I shows the number of learning epochs, the training time, and the testing accuracy for each fold; the parameter p is selected between 0.2 and 0.4. From this table, we can see that MVN runs faster than the other two versions but achieves the lowest testing accuracy among the three methods. MVN with moving boundaries runs very slowly but provides slightly better testing accuracy than MVN. MVN with new targets runs only slightly slower than MVN and provides much better testing accuracy.

The distributions of the weighted sums after training are shown in Fig. 6. In Fig. 6(a), the weighted sums after discrete-valued MVN learning lie close to a border of a sector, which may reduce the classification accuracy. Fig. 6(b) shows that the boundaries actually moved during the learning process of MVN with moving boundaries; the moving range depends on the value of the parameter p. In Fig. 6(c), the weighted sums moved to the center of each sector, which meets our expectation. Sector 1 ([2π/3, 4π/3)) is not obvious in the figure because the weighted sums in sector 1 are too close to the origin.

B. Wine Dataset

This dataset is also taken from the UC Irvine Machine Learning Repository [9]. It contains 178 instances with 13 real-valued features each. Each instance belongs to one of 3 classes corresponding to 3 different types of wine.

To solve this classification problem, we use 5-fold cross-validation. The first fold has 136 instances in the training set and 42 in the testing set; the other 4 folds have 144 training instances and 34 testing instances each. After learning the training set completely with no training error, i.e., a training accuracy of 100%, we use the testing set to compute the average classification performance. Table II shows the number of learning epochs, the training time, and the testing accuracy for each fold; the parameter p is selected between 0.2 and 0.3. Again, MVN runs faster than the other two versions but achieves the lowest testing accuracy among the three methods. MVN with moving boundaries runs very slowly but provides slightly better testing accuracy than MVN. MVN with new targets runs only slightly slower than MVN and provides much better testing accuracy.

The distributions of the weighted sums after training are shown in Fig. 7. In Fig. 7(a), the weighted sums after discrete-valued MVN learning lie close to a border of a sector, which may reduce the classification accuracy. Fig. 7(b) shows that the boundaries actually moved during the learning process of MVN with moving boundaries; the moving range depends on the value of the parameter p. In Fig. 7(c), the weighted sums moved to the center of each sector, which meets our expectation.
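The evaluation protocol described above (train to zero training error, then test on the held-out fold) might be sketched as follows; learn_epoch and classify are hypothetical hooks standing for any of the three schemes.

import numpy as np

def run_fold(train, test, k, learn_epoch, classify, max_epochs=100000):
    """Train until the training set is classified with 100% accuracy,
    then report accuracy on the held-out fold."""
    (X_tr, y_tr), (X_te, y_te) = train, test
    rng = np.random.default_rng(0)
    n = X_tr.shape[1]
    w = rng.uniform(-1, 1, n + 1) + 1j * rng.uniform(-1, 1, n + 1)
    for _ in range(max_epochs):
        w = learn_epoch(w, X_tr, y_tr, k)
        if np.all(classify(w, X_tr, k) == y_tr):   # zero training error reached
            break
    return np.mean(classify(w, X_te, k) == y_te)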

TABLE I: Comparison of the three methods on the Glass dataset: number of learning epochs, training time (sec), and testing accuracy (%) per fold and on average, for MVN, MVN with moving boundaries, and MVN with new targets (*: continuous learning rule, Eq. (11)).

Fig. 6: Distribution of weighted sums for the Glass dataset: (a) MVN, (b) MVN-MB, (c) MVN-CS.

TABLE II: Comparison of the three methods on the Wine dataset: number of learning epochs, training time (sec), and testing accuracy (%) per fold and on average, for MVN, MVN with moving boundaries, and MVN with new targets (*: continuous learning rule, Eq. (11)).

Fig. 7: Distribution of weighted sums for the Wine dataset: (a) MVN, (b) MVN-MB, (c) MVN-CS.

V. CONCLUSION

We have proposed two new learning schemes for MVN. One uses a dynamic activation function, which we call the moving boundary; for it we introduced a parameter p in the boundary movement process. The other sets the learning targets at the centers of the sectors. The simulation results obtained on benchmark datasets show that MVN with our modifications, especially the one with new targets, can improve convergence and testing accuracy. More work will be done to study their effects on MVN-P and on the multilayer structure of MVN.

REFERENCES

[1] N. N. Aizenberg and I. N. Aizenberg, "CNN based on multi-valued neuron as a model of associative memory for grey scale images," in Proc. Second Int. Workshop on Cellular Neural Networks and Their Applications (CNNA-92), 1992.
[2] I. Aizenberg, N. N. Aizenberg, and J. P. Vandewalle, Multi-Valued and Universal Binary Neurons: Theory, Learning and Applications. Springer, 2000.

[3] I. Aizenberg and C. Moraga, "Multilayer feedforward neural network based on multi-valued neurons (MLMVN) and a backpropagation learning algorithm," Soft Computing - A Fusion of Foundations, Methodologies and Applications, vol. 11, no. 2, 2007.
[4] I. Aizenberg, "Periodic activation function and a modified learning algorithm for the multivalued neuron," IEEE Transactions on Neural Networks, vol. 21, no. 12, 2010.
[5] I. Aizenberg, J. Jackson, and S. Alexander, "Classification of blurred textures using multilayer neural network based on multi-valued neurons," in Proc. 2011 International Joint Conference on Neural Networks (IJCNN), pp. 1328-1335, July-Aug. 2011.
[6] I. Aizenberg, Complex-Valued Neural Networks with Multi-Valued Neurons. Springer, 2011.
[7] I. Aizenberg, D. V. Paliy, J. M. Zurada, and J. T. Astola, "Blur identification by multilayer neural network based on multivalued neurons," IEEE Transactions on Neural Networks, vol. 19, no. 5, May 2008.
[8] M. T. Hagan, H. B. Demuth, M. H. Beale et al., Neural Network Design. Thomson Learning, Stamford, CT.
[9] UCI Machine Learning Repository, http://archive.ics.uci.edu/ml.
