Influence of Neighbor Size for Initial Node Exchange of SOM Learning

FR-E3-3 Tokyo, Japan (September 20-24, 2006)

Influence of Neighbor Size for Initial Node Exchange of SOM Learning

MIYOSHI Tsutomu
Department of Information and Knowledge Engineering, Tottori University, Tottori-shi Koyama-cho Minami 4-101, Tottori, Japan. mijosxi@ike.tottori-u.ac.jp

Abstract - The Self-Organizing Map (SOM) is a kind of neural network that learns the features of input data through unsupervised, competitive neighborhood learning. In the SOM learning algorithm, every connection weight in the feature map is initialized to a random value so as to cover the whole input data space; however, this also places nodes at random points of the feature map, independently of the data space. As a result, the move distance of the output nodes increases and learning convergence becomes slow. In preceding research, I proposed a method of initial node exchange that uses a part of the learning data. In this paper, I investigate how the average move distance of a node changes with the initial size of the neighbor area in the node exchange process. From the experimental results, I clarify the relation between the average move distance of nodes and the initial neighbor size, derive an expression for the relation, and show its optimal domain.

I. INTRODUCTION

Kohonen's Self-Organizing Map (SOM) [1] is a kind of neural network [2] whose algorithm learns the features of input data through unsupervised, competitive neighborhood learning. SOM is applied in many fields and there are a lot of studies [9]. Learning efficiency, or learning speed, is essential for the practical use of SOM, so several techniques have been proposed: batch learning [10], incremental ordering of data units [11], rough comparison [12], etc. However, studies on improvement methods based on the conventional SOM learning algorithm are not a major stream. Under the conventional algorithm, SOM learning is influenced by the order of the learning data and by the initial feature map.
In preceding research, we first reported on the influence of the order of the learning data [3-5]. We found that a difference in learning speed occurs depending on the order of the learning data, even if all learning data and the initial feature map are the same [3], and I explained a part of this phenomenon by mathematical analysis [4-5]. Secondly, I reported that the initial feature map also influences SOM learning efficiency. In the initialization process, every connection weight in the feature map is initialized to a random value so as to cover the whole input data space; however, this also places nodes at random points of the feature map, independently of the input data space. Learning speed, or learning convergence, is expected to become slow because of this missing relation, since the relation is self-organized only when convergence is completed. I therefore proposed a method of initial node exchange using a part of the learning data [6-8]. With this method, both the average move distance of all nodes and the time to completion of learning were shortened to almost half [6], and a sufficient effect is obtained even when the exchange process uses only a few learning data [7-8]. Since the node exchange process is added to the conventional SOM learning algorithm, its processing load should be as small as possible. Two of the three factors that affect the processing load have already been clarified, so it is necessary to clarify the influence of the 2nd factor, the initial size of the neighbor area. In this paper, I investigate how the average move distance of a node changes with the initial size of the neighbor area in the node exchange process. Hereafter, Chapter 2 explains SOM, Chapter 3 describes initial node exchange, Chapter 4 describes the experiments, and Chapter 5 gives the conclusion.

II. SELF-ORGANIZING MAP

Kohonen's Self-Organizing Map (SOM) [1] is a kind of neural network [2] that learns the features of input data through unsupervised, competitive neighborhood learning. SOM is a mapping from a high-dimensional space to a low-dimensional space (usually a two-dimensional map). It provides a feature map in which similar classes are arranged in nearby positions, so it can visualize high-dimensional information on a two-dimensional feature map. The map representation makes it easy to understand the relations between data. Generally, a SOM has two layers, an input layer and an output layer. The output layer nodes usually form a two-dimensional grid, and the input layer nodes are fully connected with the output layer nodes. Each connection has a connection weight, so every output layer node holds a pattern, or vector, to be learned.

The structure of a SOM is shown in fig.1.

Fig.1: structure of a SOM (input layer nodes fully connected to output layer nodes through connection weights)

A. SOM Learning Algorithm

In the learning process, when an input pattern, or input vector, is presented to the input layer as learning data, the output layer nodes compete with each other for the right to be declared the winner. The winner is the output layer node whose incoming connection weights are closest to the input pattern in terms of Euclidean distance. The connection weights of the winner node and its neighbor nodes are then adjusted, i.e., moved closer in the direction of the input pattern. As the learning process progresses, the learning rate and the size of the neighbor area around the winner node are made to decrease. Thus, a large number of output layer nodes are adjusted strongly in the early stage of learning, and only the winner node is adjusted, weakly, in the final stage.

SOM learning consists of a double loop. In the inside loop, learning data are input in order and connection weights are adjusted. In the outside loop, the learning rate and the size of the neighbor area around the winner node are decreased, and the inside loop is repeated until learning is completed. The inside loop scheme of the learning algorithm is:
1: select input data at random from the learning data,
2: present the input data to the input nodes,
3: calculate the distance between the input data and all output nodes,
4: order the output nodes by distance,
5: set the first-order node as the winner node,
6: adjust the connection weights of the winner node,
7: select a neighbor node of the winner node in the feature map,
8: adjust the connection weights of that node,
9: repeat 7 to 8 until all neighbor nodes are processed,
10: repeat 1 to 9 until all learning data are processed.
In steps 6 and 8 of the inside loop, the connection weights of the winner and its neighbors are moved closer in the direction of the learning data, i.e., the values of the input nodes.
Each connection weight is updated by formula (1):

x_n = x_{n-1} - g (x_{n-1} - y_n)   (1)

where x_n is a connection weight after learning the n-th learning data, y_n is the n-th learning data, and g is the learning rate (0 < g < 1).

The outside loop scheme of the learning algorithm is:
1: initialize all connection weights at random,
2: initialize the learning rate and the size of the neighbor area,
3: execute the inside loop,
4: decrease the learning rate and the size of the neighbor area,
5: repeat 3 to 4 until learning is completed.
After learning, each node represents a group of individuals with similar features; the individual data correspond to the same node or to neighboring nodes. That is, SOM configures the output nodes into a topological representation of the original data, through a process called self-organization.

B. The Measure of Learning

In the conventional SOM algorithm, convergence of learning is mainly determined by one of the following two measures:
1: the number of repetitions becomes larger than a threshold,
2: the largest of all distances between the learning data and their winner nodes becomes smaller than a threshold.
With the 1st measure, learning convergence itself is considered important and learning speed is not taken into consideration. Usually a long time, or a large number of repetitions, is set as the threshold, because only such a length is sufficient for convergence. Learning then takes the same time, or the same number of repetitions, regardless of the performance of the learning algorithm. With the 2nd measure, the distance between the learning data and the farthest winner node is used as the measure. Since attention is paid only to the node with the slowest convergence, the convergence of the whole feature map is not measured. The average move distance of all nodes was proposed as a new measure [6], in order to consider both learning speed and the convergence of the whole feature map.
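The double-loop learning scheme and the update formula (1) above can be sketched in Python. This is a minimal illustration, not the paper's implementation: the grid size, the linear decay schedules, and the square (Chebyshev) neighborhood are illustrative assumptions.

```python
import numpy as np

def som_learn(data, map_h=10, map_w=10, n_loops=50, g0=0.01, radius0=3, seed=None):
    """Minimal SOM sketch: inside loop adjusts weights by formula (1),
    outside loop shrinks the learning rate and the neighbor area."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # outside-loop step 1: initialize all connection weights at random
    weights = rng.random((map_h, map_w, dim))
    # grid positions of the output nodes, shape (map_h, map_w, 2)
    grid = np.dstack(np.meshgrid(np.arange(map_h), np.arange(map_w), indexing="ij"))
    for loop in range(n_loops):                       # outside loop
        # step 4: decrease learning rate and neighbor size over time
        g = g0 * (1.0 - loop / n_loops)
        radius = max(0, round(radius0 * (1.0 - loop / n_loops)))
        for y in rng.permutation(data):               # inside loop, random order
            # winner: node whose weights are closest (Euclidean) to the input
            d = np.linalg.norm(weights - y, axis=2)
            wi, wj = np.unravel_index(np.argmin(d), d.shape)
            # adjust winner and its map neighbors toward the input pattern:
            # x_n = x_{n-1} - g (x_{n-1} - y_n)
            mask = np.max(np.abs(grid - (wi, wj)), axis=2) <= radius
            weights[mask] -= g * (weights[mask] - y)
    return weights
```

A node selected by the neighborhood mask moves a fraction g of its remaining distance toward the input, which is exactly the per-weight update of formula (1) applied vector-wise.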
Since this value becomes constant once learning converges, it can be said to measure the speed of convergence; and since it is the average of the move distances rather than the move distance of a specific node, it can be said to measure the whole feature map. The average move distance of all nodes at the s-th learning loop, M_s, is calculated by formula (2):

M_s = \sum_{k=1}^{s} \sum_{j=1}^{r} \sum_{i=1}^{q} \sqrt{ \sum_{h=1}^{p} ( x_{h,i,j,k} - x_{h,i,j-1,k} )^2 }   (2)

where x_{h,i,j,k} is the h-th dimension value of the connection weight of the output node at the i-th position of the feature map, learned by the j-th learning data in the k-th learning loop; p is the total number of connection weight dimensions; q is the total number of output nodes; r is the total number of learning data; and s is the number of the learning loop.
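The measure above can be sketched from logged weight snapshots. The snapshot list and the final averaging over the number of nodes are assumptions for illustration; the paper defines the measure only through formula (2).

```python
import numpy as np

def move_distance_measure(weight_history):
    """Sketch of the move-distance measure of formula (2): accumulate,
    over every learning step, the Euclidean distance each node's weight
    vector moved. `weight_history` is a hypothetical list of (q, p)
    weight snapshots taken after each data presentation."""
    total = 0.0
    for prev, cur in zip(weight_history, weight_history[1:]):
        # Euclidean move distance of each of the q nodes at this step, summed
        total += np.linalg.norm(cur - prev, axis=1).sum()
    # averaging over the q nodes is an assumption, matching the phrase
    # "average of the move distance of all nodes"
    return total / weight_history[0].shape[0]
```

Once the map converges, later snapshots stop changing, so the accumulated value becomes constant, which is the property the text relies on.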

III. INITIAL NODE EXCHANGE

A. The Concept

In the initialization process of SOM learning (step 1 of the outside loop shown in the previous section), every connection weight in the feature map is initialized at random within the domain of each dimension. So there is no relation between positions in the input space and positions in the feature map, and it is expected that learning speed becomes slow because of this missing relation. The concept of initial node exchange is that neighboring nodes in the input space are gathered into a neighbor area of the feature map by exchanging nodes, without any connection weight adjustment. The nodes that are in the neighbor area of the winner node in the feature map are exchanged with the nodes that are in the neighbor area of the winner node in the input space. It is necessary to perform the node exchange process with selected learning data, because there are many possible mappings from the input space to the feature map, and the desirable mapping changes depending on the learning data. If enough input data are chosen at random from the learning data, the relation between positions in the input space and positions in the feature map can be established without connection weight adjustment. The node exchange process is illustrated in fig.2.

Fig.2: image of the node exchange process, (a) initial random map, (b) after node exchange

B. Node Exchange Algorithm

The scheme of the initial node exchange process is:
1: select input data,
2: present the input data to the input nodes,
3: calculate the distance between the input data and all output nodes,
4: order the output nodes by distance,
5: set the first-order node as the winner node,
6: select a neighbor node of the winner node in the feature map,
7: select the next-order node,
8: exchange their positions in the feature map, i.e., exchange their connection weights,
9: repeat 6 to 8 until all neighbor nodes are processed,
10: repeat 1 to 9 until enough data are processed.
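The exchange scheme above can be sketched as follows. This is a simplified illustration under assumptions not fixed by the paper: a fixed neighbor radius of 1, and a naive pairing of each map neighbor with the next-nearest node in input space.

```python
import numpy as np

def node_exchange(weights, samples):
    """Sketch of initial node exchange: for each selected input, the winner's
    map neighbors swap weight vectors with the nodes nearest to the input in
    input space. No weight adjustment is performed, only exchanges."""
    h, w, dim = weights.shape
    flat = weights.reshape(h * w, dim).copy()   # work on a copy of the map
    for y in samples:
        # steps 3-5: order nodes by distance to the input; first is the winner
        order = np.argsort(np.linalg.norm(flat - y, axis=1))
        wi, wj = divmod(int(order[0]), w)       # winner position in the grid
        rank = 1                                # next-order node (step 7)
        for i in range(max(0, wi - 1), min(h, wi + 2)):
            for j in range(max(0, wj - 1), min(w, wj + 2)):
                if (i, j) == (wi, wj):
                    continue
                # step 8: exchange the map neighbor with the next-nearest
                # node in input space (swap their connection weights)
                a, b = i * w + j, int(order[rank])
                flat[[a, b]] = flat[[b, a]]
                rank += 1
    return flat.reshape(h, w, dim)
```

Because the process only swaps rows, the set of weight vectors is unchanged; only their positions in the feature map are reorganized, which is the point of the method.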
The initial node exchange process is easy to implement, because it resembles the inside loop of the learning algorithm; the differences are only in steps 6 and 8. Step 6 of the inside loop is removed, because it is the connection weight adjustment of the winner node. Step 8 is rewritten from adjusting the connection weights of a neighbor node to selecting a neighbor node in the input space and exchanging it with a neighbor node in the feature map. The learning algorithm with initial node exchange is easy to realize by inserting the node exchange process between feature map initialization and connection weight adjustment in the outside loop of the learning process. The outside loop scheme of this method is:
1: initialize all connection weights at random,
2: initialize the learning rate and the size of the neighbor area,
3: execute the initial node exchange process,
4: decrease the learning rate and the size of the neighbor area,
5: repeat 3 to 4 until the relation is made,
6: initialize the learning rate and the size of the neighbor area,
7: execute the inside loop,
8: decrease the learning rate and the size of the neighbor area,
9: repeat 7 to 8 until learning is completed.

C. Speed Factors of Node Exchange

Since the node exchange process is added to the conventional SOM learning algorithm, its processing load should be as small as possible. Because the processing load depends on the number of exchanged nodes, the number of input data (1st factor), the initial size of the neighbor area (2nd factor), and the reduction speed of the neighbor area (3rd factor) are all related to the processing load. In the previous papers [7-8], I investigated how the average move distance of nodes changes with the number of input data (1st factor), under the condition that the initial size of the neighbor area and its reduction speed were fixed to the same values as in the learning process.
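The modified outside loop described above can be sketched as a two-phase driver. The per-sample procedures are passed in as hypothetical callables, and the schedules (linear decay, one-step radius reduction per exchange sample) are illustrative assumptions, not the paper's settings.

```python
import random

def learning_with_initial_exchange(data, init_map, exchange_step, learn_step,
                                   n_exchange, n_loops, radius0):
    """Sketch of the modified outside loop: node exchange runs first on a
    small random subset of the data with a shrinking neighbor area, then
    conventional double-loop learning runs with its own fresh schedule.
    `exchange_step(w, y, radius)` and `learn_step(w, y, g, radius)` are
    hypothetical stand-ins for the per-sample procedures."""
    weights = init_map
    # phase 1: initial node exchange (steps 2-5 of the modified outer loop)
    radius = radius0
    for y in random.sample(list(data), n_exchange):
        weights = exchange_step(weights, y, radius)
        radius = max(1, radius - 1)        # fastest reduction speed (3rd factor)
    # phase 2: conventional learning on the reorganized map (steps 6-9)
    for loop in range(n_loops):
        g = 0.01 * (1 - loop / n_loops)
        r = max(0, round(radius0 * (1 - loop / n_loops)))
        for y in data:
            weights = learn_step(weights, y, g, r)
    return weights
```

The point of the structure is that the exchange phase needs no repetition of its own: each selected sample triggers exactly one exchange pass before learning proper begins.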
I reported in those papers that the effect was highest when the number of input data was from 5% to 10% of the number of output nodes of the feature map, and that the average move distance of nodes was shortened to about 70%. It is not necessary to repeat the node exchange, since the exchange process is completed by one exchange, whereas the learning process must be repeated in order to adjust a weight vector gradually. Therefore, regulation of the reduction speed of the neighbor area (3rd factor), which is required in the learning process, is unnecessary in the node exchange process; node exchange always uses the fastest reduction speed. Since two of the three factors that affect the processing load have been clarified, it is necessary to clarify the influence of the last factor, the initial size of the neighbor area (2nd factor). Below, I investigate how the average move distance of a node changes with the initial size of the neighbor area.

IV. EXPERIMENTS

For the experiments, I used common parameters: a two-dimensional feature map of 100 nodes (10x10), whose connection weights are initialized at random values; a learning rate that gradually decreases from its initial value of 0.01 as learning progresses; and a neighbor size that also shrinks from a 7x7 area to a single node. Learning data are created randomly

into 8 classes; the center of each class is one of the corners of a 3-dimensional cube, and 20 data per class, 160 data in total, are used.

A. Self-Organization of the Feature Map

An experiment was conducted to confirm that learning is performed satisfactorily by the method. An example of an initial feature map is shown in fig.3. All connection weights are initialized at random values. The numbers 0 to 7 show the class each node belongs to; different numbers show different classes. Fig.3 shows that nodes are distributed at random in the initial feature map.

Fig.3: initial feature map

Fig.4 shows the results of applying the method to the initial feature map shown in fig.3. Fig.4(a) shows that, in spite of not performing the connection weight adjustment process, the nodes of the same class have gathered together through the initial node exchange process alone. Fig.4(b) shows that the self-organization of the feature map is completely carried out after performing the method.

Fig.4: typical result of the method, (a) after exchange, (b) after learning

B. Initial Neighbor Size and Move Distance of Nodes

An experiment was conducted to investigate how the average move distance of a node changes with the initial size of the neighbor area. The average over 100 runs is shown in fig.5, and the part of fig.5 from 0% to 18% is shown in fig.6. The horizontal axis shows the number of input data for exchange, as a percentage of the number of output nodes of the feature map. The vertical axis shows the average move distance of nodes, as a percentage of that of the conventional SOM. Each line shows how the average move distance of nodes changes with the initial size of the neighbor area and the number of input data. For example, the 10% line, indicated by lozenges, expresses the change in the case where the initial size of the neighbor area is 10% of the feature map.
From this line it can be seen that when, before the learning process, the node exchange process is performed with 30% of input data and a 10% initial neighbor size, the average move distance of nodes is shortened to 75% compared with learning without node exchange.

Fig.5: the average move distance and the number of data for exchange (lines: initial neighbor sizes of 10%, 25%, 50%, 80%, 120%, 170%, 225%, 290%, 360%, and 440%)

According to fig.5, if the number of input data exceeds 30% of the feature map, the average move distance of nodes becomes rather longer.

Fig.6: the part of fig.5 from 0% to 18% (same lines as fig.5)

According to fig.6, when the initial neighbor size is 50% or more, the average move distance of nodes is shortest (72% or less) from 4% to 10% of input data, and short enough (75% or less) at only 2%. When the initial neighbor size is 25%, the average move distance of nodes is shortest (about 72%) from 5% of input data. While there is a

small number of input data, the average move distance of nodes is a little longer, but on the whole there is almost no difference. When the initial neighbor size is 10%, the average move distance of nodes is shortest (about 76%) from 10% of input data. At 30% or less of input data, the average move distance of nodes is clearly longer, and if it exceeds 30%, the difference is almost lost. From these results, it can be observed that a sufficient effect is obtained if the product of the initial neighbor size and the number of exchange data is over 100, and the number of input data is under 30. This condition is shown in formula (3):

Na_0 \times Nd \geq 100  and  Nd \leq 30   (3)

where Na_0 is the percentage of the initial neighbor size and Nd is the percentage of the number of input data for exchange.

V. CONCLUSIONS

In this paper, I investigated how the average move distance of a node changes with the initial size of the neighbor area in the node exchange process. As a result of the experiments, I clarified the relation between the average move distance of nodes and the initial neighbor size, derived an expression for the relation, and showed its optimal domain: a sufficient effect is obtained if the product of the initial neighbor size and the number of exchange data is over 100, and the number of input data is under 30. It is also clear that regulation of the reduction speed of the neighbor area (3rd factor) is unnecessary in the node exchange process. A future subject is the study of a processing method for the neighbor area optimized for the initial node exchange process.

REFERENCES

[1] Teuvo Kohonen: "Self-Organizing Maps," Springer Verlag (1995).
[2] Robert Hecht-Nielsen: "Neurocomputing," Addison-Wesley Pub. Co. (1990).
[3] Tsutomu Miyoshi, Hidenori Kawai, Hiroshi Masuyama: "Efficient SOM Learning by Data Order Adjustment," Proceedings of the 2002 IEEE World Congress on Computational Intelligence (WCCI2002) (2002).
[4] MIYOSHI Tsutomu: "Order of Learning Data and Convergence of SOM Learning," Proceedings of the 6th International Symposium on Advanced Intelligent Systems (ISIS2005), Yeosu, Korea (2005).
[5] MIYOSHI Tsutomu: "Learning Data Order and Convergence of SOM Learning," GESTS International Transactions on Computer Science and Engineering, Vol.22, No.1 (2005).
[6] MIYOSHI Tsutomu: "Node Exchange for Improvement of SOM Learning," Proceedings of the 9th International Conference on Knowledge-Based Intelligent Information and Engineering Systems (KES2005), Melbourne, Australia (2005).
[7] MIYOSHI Tsutomu: "Initial Node Exchange and Convergence of SOM Learning," Proceedings of the 6th International Symposium on Advanced Intelligent Systems (ISIS2005), Yeosu, Korea (2005).
[8] MIYOSHI Tsutomu: "Initial Node Exchange Using Learning Data and Convergence of SOM Learning," GESTS International Transactions on Computer Science and Engineering, Vol.21, No.1 (2005).
[9] "Bibliography of SOM papers," draft version.
[10] Makoto Kinouchi, Yoshihiro Kudo: "Much Faster Learning Algorithm for Batch-Learning SOM and Its Application to Bioinformatics," Proceedings of the Workshop on Self-Organizing Maps (WSOM03), Kitakyushu, Japan (2003).
[11] Young Pyo Jun, Hyunsoo Yoon, Jung Wan Cho: "L* Learning: A Fast Self-Organizing Feature Map Learning Algorithm Based on Incremental Ordering," IEICE Transactions on Information and Systems, Vol.E76-D, No.6 (1993).
[12] Hakaru Tamukou, Keiichi Horio, Takeshi Yamakawa: "Fast Learning Algorithms for Self-Organizing Map Employing Rough Comparison WTA and Its Digital Hardware Implementation," IEICE Transactions on Electronics, Vol.E87-C, No.11 (2004).


More information

arxiv: v1 [physics.data-an] 27 Sep 2007

arxiv: v1 [physics.data-an] 27 Sep 2007 Classification of Interest Rate Curves Using Self-Organising Maps arxiv:0709.4401v1 [physics.data-an] 27 Sep 2007 M.Kanevski a,, M.Maignan b, V.Timonin a,1, A.Pozdnoukhov a,1 a Institute of Geomatics and

More information

Machine Learning : Clustering, Self-Organizing Maps

Machine Learning : Clustering, Self-Organizing Maps Machine Learning Clustering, Self-Organizing Maps 12/12/2013 Machine Learning : Clustering, Self-Organizing Maps Clustering The task: partition a set of objects into meaningful subsets (clusters). The

More information

Machine Learning Based Autonomous Network Flow Identifying Method

Machine Learning Based Autonomous Network Flow Identifying Method Machine Learning Based Autonomous Network Flow Identifying Method Hongbo Shi 1,3, Tomoki Hamagami 1,3, and Haoyuan Xu 2,3 1 Division of Physics, Electrical and Computer Engineering, Graduate School of

More information

Automated Network Drawing Using Self-Organizing Map

Automated Network Drawing Using Self-Organizing Map Automated Network Drawing Using Self-Organizing Map Xiangjun Xu Mladen Kezunovic* Electrical Engineering Department, Texas A&M University College Station, TX 77843-3128, USA (Phone) 979-845-7509, (Fax)

More information

Investigation of Alternative Strategies and Quality Measures for Controlling the Growth Process of the Growing Hierarchical Self-Organizing Map

Investigation of Alternative Strategies and Quality Measures for Controlling the Growth Process of the Growing Hierarchical Self-Organizing Map Investigation of Alternative Strategies and Quality Measures for Controlling the Growth Process of the Growing Hierarchical Self-Organizing Map Michael Dittenbach ispaces Group ecommerce Competence Center

More information

Slide07 Haykin Chapter 9: Self-Organizing Maps

Slide07 Haykin Chapter 9: Self-Organizing Maps Slide07 Haykin Chapter 9: Self-Organizing Maps CPSC 636-600 Instructor: Yoonsuck Choe Spring 2012 Introduction Self-organizing maps (SOM) is based on competitive learning, where output neurons compete

More information

Robust Lip Contour Extraction using Separability of Multi-Dimensional Distributions

Robust Lip Contour Extraction using Separability of Multi-Dimensional Distributions Robust Lip Contour Extraction using Separability of Multi-Dimensional Distributions Tomokazu Wakasugi, Masahide Nishiura and Kazuhiro Fukui Corporate Research and Development Center, Toshiba Corporation

More information

Clustering with Reinforcement Learning

Clustering with Reinforcement Learning Clustering with Reinforcement Learning Wesam Barbakh and Colin Fyfe, The University of Paisley, Scotland. email:wesam.barbakh,colin.fyfe@paisley.ac.uk Abstract We show how a previously derived method of

More information

Chapter 7: Competitive learning, clustering, and self-organizing maps

Chapter 7: Competitive learning, clustering, and self-organizing maps Chapter 7: Competitive learning, clustering, and self-organizing maps António R. C. Paiva EEL 6814 Spring 2008 Outline Competitive learning Clustering Self-Organizing Maps What is competition in neural

More information

A motion planning method for mobile robot considering rotational motion in area coverage task

A motion planning method for mobile robot considering rotational motion in area coverage task Asia Pacific Conference on Robot IoT System Development and Platform 018 (APRIS018) A motion planning method for mobile robot considering rotational motion in area coverage task Yano Taiki 1,a) Takase

More information

Shape Modeling of A String And Recognition Using Distance Sensor

Shape Modeling of A String And Recognition Using Distance Sensor Proceedings of the 24th IEEE International Symposium on Robot and Human Interactive Communication Kobe, Japan, Aug 31 - Sept 4, 2015 Shape Modeling of A String And Recognition Using Distance Sensor Keisuke

More information

Content-based Management of Document Access. Control

Content-based Management of Document Access. Control Content-based Management of Document Access Control Edgar Weippl, Ismail Khalil Ibrahim Software Competence Center Hagenberg Hauptstr. 99, A-4232 Hagenberg, Austria {edgar.weippl, ismail.khalil-ibrahim}@scch.at

More information

Hierarchical analysis of GSM network performance data

Hierarchical analysis of GSM network performance data Hierarchical analysis of GSM network performance data Mikko Multanen, Kimmo Raivio and Pasi Lehtimäki Helsinki University of Technology Laboratory of Computer and Information Science P.O. Box 5400, FI-02015

More information

Evaluation of Hardware Oriented MRCoHOG using Logic Simulation

Evaluation of Hardware Oriented MRCoHOG using Logic Simulation Evaluation of Hardware Oriented MRCoHOG using Logic Simulation Yuta Yamasaki 1, Shiryu Ooe 1, Akihiro Suzuki 1, Kazuhiro Kuno 2, Hideo Yamada 2, Shuichi Enokida 3 and Hakaru Tamukoh 1 1 Graduate School

More information

Cross-layer Flow Control to Improve Bandwidth Utilization and Fairness for Short Burst Flows

Cross-layer Flow Control to Improve Bandwidth Utilization and Fairness for Short Burst Flows Cross-layer Flow Control to Improve Bandwidth Utilization and Fairness for Short Burst Flows Tomoko Kudo, Toshihiro Taketa, Yukio Hiranaka Graduate School of Science and Engineering Yamagata University

More information

Fast Associative Memory

Fast Associative Memory Fast Associative Memory Ricardo Miguel Matos Vieira Instituto Superior Técnico ricardo.vieira@tagus.ist.utl.pt ABSTRACT The associative memory concept presents important advantages over the more common

More information

Genetic Model Optimization for Hausdorff Distance-Based Face Localization

Genetic Model Optimization for Hausdorff Distance-Based Face Localization c In Proc. International ECCV 2002 Workshop on Biometric Authentication, Springer, Lecture Notes in Computer Science, LNCS-2359, pp. 103 111, Copenhagen, Denmark, June 2002. Genetic Model Optimization

More information

Data Mining. Kohonen Networks. Data Mining Course: Sharif University of Technology 1

Data Mining. Kohonen Networks. Data Mining Course: Sharif University of Technology 1 Data Mining Kohonen Networks Data Mining Course: Sharif University of Technology 1 Self-Organizing Maps Kohonen Networks developed in 198 by Tuevo Kohonen Initially applied to image and sound analysis

More information

2. CNeT Architecture and Learning 2.1. Architecture The Competitive Neural Tree has a structured architecture. A hierarchy of identical nodes form an

2. CNeT Architecture and Learning 2.1. Architecture The Competitive Neural Tree has a structured architecture. A hierarchy of identical nodes form an Competitive Neural Trees for Vector Quantization Sven Behnke and Nicolaos B. Karayiannis Department of Mathematics Department of Electrical and Computer Science and Computer Engineering Martin-Luther-University

More information

Evaluation of the Performance of O(log 2 M) Self-Organizing Map Algorithm without Neighborhood Learning

Evaluation of the Performance of O(log 2 M) Self-Organizing Map Algorithm without Neighborhood Learning 04 IJCSNS International Journal of Computer Science and Network Security, VOL.6 No.0, October 006 Evaluation of the Performance of O(log M) Self-Organizing Map Algorithm without Neighborhood Learning Hiroki

More information

Mineral Exploation Using Neural Netowrks

Mineral Exploation Using Neural Netowrks ABSTRACT I S S N 2277-3061 Mineral Exploation Using Neural Netowrks Aysar A. Abdulrahman University of Sulaimani, Computer Science, Kurdistan Region of Iraq aysser.abdulrahman@univsul.edu.iq Establishing

More information

Interactive Video Retrieval System Integrating Visual Search with Textual Search

Interactive Video Retrieval System Integrating Visual Search with Textual Search From: AAAI Technical Report SS-03-08. Compilation copyright 2003, AAAI (www.aaai.org). All rights reserved. Interactive Video Retrieval System Integrating Visual Search with Textual Search Shuichi Shiitani,

More information

Seismic regionalization based on an artificial neural network

Seismic regionalization based on an artificial neural network Seismic regionalization based on an artificial neural network *Jaime García-Pérez 1) and René Riaño 2) 1), 2) Instituto de Ingeniería, UNAM, CU, Coyoacán, México D.F., 014510, Mexico 1) jgap@pumas.ii.unam.mx

More information

International Journal of Computer Science Trends and Technology (IJCST) Volume 3 Issue 1, Jan-Feb 2015

International Journal of Computer Science Trends and Technology (IJCST) Volume 3 Issue 1, Jan-Feb 2015 RESEARCH ARTICLE Comparison between Square Pixel Structure and Hexagonal Pixel Structure in Digital Image Processing Illa Singh 1, Ashish Oberoi 2 M.Tech 1, Final Year Student, Associate Professor2 Department

More information

Data gathering using mobile agents for reducing traffic in dense mobile wireless sensor networks

Data gathering using mobile agents for reducing traffic in dense mobile wireless sensor networks Mobile Information Systems 9 (23) 295 34 295 DOI.3233/MIS-364 IOS Press Data gathering using mobile agents for reducing traffic in dense mobile wireless sensor networks Keisuke Goto, Yuya Sasaki, Takahiro

More information

AN OBSERVATION METHOD OF MOVING OBJECTS ON FREQUENCY DOMAIN

AN OBSERVATION METHOD OF MOVING OBJECTS ON FREQUENCY DOMAIN XVII IMEKO World Congress Metrology in the 3rd Millennium June 7, 003, Dubrovnik, Croatia AN OBSERVATION METHOD OF MOVING OBJECTS ON FREQUENCY DOMAIN Tsunehiko Nakanishi and Takeshi Fujisaki Faculty of

More information

Applying Kohonen Network in Organising Unstructured Data for Talus Bone

Applying Kohonen Network in Organising Unstructured Data for Talus Bone 212 Third International Conference on Theoretical and Mathematical Foundations of Computer Science Lecture Notes in Information Technology, Vol.38 Applying Kohonen Network in Organising Unstructured Data

More information

ScienceDirect. Analogy between immune system and sensor replacement using mobile robots on wireless sensor networks

ScienceDirect. Analogy between immune system and sensor replacement using mobile robots on wireless sensor networks Available online at www.sciencedirect.com ScienceDirect Procedia Computer Science 35 (2014 ) 1352 1359 18 th International Conference in Knowledge Based and Intelligent Information & Engineering Systems

More information

Character Recognition

Character Recognition Character Recognition 5.1 INTRODUCTION Recognition is one of the important steps in image processing. There are different methods such as Histogram method, Hough transformation, Neural computing approaches

More information

Unsupervised Learning

Unsupervised Learning Unsupervised Learning Unsupervised learning Until now, we have assumed our training samples are labeled by their category membership. Methods that use labeled samples are said to be supervised. However,

More information

Transactions on Information and Communications Technologies vol 19, 1997 WIT Press, ISSN

Transactions on Information and Communications Technologies vol 19, 1997 WIT Press,  ISSN Gap Repair in Water Level Measurement Data Using Neural Networks P. van der Veer, J. Cser, O. Schleider & E. Kerckhoffs Delft University of Technology, Faculties of Civil Engineering and Informatics, Postbus

More information

A Neural Network Approach to the Inspection of Ball Grid Array Solder Joints on Printed Circuit Boards

A Neural Network Approach to the Inspection of Ball Grid Array Solder Joints on Printed Circuit Boards A Neural Network Approach to the Inspection of Ball Grid Array Solder Joints on Printed Circuit Boards Kuk Won KO*, Young Jun Roh", Hyung Suck Cho" * Dept. of Mechanical Engineering, Korea Advanced Institute

More information

An Accurate Method for Skew Determination in Document Images

An Accurate Method for Skew Determination in Document Images DICTA00: Digital Image Computing Techniques and Applications, 1 January 00, Melbourne, Australia. An Accurate Method for Skew Determination in Document Images S. Lowther, V. Chandran and S. Sridharan Research

More information

6. NEURAL NETWORK BASED PATH PLANNING ALGORITHM 6.1 INTRODUCTION

6. NEURAL NETWORK BASED PATH PLANNING ALGORITHM 6.1 INTRODUCTION 6 NEURAL NETWORK BASED PATH PLANNING ALGORITHM 61 INTRODUCTION In previous chapters path planning algorithms such as trigonometry based path planning algorithm and direction based path planning algorithm

More information

Study on Compatibility of Diffusion- Type Flow Control and TCP

Study on Compatibility of Diffusion- Type Flow Control and TCP Study on Compatibility of Diffusion- Type Flow Control and TCP Kaori Muranaka*, Chisa Takano*, Masaki Aida** *NTT Advanced Technology Corporation, 2-4-15, Naka-cho, Musashino-shi, 18-6 Japan. TEL: +81

More information

Design optimization method for Francis turbine

Design optimization method for Francis turbine IOP Conference Series: Earth and Environmental Science OPEN ACCESS Design optimization method for Francis turbine To cite this article: H Kawajiri et al 2014 IOP Conf. Ser.: Earth Environ. Sci. 22 012026

More information

SOMSN: An Effective Self Organizing Map for Clustering of Social Networks

SOMSN: An Effective Self Organizing Map for Clustering of Social Networks SOMSN: An Effective Self Organizing Map for Clustering of Social Networks Fatemeh Ghaemmaghami Research Scholar, CSE and IT Dept. Shiraz University, Shiraz, Iran Reza Manouchehri Sarhadi Research Scholar,

More information

Proc. Int. Symp. Robotics, Mechatronics and Manufacturing Systems 92 pp , Kobe, Japan, September 1992

Proc. Int. Symp. Robotics, Mechatronics and Manufacturing Systems 92 pp , Kobe, Japan, September 1992 Proc. Int. Symp. Robotics, Mechatronics and Manufacturing Systems 92 pp.957-962, Kobe, Japan, September 1992 Tracking a Moving Object by an Active Vision System: PANTHER-VZ Jun Miura, Hideharu Kawarabayashi,

More information

Advanced visualization techniques for Self-Organizing Maps with graph-based methods

Advanced visualization techniques for Self-Organizing Maps with graph-based methods Advanced visualization techniques for Self-Organizing Maps with graph-based methods Georg Pölzlbauer 1, Andreas Rauber 1, and Michael Dittenbach 2 1 Department of Software Technology Vienna University

More information

Simulation of WSN in NetSim Clustering using Self-Organizing Map Neural Network

Simulation of WSN in NetSim Clustering using Self-Organizing Map Neural Network Simulation of WSN in NetSim Clustering using Self-Organizing Map Neural Network Software Recommended: NetSim Standard v11.0, Visual Studio 2015/2017, MATLAB 2016a Project Download Link: https://github.com/netsim-tetcos/wsn_som_optimization_v11.0/archive/master.zip

More information

COLLABORATIVE AGENT LEARNING USING HYBRID NEUROCOMPUTING

COLLABORATIVE AGENT LEARNING USING HYBRID NEUROCOMPUTING COLLABORATIVE AGENT LEARNING USING HYBRID NEUROCOMPUTING Saulat Farooque and Lakhmi Jain School of Electrical and Information Engineering, University of South Australia, Adelaide, Australia saulat.farooque@tenix.com,

More information

Centralities (4) By: Ralucca Gera, NPS. Excellence Through Knowledge

Centralities (4) By: Ralucca Gera, NPS. Excellence Through Knowledge Centralities (4) By: Ralucca Gera, NPS Excellence Through Knowledge Some slide from last week that we didn t talk about in class: 2 PageRank algorithm Eigenvector centrality: i s Rank score is the sum

More information

Graph-based High Level Motion Segmentation using Normalized Cuts

Graph-based High Level Motion Segmentation using Normalized Cuts Graph-based High Level Motion Segmentation using Normalized Cuts Sungju Yun, Anjin Park and Keechul Jung Abstract Motion capture devices have been utilized in producing several contents, such as movies

More information

A *69>H>N6 #DJGC6A DG C<>C::G>C<,8>:C8:H /DA 'D 2:6G, ()-"&"3 -"(' ( +-" " " % '.+ % ' -0(+$,

A *69>H>N6 #DJGC6A DG C<>C::G>C<,8>:C8:H /DA 'D 2:6G, ()-&3 -(' ( +-   % '.+ % ' -0(+$, The structure is a very important aspect in neural network design, it is not only impossible to determine an optimal structure for a given problem, it is even impossible to prove that a given structure

More information

On UML2.0 s Abandonment of the Actors-Call-Use-Cases Conjecture

On UML2.0 s Abandonment of the Actors-Call-Use-Cases Conjecture On UML2.0 s Abandonment of the Actors-Call-Use-Cases Conjecture Sadahiro Isoda Toyohashi University of Technology Toyohashi 441-8580, Japan isoda@tutkie.tut.ac.jp Abstract. UML2.0 recently made a correction

More information

Unsupervised Learning

Unsupervised Learning Networks for Pattern Recognition, 2014 Networks for Single Linkage K-Means Soft DBSCAN PCA Networks for Kohonen Maps Linear Vector Quantization Networks for Problems/Approaches in Machine Learning Supervised

More information

Application of genetic algorithms and Kohonen networks to cluster analysis

Application of genetic algorithms and Kohonen networks to cluster analysis Application of genetic algorithms and Kohonen networks to cluster analysis Marian B. Gorza lczany and Filip Rudziński Department of Electrical and Computer Engineering Kielce University of Technology Al.

More information

Visualizing Changes in Data Collections Using Growing Self-Organizing Maps *

Visualizing Changes in Data Collections Using Growing Self-Organizing Maps * Visualizing Changes in Data Collections Using Growing Self-Organizing Maps * Andreas Nürnberger and Marcin Detyniecki University of California at Berkeley EECS, Computer Science Division Berkeley, CA 94720,

More information

CHAPTER 6 COUNTER PROPAGATION NEURAL NETWORK IN GAIT RECOGNITION

CHAPTER 6 COUNTER PROPAGATION NEURAL NETWORK IN GAIT RECOGNITION 75 CHAPTER 6 COUNTER PROPAGATION NEURAL NETWORK IN GAIT RECOGNITION 6.1 INTRODUCTION Counter propagation network (CPN) was developed by Robert Hecht-Nielsen as a means to combine an unsupervised Kohonen

More information

Three-Dimensional Measurement of Objects in Liquid with an Unknown Refractive Index Using Fisheye Stereo Camera

Three-Dimensional Measurement of Objects in Liquid with an Unknown Refractive Index Using Fisheye Stereo Camera Three-Dimensional Measurement of Objects in Liquid with an Unknown Refractive Index Using Fisheye Stereo Camera Kazuki Sakamoto, Alessandro Moro, Hiromitsu Fujii, Atsushi Yamashita, and Hajime Asama Abstract

More information

Image Classification Using Wavelet Coefficients in Low-pass Bands

Image Classification Using Wavelet Coefficients in Low-pass Bands Proceedings of International Joint Conference on Neural Networks, Orlando, Florida, USA, August -7, 007 Image Classification Using Wavelet Coefficients in Low-pass Bands Weibao Zou, Member, IEEE, and Yan

More information

Curvilinear Distance Analysis versus Isomap

Curvilinear Distance Analysis versus Isomap Curvilinear Distance Analysis versus Isomap John Aldo Lee, Amaury Lendasse, Michel Verleysen Université catholique de Louvain Place du Levant, 3, B-1348 Louvain-la-Neuve, Belgium {lee,verleysen}@dice.ucl.ac.be,

More information

Natural Viewing 3D Display

Natural Viewing 3D Display We will introduce a new category of Collaboration Projects, which will highlight DoCoMo s joint research activities with universities and other companies. DoCoMo carries out R&D to build up mobile communication,

More information

INTERNATIONAL CONFERENCE ON ENGINEERING DESIGN ICED 05 MELBOURNE, AUGUST 15-18, 2005

INTERNATIONAL CONFERENCE ON ENGINEERING DESIGN ICED 05 MELBOURNE, AUGUST 15-18, 2005 INTERNATIONAL CONFERENCE ON ENGINEERING DESIGN ICED MELBOURNE, AUGUST -, METHOD USING A SELF-ORGANISING MAP FOR DRIVER CLASSIFI- CATION AS A PRECONDITION FOR CUSTOMER ORIENTED DESIGN Albert Albers and

More information

On the need of unfolding preprocessing for time series clustering

On the need of unfolding preprocessing for time series clustering WSOM 5, 5th Workshop on Self-Organizing Maps Paris (France), 5-8 September 5, pp. 5-58. On the need of unfolding preprocessing for time series clustering Geoffroy Simon, John A. Lee, Michel Verleysen Machine

More information

Gaussian and Exponential Architectures in Small-World Associative Memories

Gaussian and Exponential Architectures in Small-World Associative Memories and Architectures in Small-World Associative Memories Lee Calcraft, Rod Adams and Neil Davey School of Computer Science, University of Hertfordshire College Lane, Hatfield, Herts AL1 9AB, U.K. {L.Calcraft,

More information

EXPLORING FORENSIC DATA WITH SELF-ORGANIZING MAPS

EXPLORING FORENSIC DATA WITH SELF-ORGANIZING MAPS Chapter 10 EXPLORING FORENSIC DATA WITH SELF-ORGANIZING MAPS B. Fei, J. ElofF, H. Venter and M. Olivier Abstract This paper discusses the application of a self-organizing map (SOM), an unsupervised learning

More information

Self-Organizing Map. presentation by Andreas Töscher. 19. May 2008

Self-Organizing Map. presentation by Andreas Töscher. 19. May 2008 19. May 2008 1 Introduction 2 3 4 5 6 (SOM) aka Kohonen Network introduced by Teuvo Kohonen implements a discrete nonlinear mapping unsupervised learning Structure of a SOM Learning Rule Introduction

More information

OBJECT-CENTERED INTERACTIVE MULTI-DIMENSIONAL SCALING: ASK THE EXPERT

OBJECT-CENTERED INTERACTIVE MULTI-DIMENSIONAL SCALING: ASK THE EXPERT OBJECT-CENTERED INTERACTIVE MULTI-DIMENSIONAL SCALING: ASK THE EXPERT Joost Broekens Tim Cocx Walter A. Kosters Leiden Institute of Advanced Computer Science Leiden University, The Netherlands Email: {broekens,

More information

Robot Manifolds for Direct and Inverse Kinematics Solutions

Robot Manifolds for Direct and Inverse Kinematics Solutions Robot Manifolds for Direct and Inverse Kinematics Solutions Bruno Damas Manuel Lopes Abstract We present a novel algorithm to estimate robot kinematic manifolds incrementally. We relate manifold learning

More information

K-Means Clustering With Initial Centroids Based On Difference Operator

K-Means Clustering With Initial Centroids Based On Difference Operator K-Means Clustering With Initial Centroids Based On Difference Operator Satish Chaurasiya 1, Dr.Ratish Agrawal 2 M.Tech Student, School of Information and Technology, R.G.P.V, Bhopal, India Assistant Professor,

More information

Simulation of WSN in NetSim Clustering using Self-Organizing Map Neural Network

Simulation of WSN in NetSim Clustering using Self-Organizing Map Neural Network Simulation of WSN in NetSim Clustering using Self-Organizing Map Neural Network Software Recommended: NetSim Standard v11.1 (32/64bit), Visual Studio 2015/2017, MATLAB (32/64 bit) Project Download Link:

More information

Softness Comparison of Stabilization Control in Remote Robot System with Force Feedback

Softness Comparison of Stabilization Control in Remote Robot System with Force Feedback Softness Comparison of Stabilization Control in Remote Robot System with Force Feedback Qin QIAN Graduate School of Engineering Nagoya Institute of Technology Nagoya 466-8555, Japan q.qian.924@stn.nitech.ac.jp

More information

A NON-ADAPTIVE DISTRIBUTED SYSTEM-LEVEL DIAGNOSIS METHOD FOR COMPUTER NETWORKS

A NON-ADAPTIVE DISTRIBUTED SYSTEM-LEVEL DIAGNOSIS METHOD FOR COMPUTER NETWORKS A NON-ADAPIVE DISRIBUED SYSEM-LEVEL DIAGNOSIS MEHOD FOR COMPUER NEWORKS Hiroshi MASUYAMA and sutomu MIYOSHI Information and Knowledge Engineering, ottori University Koyama-cho Minami -, ottori, 0- Japan

More information