EEG/ECG data fusion using Self-Organising Maps
Nuno Bandeira 1, Victor Sousa Lobo 2,1, Fernando Moura-Pires 1
1 Computer Science Department, Faculty of Science and Technology / New University of Lisbon, Portugal
2 Portuguese Naval Academy, Lisbon, Portugal

Abstract
Empirical results are presented concerning data fusion performed over several combinations of EEG/ECG channel readings of sport shooting athletes. Our purpose in applying different data fusion approaches was to find a satisfactory set of features that would allow us to build adequate classifiers on the data. The resulting data sets were used for building Self-Organising Maps (SOMs), which were used for visual inspection of the coherence between the clusters found and shooting accuracy.
Keywords: Self-Organising Maps, EEG, data fusion

1. Introduction
According to sport shooting experts, the shooter's ability to concentrate on the shooting task is crucial in improving performance, once high physical technique levels have been achieved (steady body position, respiration, muscular and eye-movement control). Since concentration is mainly a cerebral activity, we conducted an experiment where EEG and ECG signals were read and digitised in real time during the shooting activity. Previous work (see [9]) suggested that these could be good indicators of concentration. Once we had all the data (around 80-120 MB per shooting session), we had to devise adequate pre-processing techniques in order to handle the high volume of data. Many techniques are known for transforming EEG data into feature vectors suitable for clustering and classification [2][10][11][3]. We opted for the use of Fast Fourier Transforms (FFT), as described in the next section. But the best that the FFT could give us were different types of channel spectra, resulting in 20 spectra per shot, one per EEG channel.
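This kind of windowed-FFT feature extraction can be sketched as follows. This is a minimal illustration, assuming the 512 Hz sampling rate, 5-second pre-shot window, 50% block overlap and 30 retained 1 Hz bins described later in the paper; the function name is ours:

```python
import numpy as np

FS = 512       # sampling rate (Hz)
N_BLOCK = 512  # FFT length, giving 1 Hz-wide bins
N_KEEP = 30    # keep the 1-30 Hz range (standard EEG bands)

def channel_spectra(signal):
    """Split the last 5 s of one channel into 9 half-overlapping
    512-point blocks, apply a Hamming window to each, and return the
    9 power spectra truncated to the lower 30 bins."""
    x = signal[-5 * FS:]                 # last 5 seconds (2560 points)
    step = N_BLOCK // 2                  # 50% overlap between blocks
    win = np.hamming(N_BLOCK)            # reduces frequency leakage
    spectra = []
    for start in range(0, len(x) - N_BLOCK + 1, step):  # 9 blocks
        block = x[start:start + N_BLOCK] * win
        power = np.abs(np.fft.rfft(block)) ** 2
        spectra.append(power[1:N_KEEP + 1])  # bins at 1..30 Hz
    return np.array(spectra)             # shape (9, 30)
```

Applied to all 20 EEG channels, this yields the 20 x 9 x 30 = 5400 EEG features used below.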
Since we wanted to apply SOMs to visually inspect potential hidden relations in our data, we also had to find ways of merging all channels into single feature vectors. Different approaches were tried; they are described in sections 4 and 5.

2. EEG/ECG signal acquisition and pre-processing
The subjects from whom we recorded our data are shooters from the sport shooting team of the Portuguese Navy. So far we have recorded data from 7 such shooters, but because of difficulties with the recording software, in this paper we only present the results of shooters 3 to 7. Each shooter spent one morning at a shooting range, firing up to 12 rounds of 5 shots. For each shot, besides the EEG and ECG, we kept the target, and classified the shot according to the score obtained (10 is right on the centre, 0 is outside the target). We then considered that shots with a score of 9 or 10 were good, 7 or 8 were average, and up to 6 were bad. The electrodes were placed according to the standard system [7][8]. The electrode leads are connected to a Braintronics ISO1032 preamplifier, which sends the signal to a Braintronics CONTROL 1032 amplifier. There the signals are amplified and filtered by a 50 Hz notch filter and a 4th-order 70 Hz low-pass filter. A Data Translation DT2821 ADC board is then used to digitise the resulting signals. The recorded data consists of 22 signals, recorded with 12-bit resolution at a 512 Hz sampling rate. Channel 22 is the ECG, from
where the heart beat rate is extracted using a simple spectrum-based algorithm. Channel 18 is the signal of the right ear, which is used as reference for the differential amplifiers and thus contains no information. The remaining 20 channels are all subject to the same initial pre-processing. First, the last 5 seconds before the shot are selected (2560 points). This signal is then broken up into 9 blocks of 512 points, with 50% overlap between them (so as to later obtain a Welch periodogram). Each of these blocks is then multiplied by a Hamming window to reduce frequency leakage, and its spectrum is calculated with a 512-point FFT. Thus each channel produces 9 spectra with 256 bins of real frequencies, each 1 Hz wide. Since we used a 70 Hz low-pass filter and standard EEG bands range from 1-30 Hz, we opted to use only the lower 30 bins. Thus, when all information is used, we have 20 EEG channels with 9 spectra of 30 bins, totalling 5400 EEG features, plus one heart beat rate feature. All subsequent pre-processing is done on this EEG data.

3. SOM - Self-Organising Maps
Self-Organising Maps, also known as Kohonen Maps in honour of their creator, are thoroughly described in [5] and have been widely used in many applications, including as a tool for data fusion [4]. The SOM concept is based on the interactions in the human brain's cortex, simplified into a model in which different prototypes (neurones) try to represent the input data, competing with every other neurone in every iteration for a better mapping of the input data. The basic SOM algorithmic procedure is as follows:
1.
For a given training pattern x:
1.1 Calculate the distance of each neurone to the training pattern x (calculation phase).
1.2 Find the neurone with the smallest distance, and call it the winner W (voting phase).
1.3 Change the network neurones with a function G, which depends on the learning rate α, the distance d to W (in the output plane), and the neighbourhood function F. Due to the nature of the neighbourhood function, only the neurones closer to W (in the output space) will be changed (update phase).
2. Update the learning rate α and the neighbourhood function F according to some rule.
3. Repeat steps 1 and 2 for the next training pattern, until some stopping criterion is reached.
In all our analyses we ran our algorithms 6 times with different initial values, to make sure that the process always converged to the same final map. Whenever this did not happen, we simply increased the number of iterations and the initial learning radius until a stable solution was found. A distributed SOM implementation was also used in building the maps for the largest dataset (5401 features). A detailed description of this algorithm and its empirical evaluation can be found in [1].
To visualise the results of the clustering performed by the SOM, we frequently used U-matrices [12]. The U-Matrix of a SOM is obtained by calculating the distance, in the input space, between neighbouring neurones. These distances are then represented on a map in grayscale (black being the greatest distance, and white the smallest). Clusters can easily be identified as clear areas (nearby neurones) separated by dark ridges (large distances to other clusters).

4. Data fusion
Since our main objective was finding an adequate set of features that would provide high visual correlation between the EEG signal and shooting performance, most of our work at this stage was concentrated on data fusion.
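The SOM training loop and U-matrix computation described in section 3 can be sketched as follows. This is a minimal illustration with a Gaussian neighbourhood function and linearly decaying learning rate and radius; all parameter values are illustrative, not the ones used in the paper:

```python
import numpy as np

def train_som(data, rows=8, cols=8, iters=2000, lr0=0.5, radius0=4.0, seed=0):
    """Train a 2D SOM: for each random training pattern, find the
    winning neurone (voting phase) and pull its grid neighbours
    towards the pattern (update phase)."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((rows * cols, data.shape[1]))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        lr = lr0 * (1 - t / iters)                       # decaying rate
        radius = 1.0 + radius0 * (1 - t / iters)         # decaying radius
        winner = np.argmin(((w - x) ** 2).sum(axis=1))   # voting phase
        d2 = ((grid - grid[winner]) ** 2).sum(axis=1)    # output-space distance
        g = np.exp(-d2 / (2 * radius ** 2))              # neighbourhood F
        w += lr * g[:, None] * (x - w)                   # update phase
    return w.reshape(rows, cols, -1)

def u_matrix(w):
    """Mean input-space distance of each neurone to its 4-neighbours;
    rendered in grayscale, ridges of large values separate clusters."""
    rows, cols, _ = w.shape
    u = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            nb = [w[r2, c2] for r2, c2 in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                  if 0 <= r2 < rows and 0 <= c2 < cols]
            u[r, c] = np.mean([np.linalg.norm(w[r, c] - v) for v in nb])
    return u
```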
Data fusion was therefore performed using different types of feature aggregation, motivated by several different reasons. Each choice of feature aggregation led to a different training set, upon which the clustering procedure was applied. The training sets used were:
I) All the features (1 set of 5401 features). In this training set, all features mentioned in section 2 are used. It seems reasonable that this approach would capture the dynamics of the signal prior to the shot. In this set, the heart beat rate is also used.
II) All average spectra (1 set of 601 features).
In this training set, we averaged the spectra of each channel, and by doing so assumed that the signals are stationary during the 5 seconds before the shot. Each resulting spectrum is the average of 9 spectra, so the signal-to-noise ratio is improved considerably. In this set, the heart beat rate is also used.
III) Average spectra separated by hemisphere (2 sets of 330 features). In these training sets, we separated the data into right and left hemispheres. Each hemisphere consists of 8 EEG channels unique to that hemisphere, plus the three central channels (Fz, Cz, Pz [7][8]). This choice of features is motivated by the fact that the left and right sides of the brain are reasonably distinct, and all but one of the shooters were right-handed and used the right eye for aiming. We performed clustering on each side separately, and later merged the results.
IV) Average spectra by channel (19 sets of 30 features). In these training sets, we used the spectra of each channel as a separate training set. By analysing the ability to cluster the data sensibly based on each channel independently, we tried to determine whether any channels were more relevant to the task at hand.
V) Characteristic frequency bands (4 sets: 120 features for the alpha band, 320 for beta, and 80 each for delta and theta). In these training sets, the power spectra within each band (alpha, beta, delta, and theta [6]) were selected for all channels. Since each band has a different width, the number of features selected varies. According to the classic literature in the area [6], these frequency bands correspond to well-established activity patterns within the brain, and thus are the natural choice for discriminating between the shots.

5. Decision fusion
Decision fusion was used for merging the results obtained in III and IV. The generated datasets were used in building the corresponding SOMs, which were then labelled using the same datasets.
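Selecting the band-limited features of set V amounts to slicing fixed bin ranges out of each channel's averaged spectrum. In the sketch below the exact band edges are our assumption (they are not stated in the text), chosen so that the widths reproduce the quoted feature counts, i.e. 20 channels x {4, 4, 6, 16} bins = 80, 80, 120 and 320 features:

```python
import numpy as np

# Hypothetical 1 Hz-bin index ranges into the 30-bin (1-30 Hz) spectrum.
BANDS = {"delta": (0, 4), "theta": (4, 8), "alpha": (8, 14), "beta": (14, 30)}

def band_features(avg_spectra, band):
    """avg_spectra: array of shape (n_channels, 30) holding the average
    spectrum of each channel. Returns the power bins of one frequency
    band, concatenated over all channels, as a single feature vector."""
    lo, hi = BANDS[band]
    return avg_spectra[:, lo:hi].reshape(-1)
```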
Labelling a SOM consists of finding, for each data vector in the dataset, the winning neurone in the SOM, and appending the data vector's class label to that neurone's label. This labelling (usually called calibration in SOM terms) allowed us to use the SOMs as classifiers, simply by having each neurone belong to the class that is most frequent in its label. Two different strategies were applied in fusing the SOM classifications:
- Majority (III, IV). This is the simplest decision fusion, where the final class is simply the most frequent class among the lower-level classifiers. It is used for evaluation of variation in the SOM classifications.
- Use of another SOM layer (IV). In this case, the results of the classification by the original SOMs are fed as features to a fusing SOM. This is then used for visual inspection of the dispersion of the first-level SOM classifications. Higher levels of agreement among the first-level SOMs should lead to a smooth fusion SOM. If the decision fusion SOM is messy or has outliers, then there is disagreement among the first-level SOMs. In such cases, simply by glancing at an outlier's neighbours it is easy to spot which class most of the first-level classifiers chose for it.

6. Results
With all datasets, with the exceptions of VI, which will be discussed later, and certain channels of IV, the data was clustered by shooter. We present these results for the first dataset in Figure 2; the others are very similar. Since the data is clustered by shooter, it cannot be clustered by score: to cluster by score, it would be necessary to join all good shots in one cluster and bad ones in another, thus mixing the different shooters within those clusters (which, in this case, does not happen). Thus, to classify the shots by score we have to analyse the data of each shooter individually. None of the datasets tested provides good clustering by score for all shooters.
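The calibration and majority-fusion steps described above can be sketched as follows. This is a minimal illustration; `bmu_of` is a stand-in for any function that returns the winning neurone of an already-trained SOM for a given data vector:

```python
from collections import Counter

def calibrate(bmu_of, data, labels):
    """Label each neurone with the class most frequent among the data
    vectors it wins ('calibration' in SOM terms)."""
    votes = {}
    for x, y in zip(data, labels):
        votes.setdefault(bmu_of(x), []).append(y)
    return {n: Counter(ys).most_common(1)[0][0] for n, ys in votes.items()}

def majority_fuse(predictions):
    """Decision fusion by majority: the final class is the one most
    often output by the lower-level classifiers."""
    return Counter(predictions).most_common(1)[0][0]
```

A calibrated SOM then classifies a new vector by looking up the label of its winning neurone, and `majority_fuse` merges the per-channel (or per-hemisphere) verdicts into one.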
However, 2 of the individual channels (F7 and T3) provided reasonable clustering by score, even when all shooters are considered simultaneously. Furthermore, some of the shooters have their shots clustered by score with some of the datasets: shooter 3 has his shots clustered by score in dataset IV (channel Cz), shooter 4 in dataset V, shooter 5 in datasets II, IV and V, and shooter 7 in dataset IV. With shooter 6, no
dataset was capable of clustering his shots by score. To visualise the maps produced, we represent the mapped shots as crosses if they correspond to good shots, triangles if they correspond to average shots, and circles otherwise, as shown in Figure 1.
Figure 1 - Legend for the maps
The results for each of the training sets presented in section 4 are as follows:
I) Training set with all 5401 features. Different shooters are clearly identified: as we can see in Figure 2, shooters 4 and 5 have very distinct clusters (separated from the others by dark lines in the U-Matrix), and shooters 3, 6 and 7, while in the same cluster, are mapped to different areas. To obtain these maps we used the distributed version of the SOM mentioned in section 3. This allowed us to reduce the total training time of each map from 2h21m to 1h16m when using 2 machines, and even more when more machines were available.
Figure 2 - U-Matrix obtained after applying a SOM to training set I.
If we train SOM maps for each individual shooter, the results are generally bad.
II) Training set with average spectra (601 features). This training set is only useful for shooter 5. In Figure 3 we can see that all good shots are in the upper right corner, while the average shots are in the bottom left. Furthermore, in the U-Matrix presented in Figure 4 we can see that there is a clear distinction between these two areas. It could be argued that there are some good shots in the bad area, but these are probably outliers that correspond to lucky shots that are good despite the bad conditions.
Figure 3 - Map obtained with training set II for shooter 5.
Figure 4 - U-Matrix obtained with training set II for shooter 5.
III) Training set with average spectra separated by hemisphere (2 sets of 330 features). We were unable to obtain good clusters of the data by score for any shooter with this dataset. However, as with all others, we could cluster rather easily by shooter.
When we fused the results obtained by each hemisphere, we obtained the results presented in Table 1.
Table 1 - Percentage of correct classification for each shooter, with and without decision fusion. Columns: All, Best Hemisphere, Fusion, Gain.
IV) Training set with average spectra by channel (19 sets of 30 features).
IV.i) Individual channels. Channels F7 (left frontal) and T3 (left temporal) proved to be quite good at clustering by score. Figure 5 shows the results for all shooters: we can see an area of bad shots on the left centre and an area of confusion on the lower right, with good shots in most other areas.
Figure 5 - Map obtained with training set IV, channel T3, for all shooters.
When considering individual shooters, shooter 5 again has his shots clustered by score, but now we can also do the same for shooters 3 and 7, as can be seen in Figure 6 and Figure 7.
Figure 6 - Map obtained with training set IV, channel Cz, for shooter 3.
Figure 7 - Map obtained with training set IV, channel Fz, for shooter 7.
IV.ii) Fusion by majority. As can be seen in Table 2, the biggest improvements were achieved with shooter 4 and with all shooters together, with increases of 18% and 17% in correct answers; the average improvement was 11%.
Table 2 - Percentage of correct classifications for each shooter, with and without decision fusion. Columns: All, Best channel, Fusion, Gain.
IV.iii) Fusion by another SOM layer. As expected from shooter 4's low error rate in fusion by majority, his map is quite clean, having only one average shot amidst the bad shots, as can be seen in Figure 8.
Figure 8 - Results of fusion of the individual channels by a SOM, for shooter 4.
Figure 9 - Results of fusion with a SOM layer for shooter 6.
Figure 10 - Results of fusion with a SOM layer for shooter 3.
On the other hand, the maps of Figure 9 and Figure 10 show that shooters 3 and 6 are in very distinct situations, although their results in fusion by majority are quite similar. In shooter 6's map there are many neurones that have both good and average class labels, indicating that this set of features is not good enough. In shooter 3's map, we can observe that this choice of features led to an embedding of the average shots amidst the good ones (in the SOM's 2D projection). This leads to the conclusion that, for the data vectors in the bottom left corner, most of the single-channel SOMs classified these good shots as below average. So in this case, what we have is not a mix-up, but a set of data vectors that should be put aside and carefully analysed. A possible solution to this kind of error could be the addition of another classifier, prior to decision fusion, that would mainly handle this cluster.
V) Training sets with characteristic frequency bands (alpha, beta, delta and theta). The alpha band separated the shooters better than most others, with the advantage that it uses only 6 features per channel. It was also useful in separating shooter 5's shots by score, as can be seen in Figure 11.
Figure 11 - Map obtained with training set V, alpha band, for shooter 5.
The beta band provided the best separation of all amongst shooters. It was, however, useless for separating by score, as was the theta band. The delta band provided a reasonable clustering of shooter 4's scores, as can be seen in Figure 12.
Figure 12 - Map obtained with training set V, delta band, for shooter 4.

Conclusion
Our main objective in this first phase of our work was to gain further insight into our data, which was accomplished.
Although most of the data fusion possibilities had a strong imprint of each individual's personal EEG traces, we were able to find some clues as to the important frequency ranges in each case. These have already led us to try to find new criteria that combine different frequency bands, on which we are currently working. The data from the ECG did not influence our results in any way, since it was almost
constant for each shooter, and even amongst shooters the differences were not significant.
Our use of SOMs as a clustering tool for visual inspection of our data fusion options was, as we expected, very useful. Regarding decision fusion, we also found SOMs to be very useful as a visual inspection tool for a set of classifiers: high coherence means cleaner, smoother maps; messy maps mean a lot of variance within the classifier set; and maps with disjoint clusters can be good indicators that an extra classifier is needed to handle specific data partitions.

7. Bibliography
[1] Bandeira N, Lobo V, Moura-Pires F: Training a Self-Organizing Map Distributed on a PVM Network, Proceedings of the IEEE Joint Conference on Neural Networks, 1998.
[2] John ER, Prichep LS: Principles of Neurometric Analysis of EEG and Evoked Potentials, in Electroencephalography, 4th ed (Niedermeyer E, ed), Williams & Wilkins, 1993.
[3] Kalayci T, Özdamar Ö: Wavelet preprocessing for automated neural network detection of EEG spikes, IEEE Engineering in Medicine and Biology, March/April 1995.
[4] Lobo VS, Bandeira N, Moura-Pires F: Distributed Kohonen Networks for Passive Sonar Based Classification, Proceedings of the International Conference on Multisource-Multisensor Information Fusion (FUSION 98).
[5] Kohonen T: Self-Organizing Maps, Springer-Verlag, 1995.
[6] Niedermeyer E: The Normal EEG of the Waking Adult, in Electroencephalography, 4th ed (Niedermeyer E, ed), Williams & Wilkins, 1993.
[7] Nuwer MR, et al.: IFCN guidelines for topographic and frequency analysis of EEGs and EPs. Report of an IFCN committee.
[8] Reilly E: EEG Recording and Operation of the Apparatus, in Electroencephalography, 4th ed (Niedermeyer E, ed), Williams & Wilkins, 1993.
[9] Schober F, Schellenberg R, Dimpfel W: Reflection of mental exercise in the dynamic quantitative topographical EEG, Neuropsychobiology 1995;31.
[10] Silva FL: EEG Analysis: Theory and Practice, in Electroencephalography, 4th ed (Niedermeyer E, ed), Williams & Wilkins, 1993.
[11] Silva FL: Computer-assisted EEG diagnosis: Pattern recognition and brain mapping, in Electroencephalography, 4th ed (Niedermeyer E, ed), Williams & Wilkins, 1993.
[12] Ultsch A, Siemon HP: Exploratory Data Analysis Using Kohonen Networks on Transputers, Dept. of Computer Science, Dortmund, FRG, December.
Histogram and watershed based segmentation of color images O. Lezoray H. Cardot LUSAC EA 2607 IUT Saint-Lô, 120 rue de l'exode, 50000 Saint-Lô, FRANCE Abstract A novel method for color image segmentation
More informationSOMfluor package tutorial
SOMfluor package tutorial This tutorial serves as a guide for researchers wishing to implement Kohonen's selforganizing maps (SOM) on fluorescence data using Matlab. The following instructions and commands
More informationSemi-Supervised Clustering with Partial Background Information
Semi-Supervised Clustering with Partial Background Information Jing Gao Pang-Ning Tan Haibin Cheng Abstract Incorporating background knowledge into unsupervised clustering algorithms has been the subject
More informationFACE DETECTION AND RECOGNITION OF DRAWN CHARACTERS HERMAN CHAU
FACE DETECTION AND RECOGNITION OF DRAWN CHARACTERS HERMAN CHAU 1. Introduction Face detection of human beings has garnered a lot of interest and research in recent years. There are quite a few relatively
More informationDecoding the Human Motor Cortex
Computer Science 229 December 14, 2013 Primary authors: Paul I. Quigley 16, Jack L. Zhu 16 Comment to piq93@stanford.edu, jackzhu@stanford.edu Decoding the Human Motor Cortex Abstract: A human being s
More informationRobust line segmentation for handwritten documents
Robust line segmentation for handwritten documents Kamal Kuzhinjedathu, Harish Srinivasan and Sargur Srihari Center of Excellence for Document Analysis and Recognition (CEDAR) University at Buffalo, State
More informationCS229 Final Project: Predicting Expected Response Times
CS229 Final Project: Predicting Expected Email Response Times Laura Cruz-Albrecht (lcruzalb), Kevin Khieu (kkhieu) December 15, 2017 1 Introduction Each day, countless emails are sent out, yet the time
More informationData Cleaning and Prototyping Using K-Means to Enhance Classification Accuracy
Data Cleaning and Prototyping Using K-Means to Enhance Classification Accuracy Lutfi Fanani 1 and Nurizal Dwi Priandani 2 1 Department of Computer Science, Brawijaya University, Malang, Indonesia. 2 Department
More informationIn this project, I examined methods to classify a corpus of s by their content in order to suggest text blocks for semi-automatic replies.
December 13, 2006 IS256: Applied Natural Language Processing Final Project Email classification for semi-automated reply generation HANNES HESSE mail 2056 Emerson Street Berkeley, CA 94703 phone 1 (510)
More informationCluster Analysis. Angela Montanari and Laura Anderlucci
Cluster Analysis Angela Montanari and Laura Anderlucci 1 Introduction Clustering a set of n objects into k groups is usually moved by the aim of identifying internally homogenous groups according to a
More informationNeural networks for variable star classification
Neural networks for variable star classification Vasily Belokurov, IoA, Cambridge Supervised classification Multi-Layer Perceptron (MLP) Neural Networks for Pattern Recognition by C. Bishop Unsupervised
More informationAn Improved Iris Segmentation Technique Using Circular Hough Transform
An Improved Iris Segmentation Technique Using Circular Hough Transform Kennedy Okokpujie (&), Etinosa Noma-Osaghae, Samuel John, and Akachukwu Ajulibe Department of Electrical and Information Engineering,
More informationCluster Analysis. Ying Shen, SSE, Tongji University
Cluster Analysis Ying Shen, SSE, Tongji University Cluster analysis Cluster analysis groups data objects based only on the attributes in the data. The main objective is that The objects within a group
More informationECG782: Multidimensional Digital Signal Processing
ECG782: Multidimensional Digital Signal Processing Object Recognition http://www.ee.unlv.edu/~b1morris/ecg782/ 2 Outline Knowledge Representation Statistical Pattern Recognition Neural Networks Boosting
More informationApplying Kohonen Network in Organising Unstructured Data for Talus Bone
212 Third International Conference on Theoretical and Mathematical Foundations of Computer Science Lecture Notes in Information Technology, Vol.38 Applying Kohonen Network in Organising Unstructured Data
More informationSUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS
SUMMARY: DISTINCTIVE IMAGE FEATURES FROM SCALE- INVARIANT KEYPOINTS Cognitive Robotics Original: David G. Lowe, 004 Summary: Coen van Leeuwen, s1460919 Abstract: This article presents a method to extract
More informationCombining Gabor Features: Summing vs.voting in Human Face Recognition *
Combining Gabor Features: Summing vs.voting in Human Face Recognition * Xiaoyan Mu and Mohamad H. Hassoun Department of Electrical and Computer Engineering Wayne State University Detroit, MI 4822 muxiaoyan@wayne.edu
More informationImproving Latent Fingerprint Matching Performance by Orientation Field Estimation using Localized Dictionaries
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 11, November 2014,
More informationClustering CS 550: Machine Learning
Clustering CS 550: Machine Learning This slide set mainly uses the slides given in the following links: http://www-users.cs.umn.edu/~kumar/dmbook/ch8.pdf http://www-users.cs.umn.edu/~kumar/dmbook/dmslides/chap8_basic_cluster_analysis.pdf
More informationRobust PDF Table Locator
Robust PDF Table Locator December 17, 2016 1 Introduction Data scientists rely on an abundance of tabular data stored in easy-to-machine-read formats like.csv files. Unfortunately, most government records
More informationRoad Sign Visualization with Principal Component Analysis and Emergent Self-Organizing Map
Road Sign Visualization with Principal Component Analysis and Emergent Self-Organizing Map H6429: Computational Intelligence, Method and Applications Assignment One report Written By Nguwi Yok Yen (nguw0001@ntu.edu.sg)
More informationUsing Statistical Techniques to Improve the QC Process of Swell Noise Filtering
Using Statistical Techniques to Improve the QC Process of Swell Noise Filtering A. Spanos* (Petroleum Geo-Services) & M. Bekara (PGS - Petroleum Geo- Services) SUMMARY The current approach for the quality
More informationCHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS
CHAPTER 6 PERCEPTUAL ORGANIZATION BASED ON TEMPORAL DYNAMICS This chapter presents a computational model for perceptual organization. A figure-ground segregation network is proposed based on a novel boundary
More informationResting state network estimation in individual subjects
Resting state network estimation in individual subjects Data 3T NIL(21,17,10), Havard-MGH(692) Young adult fmri BOLD Method Machine learning algorithm MLP DR LDA Network image Correlation Spatial Temporal
More informationECE 285 Class Project Report
ECE 285 Class Project Report Based on Source localization in an ocean waveguide using supervised machine learning Yiwen Gong ( yig122@eng.ucsd.edu), Yu Chai( yuc385@eng.ucsd.edu ), Yifeng Bu( ybu@eng.ucsd.edu
More informationApproach to Increase Accuracy of Multimodal Biometric System for Feature Level Fusion
Approach to Increase Accuracy of Multimodal Biometric System for Feature Level Fusion Er. Munish Kumar, Er. Prabhjit Singh M-Tech(Scholar) Global Institute of Management and Emerging Technology Assistant
More informationThe latest trend of hybrid instrumentation
Multivariate Data Processing of Spectral Images: The Ugly, the Bad, and the True The results of various multivariate data-processing methods of Raman maps recorded with a dispersive Raman microscope are
More informationAn Empirical Study of Lazy Multilabel Classification Algorithms
An Empirical Study of Lazy Multilabel Classification Algorithms E. Spyromitros and G. Tsoumakas and I. Vlahavas Department of Informatics, Aristotle University of Thessaloniki, 54124 Thessaloniki, Greece
More informationA Graph Theoretic Approach to Image Database Retrieval
A Graph Theoretic Approach to Image Database Retrieval Selim Aksoy and Robert M. Haralick Intelligent Systems Laboratory Department of Electrical Engineering University of Washington, Seattle, WA 98195-2500
More informationChapter 7 UNSUPERVISED LEARNING TECHNIQUES FOR MAMMOGRAM CLASSIFICATION
UNSUPERVISED LEARNING TECHNIQUES FOR MAMMOGRAM CLASSIFICATION Supervised and unsupervised learning are the two prominent machine learning algorithms used in pattern recognition and classification. In this
More informationSOMSN: An Effective Self Organizing Map for Clustering of Social Networks
SOMSN: An Effective Self Organizing Map for Clustering of Social Networks Fatemeh Ghaemmaghami Research Scholar, CSE and IT Dept. Shiraz University, Shiraz, Iran Reza Manouchehri Sarhadi Research Scholar,
More informationChapter 3. Speech segmentation. 3.1 Preprocessing
, as done in this dissertation, refers to the process of determining the boundaries between phonemes in the speech signal. No higher-level lexical information is used to accomplish this. This chapter presents
More informationPC based EEG mapping system
PC based EEG mapping system Piotr Walerjan and Remigiusz Tarnecki Department of Neurophysiology, Nencki Institute of Experimental Biology, 3 Pasteur St., 02-093 Warsaw, Poland, email: piotrwa@nencki.gov.pl.
More information1 Introduction. 3 Data Preprocessing. 2 Literature Review
Rock or not? This sure does. [Category] Audio & Music CS 229 Project Report Anand Venkatesan(anand95), Arjun Parthipan(arjun777), Lakshmi Manoharan(mlakshmi) 1 Introduction Music Genre Classification continues
More informationChange Detection in Remotely Sensed Images Based on Image Fusion and Fuzzy Clustering
International Journal of Electronics Engineering Research. ISSN 0975-6450 Volume 9, Number 1 (2017) pp. 141-150 Research India Publications http://www.ripublication.com Change Detection in Remotely Sensed
More informationSetup & Control Program
BrainMaster tm System Type 2E Module & BMT Software for Windows tm Setup & Control Program BSetup.exe For EEG Biofeedback (Neurofeedback) Protocols Caution: Federal law restricts this device to sale by
More informationThe Curse of Dimensionality
The Curse of Dimensionality ACAS 2002 p1/66 Curse of Dimensionality The basic idea of the curse of dimensionality is that high dimensional data is difficult to work with for several reasons: Adding more
More informationCANCER PREDICTION USING PATTERN CLASSIFICATION OF MICROARRAY DATA. By: Sudhir Madhav Rao &Vinod Jayakumar Instructor: Dr.
CANCER PREDICTION USING PATTERN CLASSIFICATION OF MICROARRAY DATA By: Sudhir Madhav Rao &Vinod Jayakumar Instructor: Dr. Michael Nechyba 1. Abstract The objective of this project is to apply well known
More informationEvaluation of Different Metrics for Shape Based Image Retrieval Using a New Contour Points Descriptor
Evaluation of Different Metrics for Shape Based Image Retrieval Using a New Contour Points Descriptor María-Teresa García Ordás, Enrique Alegre, Oscar García-Olalla, Diego García-Ordás University of León.
More informationTHE ENSEMBLE CONCEPTUAL CLUSTERING OF SYMBOLIC DATA FOR CUSTOMER LOYALTY ANALYSIS
THE ENSEMBLE CONCEPTUAL CLUSTERING OF SYMBOLIC DATA FOR CUSTOMER LOYALTY ANALYSIS Marcin Pełka 1 1 Wroclaw University of Economics, Faculty of Economics, Management and Tourism, Department of Econometrics
More informationAssignment 2. Classification and Regression using Linear Networks, Multilayer Perceptron Networks, and Radial Basis Functions
ENEE 739Q: STATISTICAL AND NEURAL PATTERN RECOGNITION Spring 2002 Assignment 2 Classification and Regression using Linear Networks, Multilayer Perceptron Networks, and Radial Basis Functions Aravind Sundaresan
More informationAnalysis of Functional MRI Timeseries Data Using Signal Processing Techniques
Analysis of Functional MRI Timeseries Data Using Signal Processing Techniques Sea Chen Department of Biomedical Engineering Advisors: Dr. Charles A. Bouman and Dr. Mark J. Lowe S. Chen Final Exam October
More informationA Robust and Real-time Multi-feature Amalgamation. Algorithm for Fingerprint Segmentation
A Robust and Real-time Multi-feature Amalgamation Algorithm for Fingerprint Segmentation Sen Wang Institute of Automation Chinese Academ of Sciences P.O.Bo 78 Beiing P.R.China100080 Yang Sheng Wang Institute
More informationIntroduction to and calibration of a conceptual LUTI model based on neural networks
Urban Transport 591 Introduction to and calibration of a conceptual LUTI model based on neural networks F. Tillema & M. F. A. M. van Maarseveen Centre for transport studies, Civil Engineering, University
More informationCountermeasure for the Protection of Face Recognition Systems Against Mask Attacks
Countermeasure for the Protection of Face Recognition Systems Against Mask Attacks Neslihan Kose, Jean-Luc Dugelay Multimedia Department EURECOM Sophia-Antipolis, France {neslihan.kose, jean-luc.dugelay}@eurecom.fr
More informationIMAGE CLASSIFICATION USING COMPETITIVE NEURAL NETWORKS
IMAGE CLASSIFICATION USING COMPETITIVE NEURAL NETWORKS V. Musoko, M. Kolı nova, A. Procha zka Institute of Chemical Technology, Department of Computing and Control Engineering Abstract The contribution
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 04 130131 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Histogram Equalization Image Filtering Linear
More informationarxiv: v1 [physics.data-an] 27 Sep 2007
Classification of Interest Rate Curves Using Self-Organising Maps arxiv:0709.4401v1 [physics.data-an] 27 Sep 2007 M.Kanevski a,, M.Maignan b, V.Timonin a,1, A.Pozdnoukhov a,1 a Institute of Geomatics and
More informationSupporting Information. High-Throughput, Algorithmic Determination of Nanoparticle Structure From Electron Microscopy Images
Supporting Information High-Throughput, Algorithmic Determination of Nanoparticle Structure From Electron Microscopy Images Christine R. Laramy, 1, Keith A. Brown, 2, Matthew N. O Brien, 2 and Chad. A.
More informationEstimating Missing Attribute Values Using Dynamically-Ordered Attribute Trees
Estimating Missing Attribute Values Using Dynamically-Ordered Attribute Trees Jing Wang Computer Science Department, The University of Iowa jing-wang-1@uiowa.edu W. Nick Street Management Sciences Department,
More informationData: a collection of numbers or facts that require further processing before they are meaningful
Digital Image Classification Data vs. Information Data: a collection of numbers or facts that require further processing before they are meaningful Information: Derived knowledge from raw data. Something
More informationTwo-step Modified SOM for Parallel Calculation
Two-step Modified SOM for Parallel Calculation Two-step Modified SOM for Parallel Calculation Petr Gajdoš and Pavel Moravec Petr Gajdoš and Pavel Moravec Department of Computer Science, FEECS, VŠB Technical
More information