Linear and Non-linear Dimensionality Reduction Applied to Gene Expression Data of Cancer Tissue Samples
Franck Olivier Ndjakou Njeunje
Applied Mathematics, Statistics, and Scientific Computation
University of Maryland - College Park
fndjakou@math.umd.edu

Advisers: Wojtek Czaja, John J. Benedetto
Norbert Wiener Center for Harmonic Analysis
Department of Mathematics
University of Maryland - College Park

October 18, 2014

Abstract

In computational biology and medicine, gene expression data are a very useful and important piece of the puzzle, as they are one of the main sources from which gene function and various disease mechanisms are derived. Unfortunately, the analysis and visualization of gene expression data is not an easy task, due to the high dimensionality of the data generated from high-density microarrays. In this project, I will be interested in two methods developed to carry out dimensionality reduction on the data at hand, so that the data become better suited for further analysis. In particular, I will look at implementations of Laplacian Eigenmaps and Principal Component Analysis as pre-processing dimensionality reduction methods, and see how they compare when their output is fed to similarity learning algorithms such as clustering.
Notation

x : gene, or row of dimension M in the matrix X, unless otherwise stated.
X : gene expression matrix of dimension N × M.
$\bar{X}$ : standardized matrix of the matrix X.
M and N : dimensions of the matrix X.
y : reduced-dimension data, of dimension m.
$\bar{x}_i$ : mean of the vector $x_i$.
$\sigma_{ii}$ : variance of the vector $x_i$.
C : covariance matrix.
$\Lambda$ : diagonal matrix containing the eigenvalues $\lambda_i$ of the covariance matrix.
U : matrix containing the eigenvectors $u_i$ of the covariance matrix.
W : weight matrix.
L : Laplacian matrix.
D : diagonal or degree matrix.
$u_i$ and $f_i$ : eigenvectors.
$\lambda_i$ : eigenvalues.
$m_j$ : means, for $1 \le j \le k$, where k is the number of means.
$S_i^{(t)}$ : sets or clusters.
1. Background and Motivation

1.1. Gene Expression Data

Gene expression data are information that numerically represent the expression levels of a set of genes under environmental factors. These environmental factors could be of natural cause, such as the effect of cancer or any other disease on a set of genes; or they could be reactions to drugs or medicines taken to fight said diseases. The data are usually given in matrix form; let's call this matrix X. To obtain the gene expression matrix X, a high-density microarray is used to numerically determine the level of expression of a set of genes over multiple samples or observations. The matrix X is of dimension (N × M), where the number of genes is given by the variable N and the number of samples is given by the variable M.

Due to the usefulness of gene expression data, a wide range of algorithms have been developed to study the biological network provided by high-density microarrays. The main ones are classification and clustering techniques. It has been shown that classification of gene expression data can help us distinguish between various cancer classes, while clustering techniques can help separate tumor tissues from healthy normal tissues. Unfortunately, the number of observations or samples, M, is in general very high, which makes it difficult to visualize the results of the similarity learning analysis. Therefore, in order to determine the structure of these data, in the hope of extracting more information from them, whether it is to classify genes of the same kind based on their expression or to visually separate healthy tissues from unhealthy ones, a dimensionality reduction algorithm is necessary as a pre-processing step.

1.2. Dimension Reduction

By taking a closer look at the data in Figure 1, we can notice that within each expression array x, across the multiple samples, a lot of redundancy can be found. This provides us with a platform for pre-processing the data in order to retain only the most pertinent information. The methods used in this part of the analysis are known as dimensionality reduction techniques, and this is where I will be focusing throughout this year-long project. Given an array x of dimension M, the goal is to reduce this array to an m-dimensional array y such that m is very small compared to M, while retaining the most important information about the array across all the samples.

There are two classes of dimensionality reduction techniques: linear (LDR) and non-linear (NDR). The linear techniques assume a linear relationship between the data and perform quite well under these circumstances. The problem is that most data arising from gene expression do not entirely maintain a linear relationship, and so, to remedy this, non-linear methods have been developed. Their advantage is that non-linear methods aim to keep the intrinsic or natural geometrical structure between the variables or data points. After this step is completed, a similarity learning analysis known as clustering is applied to the data in order to acquire more information about the gene mechanisms.
Figure 1: Dimensionality reduction illustration on a single gene expression array across M samples.

1.3. Clustering

After a dimensionality reduction analysis has been performed on the data, a clustering analysis will follow and allow us to get a visual sense of the data at hand. The goal of clustering is to group elements of a set into separate subgroups, called clusters, in such a way that elements in the same cluster are, in one way or another, more similar to each other than to elements in other clusters. In practice, different clustering methods perform differently according to the nature of the data they are applied to.

2. Approach

For this project I will be interested in Principal Component Analysis (PCA) as my linear dimensionality reduction method; it is the most common linear dimensionality reduction method used in the analysis of gene expression data. I will also look at the Laplacian Eigenmap (LE) as my non-linear dimensionality reduction method. To perform similarity learning, I will be interested in how the outputs from the dimensionality reduction algorithms listed above compare when fed to hierarchical clustering and K-means clustering. The subsections below give a better understanding of how these methods operate mathematically.

2.1. Principal Component Analysis [1]

PCA is a linear dimension reduction algorithm: a statistical technique for handling multivariate data that makes use of the Euclidean distance to estimate a lower-dimensional representation of the data. While this method sometimes fails at preserving the intrinsic structure of the data (when the data have a non-linear structure), it does a good job of preserving the most variance in the data.
The algorithm for this method can be viewed as three steps.

Step 1: Given the initial matrix X representing the set of data, construct the standardized matrix $\bar{X}$ by making sure that each sample column has zero mean and unit variance:

$$\bar{X} = (\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_M) \qquad (1)$$
$$= \left( \frac{x_1 - \bar{x}_1}{\sqrt{\sigma_{11}}}, \frac{x_2 - \bar{x}_2}{\sqrt{\sigma_{22}}}, \ldots, \frac{x_M - \bar{x}_M}{\sqrt{\sigma_{MM}}} \right). \qquad (2)$$

Here, $\bar{x}_1, \bar{x}_2, \ldots, \bar{x}_M$ and $\sigma_{11}, \sigma_{22}, \ldots, \sigma_{MM}$ are respectively the mean values and the variances of the corresponding column vectors.

Step 2: Compute the covariance matrix of $\bar{X}$, then perform a spectral decomposition to get the eigenvalues and their corresponding eigenvectors:

$$C = \bar{X}^\top \bar{X} = U \Lambda U^\top. \qquad (3)$$

Here $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_M)$ with $\lambda_1 \ge \lambda_2 \ge \ldots \ge \lambda_M$, and $U = (u_1, u_2, \ldots, u_M)$; $\lambda_i$ and $u_i$ are respectively the i-th eigenvalue and the corresponding eigenvector of the covariance matrix C.

Step 3: Given that we would like the target lower-dimensional space to be of dimension m, the i-th principal component can be computed as $\bar{X} u_i$, and the reduced (N × m)-dimensional subspace is $\bar{X} U_m$ [1].

Notice from Step 3 that each principal component making up the reduced-dimensional subspace is just a linear combination of the raw variables.
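To make the three steps concrete, here is a minimal Matlab sketch of the procedure above (Matlab being the planned implementation language). The function name pca_reduce and the exact input layout are my own illustrative choices, not part of the proposal, and the sketch assumes a modern Matlab with implicit expansion:

```matlab
% Minimal PCA sketch: standardize each sample column (Step 1), spectrally
% decompose the covariance matrix (Step 2), and project onto the m leading
% eigenvectors (Step 3).
function Y = pca_reduce(X, m)
    % X: N-by-M gene expression matrix (N genes, M samples)
    Xbar = (X - mean(X, 1)) ./ std(X, 0, 1);   % zero mean, unit variance per column
    C = Xbar' * Xbar;                          % covariance matrix, C = U*Lambda*U'
    [U, Lambda] = eig(C);                      % spectral decomposition
    [~, idx] = sort(diag(Lambda), 'descend');  % order lambda_1 >= ... >= lambda_M
    Um = U(:, idx(1:m));                       % m leading eigenvectors
    Y = Xbar * Um;                             % N-by-m reduced subspace Xbar*U_m
end
```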
2.2. Laplacian Eigenmaps [2]

This method has its advantages since it is a non-linear approach to dimension reduction. It aims to preserve the intrinsic or natural geometric structure of the manifold from the high dimension to the lower dimension. This method can also be summarized in three steps.

Step 1: Given a set of N points or nodes $x_1, x_2, \ldots, x_N$ in a high-dimensional space $\mathbb{R}^M$, construct a weighted graph with N nodes. Constructing the graph is as simple as putting an edge between nodes that are close enough to each other. In doing this, one might consider the ε-neighborhood technique, where two nodes are connected if their squared Euclidean distance is less than ε, and not connected otherwise. This might sometimes lead to graphs with several connected components, or even disconnected graphs. An alternative is the k-nearest-neighbor technique, where each node is connected to its k nearest neighbors. Both techniques yield a symmetric relationship.

Step 2: Choose the weights for the edges and construct the weight matrix W [2]. This can be as simple as putting a 1 between two connected nodes and a 0 otherwise (if the nodes are not connected). One could also make the weight a function of the Euclidean distance between two connected nodes, with 0 otherwise.

Step 3: For each connected sub-graph, solve the following generalized eigenvector problem:

$$Lf = \lambda D f, \qquad (4)$$

where $D_{ii} = \sum_j W_{ji}$ is the diagonal (degree) matrix and $L = D - W$ is the Laplacian matrix. Let $f_0, f_1, \ldots, f_{N-1}$ be the solutions of (4), with corresponding eigenvalues $\lambda_0, \lambda_1, \ldots, \lambda_{N-1}$, such that $L f_i = \lambda_i D f_i$ for i going from 0 to N-1 and $0 = \lambda_0 \le \lambda_1 \le \ldots \le \lambda_{N-1}$. Then the m-dimensional Euclidean space embedding is given by:

$$x_i \mapsto y_i = (f_1(i), \ldots, f_m(i)). \qquad (5)$$
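As a concrete illustration of Steps 1 through 3, here is a minimal Matlab sketch using a k-nearest-neighbor graph with simple 0/1 weights; it assumes the resulting graph is connected (so the loop over sub-graphs is not needed), and knnsearch comes from the Statistics Toolbox. This is an illustrative sketch under those assumptions, not the implementation planned for the project:

```matlab
% Minimal Laplacian Eigenmaps sketch: build a k-nearest-neighbor graph with
% 0/1 weights, then solve the generalized eigenproblem L*f = lambda*D*f.
function Y = le_reduce(X, m, k)
    % X: N-by-M matrix, one point per row; assumes a single connected graph
    N = size(X, 1);
    idx = knnsearch(X, X, 'K', k + 1);    % Step 1: neighbors (first column is self)
    W = zeros(N);
    for i = 1:N
        W(i, idx(i, 2:end)) = 1;          % Step 2: weight 1 between connected nodes
    end
    W = max(W, W');                       % symmetrize the neighbor relation
    D = diag(sum(W, 2));                  % degree matrix D_ii = sum_j W_ij
    L = D - W;                            % graph Laplacian
    [F, Lambda] = eig(L, D);              % Step 3: generalized eigenproblem (4)
    [~, order] = sort(diag(Lambda));      % 0 = lambda_0 <= lambda_1 <= ...
    Y = F(:, order(2:m + 1));             % drop the trivial f_0; embedding (5)
end
```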
2.3. Hierarchical Clustering

Next I will consider a couple of clustering methods, starting with hierarchical clustering (HC). This is a connectivity-based algorithm: the idea is that nodes that are closer to each other are more related than those that are farther apart. There are two ways of implementing HC. One can take a bottom-up approach (of order O(n^3)), where each data point starts in its own cluster and pairs of clusters are merged as we move up; see Figure 2. Otherwise, one can consider a top-down approach (of order O(2^n), mostly due to the search algorithm), where we start with one big cluster and splits are performed recursively as we move further down the hierarchy. In order to proceed, we need to decide on a metric, a way to measure the distance between two observations, and on a linkage criterion, a function of the pairwise distances between observations in the sets whose output is the degree of similarity between sets (this function lets us know whether or not two sets can be merged). Some commonly used metrics and linkage criteria follow; a short sketch using Matlab's built-in routines is given after Figure 2.

Examples of metrics:

Euclidean distance:
$$\|a - b\|_2 = \sqrt{\sum_i (a_i - b_i)^2} \qquad (6)$$

Manhattan distance:
$$\|a - b\|_1 = \sum_i |a_i - b_i| \qquad (7)$$

Examples of linkage criteria:

Maximum or CLINK (complete-linkage clustering):
$$\max\{d(a, b) : a \in A,\ b \in B\}. \qquad (8)$$

Minimum or SLINK (single-linkage clustering):
$$\min\{d(a, b) : a \in A,\ b \in B\}. \qquad (9)$$

Mean or average-linkage clustering:
$$\frac{1}{|A|\,|B|} \sum_{a \in A} \sum_{b \in B} d(a, b). \qquad (10)$$

Figure 2: Bottom-up hierarchical clustering illustration.
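Since the proposal plans to use Matlab's built-in clustering for comparison, here is a minimal bottom-up HC sketch with the Statistics Toolbox routines pdist, linkage, and cluster; the Euclidean metric, complete linkage, and the three-cluster cut are illustrative assumptions on my part:

```matlab
% Minimal bottom-up hierarchical clustering sketch using built-in routines.
Y = rand(100, 2);                        % stand-in for dimension-reduced data
d = pdist(Y, 'euclidean');               % pairwise distances: the metric (6)
tree = linkage(d, 'complete');           % CLINK: merge by criterion (8)
labels = cluster(tree, 'maxclust', 3);   % cut the dendrogram into 3 clusters
dendrogram(tree);                        % visualize the merge hierarchy
```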
2.4. K-means Clustering

The idea here is to randomly select an initial set of K means; these could be random vectors selected either within your data set or outside it. This selection is followed by an assignment step, where each individual data point is assigned to the nearest mean according to a well-defined metric (the squared Euclidean distance). After this step is done, the mean within each of the clusters formed is updated to the mean of the data in the cluster. The two previous steps are repeated until no new assignment is made, meaning that the clusters remain the same before and after an assignment step. This method is NP-hard (Non-deterministic Polynomial-time hard) and can be summarized as follows; a sketch of the loop is given after Figure 3.

Initialize a set of k means $m_1^{(1)}, m_2^{(1)}, \ldots, m_k^{(1)}$.

Assignment step: Assign each observation $x_p$ to exactly one set $S_i$ containing the nearest mean to $x_p$:
$$S_i^{(t)} = \{x_p : \|x_p - m_i^{(t)}\|^2 \le \|x_p - m_j^{(t)}\|^2 \ \forall j,\ 1 \le j \le k\}. \qquad (11)$$

Update step: Update the mean within each cluster:
$$m_i^{(t+1)} = \frac{1}{|S_i^{(t)}|} \sum_{x_j \in S_i^{(t)}} x_j. \qquad (12)$$

Repeat the two previous steps.

Stop when no new assignments are made.

See Figure 3 for an illustration of those steps.

Figure 3: K-means clustering illustration.
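For illustration, here is a minimal from-scratch Matlab sketch of the assignment/update loop of equations (11) and (12); initializing the means by sampling k data points, and assuming no cluster empties during the iteration, are my own simplifications:

```matlab
% Minimal K-means sketch: alternate assignment (11) and update (12) until
% no observation changes cluster. Assumes no cluster becomes empty.
function [labels, mu] = kmeans_sketch(Y, k)
    % Y: N-by-m data matrix; mu: k-by-m matrix of cluster means
    N = size(Y, 1);
    mu = Y(randperm(N, k), :);                  % initialize means from the data
    labels = zeros(N, 1);
    while true
        [~, newLabels] = min(pdist2(Y, mu).^2, [], 2);  % assignment step (11)
        if isequal(newLabels, labels), break; end       % stop: no new assignments
        labels = newLabels;
        for i = 1:k
            mu(i, :) = mean(Y(labels == i, :), 1);      % update step (12)
        end
    end
end
```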
3. The Data

The NCI-60 data I will be working with consist of microarray expression levels of close to 22,000 genes within 60 different cancer cell lines. I plan on working with the traditional gene expressions across these 60 cancer cell lines, without the presence of drugs. Although there will be no drugs in the samples, the presence of the cancer stimulus is still enough to make this analysis meaningful and interesting. These data are available for download through the CellMiner database on the NCI website.

4. Implementation and Validation Methods

4.1. Software and Hardware

The two dimension reduction algorithms described above will be implemented using Matlab as the mathematical tool. This decision is due to Matlab's superior ability to deal with matrix operations. Another reason is the wide range of toolboxes available to bring this project to completion in a timely manner; the toolboxes will provide test data and prior implementations of PCA and LE for validation and benchmarking, respectively. I will be using my personal laptop with 8 GB of memory to run simulations on smaller data sets, and the Norbert Wiener Center lab machine, with 128 GB of memory, for larger data sets if needed. The clustering algorithms (K-means and hierarchical) built into Matlab will be used as comparison tools for the implemented dimension reduction algorithms.

4.2. Validation Methods

We will take advantage of the DRtoolbox [4], which contains implementations of the Principal Component Analysis and Laplacian Eigenmap methods described above. The DRtoolbox also contains a number of well-understood data sets in 3-dimensional space, with corresponding representations in 2-dimensional space, for testing and validating the dimensionality reduction methods implemented for this project. Some examples of those data sets, courtesy of the DRtoolbox [4], include the following:

The Swiss Roll dataset in Figure 4:
$$F : (x, y) \mapsto (x\cos(x),\ y,\ x\sin(x)) \qquad (13)$$

Figure 4: 3-dimensional presentation of the Swiss Roll data.

The Twin Peaks dataset in Figure 5:
$$f(x, y) = x^4 + 2x^2 + 4y^2 + 8x \qquad (14)$$

Figure 5: 3-dimensional presentation of the Twin Peaks data.
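For a sense of how such a validation set is produced, here is a small Matlab sketch that samples the Swiss Roll surface of equation (13); the point count and parameter ranges are illustrative assumptions on my part (the DRtoolbox ships its own generator):

```matlab
% Sample the Swiss Roll surface F:(x,y) -> (x*cos(x), y, x*sin(x)) of (13).
n = 2000;                                   % number of sample points (assumed)
x = (3 * pi / 2) * (1 + 2 * rand(n, 1));    % rolling parameter range (assumed)
y = 20 * rand(n, 1);                        % height along the roll (assumed)
swiss = [x .* cos(x), y, x .* sin(x)];      % n-by-3 point cloud on the manifold
scatter3(swiss(:,1), swiss(:,2), swiss(:,3), 10, x, 'filled');  % color by x
```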
5. Results

At the end of this project, we expect to see better overall performance from the Laplacian Eigenmap method than from Principal Component Analysis. This means the clusters obtained from the output of LE should be visually more significant than those coming from PCA. In addition, the clustering algorithms should produce more consistent results from the output of LE than from that of PCA.

6. Timeline

Throughout the year I intend to follow the timeline below to completion.

October - November: Implementation of the PCA algorithm. Resolve issues that come up (storage and memory). Testing and validating.
December: Mid-year presentation.
January: First-semester progress report.
February - April: Implementation of the LE algorithm. Testing and validating.
April - May: Implementation of a clustering algorithm (if time permits).
May: Final report.

7. Deliverables

The following materials are expected to be delivered by the end of the academic year:

Weekly Reports
Self Introduction
Project Proposal
First-Semester Progress Report
Mid-year Status Report
Final Report
Code for the Principal Component Analysis implementation
Code for the Laplacian Eigenmap implementation
NCI-60 data set
References

[1] Jinlong Shi, Zhigang Luo. Nonlinear dimensionality reduction of gene expression data for visualization and clustering analysis of cancer tissue samples. Computers in Biology and Medicine 40 (2010).

[2] Mikhail Belkin, Partha Niyogi. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Computation 15 (2003).

[3] Vinodh N. Rajapakse. Data Representation for Learning and Information Fusion in Bioinformatics. Digital Repository at the University of Maryland, University of Maryland (College Park, Md.) (2013).

[4] Laurens van der Maaten, Delft University of Technology. Matlab Toolbox for Dimensionality Reduction (v0.8.1b), March 21.