Localization from Pairwise Distance Relationships using Kernel PCA

Center for Robotics and Embedded Systems Technical Report
Odest Chadwicke Jenkins, cjenkins@usc.edu

1 Introduction

In this paper, we present a method for estimating the relative localization of a set of points from information indicative of pairwise distance. While other methods for localization have been proposed, we propose an alternative that is simple to implement and easily extendable to datasets of varied dimensionality. This method uses the Kernel PCA framework [6] (or, equivalently, the multidimensional scaling framework [8]) to produce localization coordinates. To localize, Kernel PCA is performed on a matrix of pairwise similarity values, assuming the similarity values reflect the distance metric that generated the data. We test this localization method on 3D points comprising a human upper body and on signal strength data from Mote processor modules.

2 What is Kernel PCA?

As developed by Schölkopf et al., Kernel PCA (KPCA) is a technique for nonlinear dimension reduction of data with an underlying nonlinear spatial structure. A key insight behind KPCA is to transform the input data into a higher-dimensional feature space. The feature space is constructed such that a nonlinear operation in the input space can be realized as a linear operation in the feature space. Consequently, standard PCA (a linear operation) applied in feature space performs nonlinear PCA in the input space. The caveat to operating in feature space, however, is that the mapping from input space to feature space is not uniquely identifiable, and explicitly producing the coordinates of the input data in a feature space can be difficult. To avoid this problem, KPCA uses the kernel trick to perform feature space operations using only dot products between data points in the feature space. In simpler terms, KPCA requires only the similarity between all pairs of data points to perform nonlinear dimension reduction. Similarity between data pairs can be measured in input space using some chosen similarity measure: KPCA uses kernel functions (e.g., radial-basis functions), whereas Isomap [7], a similar technique, uses geodesic distances. By using pairwise similarity, the relative topology of the data is preserved between the input and feature spaces; however, the specific metric information about the data's position, scale, and orientation is lost.

In a larger sense, KPCA provides a means for mapping data between Cartesian spaces and pairwise distance matrices while preserving the relative topological structure of the data. The KPCA framework provides general procedures for computing pairwise similarity from Cartesian input data (via an all-pairs distance computation) and for computing Cartesian locations of data from pairwise similarity (via an MDS-like procedure). In performing nonlinear dimension reduction, KPCA transforms input data to pairwise distances, and then to Cartesian locations of an embedding, such that the embedding locations preserve the topology of the input data while removing the nonlinearity of its underlying structure. We have found that Isomap typically performs better in this capacity [3, 1], but KPCA provides a more grounded explanation of these procedures for subtleties such as feature space centering.
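As an illustration of the pairwise similarity input that KPCA operates on, the sketch below builds a radial-basis kernel matrix from Cartesian input data. It is written in Python rather than the MATLAB of our implementation, and the bandwidth sigma is an illustrative choice, not a value prescribed by this report.

    import numpy as np

    def rbf_kernel_matrix(X, sigma=1.0):
        # Pairwise similarities k(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 sigma^2))
        # for an n x d array X of n input points in d dimensions.
        sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq_dists / (2.0 * sigma ** 2))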
3 Kernel PCA localization

If we focus on the transformation from pairwise distances to coordinates, this transformation is similar to localizing a set of entities from pairwise distance estimates. For example, consider a group of robots capable of estimating their distances to the other robots in the group. The distance measurements could be actual physical distances, possibly from laser range readings [2], or indicators with a proportional relationship to physical distance, such as radio signal strength [4]. The distances between the robot entities are directly or indirectly grounded in the physical world. The topology of this physical grounding is preserved in the pairwise distances between robot entities; however, metric information specific to the physical world is lost. The embedding produced by KPCA specifies the relative locations of the robot entities with respect to each other. KPCA produces feature space eigenvectors: the i-th component of the j-th feature space eigenvector specifies the location of the i-th entity along the j-th embedding dimension.

We assume the number of dimensions to select for the embedding is the same as the dimensionality of the input data (e.g., 2D coordinates for a planar world, 3D coordinates for volume data) and is known a priori. If landmarks with absolute coordinates in the physical world are included in the KPCA process, an affine transformation can be found to restore world metric information to the embedded coordinates. For this paper, we assume that all entities can estimate distance to all other entities. This assumption does not hold in general and is a potential source of localization error, because without it the localization problem becomes underconstrained. Our implementation of KPCA is derived from the Kernel PCA toy example for MATLAB provided by the authors of [6], which can be downloaded from http://www.kernel-machines.org/.
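A minimal sketch of this distances-to-coordinates step is given below, assuming a symmetric matrix D of pairwise Euclidean distances and following the classical MDS reading of KPCA [8]; it is not our MATLAB implementation verbatim.

    import numpy as np

    def kpca_localize(D, n_dims=2):
        # Embed n entities from an n x n symmetric pairwise distance matrix D.
        n = D.shape[0]
        K = -0.5 * D ** 2                     # squared distances in inner-product form
        J = np.eye(n) - np.ones((n, n)) / n   # centering matrix (feature space centering)
        K = J @ K @ J                         # double centering
        eigvals, eigvecs = np.linalg.eigh(K)  # eigenvalues in ascending order
        top = np.argsort(eigvals)[::-1][:n_dims]
        # Row i holds the embedding coordinates of entity i; eigenvectors are
        # scaled by the square roots of their (clamped nonnegative) eigenvalues.
        return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))

Up to rotation, reflection, and translation, the rows of the result approximate the original entity locations; the landmark-based affine fit mentioned above can then restore absolute world coordinates.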

4 Results

We tested the Kernel PCA localization technique on two data sources with available ground truth. The first dataset is a set of voxels collected from a volume capture system (Figure 1). The second dataset is pairwise signal strength data from multiple communicating radio modules (Figure 2).

4.1 Human Volumes

The human volume dataset was collected from a volume capture system implemented from the description provided by Penny et al. [5]. This dataset consists of a set of 3D points representing the occupancy of a human upper body within a voxel grid; the volume of each voxel is approximately 2.5 in³. The volume points were converted into a matrix of pairwise Euclidean distances to serve as input to the localization, and the original points were saved as ground truth. The pairwise distance matrix was run through KPCA localization to reproduce volume point locations that closely approximate, but do not exactly match, the input volume. The discrepancy between the input and localized volumes can be seen in their respective pairwise distance matrices. These matrices illustrate that the embedding process does not completely preserve the pairwise distances, but provides a good approximation. This inexactness stems from the fact that KPCA is itself a least-squares approximation in feature space. We hypothesize that the localization could be automatically refined to minimize the difference between the pairwise distances of the input and localized volumes, but we leave this for future work.

4.2 Radio Modules

The radio signal strength data was collected from a set of 7 Mote processor modules arranged in a 2D hexagon pattern while continuously passing messages among themselves. These data and their ground truth were graciously provided by Gabe Sibley. The signal strength between each pair of radios was used as an indicator of the distance between the pair and formed into a matrix of pairwise distances. However, KPCA cannot be performed directly on this matrix because of its gross asymmetries: several aspects of the KPCA technique assume the matrix is symmetric (or at least close to symmetric), and one critical aspect requiring symmetry is the feature space centering mechanism. We therefore applied two modifications to KPCA localization to address the problems associated with asymmetric pairwise distances. The first modification scale-normalized each column, followed by scale normalization of each row. This normalization counters variation between radios with different transmitting strengths and reception capabilities; making an analogy between radios and radial-basis kernels, it is an attempt to scale kernels of different widths to a common width.
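A minimal sketch of this first modification is given below. Normalizing each column and then each row by its maximum is an illustrative choice of scale statistic, as this report does not prescribe one, and it assumes the off-diagonal entries of S are positive.

    import numpy as np

    def scale_normalize(S):
        # S: n x n asymmetric matrix of signal-strength-derived distances.
        S = S / S.max(axis=0, keepdims=True)  # scale-normalize each column
        S = S / S.max(axis=1, keepdims=True)  # then scale-normalize each row
        return S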
The second modification assumes that normalization will not bring symmetry to the distance matrix, and thus that feature space centering will not work. This modification removes feature space centering from the localization process; the feature space mean is then represented in the first eigenvector of the embedding. We therefore ignore the first eigenvector and use the second and third to perform the localization.
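A minimal sketch of this uncentered variant is given below, assuming the normalized signal strengths have already been formed into an approximately symmetric kernel-like matrix K; the explicit symmetrization is an added safeguard for the eigensolver, not a step stated in this report.

    import numpy as np

    def kpca_localize_uncentered(K):
        # K: n x n approximately symmetric similarity matrix; returns n x 2 coordinates.
        K = 0.5 * (K + K.T)                      # fold in any residual asymmetry
        eigvals, eigvecs = np.linalg.eigh(K)     # eigenvalues in ascending order
        picked = np.argsort(eigvals)[::-1][1:3]  # skip the first (mean) eigenvector
        return eigvecs[:, picked] * np.sqrt(np.maximum(eigvals[picked], 0.0))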

The radio placement produced by KPCA localization was surprisingly representative of the actual hexagonal positioning, though far from perfect. Considering the factors obfuscating the structure of the radio positions, and the room left for improving KPCA localization, this result is quite promising.

5 Acknowledgements

The author would like to thank Gabe Sibley for providing the radio signal strength data, and Maja Matarić, Gaurav Sukhatme, Andrew Howard, and Evan Drumwright for insightful discussions.

References

[1] Chi-Wei Chu, Odest Chadwicke Jenkins, and Maja J. Matarić. Markerless kinematic model and motion capture from volume sequences. In To appear in the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2003), Madison, Wisconsin, USA, June 2003.

[2] A. Howard, M. J. Matarić, and G. S. Sukhatme. An incremental deployment algorithm for mobile robot teams. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 2849–2854, Lausanne, Switzerland, October 2002.

[3] Odest Chadwicke Jenkins and Maja J. Matarić. Automated derivation of behavior vocabularies for autonomous humanoid motion. In To appear in the Second International Joint Conference on Autonomous Agents and Multiagent Systems (Agents 2003), Melbourne, Australia, July 2003.

[4] Andrew M. Ladd, Kostas E. Bekris, Guillaume Marceau, Algis Rudys, Dan S. Wallach, and Lydia E. Kavraki. Robotics-based location sensing using wireless Ethernet. In Proceedings of the Eighth ACM International Conference on Mobile Computing and Networking (MobiCom 2002), Atlanta, Georgia, USA, September 2002.

[5] Simon G. Penny, Jeffrey Smith, and Andre Bernhardt. Traces: Wireless full body tracking in the CAVE. In Ninth International Conference on Artificial Reality and Telexistence (ICAT 99), December 1999.

[6] Bernhard Schölkopf, Alex J. Smola, and Klaus-Robert Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5):1299–1319, 1998.

[7] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.

[8] Christopher K. I. Williams. On a connection between kernel PCA and metric multidimensional scaling. Machine Learning, 46(1-3):11–19, 2002.

Figure 1: KPCA localization of the human volume data. (a), (b): input and localized volume points; (c): original distances; (d): embedding distances.

Figure 2: KPCA localization of the radio module data, panels (a)–(c).