Emergence of Topography and Complex Cell Properties from Natural Images using Extensions of ICA

Aapo Hyvärinen and Patrik Hoyer
Neural Networks Research Center, Helsinki University of Technology
P.O. Box 5400, FIN-02015 HUT, Finland
aapo.hyvarinen@hut.fi, patrik.hoyer@hut.fi

Abstract

Independent component analysis of natural images leads to emergence of simple cell properties, i.e. linear filters that resemble wavelets or Gabor functions. In this paper, we extend ICA to explain further properties of V1 cells. First, we decompose natural images into independent subspaces instead of scalar components. This model leads to emergence of phase and shift invariant features, similar to those in V1 complex cells. Second, we define a topography between the linear components obtained by ICA. The topographic distance between two components is defined by their higher-order correlations, so that two components are close to each other in the topography if they are strongly dependent on each other. This leads to simultaneous emergence of both topography and invariances similar to complex cell properties.

1 Introduction

A fundamental approach in signal processing is to design a statistical generative model of the observed signals. Such an approach is also useful for modeling the properties of neurons in primary sensory areas. The basic models that we consider here express a static monochrome image $I(x,y)$ as a linear superposition of some features or basis functions $b_i(x,y)$:

$$I(x,y) = \sum_{i=1}^{n} b_i(x,y)\, s_i \qquad (1)$$

where the $s_i$ are stochastic coefficients, different for each image $I(x,y)$. Estimation of the model in Eq. (1) consists of determining the values of $s_i$ and $b_i(x,y)$ for all $i$ and $(x,y)$, given a sufficient number of observations of images, or in practice, image patches $I(x,y)$. We restrict ourselves here to the basic case where the $b_i(x,y)$ form an invertible linear system. Then we can invert the system as $s_i = \langle w_i, I \rangle$, where the $w_i$ denote the inverse filters, and $\langle w_i, I \rangle = \sum_{x,y} w_i(x,y)\, I(x,y)$ denotes the dot product.
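The following minimal sketch illustrates the generative model of Eq. (1) and its inversion by the filters $w_i$, with patches flattened to vectors. The patch size, the random placeholder basis, and all variable names are illustrative assumptions, not part of the paper.

```python
import numpy as np

# Sketch of the linear generative model of Eq. (1), with each image patch
# flattened to a vector of length n. The basis here is a random placeholder.
n = 256                                  # e.g. 16 x 16 patches
rng = np.random.default_rng(0)

B = rng.standard_normal((n, n))          # columns b_i: basis functions
s = rng.laplace(size=n)                  # sparse, nongaussian coefficients s_i
patch = B @ s                            # I(x, y) = sum_i b_i(x, y) s_i

# Since the b_i form an invertible linear system, the coefficients are
# recovered by the inverse filters w_i (rows of W): s_i = <w_i, I>.
W = np.linalg.inv(B)
assert np.allclose(W @ patch, s)
```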

The $w_i(x,y)$ can then be identified as the receptive fields of the model simple cells, and the $s_i$ are their activities when presented with a given image patch $I(x,y)$. In the basic case, we assume that the $s_i$ are nongaussian and mutually independent. This type of decomposition is called independent component analysis (ICA) [3, 9, 1, 8], or sparse coding [13]. Olshausen and Field [13] showed that when this model is estimated with input data consisting of patches of natural scenes, the obtained filters $w_i(x,y)$ have the three principal properties of simple cells in V1: they are localized, oriented, and bandpass (selective to scale/frequency). Van Hateren and van der Schaaf [15] compared quantitatively the obtained filters $w_i(x,y)$ with those measured by single-cell recordings in the macaque cortex, and found a good match for most of the parameters.

We show in this paper that simple extensions of the basic ICA model explain the emergence of further properties of V1 cells: topography and the invariances of complex cells. Due to space limitations, we can only give the basic ideas in this paper. More details can be found in [6, 5, 7].

First, using the method of feature subspaces [11], we model the response of a complex cell as the norm of the projection of the input vector (image patch) onto a linear subspace, which is equivalent to the classical energy models. Then we maximize the independence between the norms of such projections, or energies. Thus we obtain features that are localized in space, oriented, and bandpass, like those given by simple cells or Gabor analysis. In contrast to simple linear filters, however, the obtained feature subspaces also show emergence of phase invariance and (limited) shift or translation invariance. Maximizing the independence, or equivalently the sparseness, of the norms of the projections onto feature subspaces thus allows for the emergence of exactly those invariances that are encountered in complex cells.

Second, we extend this model of independent subspaces so that we have overlapping subspaces, and every subspace corresponds to a neighborhood on a topographic grid. This is called topographic ICA, since it defines a topographic organization between components. Components that are far from each other on the grid are independent, as in ICA. In contrast, components that are near to each other are not independent: they have strong higher-order correlations. This model shows emergence of both complex cell properties and topography from image data.

2 Independent subspaces as complex cells

In addition to the simple cells that can be modelled by basic ICA, another important class of cells in V1 is complex cells. The two principal properties that distinguish complex cells from simple cells are phase invariance and (limited) shift invariance. The purpose of the first model in this paper is to explain the emergence of such phase and shift invariant features using a modification of the ICA model. The modification is based on combining the principle of invariant-feature subspaces [11] and the model of multidimensional independent component analysis [2].

Invariant feature subspaces. The principle of invariant-feature subspaces states that one may consider an invariant feature as a linear subspace in a feature space. The value of the invariant, higher-order feature is given by (the square of) the norm of the projection of the given data point on that subspace, which is typically spanned by lower-order features.
A feature subspace, like any linear subspace, can always be represented by a set of orthogonal basis vectors, say $w_i(x,y)$, $i = 1, \dots, m$, where $m$ is the dimension of the subspace. Then the value $F(I)$ of the feature $F$ for an input vector $I(x,y)$ is given by $F(I) = \sum_{i=1}^{m} \langle w_i, I \rangle^2$, of which a square root might be taken. In fact, this is equivalent to computing the distance between the input vector $I(x,y)$ and a general linear combination of the basis vectors (filters) $w_i(x,y)$ of the feature subspace [11]. In [11], it was shown that this principle, when combined with competitive learning techniques, can lead to emergence of invariant image features.

Multidimensional independent component analysis. In multidimensional independent component analysis [2] (see also [12]), a linear generative model as in Eq. (1) is assumed. In contrast to ordinary ICA, however, the components (responses) $s_i$ are not assumed to be all mutually independent. Instead, it is assumed that the $s_i$ can be divided into couples, triplets, or in general $m$-tuples, such that the $s_i$ inside a given $m$-tuple may be dependent on each other, but dependencies between different $m$-tuples are not allowed. Every $m$-tuple of $s_i$ corresponds to $m$ basis vectors $b_i(x,y)$. The $m$-dimensional probability densities inside the $m$-tuples of $s_i$ are not specified in advance in the general definition of multidimensional ICA [2]. In the following, let us denote by $J$ the number of independent feature subspaces, and by $S_j$, $j = 1, \dots, J$, the set of the indices of the $s_i$ belonging to the subspace of index $j$.

Independent feature subspaces. Invariant-feature subspaces can be embedded in multidimensional independent component analysis by considering probability distributions for the $m$-tuples of $s_i$ that are spherically symmetric, i.e. depend only on the norm. In other words, the probability density $p_j(\cdot)$ of the $m$-tuple with index $j \in \{1, \dots, J\}$ can be expressed as a function of the sum of the squares of the $s_i$, $i \in S_j$, only. For simplicity, we assume further that the $p_j(\cdot)$ are equal for all $j$, i.e. for all subspaces.

Assume that the data consists of $K$ observed image patches $I_k(x,y)$, $k = 1, \dots, K$. Then the logarithm of the likelihood $L$ of the data given the model can be expressed as

$$\log L(w_i(x,y), i = 1, \dots, n) = \sum_{k=1}^{K} \sum_{j=1}^{J} \log p\Big(\sum_{i \in S_j} \langle w_i, I_k \rangle^2\Big) + K \log |\det W| \qquad (2)$$

where $p(\sum_{i \in S_j} s_i^2) = p_j(s_i, i \in S_j)$ gives the probability density inside the $j$-th $m$-tuple of $s_i$, and $W$ is a matrix containing the filters $w_i(x,y)$ as its columns. As in basic ICA, prewhitening of the data allows us to consider the $w_i(x,y)$ to be orthonormal, and this implies that $\log |\det W|$ is zero [6]. Thus we see that the likelihood in Eq. (2) is a function of the norms of the projections of $I_k(x,y)$ on the subspaces indexed by $j$, which are spanned by the orthonormal basis sets given by $w_i(x,y)$, $i \in S_j$.

Since the norm of the projection of visual data on practically any subspace has a supergaussian distribution, we need to choose the probability density $p$ in the model to be sparse [13], i.e. supergaussian [8]. For example, we could use the following probability distribution

$$\log p\Big(\sum_{i \in S_j} s_i^2\Big) = -\alpha \Big[\sum_{i \in S_j} s_i^2\Big]^{1/2} + \beta \qquad (3)$$

which could be considered a multi-dimensional version of the exponential distribution. Now we see that the estimation of the model consists of finding subspaces such that the norms of the projections of the (whitened) data on those subspaces have maximally sparse distributions.

The introduced "independent (feature) subspace analysis" is a natural generalization of ordinary ICA. In fact, if the projections on the subspaces are reduced to dot products, i.e. projections on 1-D subspaces, the model reduces to ordinary ICA (provided that, in addition, the independent components are assumed to have nonskewed distributions).
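As an illustration, the following sketch evaluates the independent subspace objective of Eqs. (2)-(3) for whitened data and orthonormal filters (so that $\log|\det W| = 0$), and computes the subspace norms that are identified below with complex cell responses. The subspace layout, the constants $\alpha$ and $\beta$, and the data shapes are assumptions of the example, not specifications from the paper.

```python
import numpy as np

# Sketch of independent subspace analysis: n = 160 whitened dimensions grouped
# into J = 40 subspaces of dimension m = 4, as in the experiments reported below.
n, m = 160, 4
J = n // m
alpha, beta = 1.0, 0.0                   # constants of the sparse density, Eq. (3)

def subspace_energies(W, X):
    """W: (n, n) orthonormal filters (rows w_i); X: (n, K) whitened patches.
    Returns the (J, K) array of sums of squares within each subspace S_j."""
    S = W @ X                            # responses s_i = <w_i, I_k>
    return (S ** 2).reshape(J, m, -1).sum(axis=1)

def isa_log_likelihood(W, X):
    # Eq. (2) with log|det W| = 0, using the sparse density of Eq. (3).
    E = subspace_energies(W, X)
    return np.sum(-alpha * np.sqrt(E) + beta)

def complex_cell_responses(W, X):
    # Norms of the projections on the subspaces: the model complex cell outputs.
    return np.sqrt(subspace_energies(W, X))
```

Estimation would then maximize this log-likelihood over orthonormal $W$, e.g. by gradient ascent with re-orthogonalization; that optimization loop is omitted from the sketch.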

It is to be expected that the norms of the projections on the subspaces represent some higher-order, invariant features. The exact nature of the invariances has not been specified in the model but will emerge from the input data, using only the prior information on their independence.

When independent subspace analysis is applied to natural image data, we can identify the norms of the projections $(\sum_{i \in S_j} s_i^2)^{1/2}$ as the responses of the complex cells. If the individual filter vectors $w_i(x,y)$ are identified with the receptive fields of simple cells, this can be interpreted as a hierarchical model where the complex cell response is computed from the simple cell responses $s_i$, in a manner similar to the classical energy models for complex cells. Experiments (see below and [6]) show that the model does lead to emergence of those invariances that are encountered in complex cells.

3 Topographic ICA

The independent subspace analysis model introduces a certain dependence structure for the components $s_i$. Let us assume that the distribution in the subspace is sparse, which means that the norm of the projection is most of the time very near to zero. This is the case, for example, if the densities inside the subspaces are specified as in (3). Then the model implies that two components $s_i$ and $s_j$ that belong to the same subspace tend to be nonzero simultaneously. In other words, $s_i^2$ and $s_j^2$ are positively correlated. This seems to be a preponderant structure of dependency in most natural data. For image data, this has also been noted by Simoncelli [14].

Now we generalize the model defined by (2) so that it models this kind of dependence not only inside the $m$-tuples, but among all "neighboring" components. A neighborhood relation defines a topographic order [10]. (A different generalization based on an explicit generative model is given in [5].) We define the model by the following likelihood:

$$\log L(w_i(x,y), i = 1, \dots, n) = \sum_{k=1}^{K} \sum_{j=1}^{n} G\Big(\sum_{i=1}^{n} h(i,j) \langle w_i, I_k \rangle^2\Big) + K \log |\det W| \qquad (4)$$

Here, $h(i,j)$ is a neighborhood function, which expresses the strength of the connection between the $i$-th and $j$-th units. The neighborhood function can be defined in the same way as with the self-organizing map [10]. Neighborhoods can thus be defined as one-dimensional or two-dimensional; 2-D neighborhoods can be square or hexagonal. A simple example is to define a 1-D neighborhood relation by

$$h(i,j) = \begin{cases} 1, & \text{if } |i - j| \le m \\ 0, & \text{otherwise} \end{cases} \qquad (5)$$

The constant $m$ defines here the width of the neighborhood. The function $G$ has a similar role as the log-density of the independent components in classic ICA. For image data, or other data with a sparse structure, $G$ should be chosen as in independent subspace analysis, see Eq. (3).
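The sketch below evaluates the topographic ICA objective of Eq. (4) with the 1-D neighborhood of Eq. (5), again assuming whitened data and orthonormal filters. The neighborhood width and the data shapes are illustrative assumptions (the experiments below use a 3 x 3 neighborhood on a 2-D torus instead of a 1-D neighborhood).

```python
import numpy as np

n = 160                                   # number of components
m_width = 2                               # neighborhood width m of Eq. (5)

# h(i, j) = 1 if |i - j| <= m, and 0 otherwise (1-D neighborhood).
idx = np.arange(n)
H = (np.abs(idx[:, None] - idx[None, :]) <= m_width).astype(float)

def G(u):
    # Sparse nonlinearity chosen as in independent subspace analysis, Eq. (3).
    return -np.sqrt(u + 1e-8)

def topographic_ica_log_likelihood(W, X):
    """W: (n, n) orthonormal filters in whitened space; X: (n, K) patches."""
    S2 = (W @ X) ** 2                     # squared responses <w_i, I_k>^2
    local_energies = H @ S2               # sum_i h(i, j) <w_i, I_k>^2, shape (n, K)
    return np.sum(G(local_energies))      # log|det W| = 0 for orthonormal W
```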

Properties of the topographic ICA model. Here, we consider for simplicity only the case of sparse data. The first basic property is that all the components $s_i$ are uncorrelated, as can easily be proven by symmetry arguments [5]. Moreover, their variances can be defined to be equal to unity, as in classic ICA. Second, components $s_i$ and $s_j$ that are near to each other, i.e. such that $h(i,j)$ is significantly non-zero, tend to be active (non-zero) at the same time. In other words, their energies $s_i^2$ and $s_j^2$ are positively correlated. Third, latent variables that are far from each other are practically independent. Higher-order correlation decreases as a function of distance, assuming that the neighborhood is defined in a way similar to that in (5). For details, see [5].

Let us note that our definition of topography by higher-order correlations is very different from the one used in practically all existing topographic mapping methods. Usually, the distance is defined by basic geometrical relations like Euclidean distance or correlation. Interestingly, our principle makes it possible to define a topography even among a set of orthogonal vectors whose Euclidean distances are all equal. Such orthogonal vectors are actually encountered in ICA, where the basis vectors and filters can be constrained to be orthogonal in the whitened space.

4 Experiments with natural image data

We applied our methods to natural image data. The data was obtained by taking 16 x 16 pixel image patches at random locations from monochrome photographs depicting wild-life scenes (animals, meadows, forests, etc.). Preprocessing consisted of removing the DC component and reducing the dimension of the data to 160 by PCA. For details on the experiments, see [6, 5].

Fig. 1 shows the basis vectors of the 40 feature subspaces (complex cells), when the subspace dimension was chosen to be 4. It can be seen that the basis vectors associated with a single complex cell all have approximately the same orientation and frequency. Their locations are not identical, but close to each other. The phases differ considerably. Every feature subspace can thus be considered a generalization of a quadrature-phase filter pair as found in the classical energy models, enabling the cell to be selective to some given orientation and frequency, but invariant to phase and somewhat invariant to shifts. Using 4 dimensions instead of 2 greatly enhances the shift invariance of the feature subspace.

In topographic ICA, the neighborhood function was defined so that every neighborhood consisted of a 3 x 3 square of 9 units on a 2-D torus lattice [10]. The obtained basis vectors are shown in Fig. 2. The basis vectors are similar to those obtained by ordinary ICA of image data [13, 1]. In addition, they have a clear topographic organization. The connection to independent subspace analysis is also clear from Fig. 2: two neighboring basis vectors tend to be of the same orientation and frequency, and their locations are near to each other as well, whereas their phases are very different. This means that a neighborhood of such basis vectors, i.e. simple cells, is similar to an independent subspace; thus it functions as a complex cell. This was demonstrated in detail in [5].
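For concreteness, here is a sketch of the preprocessing described above: sampling 16 x 16 patches at random locations, removing the DC component, and reducing the dimension to 160 by PCA. Whitening is added here as an assumption, since the model uses orthonormal filters in the whitened space; the `images` array, the number of patches, and all names are illustrative.

```python
import numpy as np

def preprocess(images, n_patches=50000, patch=16, dim=160, seed=0):
    """images: array of shape (num_images, H, W). Returns (dim, n_patches) data."""
    rng = np.random.default_rng(seed)
    h, w = images.shape[1], images.shape[2]
    X = np.empty((patch * patch, n_patches))
    for k in range(n_patches):
        img = images[rng.integers(len(images))]
        r, c = rng.integers(h - patch + 1), rng.integers(w - patch + 1)
        X[:, k] = img[r:r + patch, c:c + patch].ravel()
    X -= X.mean(axis=0)                       # remove the DC component of each patch
    # PCA to 160 dimensions, followed by whitening.
    d, E = np.linalg.eigh(np.cov(X))
    keep = np.argsort(d)[::-1][:dim]
    V = E[:, keep] / np.sqrt(d[keep])         # whitening matrix, shape (256, 160)
    return V.T @ X                            # whitened data, shape (160, n_patches)
```

The whitened output could then be fed to the objectives sketched in the previous sections.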
5 Discussion

We introduced here two extensions of ICA that are especially useful for image modelling. The first model uses a subspace representation to model invariant features. It turns out that the independent subspaces of natural images are similar to complex cells. The second model is a further extension of the independent subspace model. This topographic ICA model is a generative model that combines topographic mapping with ICA. As in all topographic mappings, the distance in the representation space (on the topographic "grid") is related to some measure of distance between represented components. In topographic ICA, the distance between represented components is defined by higher-order correlations, which gives the natural distance measure in the context of ICA.

An approach closely related to ours is given by Kohonen's Adaptive-Subspace Self-Organizing Map [11]. However, the emergence of shift invariance in [11] was conditional on restricting consecutive patches to come from nearby locations in the image, giving the input data a temporal structure like that of a smoothly changing image sequence. Similar developments were given by Földiák [4]. In contrast to these two theories, we formulated an explicit image model. This independent subspace analysis model shows that emergence of complex cell properties is possible using patches at random, independently selected locations, which proves that there is enough information in static images to explain the properties of complex cells. Moreover, by extending this subspace model to model topography, we showed that the emergence of both topography and complex cell properties can be explained by a single principle: neighboring cells should have strong higher-order correlations.

References

[1] A. J. Bell and T. J. Sejnowski. The 'independent components' of natural scenes are edge filters. Vision Research, 37, 1997.
[2] J.-F. Cardoso. Multidimensional independent component analysis. In Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP'98), Seattle, WA, 1998.
[3] P. Comon. Independent component analysis - a new concept? Signal Processing, 36, 1994.
[4] P. Földiák. Learning invariance from transformation sequences. Neural Computation, 3, 1991.
[5] A. Hyvärinen and P. O. Hoyer. Topographic independent component analysis. Submitted.
[6] A. Hyvärinen and P. O. Hoyer. Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Computation, in press.
[7] A. Hyvärinen, P. O. Hoyer, and M. Inki. The independence assumption: Analyzing the independence of the components by topography. In M. Girolami, editor, Advances in Independent Component Analysis. Springer-Verlag, in press.
[8] A. Hyvärinen and E. Oja. A fast fixed-point algorithm for independent component analysis. Neural Computation, 9(7), 1997.
[9] C. Jutten and J. Herault. Blind separation of sources, part I: An adaptive algorithm based on neuromimetic architecture. Signal Processing, 24:1-10, 1991.
[10] T. Kohonen. Self-Organizing Maps. Springer-Verlag, Berlin, Heidelberg, New York, 1995.
[11] T. Kohonen. Emergence of invariant-feature detectors in the adaptive-subspace self-organizing map. Biological Cybernetics, 75, 1996.
[12] J. K. Lin. Factorizing multivariate function classes. In Advances in Neural Information Processing Systems, volume 10. The MIT Press, 1998.
[13] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381, 1996.
[14] E. P. Simoncelli and O. Schwartz. Modeling surround suppression in V1 neurons with a statistically-derived normalization model. In Advances in Neural Information Processing Systems 11. MIT Press, 1999.
[15] J. H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. Royal Society ser. B, 265, 1998.

Figure 1: Independent subspaces of natural image data. The model gives Gabor-like basis vectors for image windows. Every group of four basis vectors corresponds to one independent feature subspace, or complex cell. Basis vectors in a subspace are similar in orientation, location and frequency. In contrast, their phases are very different.

Figure 2: Topographic ICA of natural image data. This gives Gabor-like basis vectors as well. Basis vectors that are similar in orientation, location and/or frequency are close to each other. The phases of nearby basis vectors are very different, giving each neighborhood properties similar to a complex cell.
