Learning the Epitome of an Image


University of Toronto TR PSI, November 10, 2002

Brendan J. Frey (University of Toronto, 10 King's College Rd., Toronto ON, M5S 3G4, Canada)
Nebojsa Jojic (Microsoft Research, One Microsoft Way, Redmond WA, USA)

Abstract

Estimating and visualizing high-order statistics of multivariate data is important for analysis, synthesis and visualization in science and engineering. Often, data consists of measurements on an underlying domain, such as space or time. Examples include images, audio signals and text, where the domains are 2-D space, 1-D time and 1-D symbol index. We introduce a model called the epitome that can simultaneously represent multi-scale high-order statistics as a set of parameters on the same domain as the input data. A cost function measures how well multi-scale patches drawn from the input data match the epitome, and this cost function can be optimized efficiently using the EM algorithm. Our technique reduces a large number of high-order statistics to an intuitive, compact representation that is suitable for a variety of data processing applications. We demonstrate our method using problems of object detection, texture segmentation and image retrieval.

One approach to the problems of fully or semi-automated visualization, analysis and synthesis of data is to learn a compact model that accurately accounts for interesting properties of the data and can be used as a summary of the data [1, 2]. The hope is that the compact

model will offer a way to visualize complex relationships, discover high-order patterns that are useful for further analysis by human or machine, and provide an efficient parameterization of the data that is useful for synthesizing variations of the data and making predictions.

We introduce a parametric generative model called the epitome and an unsupervised learning algorithm for fitting the epitome to input data. When data is measured on some underlying domain such as space or time, a sensible model should account for properties that emerge from invariances in the domain. The epitome accounts for one of the fundamental elements in Grenander's pattern theory, domain warping [1], and the computations needed to learn the epitome are related to Mumford's proposition for the implementation of domain warping in neuronal architectures [3].

An epitome is an image of parameters that specifies a generative model of patches taken from an input image. Although the domain on which the data is measured can have arbitrary dimensionality, for concreteness we focus on 2-dimensional images, such as those shown in Fig. 1 [4]. Given an input image, a training set can be formed by randomly selecting patches of various sizes from the input image. In fact, the patches may be of the same size or of a variety of sizes, and they may be taken regularly from the input or from random locations. The set of patches is then used to obtain a maximum likelihood estimate of the epitome, as shown on the far right in Fig. 1. Although the epitome is over five times smaller in area than the input image, it contains many of the multi-scale high-order statistics present in the input.

We now introduce a novel cost function for learning the epitome. The goal is to maximize the marginal probability of the input patches, summing over all possible ways in which the input patches can be generated from the epitome. An input patch X is a function x(k) on an underlying domain K, where k ∈ K is a vector whose dimensionality is equal to the dimensionality of the image domain. We assume the domain is discrete, so that K is a subset of the integer tuples. In Fig. 1, the underlying domain is 2-D space, so k is a 2-D integer vector.

Figure 1: (A) To estimate the epitome of an input image, a training set is formed by extracting patches of random sizes from random positions in the input image. Then, the expectation maximization (EM) algorithm is used to learn the more compact epitome, so that the set of patches is also likely to have been drawn from the epitome. (B) The mean color image of the epitome during learning [9], where each successive picture corresponds to an additional 3 iterations of EM. The final epitome contains many of the large-scale structures in the input image, but also retains many details, such as the sharp boundaries between the flowers and the dark background.
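The training-set construction sketched in Fig. 1(A) and detailed in note [9] amounts to sampling square patches of several widths, with the count per width chosen so that every size category contributes roughly the same total area. The following is a minimal sketch of that sampling step, assuming a grayscale numpy image at least as large as the largest patch width; the function name `sample_patches` and its defaults are illustrative rather than taken from the paper's code.

```python
import numpy as np

def sample_patches(image, widths=(80, 64, 48, 32, 24, 20, 16, 12, 10, 8),
                   n_total=4000, seed=0):
    """Draw square patches at random positions, with counts proportional to
    1/width^2 so that each size category covers about the same total area."""
    rng = np.random.default_rng(seed)
    inv_area = np.array([1.0 / (w * w) for w in widths])
    counts = np.maximum(1, np.round(n_total * inv_area / inv_area.sum())).astype(int)
    patches = []
    for w, n in zip(widths, counts):
        for _ in range(int(n)):
            r = rng.integers(0, image.shape[0] - w + 1)
            c = rng.integers(0, image.shape[1] - w + 1)
            patches.append(image[r:r + w, c:c + w].copy())
    return patches
```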

The epitome E is represented by an image of model parameters e(j), a set of mappings M that specifies how patches may be generated from the epitome, and one probability for each mapping. The domain of the model parameters, J, is usually smaller than the domain of the input image. A patch is generated by first selecting a mapping m with probability ρ(m), where the ρ(m) sum to one over all mappings in M. Then, each value in the patch, x(k), is generated independently using the corresponding parameter in the epitome, e(m[k]). The density of the patch given the mapping and the epitome is

p(x \mid m, E) = \prod_{k \in K} f(x(k); e(m[k])),   (1)

where f(·; ·) is the density function for each value in the patch. Out of all parameters, the density of x(k) depends only on the parameter e(m[k]). The most appropriate form of the density function f is application-specific [5]. Multiplying by the prior probability of the mapping and summing over all mappings, we obtain the marginal likelihood of the epitome,

p(x \mid E) = \sum_{m \in M} \rho(m)\, p(x \mid m, E).   (2)

The marginal log-likelihood for a set of N i.i.d. training patches X_1, ..., X_N is

L(E) = \sum_{n=1}^{N} \log \Big( \sum_{m \in M} \rho(m)\, p(X_n \mid m, E) \Big).   (3)

This is the log-probability that the training patches were generated from the epitome. Learning entails searching for the epitome E that maximizes the highly nonlinear function L(E). While a variety of nonlinear optimization techniques can be applied, we present an efficient expectation maximization (EM) algorithm [6] that treats the mapping as a hidden variable. Initially, the epitome E is set to a random value, and then the algorithm alternates between an E step and an M step until convergence of the epitome. In the E step, for each input patch, the posterior distribution over the mapping is computed using Bayes rule:

P(m \mid X_n, E) = \frac{p(X_n \mid m, E)\,\rho(m)}{\sum_{m' \in M} p(X_n \mid m', E)\,\rho(m')}.   (4)

In the M step, ρ(m) is set to (1/N) \sum_{n=1}^{N} P(m \mid X_n, E), and e' is set to the value of e that maximizes the expected value of log p(X | m, E),

\sum_{n=1}^{N} \sum_{m \in M} P(m \mid X_n, E) \sum_{k \in K} \log f(x_n(k); e'(m[k])).   (5)

In the M step, the parameter e'(j) at location j can be solved for independently of the other parameters, by maximizing

\sum_{n=1}^{N} \sum_{k \in K} \sum_{m:\, m[k] = j} P(m \mid X_n, E)\, \log f(x_n(k); e'(j)).   (6)

That is, e'(j) is adjusted to maximize the log-likelihood of all of the values in every patch, where each value is weighted by the posterior probability that the value was mapped from position j in the epitome. The EM algorithm produces a sequence of parameter estimates with monotonically increasing likelihood [7].

We report results on real-valued visual images using the normal distribution for f and the set of mappings corresponding to cutting out axis-aligned rectangular patches from all possible locations in the epitome. See [8] for details on the EM updates and how they can be computed efficiently using convolutions. Here, the parameter e(j) associated with position j in the epitome consists of a mean and a variance. Fig. 1 shows the means, as an image, learned from a color picture of a dog standing on a sidewalk in front of a garden [9].
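As a concrete illustration of the updates in Eqs. (4)-(6) for this Gaussian case, the following is a minimal numpy sketch, not the authors' implementation: it assumes grayscale patches no larger than the epitome, a uniform prior ρ(m) over all toroidal offsets, and a brute-force E step. All names, such as `epitome_em`, are illustrative.

```python
import numpy as np

def epitome_em(patches, epitome_size=32, n_iters=10, seed=0):
    """EM for a Gaussian epitome: patches is a list of 2-D arrays whose sides
    are at most epitome_size. Returns the mean and variance images."""
    rng = np.random.default_rng(seed)
    allvals = np.concatenate([p.ravel() for p in patches])
    # Initialize as in note [9]: data mean plus a little noise; data variance.
    mu = allvals.mean() + 0.01 * allvals.std() * rng.standard_normal((epitome_size, epitome_size))
    var = np.full((epitome_size, epitome_size), allvals.var())
    for _ in range(n_iters):
        num = np.zeros_like(mu)     # posterior-weighted sum of values per epitome pixel
        num_sq = np.zeros_like(mu)  # ... of squared values
        den = np.zeros_like(mu)     # total posterior weight per epitome pixel
        for x in patches:
            h, w = x.shape
            logp = np.empty((epitome_size, epitome_size))
            # E step (Eq. 4): log p(x | m, E) for every toroidal offset m, uniform rho(m).
            for i in range(epitome_size):
                for j in range(epitome_size):
                    rows = (i + np.arange(h)) % epitome_size
                    cols = (j + np.arange(w)) % epitome_size
                    m, v = mu[np.ix_(rows, cols)], var[np.ix_(rows, cols)]
                    logp[i, j] = -0.5 * np.sum(np.log(2 * np.pi * v) + (x - m) ** 2 / v)
            post = np.exp(logp - logp.max())
            post /= post.sum()
            # M step accumulation (Eq. 6): every patch value votes, with weight
            # P(m | x, E), for the epitome position it maps to under m.
            for i in range(epitome_size):
                for j in range(epitome_size):
                    rows = (i + np.arange(h)) % epitome_size
                    cols = (j + np.arange(w)) % epitome_size
                    num[np.ix_(rows, cols)] += post[i, j] * x
                    num_sq[np.ix_(rows, cols)] += post[i, j] * x ** 2
                    den[np.ix_(rows, cols)] += post[i, j]
        mu = num / np.maximum(den, 1e-12)
        var = np.maximum(num_sq / np.maximum(den, 1e-12) - mu ** 2, 1e-6)
    return mu, var
```

Note [8] describes how the inner loops over offsets collapse into a handful of convolutions; a sketch of that computation accompanies the notes below.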

The epitome contains many of the dominant features from the input image, including various features from the dog, light and dark magenta flowers, and the sidewalk. Importantly, these features are integrated together in a consistent way across the epitome, even though they have different scales.

The epitome of an image offers a way to easily visualize the high-order statistics in the image, and simultaneously supports a variety of complex data-processing operations. Fig. 2 shows how the epitome from above can be used to separately tag two textures with quite different scales, and locate an object (the dog's nose) that occurs at only one location in the input image [10].

Figure 2: The epitome simultaneously supports a variety of complex data processing tasks, such as detecting individual patterns and segmenting textures with quite different spectral properties. Regions including the dog's nose, the shaded flowers and the sidewalk surface are tagged in the epitome, and then the input is reconstructed using the tagged epitome [10].

Complex patterns are often described by a combination of parts, such as specific types of mouth, eyes and hair in images of faces. Fig. 3 shows that learning the epitome of patches drawn from frontal images of faces produces a semi-coherent montage of face parts, which can be used for parts-based image retrieval.

Figure 3: Parts-based image retrieval using the epitome. (A) The epitome was learned using patches drawn from a library of face images [11]. (B) To find smiling faces, we identified a region in the epitome that contains the corner of a smiling mouth. Then, we retrieved the 25 faces that had the highest total posterior probability of using patches in the identified region. (C) To find faces without smiles, we retrieved the 25 faces that had the lowest total posterior probability of using patches in the identified region. (D) To retrieve images containing a combination of parts, we identified multiple regions in the epitome and retrieved images containing at least one patch that was most likely from each region. Here, we chose regions in the epitome containing a nearly closed eye, the corner of a smiling mouth, and dark hair. (E) Images retrieved when we identified the same regions as above, except that an open eye was identified instead of a closed eye.

In other work, libraries of fixed-size patches have been used successfully for modeling high-order statistics in images [13]. By virtue of the patch index, libraries of patches can model dependencies that span the width of a patch. To account for multiple scales, a huge library of multi-scale patches can be constructed. In contrast, the epitome can model the same range of scales, but uses many fewer parameters. Further, the epitome integrates the multiple sizes of patches together into a single, spatially coherent image.

In contrast to most texture models, the epitome can simultaneously model structures at a variety of different scales. Low-order image statistics, low-order Markov random fields, and statistical spectral techniques have been used quite successfully for modeling narrowband textures [14-16]. However, these techniques are not well-suited to modeling inhomogeneous images that have multiple patterns at different scales. Since the epitome is represented as an image, it can model features that occupy a wide range of frequency bands and can model phase dependencies that span the width of the epitome.

A quite different approach to modeling long-range dependencies is to introduce hidden variables that store the state of the pattern, in the fashion of hidden Markov models [17] and Markov random fields [2]. While hidden Markov models can potentially model long-range dependencies, the forward-backward learning algorithm is notorious for confusing some of the states in long sequences, thus failing to accurately model long-range dependencies [18]. This problem is worse for Markov random fields, where approximate inference algorithms must be used to infer the state variables during learning, and confusing states for different patterns is even more likely. The epitome avoids this problem by directly fitting segments from the input, so that long-range dependencies within individual patterns are preserved during learning.

In comparison with vector-space techniques for clustering [19, 20], dimensionality reduction [21-24] and independent component analysis [25], an advantage of the epitome is that it can account for relationships between patterns that emerge when the data is measured on an underlying domain, such as space or time. By representing the model in this same domain, the epitome can associate input patterns that come from nearby locations in a broader pattern, even when the input patterns are far apart in the corresponding vector space.

We view the epitome and other data analysis techniques as complementary. Dependencies between elements in the epitome that are not accounted for by domain warping can be modeled using vector-space machine learning techniques, such as independent component analysis [25]. Since the epitome is a probability model, it can be integrated into other probability models used in machine learning.

Our description of the epitome and the EM learning algorithm leaves several interesting avenues of research unexplored. For example, although we reported results on highly constrained forms of mappings, another approach is to allow much richer mappings. An arbitrary mapping can be specified using one mapping variable m[k] for each position k in the patch: m = {m[k] : k ∈ K}. Without any constraints, the total number of such mappings is too large to directly enumerate for all but uninterestingly small models. However, if the distribution over mappings, ρ(m), is described by a tree on the mapping variables {m[k] : k ∈ K}, dynamic programming can be used to efficiently compute the summations needed for inference and learning [26].

Our results show that the epitome provides a natural interface to the much larger input image and supports a variety of complex data processing tasks, including texture segmentation, object detection, and parts-based image retrieval. It is also appealing that the epitome can be estimated by optimizing a cost function and that the EM algorithm can be used to efficiently find a solution. By virtue of the fact that the epitome models multi-scale high-order statistics, but is defined on the same domain as the input, we believe the epitome has the potential to be useful in a variety of application areas.

References

[1] U. Grenander, Lectures in Pattern Theory I, II and III: Pattern Analysis, Pattern Synthesis and Regular Structures (Springer-Verlag, Berlin).

[2] G. E. Hinton and T. J. Sejnowski, in D. E. Rumelhart and J. L. McClelland (eds), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. I (MIT Press, Cambridge, MA, 1986).

[3] D. Mumford, in C. Koch, J. Davis (eds), Large-Scale Theories of the Cortex, 125 (MIT Press, Cambridge, MA, 1994).

[4] To account for multi-channel data such as color images, it is useful to allow the input image and the epitome to be fields, i.e., vector images. For example, in color visual images such as those shown in Fig. 1, the color at each 2-D spatial index can be represented as a 3-vector representing red, green and blue channels. In spectral representations of time-series data, the spectrum at each 1-D time index can be represented as a vector containing the energies at various frequencies. For notational simplicity, we describe the model and learning algorithm in the case of scalar images. The extension to vector images is straightforward.

[5] We have investigated learning epitomes of color images, music, and text. For color images, we took k to be 2-dimensional and x(k) to be a 3-vector of light intensities in the red, green and blue channels. We used the normal density for f, where e(j) consisted of 3 means and 3 variances. For spectral representations of time-series data, we took k to be 1-dimensional and x(k) to be an n-vector containing the short-time power spectrum. We used the n-dimensional normal density for f, with diagonal covariance matrix. For text, we took k to be 1-dimensional and x(k) to be a discrete symbol. We used the multinomial distribution for f, where e(j) consisted of one probability for each symbol.

[6] A. P. Dempster, N. M. Laird, and D. B. Rubin, Journal of the Royal Statistical Society B 39, 1 (1977).

[7] R. M. Neal and G. E. Hinton, in M. I. Jordan (ed), Learning in Graphical Models (Kluwer Academic Publishers, Norwell, MA, 1998).

[8] In this case, the parameter associated with position j in the epitome, e(j), consists of a mean μ(j) and a variance σ²(j). The mapping can be represented as an integer vector m that specifies the location in the epitome of the patch, so the parameter used for position k in the patch is e(m + k). It is convenient to define the epitome on a torus, so that m + k is taken modulo the largest integers in J.

Inserting the normal distribution into Eq. 4, it can be shown that, up to an additive constant, the log-posterior distribution over the mapping, log P(m | X, E), is

\log \rho(m) - \frac{1}{2} \sum_{k \in K} \Big( \log\sigma^2(m+k) + \big(x(k) - \mu(m+k)\big)^2 / \sigma^2(m+k) \Big).

Expanding the square, we obtain the following:

\log \rho(m) - \frac{1}{2} \sum_{k \in K} \Big( \log\sigma^2(m+k) + x(k)^2/\sigma^2(m+k) - 2\,x(k)\,\mu(m+k)/\sigma^2(m+k) + \mu(m+k)^2/\sigma^2(m+k) \Big).

Each of these terms can be computed efficiently for all m using a convolution. The update for the mean at position i in the epitome is

\mu(i) \leftarrow \frac{\sum_{n=1}^{N} \sum_{m:\, i-m \in K} P(m \mid X_n, E)\, x_n(i-m)}{\sum_{n=1}^{N} \sum_{m:\, i-m \in K} P(m \mid X_n, E)}

and the update for the variance is

\sigma^2(i) \leftarrow \frac{\sum_{n=1}^{N} \sum_{m:\, i-m \in K} P(m \mid X_n, E)\, \big(x_n(i-m) - \mu(i)\big)^2}{\sum_{n=1}^{N} \sum_{m:\, i-m \in K} P(m \mid X_n, E)}.

These updates can also be computed using convolutions. The above parameter updates can be viewed as the updates for learning a large mixture of |J| Gaussians [6], where there is one mixture component for each possible patch in the epitome, and the mixture model parameters corresponding to overlapping patches are constrained to be equal.

[9] To learn the epitome in Fig. 1, a total of 4000 square training patches with widths 80, 64, 48, 32, 24, 20, 16, 12, 10 and 8 were sampled randomly from the input image. Large patches constrain the learning algorithm more than small patches, so the number of patches of each size was selected to keep the total area of the patches in every size category the same. For each size category, 3 iterations of EM were applied, starting with the largest patches and finishing with the 8 × 8 patches. The means for each color channel in the epitome were initialized to the mean of all values in the same channel in the training set, plus Gaussian noise with 1/100th of the standard deviation of the training data. The variances were initialized to the variance of the training data.
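As a concrete companion to note [8], here is a hedged numpy sketch, not the authors' code, of the E-step log-posterior map and the per-patch M-step accumulators for a single grayscale patch no larger than the toroidal epitome: each sum over k becomes a circular cross-correlation (E step) or circular convolution (M step), computed here with FFTs. Names such as `log_posterior_map` are illustrative.

```python
import numpy as np

def _circ_corr(field, kernel):
    """c[m] = sum_k field[m + k] * kernel[k], with toroidal wrap-around."""
    pad = np.zeros_like(field)
    pad[:kernel.shape[0], :kernel.shape[1]] = kernel
    return np.real(np.fft.ifft2(np.fft.fft2(field) * np.conj(np.fft.fft2(pad))))

def _circ_conv(field, kernel):
    """c[i] = sum_k field[i - k] * kernel[k], with toroidal wrap-around."""
    pad = np.zeros_like(field)
    pad[:kernel.shape[0], :kernel.shape[1]] = kernel
    return np.real(np.fft.ifft2(np.fft.fft2(field) * np.fft.fft2(pad)))

def log_posterior_map(x, mu, var, log_rho=0.0):
    """log P(m | x, E) up to a constant, for every offset m (the expansion in [8])."""
    ones = np.ones_like(x)
    quad = (_circ_corr(np.log(var), ones)        # sum_k log sigma^2(m+k)
            + _circ_corr(1.0 / var, x ** 2)      # sum_k x(k)^2 / sigma^2(m+k)
            - 2.0 * _circ_corr(mu / var, x)      # -2 sum_k x(k) mu(m+k) / sigma^2(m+k)
            + _circ_corr(mu ** 2 / var, ones))   # sum_k mu(m+k)^2 / sigma^2(m+k)
    return log_rho - 0.5 * quad

def mstep_accumulators(x, post):
    """Per-patch numerators/denominator for the updates in [8]:
    num[i] = sum_m post[m] x(i-m), num_sq likewise with x^2, den[i] = sum_m post[m]."""
    ones = np.ones_like(x)
    return _circ_conv(post, x), _circ_conv(post, x ** 2), _circ_conv(post, ones)

# Accumulating (num, num_sq, den) over all patches and setting
# mu = num / den, var = num_sq / den - mu**2 reproduces the updates above.
```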

[10] The means of the epitome found in the experiment described in [9] were tagged by hand, to highlight the shaded flowers (green), the sidewalk surface (cyan), and the dog's nose (yellow). Then, 8 × 8 patches were taken at regular intervals from the input image and the most probable position of each patch in the original epitome was computed using Eq. 4. If the patch in the epitome overlapped with a region in the epitome that had been tagged, the tagged version of the patch was copied to the output image. Otherwise, the original patch from the input image was copied to the output image. To produce a smooth output image, patches were taken at 4-pixel intervals from the input image and overlapping patches in the output image were averaged together. (A short illustrative sketch of this procedure appears after the notes.)

[11] To learn the epitome shown in Fig. 3, a set of 300 frontal face images from the publicly available database described in [12] were cropped and sub-sampled to form a set of 300 images. A total of 40,000 patches of sizes ranging from 24 × 24 down to 7 × 7 were then drawn randomly from these images, where the number of patches of each size was selected to keep the total area of the patches in every size category the same. For each size category, 10 iterations of EM were applied, starting with the largest patches and finishing with the 7 × 7 patches. The means in the epitome were initialized to the mean of all values in the training set, plus Gaussian noise with 1/100th of the standard deviation of the training data. The variances were initialized to the variance of the training data.

[12] A. Lanitis, C. J. Taylor, T. F. Cootes, Image and Vis. Comput. 13, 393 (1995).

[13] W. Freeman, E. Pasztor, Proc. Internat. Conf. Comp. Vision, 1182 (1999).

[14] B. Julesz, Nature 290, 91 (1981).

[15] S. C. Zhu, Y. N. Wu, D. Mumford, Neural Comput. 9, 1627 (1997).

[16] J. Portilla, E. P. Simoncelli, Intern. Journ. Comp. Vision 40, 49 (2000).

[17] L. Rabiner, B.-H. Juang, Fundamentals of Speech Recognition (Prentice-Hall, Englewood Cliffs, NJ, 1993).

[18] M. Ostendorf, V. Digalakis, D. Kimball, IEEE Trans. Speech & Audio Proc. 4, 360 (1996).

[19] S. P. Lloyd, IEEE Trans. Info. Theory 28, 129 (1982).

[20] J. Shi, J. Malik, IEEE Trans. Patt. Anal. Mach. Intell. 22, 888 (2000).

[21] T. Kohonen, Self-Organization and Associative Memory (Springer-Verlag, Berlin, 1988).

[22] I. T. Jolliffe, Principal Component Analysis (Springer-Verlag, New York, 1989).

[23] C. Bishop, M. Svensen, C. Williams, Neural Comput. 10, 215 (1998).

[24] S. T. Roweis, L. K. Saul, Science 290, 2323 (2000).

[25] A. J. Bell, T. J. Sejnowski, Neural Comput. 7, 1129 (1995).

[26] In this case, the distribution ρ(m) over mappings is described by a tree on the mapping variables m[k], k ∈ K. The epitome defines a density f(x(k); e(m[k])) for each mapping variable, so p(x | m, E)ρ(m) is described by a tree on m[k], k ∈ K. Dynamic programming can be used to efficiently compute the marginal likelihood of the epitome in Eq. 2 and the expected value of log p(X | m, E) in Eq. 5, which can be used to learn the epitome. Once the epitome is learned, dynamic programming can be used for various data-processing tasks.

[27] We thank P. Anandan, A. Blake, G. E. Hinton, A. Kannan, S. T. Roweis, C. K. I. Williams and S. C. Zhu for helpful discussions. Frey acknowledges support from the Natural Sciences and Engineering Research Council of Canada.
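The following is a minimal sketch of the tagging/segmentation procedure in note [10], under simplifying assumptions: grayscale data, the illustrative `log_posterior_map` helper sketched with note [8] above, and a soft tag map returned in place of pasting tagged color patches. It is meant only to illustrate the procedure, not to reproduce the authors' implementation.

```python
import numpy as np

def tag_map(image, mu, var, tag_mask, patch=8, stride=4):
    """For each 8 x 8 patch on a 4-pixel grid, find its most probable epitome
    position (Eq. 4) and vote for the pixels it covers if that position overlaps
    the hand-tagged region; overlapping votes are averaged, as in note [10]."""
    votes = np.zeros(image.shape, dtype=float)
    counts = np.zeros(image.shape, dtype=float)
    for r in range(0, image.shape[0] - patch + 1, stride):
        for c in range(0, image.shape[1] - patch + 1, stride):
            x = image[r:r + patch, c:c + patch]
            logp = log_posterior_map(x, mu, var)             # helper from the note [8] sketch
            i, j = np.unravel_index(np.argmax(logp), logp.shape)
            rows = (i + np.arange(patch)) % mu.shape[0]
            cols = (j + np.arange(patch)) % mu.shape[1]
            hit = float(tag_mask[np.ix_(rows, cols)].any())  # mapped patch touches a tag?
            votes[r:r + patch, c:c + patch] += hit
            counts[r:r + patch, c:c + patch] += 1.0
    return votes / np.maximum(counts, 1e-12)
```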
