Nonparametric Bayesian Texture Learning and Synthesis
Appears in Advances in Neural Information Processing Systems (NIPS).

Nonparametric Bayesian Texture Learning and Synthesis

Long (Leo) Zhu (1), Yuanhao Chen (2), William Freeman (1), Antonio Torralba (1)
(1) CSAIL, MIT  (2) Department of Statistics, UCLA
{leozhu, billf, antonio}@csail.mit.edu  yhchen@stat.ucla.edu

Abstract

We present a nonparametric Bayesian method for texture learning and synthesis. A texture image is represented by a 2D Hidden Markov Model (2DHMM) in which the hidden states correspond to the cluster labeling of textons and the transition matrix encodes their spatial layout (the compatibility between adjacent textons). The 2DHMM is coupled with the Hierarchical Dirichlet Process (HDP), which allows the number of textons and the complexity of the transition matrix to grow as the input texture becomes more irregular. The HDP uses a Dirichlet process prior that favors regular textures by penalizing model complexity. This framework (HDP-2DHMM) learns the texton vocabulary and its spatial layout jointly and automatically. The HDP-2DHMM yields a compact representation of textures that allows fast texture synthesis with rendering quality comparable to state-of-the-art patch-based rendering methods. We also show that the HDP-2DHMM can be applied to image segmentation and synthesis. The preliminary results suggest that the HDP-2DHMM is generally useful for further applications in low-level vision problems.

1 Introduction

Texture learning and synthesis are important tasks in computer vision and graphics. Recent attempts can be categorized into two styles. The first style emphasizes modeling and understanding, and develops statistical models [1, 2] capable of representing texture using textons and their spatial layout. However, the learning is rather sensitive to parameter settings, and the rendering quality and speed are still not satisfactory.
The second style relies on patch-based rendering techniques [3, 4], which focus on rendering quality and speed but forego the semantic understanding and modeling of texture. This paper aims at texture understanding and modeling with fast synthesis and high rendering quality. Our strategy is to augment the patch-based rendering method [3] with nonparametric Bayesian modeling and statistical learning. We represent a texture image by a 2D Hidden Markov Model (2DHMM) (see figure (1)) where the hidden states correspond to the cluster labeling of textons and the transition matrix encodes the texton spatial layout (the compatibility between adjacent textons). The 2DHMM is coupled with the Hierarchical Dirichlet Process (HDP) [5, 6], which allows the number of textons (i.e. hidden states) and the complexity of the transition matrix to grow as more training data become available or as the randomness of the input texture increases. The Dirichlet process prior penalizes model complexity to favor reusing clusters and transitions, and thus favors regular textures that can be represented by compact models. This framework (HDP-2DHMM) discovers the semantic meaning of texture in an explicit way: the texton vocabulary and its spatial layout are learned jointly and automatically (the number of textons is fully determined by the HDP-2DHMM).
Figure 1: The flow chart of texture learning and synthesis. The colored rectangles correspond to the indices (labelings) of textons, which are represented by image patches. The texton vocabulary shows the correspondence between the colors (states) and examples of image patches. The transition matrices show the probability (indicated by intensity) of generating a new state (coded by the color of the top-left corner rectangle), given the states of the left and upper neighbor nodes (coded by the top and left-most rectangles). The inferred texton map shows the state assignments of the input texture. The top-right panel shows the texton map sampled according to the transition matrices. The last panel shows the texture synthesized by image quilting according to the correspondence between the sampled texton map and the texton vocabulary.

Once the texton vocabulary and the transition matrix are learnt, the synthesis process samples the latent texton labeling map according to the probabilities encoded in the transition matrix. The final image is then generated by selecting image patches based on the sampled texton labeling map. Here, image quilting [3] is applied to search for and stitch together all the patches so that boundary inconsistency is minimized. In contrast to [3], our method only needs to search a much smaller set of candidate patches within a local texton cluster, so the synthesis cost is dramatically reduced. We show that the HDP-2DHMM is able to synthesize texture in one second (25 times faster than image quilting) with comparable quality. In addition, the HDP-2DHMM is less sensitive to the patch size, which has to be tuned for different input images in [3]. We also show that the HDP-2DHMM can be applied to image segmentation and synthesis. The preliminary results suggest that the HDP-2DHMM is generally useful for further applications in low-level vision problems.
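The restricted search described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: given the sampled texton label of a grid cell, only the patches belonging to that texton's cluster are scored against the already-synthesized canvas on the 8-pixel overlap strips, and the best one is kept (real image quilting would additionally cut a minimum-error seam through the overlap).

```python
import numpy as np

def pick_patch(canvas, y, x, candidates, patch=24, overlap=8):
    """Choose, from the candidate patches of one texton cluster, the patch whose
    overlap with the already-synthesized canvas has the least squared error.
    A sketch of the cluster-restricted search; names are hypothetical."""
    best, best_err = None, np.inf
    for cand in candidates:                       # only this cluster, not all patches
        err = 0.0
        if x > 0:                                 # left overlap strip
            err += np.sum((canvas[y:y+patch, x:x+overlap]
                           - cand[:, :overlap]) ** 2)
        if y > 0:                                 # top overlap strip
            err += np.sum((canvas[y:y+overlap, x:x+patch]
                           - cand[:overlap, :]) ** 2)
        if err < best_err:
            best, best_err = cand, err
    return best
```

Because each cluster holds only a small fraction of the image's patches, this loop is far shorter than the exhaustive search over all patches in [3].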
2 Previous Work

Our primary interest is texture understanding and modeling. The FRAME model [7] provides a principled way to learn Markov random field models according to marginal image statistics. This model is very successful at capturing stochastic textures, but may fail for more structured textures due to its lack of spatial modeling. Zhu et al. [1, 2] extend it to explicitly learn the textons and their spatial relations, which are represented by extra hidden layers. This newer model is parametric (the number of texton clusters has to be tuned by hand for different texture images), and model selection, which might be unstable in practice, is needed to avoid overfitting. Therefore, the learning is sensitive to parameter settings. Inspired by recent progress in machine learning, we extend the nonparametric Bayesian framework coupling a 1D HMM with the HDP [6] to deal with 2D texture images. A new model (HDP-2DHMM) is developed to learn the texton vocabulary and spatial layouts jointly and automatically. Since the HDP-2DHMM is designed to generate appropriate image statistics, but not pixel intensities, a patch-based texture synthesis technique, image quilting [3], is integrated into our system to
sample image patches.

Figure 2: Graphical representation of the HDP-2DHMM. α, α0, γ are hyperparameters set by hand; β are the global state weights; θ and π are the emission and transition parameters, respectively. i indexes the nodes of the HMM, and L(i) and T(i) are the nodes to the left of and above node i. z_i is the hidden state of node i, and x_i is the observation (the features of the image patch at position i).

The texture synthesis algorithm has also been applied to image inpainting [8]. Malik et al. [9, 10] and Varma and Zisserman [11] study filter representations of textons, which are related to our implementation of visual features, but the interactions between textons are not explicitly considered. Liu et al. [12, 13] address texture understanding by discovering regularity, without explicit statistical texture modeling. Our work has partial similarities with the epitome [14] and jigsaw [15] models for non-texture images, which also model appearance and spatial layout jointly. The major difference is that their models, which are parametric, cannot grow automatically as more data become available. Our method is closely related to [16], which is not designed for texture learning. They use a hierarchical Dirichlet process, but their model and image feature representations, including both the image filters and the data likelihood model, are different. The structure of the 2DHMM is also discussed in [17]. Other work using Dirichlet priors includes [18, 19]. Tree-structured vector quantization [20] has been used to speed up existing image-based rendering algorithms. While this is orthogonal to our work, it may help us optimize rendering speed. The term "nonparametric" in this paper is used in the Bayesian sense, which differs from the non-Bayesian terminology used in [4].
3 Texture Modeling

3.1 Image Patches and Features

A texture image I is represented by a grid of image patches {x_i} of size 24*24 in this paper, where i denotes the location. The {x_i} will be grouped into different textons by the HDP-2DHMM. We begin with a simplified model in which the positions of the textons represented by image patches are pre-determined by the image grid and not allowed to shift; we will remove this constraint later. Each patch x_i is characterized by a set of filter responses {w_i^{l,h,b}} which correspond to values b of image filter response h at location l. More precisely, each patch is divided into 6 by 6 cells (i.e. l = 1..36), each of which contains 4 by 4 pixels. For each pixel in cell l, we calculate 37 (h = 1..37) image filter responses, which include the 17 filters used in [21], Difference of Gaussian (DOG, 4 filters), Difference of Offset Gaussian (DOOG, 12 filters) and colors (R, G, B and L). w_i^{l,h,b} equals one if the averaged value of the filter responses of the 4*4 pixels covered by cell l falls into bin b (the response values are divided into 6 bins), and zero otherwise. Therefore, each patch x_i is represented by 7992 (= 36 × 37 × 6) feature responses {w_i^{l,h,b}} in total. We let q = (l, h, b) denote the index of a visual feature response. It is worth emphasizing that our feature representation differs from standard methods [10, 2] in which k-means clustering is first applied to form a visual vocabulary. By contrast, we skip the clustering step and leave the learning of the texton vocabulary, together with the spatial layout, to the HDP-2DHMM, which takes over the role of k-means.

3.2 HDP-2DHMM: Coupling a Hidden Markov Model with the Hierarchical Dirichlet Process

A texture is modeled by a 2D Hidden Markov Model (2DHMM) in which the nodes correspond to the image patches x_i and compatibility is encoded by the edges connecting 4-neighboring nodes.
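The binned feature construction above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the filter responses are assumed precomputed, and the bin edges are fixed placeholders over [0, 1] (the paper does not specify how the 6 bins are chosen).

```python
import numpy as np

def patch_features(responses, n_cells=6, cell_px=4, n_bins=6):
    """Binary feature vector w[l, h, b] for one 24*24 patch (hypothetical sketch).

    `responses` is assumed to hold precomputed filter responses for the patch,
    shaped (n_filters, 24, 24); the paper uses 37 filters.
    """
    n_filters = responses.shape[0]
    w = np.zeros((n_cells * n_cells, n_filters, n_bins), dtype=np.uint8)
    # Placeholder assumption: fixed bin edges over [0, 1]; in practice they
    # would be fit to the observed response range.
    edges = np.linspace(0.0, 1.0, n_bins + 1)[1:-1]
    for l in range(n_cells * n_cells):
        r, c = divmod(l, n_cells)
        cell = responses[:, r*cell_px:(r+1)*cell_px, c*cell_px:(c+1)*cell_px]
        avg = cell.reshape(n_filters, -1).mean(axis=1)  # average over the 4*4 pixels
        b = np.digitize(avg, edges)                     # bin index 0..n_bins-1 per filter
        w[l, np.arange(n_filters), b] = 1               # exactly one active bin per (l, h)
    return w.ravel()                                    # 36 * 37 * 6 = 7992 dims for 37 filters
```

Each (cell, filter) pair activates exactly one bin, so the resulting vector has 36 × 37 ones among its 7992 entries.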
Figure 3: The generative process of the HDP-2DHMM for texture modeling.

    β ~ GEM(α)
    For each state z ∈ {1, 2, 3, ...}:
        θ_z ~ Dirichlet(γ)
        π_z^L ~ DP(α0, β)
        π_z^T ~ DP(α0, β)
    For each pair of states (z_L, z_T):
        π_{z_L, z_T} ~ DP(α0, β)
    For each node i in the HMM:
        if L(i) ≠ ∅ and T(i) ≠ ∅:  z_i | (z_{L(i)}, z_{T(i)}) ~ Multinomial(π_{z_{L(i)}, z_{T(i)}})
        if L(i) ≠ ∅ and T(i) = ∅:  z_i | z_{L(i)} ~ Multinomial(π_{z_{L(i)}}^L)
        if L(i) = ∅ and T(i) ≠ ∅:  z_i | z_{T(i)} ~ Multinomial(π_{z_{T(i)}}^T)
        x_i ~ Multinomial(θ_{z_i})

See the graphical representation of the 2DHMM in figure 2. For any node i, let L(i), T(i), R(i), D(i) denote its four neighbors: left, upper, right and lower, respectively. We use z_i to index the state of node i, which corresponds to the cluster labeling of textons. The likelihood model p(x_i | z_i), which specifies the probability of the visual features, is a multinomial distribution parameterized by θ_{z_i}, specific to the hidden state z_i:

    x_i ~ Multinomial(θ_{z_i})   (1)

where θ_{z_i} specifies the weights of the visual features.

For a node i connected to nodes above and on the left (i.e. L(i) ≠ ∅ and T(i) ≠ ∅), the probability p(z_i | z_{L(i)}, z_{T(i)}) of its state z_i is determined only by the states (z_{L(i)}, z_{T(i)}) of the connected nodes. The distribution is multinomial, parameterized by π_{z_{L(i)}, z_{T(i)}}:

    z_i ~ Multinomial(π_{z_{L(i)}, z_{T(i)}})   (2)

where π_{z_{L(i)}, z_{T(i)}} encodes the transition matrix and thus the spatial layout of the textons. For nodes on the top row or the left-most column (i.e. L(i) = ∅ or T(i) = ∅), the distributions of the states are modeled by Multinomial(π_{z_{L(i)}}^L) or Multinomial(π_{z_{T(i)}}^T), which can be considered simpler cases. We assume the top-left corner node can take any state, sampled according to the marginal statistics of the states. Without loss of generality, we skip the details of the boundary cases and focus on the nodes whose states are determined jointly by their top and left neighbors.
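The generative process can be sketched with a truncated approximation. This is an illustrative simplification, not the paper's model: the infinite GEM prior is truncated at K states, each DP draw is approximated by a finite Dirichlet with base measure β, and the boundary rows reuse the marginal β instead of separate boundary transition distributions. All variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
K, V = 20, 7992                        # truncation level for states; feature vocabulary size
alpha, alpha0, gamma = 10.0, 1.0, 0.5  # hyperparameter values from the experiments section

def gem(a, K, rng):
    """Truncated stick-breaking draw of the global state weights beta ~ GEM(a)."""
    v = rng.beta(1.0, a, size=K)
    beta = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return beta / beta.sum()           # renormalize the truncated sticks

beta = gem(alpha, K, rng)
theta = rng.dirichlet(np.full(V, gamma), size=K)   # emission weights, one row per state
# One transition distribution per (left, top) state pair; a finite Dirichlet
# with base measure beta stands in for the DP draw pi ~ DP(alpha0, beta).
pi = rng.dirichlet(alpha0 * beta + 1e-6, size=(K, K))

# Ancestral sampling of a small texton map, top-left to bottom-right.
H = W = 8
z = np.zeros((H, W), dtype=int)
for i in range(H):
    for j in range(W):
        if i == 0 or j == 0:
            z[i, j] = rng.choice(K, p=beta)        # simplified boundary handling
        else:
            z[i, j] = rng.choice(K, p=pi[z[i, j-1], z[i-1, j]])
```

Because β decays exponentially in expectation, most of the K states receive negligible mass, which is how the truncation can safely stand in for the infinite model.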
To obtain a nonparametric Bayesian representation, we need to allow the number of states z_i to be countably infinite and to place prior distributions over the parameters θ_{z_i} and π_{z_{L(i)}, z_{T(i)}}. We achieve this by tying the 2DHMM together with the hierarchical Dirichlet process [5]. We define the prior of θ_z as a conjugate Dirichlet prior:

    θ_z ~ Dirichlet(γ)   (3)

where γ is a concentration hyperparameter which controls how uniform the distribution θ_z is (recall that θ_z specifies the weights of the visual features): as γ increases, it becomes more likely that the visual features have equal probability. Since the likelihood model p(x_i | z_i) is multinomial, the posterior distribution of θ_z has an analytic form: it is still a Dirichlet distribution.

The transition parameters π_{z_L, z_T} are modeled by a hierarchical Dirichlet process (HDP):

    β ~ GEM(α)   (4)
    π_{z_L, z_T} ~ DP(α0, β)   (5)

where we first draw the global weights β according to the stick-breaking prior distribution GEM(α). The stick-breaking weights β specify the probabilities of the states, which are globally shared among all nodes. The stick-breaking prior produces exponentially decaying weights in expectation, so that simpler models with fewer representative clusters (textons) are favored given few observations; but there is always a low probability that small clusters are created to capture details revealed by large,
complex textures. The concentration hyperparameter α controls the sparseness of the states: a larger α leads to more states. The prior of the transition parameter π_{z_L, z_T} is modeled by a Dirichlet process DP(α0, β), which is a distribution over distributions, centered on the base measure β. α0 is a hyperparameter which controls the variability of π_{z_L, z_T} over different states across all nodes: as α0 increases, the state transitions become more regular. Therefore, the HDP uses a Dirichlet process prior to place a soft bias towards simpler models (in terms of the number of states and the regularity of state transitions) that explain the texture. The generative process of the HDP-2DHMM is described in figure (3).

We now have the full representation of the HDP-2DHMM. But this simplified model does not allow the textons (image patches) to shift. We remove this constraint by introducing two hidden variables (u_i, v_i), which indicate the displacement of the texton associated with node i. We only need to adjust the correspondence between the image features x_i and the hidden states z_i: x_i is replaced by x_{u_i, v_i}, which refers to the image features located at position i displaced by (u_i, v_i). The random variables (u_i, v_i) are connected only to the observation x_i (not shown in figure 2). (u_i, v_i) have a uniform prior, but are limited to a small neighborhood of i (maximum 10% shift on one side).

4 Learning the HDP-2DHMM

In a Bayesian framework, the task of learning the HDP-2DHMM (also called Bayesian inference) is to compute the posterior distribution p(θ, π, z | x). It is trivial to sample the hidden variables (u, v) because of their uniform prior; for simplicity, we skip the details of sampling u, v. Here, we present an inference procedure for the HDP-2DHMM based on Gibbs sampling. Our procedure alternates between two sampling stages: (i) sampling the state assignments z, and (ii) sampling the global weights β.
Given fixed values of z and β, the posterior of θ can easily be obtained by aggregating statistics of the observations assigned to each state, and the posterior of π is Dirichlet. For more details on Dirichlet processes, see [5]. We first instantiate a random hidden state labeling and then iteratively repeat the following two steps.

Sampling z. In this stage we sample a state for each node. The probability of node i being assigned state t is given by:

    P(z_i = t | z_{-i}, β) ∝ f_t^{x_i}(x_i) P(z_i = t | z_{L(i)}, z_{T(i)}) P(z_{R(i)} | z_i = t, z_{T(R(i))}) P(z_{D(i)} | z_{L(D(i))}, z_i = t)   (6)

The first term f_t^{x_i}(x_i) denotes the posterior probability of observation x_i given all other observations assigned to state t, and z_{-i} denotes all state assignments except z_i. Let n_{qt} be the number of observations of feature w_q with state t. f_t^{x_i}(x_i) is calculated by:

    f_t^{x_i}(x_i) = ∏_q [ (n_{qt} + γ_q) / (∑_{q'} n_{q't} + ∑_{q'} γ_{q'}) ]^{w_q^i}   (7)

where γ_q is the weight for visual feature w_q. The next term, P(z_i = t | z_{L(i)} = r, z_{T(i)} = s), is the probability of state t given the states of the nodes on the left and above, i.e. L(i) and T(i). Let n_{rst} be the number of observations with state t whose left and upper neighbor nodes have states r and s, respectively. The probability of generating state t is given by:

    P(z_i = t | z_{L(i)} = r, z_{T(i)} = s) = (n_{rst} + α0 β_t) / (∑_{t'} n_{rst'} + α0)   (8)

where β_t refers to the weight of state t. This calculation follows from the properties of the Dirichlet process [5]. The last two terms, P(z_{R(i)} | z_i = t, z_{T(R(i))}) and P(z_{D(i)} | z_{L(D(i))}, z_i = t), are the probabilities of the states of the right and lower neighbor nodes (R(i), D(i)) given z_i; they can be computed in the same form as equation (8).

Sampling β. In the second stage, given the assignments z = {z_i}, we sample β from the Dirichlet distribution described in [5].
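As a rough illustration (not the authors' code), the product of the likelihood term of eq. (7), with a single shared γ for all features, and the transition term of eq. (8) could be computed as below. The two downstream neighbor terms of eq. (6) are omitted for brevity, and all names are hypothetical:

```python
import numpy as np

def state_posterior(i_feats, n_qt, n_rst, beta, r, s, gamma=0.5, alpha0=1.0):
    """Normalized conditional for z_i = t (sketch of eqs. 6-8, neighbor terms omitted).

    n_qt[q, t]    : count of feature q observed under state t (with z_i removed)
    n_rst[r, s, t]: count of state t with left neighbor state r and top neighbor state s
    i_feats       : indices q of the active features of patch x_i
    """
    # Likelihood term f_t(x_i): product over the active features of the
    # Dirichlet-multinomial posterior predictive (eq. 7), in log space.
    denom = n_qt.sum(axis=0) + gamma * n_qt.shape[0]            # one value per state t
    log_lik = np.log((n_qt[i_feats, :] + gamma) / denom).sum(axis=0)
    # Transition term P(z_i = t | z_L = r, z_T = s) (eq. 8).
    trans = (n_rst[r, s, :] + alpha0 * beta) / (n_rst[r, s, :].sum() + alpha0)
    p = np.exp(log_lik - log_lik.max()) * trans                 # stabilized product
    return p / p.sum()
```

In a full sampler, the analogous eq. (8) terms for the right and lower neighbors would be multiplied in before normalizing, and z_i would then be drawn from the resulting distribution.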
Figure 4: The colors of the rectangles in columns 2 and 3 correspond to the indices (labelings) of textons, which are represented by 24*24 image patches. The synthesized images are all 384*384 (16*16 textons/patches). Our method captures both stochastic textures (the last two rows) and more structured textures (the first three rows; see the horizontal and gridded layouts). The inferred texton maps for structured textures are simpler (fewer states/textons) and more regular (less cluttered texton maps) than those for stochastic textures.

5 Texture Synthesis

Once the texton vocabulary and the transition matrix are learnt, the synthesis process first samples the latent texton labeling map according to the probabilities encoded in the transition matrix. The HDP-2DHMM, however, is generative only for image features, not image intensities. To make it practical for image synthesis, image quilting [3] is integrated with the HDP-2DHMM. The final image is then generated by selecting image patches according to the texton labeling map. Image quilting is applied to select and stitch together all the patches in a top-left-to-bottom-right order so that boundary inconsistency is minimized. The width of the overlap edge is 8 pixels. In contrast to [3], which needs to search over all image patches to ensure high rendering quality, our method only needs to search the candidate patches within a local cluster. The HDP-2DHMM is capable of producing high rendering quality because the patches have been grouped based on visual features. Therefore, the synthesis cost is dramatically reduced. We show that the HDP-2DHMM is able to synthesize a
texture image of size 384*384, with comparable quality, in one second (25 times faster than image quilting).

Figure 5: More synthesized texture images (in each pair, the left is the input texture and the right is the synthesized result).

6 Experimental Results

6.1 Texture Learning and Synthesis

We use the texture images in [3]. The hyperparameters {α, α0, γ} are set to 10, 1, and 0.5, respectively. The image patch size is fixed to 24*24. All parameter settings are identical for all images. The learning runs with 10 random initializations, each of which takes about 30 sampling iterations to converge. A computer with a 2.4 GHz CPU was used. For each image, it takes 100 seconds for learning and 1 second for synthesis (almost 25 times faster than [3]). Figure (4) shows the inferred texton labeling maps, the sampled texton maps and the synthesized texture images. More synthesized images are shown in figure (5). The rendering quality is visually comparable with [3] (not shown) for both structured and stochastic textures. It is interesting to see that the HDP-2DHMM captures different types of texture patterns, such as vertical, horizontal and gridded layouts. This suggests that our method is able to discover the semantic meaning of textures by learning the texton vocabulary and their spatial relations.
Figure 6: Image segmentation and synthesis. The first three rows show that the HDP-2DHMM is able to segment images containing mixtures of textures and synthesize new textures. The last row shows a failure example where the textons are not well aligned.

6.2 Image Segmentation and Synthesis

We also apply the HDP-2DHMM to image segmentation and synthesis. Figure (6) shows several examples of natural images which contain mixtures of textured regions. The segmentation results are represented by the inferred state assignments (the texton map). In figure (6), one can see that our method successfully divides the images into meaningful regions and that the synthesized images look visually similar to the input images. These results suggest that the HDP-2DHMM framework is generally useful for low-level vision problems. The last row in figure (6) shows a failure example where the textons are not well aligned.

7 Conclusion

This paper describes a novel nonparametric Bayesian method for texture learning and synthesis. A 2D Hidden Markov Model (2DHMM) is coupled with the hierarchical Dirichlet process (HDP), which allows the number of textons and the complexity of the transition matrix to grow as the input texture becomes more irregular. The HDP uses a Dirichlet process prior that favors regular textures by penalizing model complexity. This framework (HDP-2DHMM) learns the texton vocabulary and its spatial layout jointly and automatically. We demonstrated that the resulting compact representation allows fast texture synthesis (under one second) with rendering quality comparable to state-of-the-art image-based rendering methods. Our results on image segmentation and synthesis suggest that the HDP-2DHMM is generally useful for further applications in low-level vision problems.

Acknowledgments. This work was supported by NGA NEGI, MURI Grant N, ARDA VACE, and gifts from Microsoft Research and Google. Thanks to the anonymous reviewers for helpful feedback.
References

[1] Y. N. Wu, S. C. Zhu, and C.-E. Guo, "Statistical modeling of texture sketch," in ECCV '02: Proceedings of the 7th European Conference on Computer Vision, Part III, 2002.
[2] S.-C. Zhu, C.-E. Guo, Y. Wang, and Z. Xu, "What are textons?" International Journal of Computer Vision, vol. 62, no. 1-2.
[3] A. A. Efros and W. T. Freeman, "Image quilting for texture synthesis and transfer," in SIGGRAPH.
[4] A. Efros and T. Leung, "Texture synthesis by non-parametric sampling," in International Conference on Computer Vision, 1999.
[5] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei, "Hierarchical Dirichlet processes," Journal of the American Statistical Association.
[6] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen, "The infinite hidden Markov model," in NIPS.
[7] S. C. Zhu, Y. Wu, and D. Mumford, "Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling," International Journal of Computer Vision, vol. 27, pp. 1-20.
[8] A. Criminisi, P. Perez, and K. Toyama, "Region filling and object removal by exemplar-based inpainting," IEEE Trans. on Image Processing.
[9] J. Malik, S. Belongie, J. Shi, and T. Leung, "Textons, contours and regions: Cue integration in image segmentation," IEEE International Conference on Computer Vision, vol. 2.
[10] T. Leung and J. Malik, "Representing and recognizing the visual appearance of materials using three-dimensional textons," International Journal of Computer Vision, vol. 43.
[11] M. Varma and A. Zisserman, "Texture classification: Are filter banks necessary?" IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2.
[12] Y. Liu, W.-C. Lin, and J. H. Hays, "Near regular texture analysis and manipulation," ACM Transactions on Graphics (SIGGRAPH 2004), vol. 23, no. 1.
[13] J. Hays, M. Leordeanu, A. A. Efros, and Y. Liu, "Discovering texture regularity as a higher-order correspondence problem," in 9th European Conference on Computer Vision.
[14] N. Jojic, B. J. Frey, and A. Kannan, "Epitomic analysis of appearance and shape," in ICCV, 2003.
[15] A. Kannan, J. Winn, and C. Rother, "Clustering appearance and shape by learning jigsaws," in Advances in Neural Information Processing Systems. MIT Press.
[16] J. J. Kivinen, E. B. Sudderth, and M. I. Jordan, "Learning multiscale representations of natural scenes using Dirichlet processes," IEEE International Conference on Computer Vision.
[17] J. Domke, A. Karapurkar, and Y. Aloimonos, "Who killed the directed model?" in IEEE Computer Society Conference on Computer Vision and Pattern Recognition.
[18] L. Cao and L. Fei-Fei, "Spatially coherent latent topic model for concurrent object segmentation and classification," in Proceedings of IEEE International Conference on Computer Vision.
[19] X. Wang and E. Grimson, "Spatial latent Dirichlet allocation," in NIPS.
[20] L.-Y. Wei and M. Levoy, "Fast texture synthesis using tree-structured vector quantization," in SIGGRAPH '00: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, 2000.
[21] J. Winn, A. Criminisi, and T. Minka, "Object categorization by learned universal visual dictionary," in Proceedings of the Tenth IEEE International Conference on Computer Vision.
Texture Texture D. Forsythe and J. Ponce Computer Vision modern approach Chapter 9 (Slides D. Lowe, UBC) Key issue: How do we represent texture? Topics: Texture segmentation Texture-based matching Texture
More information10-701/15-781, Fall 2006, Final
-7/-78, Fall 6, Final Dec, :pm-8:pm There are 9 questions in this exam ( pages including this cover sheet). If you need more room to work out your answer to a question, use the back of the page and clearly
More informationIMA Preprint Series # 2016
VIDEO INPAINTING OF OCCLUDING AND OCCLUDED OBJECTS By Kedar A. Patwardhan Guillermo Sapiro and Marcelo Bertalmio IMA Preprint Series # 2016 ( January 2005 ) INSTITUTE FOR MATHEMATICS AND ITS APPLICATIONS
More informationLight Field Occlusion Removal
Light Field Occlusion Removal Shannon Kao Stanford University kaos@stanford.edu Figure 1: Occlusion removal pipeline. The input image (left) is part of a focal stack representing a light field. Each image
More informationClustering appearance and shape by learning jigsaws Anitha Kannan, John Winn, Carsten Rother
Clustering appearance and shape by learning jigsaws Anitha Kannan, John Winn, Carsten Rother Models for Appearance and Shape Histograms Templates discard spatial info articulation, deformation, variation
More informationSegmentation and Grouping April 19 th, 2018
Segmentation and Grouping April 19 th, 2018 Yong Jae Lee UC Davis Features and filters Transforming and describing images; textures, edges 2 Grouping and fitting [fig from Shi et al] Clustering, segmentation,
More informationOutline. Segmentation & Grouping. Examples of grouping in vision. Grouping in vision. Grouping in vision 2/9/2011. CS 376 Lecture 7 Segmentation 1
Outline What are grouping problems in vision? Segmentation & Grouping Wed, Feb 9 Prof. UT-Austin Inspiration from human perception Gestalt properties Bottom-up segmentation via clustering Algorithms: Mode
More informationLocal Features and Bag of Words Models
10/14/11 Local Features and Bag of Words Models Computer Vision CS 143, Brown James Hays Slides from Svetlana Lazebnik, Derek Hoiem, Antonio Torralba, David Lowe, Fei Fei Li and others Computer Engineering
More informationUsing Multiple Segmentations to Discover Objects and their Extent in Image Collections
Using Multiple Segmentations to Discover Objects and their Extent in Image Collections Bryan C. Russell 1 Alexei A. Efros 2 Josef Sivic 3 William T. Freeman 1 Andrew Zisserman 3 1 CS and AI Laboratory
More informationBeyond bags of features: Adding spatial information. Many slides adapted from Fei-Fei Li, Rob Fergus, and Antonio Torralba
Beyond bags of features: Adding spatial information Many slides adapted from Fei-Fei Li, Rob Fergus, and Antonio Torralba Adding spatial information Forming vocabularies from pairs of nearby features doublets
More informationGenerative and discriminative classification techniques
Generative and discriminative classification techniques Machine Learning and Category Representation 2014-2015 Jakob Verbeek, November 28, 2014 Course website: http://lear.inrialpes.fr/~verbeek/mlcr.14.15
More informationAutomatic Learning Image Objects via Incremental Model
IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661, p- ISSN: 2278-8727Volume 14, Issue 3 (Sep. - Oct. 2013), PP 70-78 Automatic Learning Image Objects via Incremental Model L. Ramadevi 1,
More informationNonparametric Clustering of High Dimensional Data
Nonparametric Clustering of High Dimensional Data Peter Meer Electrical and Computer Engineering Department Rutgers University Joint work with Bogdan Georgescu and Ilan Shimshoni Robust Parameter Estimation:
More informationSupervised learning. y = f(x) function
Supervised learning y = f(x) output prediction function Image feature Training: given a training set of labeled examples {(x 1,y 1 ),, (x N,y N )}, estimate the prediction function f by minimizing the
More informationClassifying Images with Visual/Textual Cues. By Steven Kappes and Yan Cao
Classifying Images with Visual/Textual Cues By Steven Kappes and Yan Cao Motivation Image search Building large sets of classified images Robotics Background Object recognition is unsolved Deformable shaped
More informationImage Segmentation Using Iterated Graph Cuts Based on Multi-scale Smoothing
Image Segmentation Using Iterated Graph Cuts Based on Multi-scale Smoothing Tomoyuki Nagahashi 1, Hironobu Fujiyoshi 1, and Takeo Kanade 2 1 Dept. of Computer Science, Chubu University. Matsumoto 1200,
More informationProbabilistic Generative Models for Machine Vision
Probabilistic Generative Models for Machine Vision bacciu@di.unipi.it Dipartimento di Informatica Università di Pisa Corso di Intelligenza Artificiale Prof. Alessandro Sperduti Università degli Studi di
More informationToday: non-linear filters, and uses for the filters and representations from last time. Review pyramid representations Non-linear filtering Textures
1 Today: non-linear filters, and uses for the filters and representations from last time Review pyramid representations Non-linear filtering Textures 2 Reading Related to today s lecture: Chapter 9, Forsyth&Ponce..
More informationUniversiteit Leiden Opleiding Informatica
Internal Report 2012-2013-09 June 2013 Universiteit Leiden Opleiding Informatica Evaluation of Image Quilting algorithms Pepijn van Heiningen BACHELOR THESIS Leiden Institute of Advanced Computer Science
More informationObject Removal Using Exemplar-Based Inpainting
CS766 Prof. Dyer Object Removal Using Exemplar-Based Inpainting Ye Hong University of Wisconsin-Madison Fall, 2004 Abstract Two commonly used approaches to fill the gaps after objects are removed from
More informationSemantic Segmentation. Zhongang Qi
Semantic Segmentation Zhongang Qi qiz@oregonstate.edu Semantic Segmentation "Two men riding on a bike in front of a building on the road. And there is a car." Idea: recognizing, understanding what's in
More informationLearning How to Inpaint from Global Image Statistics
Learning How to Inpaint from Global Image Statistics Anat Levin Assaf Zomet Yair Weiss School of Computer Science and Engineering, The Hebrew University of Jerusalem, 9194, Jerusalem, Israel E-Mail: alevin,zomet,yweiss
More informationISSN: (Online) Volume 2, Issue 5, May 2014 International Journal of Advance Research in Computer Science and Management Studies
ISSN: 2321-7782 (Online) Volume 2, Issue 5, May 2014 International Journal of Advance Research in Computer Science and Management Studies Research Article / Survey Paper / Case Study Available online at:
More informationTexture April 17 th, 2018
Texture April 17 th, 2018 Yong Jae Lee UC Davis Announcements PS1 out today Due 5/2 nd, 11:59 pm start early! 2 Review: last time Edge detection: Filter for gradient Threshold gradient magnitude, thin
More informationContinuous Visual Vocabulary Models for plsa-based Scene Recognition
Continuous Visual Vocabulary Models for plsa-based Scene Recognition Eva Hörster Multimedia Computing Lab University of Augsburg Augsburg, Germany hoerster@informatik.uniaugsburg.de Rainer Lienhart Multimedia
More informationComputer vision: models, learning and inference. Chapter 10 Graphical Models
Computer vision: models, learning and inference Chapter 10 Graphical Models Independence Two variables x 1 and x 2 are independent if their joint probability distribution factorizes as Pr(x 1, x 2 )=Pr(x
More informationSegmenting Scenes by Matching Image Composites
Segmenting Scenes by Matching Image Composites Bryan C. Russell 1 Alexei A. Efros 2,1 Josef Sivic 1 William T. Freeman 3 Andrew Zisserman 4,1 1 INRIA 2 Carnegie Mellon University 3 CSAIL MIT 4 University
More informationComparing Local Feature Descriptors in plsa-based Image Models
Comparing Local Feature Descriptors in plsa-based Image Models Eva Hörster 1,ThomasGreif 1, Rainer Lienhart 1, and Malcolm Slaney 2 1 Multimedia Computing Lab, University of Augsburg, Germany {hoerster,lienhart}@informatik.uni-augsburg.de
More informationLearning and Inferring Depth from Monocular Images. Jiyan Pan April 1, 2009
Learning and Inferring Depth from Monocular Images Jiyan Pan April 1, 2009 Traditional ways of inferring depth Binocular disparity Structure from motion Defocus Given a single monocular image, how to infer
More informationFast and Enhanced Algorithm for Exemplar Based Image Inpainting (Paper# 132)
Fast and Enhanced Algorithm for Exemplar Based Image Inpainting (Paper# 132) Anupam Agrawal Pulkit Goyal Sapan Diwakar Indian Institute Of Information Technology, Allahabad Image Inpainting Inpainting
More informationBag of Words Models. CS4670 / 5670: Computer Vision Noah Snavely. Bag-of-words models 11/26/2013
CS4670 / 5670: Computer Vision Noah Snavely Bag-of-words models Object Bag of words Bag of Words Models Adapted from slides by Rob Fergus and Svetlana Lazebnik 1 Object Bag of words Origin 1: Texture Recognition
More informationNormalized Texture Motifs and Their Application to Statistical Object Modeling
Normalized Texture Motifs and Their Application to Statistical Obect Modeling S. D. Newsam B. S. Manunath Center for Applied Scientific Computing Electrical and Computer Engineering Lawrence Livermore
More informationCellular Learning Automata-Based Color Image Segmentation using Adaptive Chains
Cellular Learning Automata-Based Color Image Segmentation using Adaptive Chains Ahmad Ali Abin, Mehran Fotouhi, Shohreh Kasaei, Senior Member, IEEE Sharif University of Technology, Tehran, Iran abin@ce.sharif.edu,
More informationNearest Clustering Algorithm for Satellite Image Classification in Remote Sensing Applications
Nearest Clustering Algorithm for Satellite Image Classification in Remote Sensing Applications Anil K Goswami 1, Swati Sharma 2, Praveen Kumar 3 1 DRDO, New Delhi, India 2 PDM College of Engineering for
More informationMedian filter. Non-linear filtering example. Degraded image. Radius 1 median filter. Today
Today Non-linear filtering example Median filter Replace each pixel by the median over N pixels (5 pixels, for these examples). Generalizes to rank order filters. In: In: 5-pixel neighborhood Out: Out:
More informationNon-linear filtering example
Today Non-linear filtering example Median filter Replace each pixel by the median over N pixels (5 pixels, for these examples). Generalizes to rank order filters. In: In: 5-pixel neighborhood Out: Out:
More informationRobotics Programming Laboratory
Chair of Software Engineering Robotics Programming Laboratory Bertrand Meyer Jiwon Shin Lecture 8: Robot Perception Perception http://pascallin.ecs.soton.ac.uk/challenges/voc/databases.html#caltech car
More information2 Proposed Methodology
3rd International Conference on Multimedia Technology(ICMT 2013) Object Detection in Image with Complex Background Dong Li, Yali Li, Fei He, Shengjin Wang 1 State Key Laboratory of Intelligent Technology
More informationFinal Exam Schedule. Final exam has been scheduled. 12:30 pm 3:00 pm, May 7. Location: INNOVA It will cover all the topics discussed in class
Final Exam Schedule Final exam has been scheduled 12:30 pm 3:00 pm, May 7 Location: INNOVA 1400 It will cover all the topics discussed in class One page double-sided cheat sheet is allowed A calculator
More informationMarkov Random Fields for Recognizing Textures modeled by Feature Vectors
Markov Random Fields for Recognizing Textures modeled by Feature Vectors Juliette Blanchet, Florence Forbes, and Cordelia Schmid Inria Rhône-Alpes, 655 avenue de l Europe, Montbonnot, 38334 Saint Ismier
More informationProblem Set 4. Assigned: March 23, 2006 Due: April 17, (6.882) Belief Propagation for Segmentation
6.098/6.882 Computational Photography 1 Problem Set 4 Assigned: March 23, 2006 Due: April 17, 2006 Problem 1 (6.882) Belief Propagation for Segmentation In this problem you will set-up a Markov Random
More informationAnnouncements. Recognition. Recognition. Recognition. Recognition. Homework 3 is due May 18, 11:59 PM Reading: Computer Vision I CSE 152 Lecture 14
Announcements Computer Vision I CSE 152 Lecture 14 Homework 3 is due May 18, 11:59 PM Reading: Chapter 15: Learning to Classify Chapter 16: Classifying Images Chapter 17: Detecting Objects in Images Given
More informationPatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing
PatchMatch: A Randomized Correspondence Algorithm for Structural Image Editing Barnes et al. In SIGGRAPH 2009 발표이성호 2009 년 12 월 3 일 Introduction Image retargeting Resized to a new aspect ratio [Rubinstein
More informationLearning based face hallucination techniques: A survey
Vol. 3 (2014-15) pp. 37-45. : A survey Premitha Premnath K Department of Computer Science & Engineering Vidya Academy of Science & Technology Thrissur - 680501, Kerala, India (email: premithakpnath@gmail.com)
More informationTexture. CS 419 Slides by Ali Farhadi
Texture CS 419 Slides by Ali Farhadi What is a Texture? Texture Spectrum Steven Li, James Hays, Chenyu Wu, Vivek Kwatra, and Yanxi Liu, CVPR 06 Texture scandals!! Two crucial algorithmic points Nearest
More informationText Document Clustering Using DPM with Concept and Feature Analysis
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 2, Issue. 10, October 2013,
More informationProbabilistic Graphical Models Part III: Example Applications
Probabilistic Graphical Models Part III: Example Applications Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr CS 551, Fall 2014 CS 551, Fall 2014 c 2014, Selim
More informationHuman Head-Shoulder Segmentation
Human Head-Shoulder Segmentation Hai Xin, Haizhou Ai Computer Science and Technology Tsinghua University Beijing, China ahz@mail.tsinghua.edu.cn Hui Chao, Daniel Tretter Hewlett-Packard Labs 1501 Page
More informationTexture. The Challenge. Texture Synthesis. Statistical modeling of texture. Some History. COS526: Advanced Computer Graphics
COS526: Advanced Computer Graphics Tom Funkhouser Fall 2010 Texture Texture is stuff (as opposed to things ) Characterized by spatially repeating patterns Texture lacks the full range of complexity of
More informationSchedule for Rest of Semester
Schedule for Rest of Semester Date Lecture Topic 11/20 24 Texture 11/27 25 Review of Statistics & Linear Algebra, Eigenvectors 11/29 26 Eigenvector expansions, Pattern Recognition 12/4 27 Cameras & calibration
More informationMore details on presentations
More details on presentations Aim to speak for ~50 min (after 15 min review, leaving 10 min for discussions) Try to plan discussion topics It s fine to steal slides from the Web, but be sure to acknowledge
More informationUsing Maximum Entropy for Automatic Image Annotation
Using Maximum Entropy for Automatic Image Annotation Jiwoon Jeon and R. Manmatha Center for Intelligent Information Retrieval Computer Science Department University of Massachusetts Amherst Amherst, MA-01003.
More informationTrajectory Analysis and Semantic Region Modeling Using A Nonparametric Bayesian Model
Trajectory Analysis and Semantic Region Modeling Using A Nonparametric Bayesian Model Xiaogang Wang 1 Keng Teck Ma 2 Gee-Wah Ng 2 W. Eric L. Grimson 1 1 CS and AI Lab, MIT, 77 Massachusetts Ave., Cambridge,
More informationThe most cited papers in Computer Vision
COMPUTER VISION, PUBLICATION The most cited papers in Computer Vision In Computer Vision, Paper Talk on February 10, 2012 at 11:10 pm by gooly (Li Yang Ku) Although it s not always the case that a paper
More informationProceedings of the International MultiConference of Engineers and Computer Scientists 2018 Vol I IMECS 2018, March 14-16, 2018, Hong Kong
, March 14-16, 2018, Hong Kong , March 14-16, 2018, Hong Kong , March 14-16, 2018, Hong Kong , March 14-16, 2018, Hong Kong TABLE I CLASSIFICATION ACCURACY OF DIFFERENT PRE-TRAINED MODELS ON THE TEST DATA
More informationImage Composition. COS 526 Princeton University
Image Composition COS 526 Princeton University Modeled after lecture by Alexei Efros. Slides by Efros, Durand, Freeman, Hays, Fergus, Lazebnik, Agarwala, Shamir, and Perez. Image Composition Jurassic Park
More informationLearning Sparse FRAME Models for Natural Image Patterns
International Journal of Computer Vision (IJCV Learning Sparse FRAME Models for Natural Image Patterns Jianwen Xie Wenze Hu Song-Chun Zhu Ying Nian Wu Received: 1 February 2014 / Accepted: 13 August 2014
More informationPatch Descriptors. CSE 455 Linda Shapiro
Patch Descriptors CSE 455 Linda Shapiro How can we find corresponding points? How can we find correspondences? How do we describe an image patch? How do we describe an image patch? Patches with similar
More informationDeep Learning. Deep Learning. Practical Application Automatically Adding Sounds To Silent Movies
http://blog.csdn.net/zouxy09/article/details/8775360 Automatic Colorization of Black and White Images Automatically Adding Sounds To Silent Movies Traditionally this was done by hand with human effort
More informationMR IMAGE SEGMENTATION
MR IMAGE SEGMENTATION Prepared by : Monil Shah What is Segmentation? Partitioning a region or regions of interest in images such that each region corresponds to one or more anatomic structures Classification
More informationBy Suren Manvelyan,
By Suren Manvelyan, http://www.surenmanvelyan.com/gallery/7116 By Suren Manvelyan, http://www.surenmanvelyan.com/gallery/7116 By Suren Manvelyan, http://www.surenmanvelyan.com/gallery/7116 By Suren Manvelyan,
More informationIterative MAP and ML Estimations for Image Segmentation
Iterative MAP and ML Estimations for Image Segmentation Shifeng Chen 1, Liangliang Cao 2, Jianzhuang Liu 1, and Xiaoou Tang 1,3 1 Dept. of IE, The Chinese University of Hong Kong {sfchen5, jzliu}@ie.cuhk.edu.hk
More informationCS6670: Computer Vision
CS6670: Computer Vision Noah Snavely Lecture 16: Bag-of-words models Object Bag of words Announcements Project 3: Eigenfaces due Wednesday, November 11 at 11:59pm solo project Final project presentations:
More informationImage Inpainting. Seunghoon Park Microsoft Research Asia Visual Computing 06/30/2011
Image Inpainting Seunghoon Park Microsoft Research Asia Visual Computing 06/30/2011 Contents Background Previous works Two papers Space-Time Completion of Video (PAMI 07)*1+ PatchMatch: A Randomized Correspondence
More informationUnsupervised Identification of Multiple Objects of Interest from Multiple Images: discover
Unsupervised Identification of Multiple Objects of Interest from Multiple Images: discover Devi Parikh and Tsuhan Chen Carnegie Mellon University {dparikh,tsuhan}@cmu.edu Abstract. Given a collection of
More informationDetecting Salient Contours Using Orientation Energy Distribution. Part I: Thresholding Based on. Response Distribution
Detecting Salient Contours Using Orientation Energy Distribution The Problem: How Does the Visual System Detect Salient Contours? CPSC 636 Slide12, Spring 212 Yoonsuck Choe Co-work with S. Sarma and H.-C.
More informationData-driven methods: Video & Texture. A.A. Efros
Data-driven methods: Video & Texture A.A. Efros CS194: Image Manipulation & Computational Photography Alexei Efros, UC Berkeley, Fall 2014 Michel Gondry train video http://www.youtube.com/watch?v=0s43iwbf0um
More informationECE521: Week 11, Lecture March 2017: HMM learning/inference. With thanks to Russ Salakhutdinov
ECE521: Week 11, Lecture 20 27 March 2017: HMM learning/inference With thanks to Russ Salakhutdinov Examples of other perspectives Murphy 17.4 End of Russell & Norvig 15.2 (Artificial Intelligence: A Modern
More informationA New Feature Local Binary Patterns (FLBP) Method
A New Feature Local Binary Patterns (FLBP) Method Jiayu Gu and Chengjun Liu The Department of Computer Science, New Jersey Institute of Technology, Newark, NJ 07102, USA Abstract - This paper presents
More informationBlind Image Deblurring Using Dark Channel Prior
Blind Image Deblurring Using Dark Channel Prior Jinshan Pan 1,2,3, Deqing Sun 2,4, Hanspeter Pfister 2, and Ming-Hsuan Yang 3 1 Dalian University of Technology 2 Harvard University 3 UC Merced 4 NVIDIA
More information