OCCLUSION BOUNDARIES ESTIMATION FROM A HIGH-RESOLUTION SAR IMAGE

Wenju He, Marc Jäger, and Olaf Hellwich
Berlin University of Technology, FR3-1, Franklinstr. 28, Berlin, Germany
{wenjuhe, jaeger, hellwich}@fpk.tu-berlin.de

ABSTRACT

Occlusion occurs when several objects in a scene obstruct one another in an image. The phenomenon is prevalent in high-resolution Synthetic Aperture Radar (SAR) images of urban areas. The geometric content that enables occlusion analysis is partially observable in high-resolution SAR images. Estimating occlusion boundaries helps to discriminate different objects and localize their extents. An occlusion boundary map also corresponds to an efficient figure/ground segmentation, which is promising for further object analysis. This paper applies a hierarchical framework [1] to extract occlusion boundaries between different objects, e.g. buildings and trees. The framework uses Conditional Random Fields to reason simultaneously about boundaries and segments.

Key words: SAR; urban; occlusion; boundary.

1. INTRODUCTION

A Synthetic Aperture Radar (SAR) image is a projection of the scattering reflections of a 3D scene into a slant-range representation. Object extents, i.e. geometric information, are usually missing in SAR images. Speckle, the SAR imaging mechanism and the geographical configuration of objects make the analysis of SAR images very difficult. In contrast to optical images, SAR images do not directly support object reconstruction. However, geometric information is partially observable in high-resolution SAR images. Their application in urban environments is therefore promising, e.g. in combination with interferometric SAR data, which provide height information. Occlusion is a common phenomenon in optical images due to the projection of a 3D scene onto a 2D image plane. Occlusion reasoning is an important aspect of intrinsic 3D understanding from a single image.
This effect is handled in [1] by extracting potential occlusion boundaries. The occlusion boundaries define a figure/ground labeling, and the algorithm can naturally be adjusted to strengthen the consistency of objects of interest. SAR images are occluded in a different way. The propagation of electromagnetic waves in urban areas is complicated by the complex geometric configuration of man-made structures and their surroundings. Multiple reflections occur between objects in such areas, and electromagnetic waves obstructed by one object cannot reach some adjacent objects. Scatterers located at the bottom of a building may fall behind scatterers on its top in the image. It is therefore difficult to discriminate neighboring objects in SAR images of urban areas, and the boundaries between different objects are usually occluded. For example, buildings and trees are sometimes situated together and have similar characteristics along their common boundary. Estimating occlusion boundaries helps to discriminate different objects and localize their extents, which is very important for scene understanding using SAR images. An occlusion boundary map also corresponds to a foreground segmentation, which is promising for object analysis despite the constraints of the SAR imaging mechanism. This paper studies the occlusion between different objects in high-resolution SAR images of urban areas. For instance, we estimate that buildings occlude trees and shadow, trees occlude grass, and so on. We adopt an iterative strategy [1] that explores boundary strength and region characteristics coherently to solve this difficult problem, integrating occlusion boundary estimation with segmentation. An initial segmentation is obtained by applying the watershed method to the polarimetric amplitude data. The boundaries of the generated segments are potential occlusion boundaries. Weak boundaries that are unlikely to be occlusions can be removed, and small regions can be grouped if they have the same surface type.
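The watershed initialization described above can be sketched as follows; this is a minimal illustration assuming scikit-image and a synthetic span (total power) image standing in for real polarimetric amplitude data.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

# Hypothetical span (total power) image of a polarimetric SAR scene;
# in practice this would be derived from the measured channels.
rng = np.random.default_rng(0)
span = rng.random((128, 128)).astype(np.float32)

# The gradient magnitude of the log amplitude drives the watershed.
gradient = sobel(np.log1p(span))

# Seeds: connected components of low-gradient pixels, labelled as markers.
markers, _ = ndi.label(gradient < np.percentile(gradient, 10))

# The watershed yields a conservative over-segmentation whose region
# boundaries serve as the initial occlusion-boundary hypotheses.
labels = watershed(gradient, markers)
print(labels.max(), "initial regions")
```

On real data the marker selection and gradient computation would be tuned to the speckle statistics; the point here is only that region boundaries of the over-segmentation form the candidate set.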
Many effective features are adopted in this paper, which help to characterize boundaries and regions efficiently. The boundary and region likelihoods are integrated into a Conditional Random Field (CRF) framework, which models the interaction of boundaries, junctions and regions. The CRF inference outputs the occlusion boundary map. Our goal is to find the boundaries and the occlusion relationships. The recovered occlusion boundary map shows the major occlusions in a SAR image and is therefore helpful for 3D scene understanding from a single high-resolution SAR image. An accurate occlusion boundary map also defines a high-quality segmentation. The segmentation formed by the boundaries gives an efficient figure/ground segmentation for further object analysis.

Proc. of 4th Int. Workshop on Science and Applications of SAR Polarimetry and Polarimetric Interferometry PolInSAR 2009, January 2009, Frascati, Italy (ESA SP-668, April 2009)

2. ALGORITHM

Occlusion boundary analysis and image segmentation are integrated and interleaved in the algorithm [1]. Segmentation provides the initial boundaries and regions. We gradually estimate occlusion boundaries by iteratively removing weak boundaries and running inference on the new segmentation. The growing segments provide better spatial support for feature extraction. After several iterations we obtain an occlusion boundary map. Each iteration consists of three steps: (1) compute multiple features for boundaries and regions; (2) infer confidences for boundaries and regions; and (3) compute a hierarchical segmentation by iteratively removing boundaries whose strength falls below a given threshold. Regions are merged to form a new segmentation. Each time a weak boundary is about to be removed, the boundary likelihoods of the enlarged region have to be re-estimated. The new segmentation serves as the initial segmentation for the next iteration and enables more complex feature extraction. Our estimation framework consists of three iterations. The first iteration is minimum merging using unary likelihood estimation: boundaries with the smallest occlusion likelihood are eliminated. In the second iteration, we use a CRF model to integrate the unary likelihood with the conditional dependency of a boundary on its preceding boundaries. In the third iteration, the CRF model is extended to model surface evidence on both sides of each boundary. In each iteration we apply the three steps above to obtain fewer boundaries, which are more likely to be occlusion boundaries. The new probabilistic boundary map is thresholded to give the initial segmentation for the next iteration.
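The three-step iteration can be sketched as a toy loop. The likelihood values and boundary names below are hypothetical stand-ins for the classifier outputs; only the removal logic and the per-iteration thresholds reported later in the paper are taken from the text.

```python
# Toy sketch of the iterative weak-boundary removal, assuming boundary
# likelihoods come from a (hypothetical) classifier; a fixed dict stands in.
def remove_weak_boundaries(likelihoods, threshold):
    """Step 3: drop boundaries below the threshold, weakest first."""
    kept = dict(likelihoods)
    while kept:
        b = min(kept, key=kept.get)
        if kept[b] >= threshold:
            break
        del kept[b]  # i.e. merge the two regions sharing boundary b
    return kept

# Hypothetical unary occlusion likelihoods for five candidate boundaries.
likelihoods = {"b1": 0.05, "b2": 0.10, "b3": 0.15, "b4": 0.5, "b5": 0.9}

# Thresholds used in the paper's three iterations.
for threshold in (0.08, 0.12, 0.2):
    likelihoods = remove_weak_boundaries(likelihoods, threshold)
    # In the full algorithm, features and confidences would be
    # re-computed on the merged segmentation at this point.

print(sorted(likelihoods))  # → ['b4', 'b5']
```

In the full algorithm each pass also re-estimates the likelihood of every boundary touching a merged region, which this sketch omits.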
The third iteration produces the final occlusion likelihoods and boundaries.

2.1. Minimum merging

At the beginning, we adopt the watershed segmentation method to segment an image into small regions, which provide an initial hypothesis of the occlusion boundaries. An example is shown in Fig. 1(b). Watershed segmentation generates an over-segmentation with several thousand regions from the intensity gradients of the polarimetric SAR data. These regions provide nearly true boundaries; they are conservative estimates of the occlusion boundaries. Most of the boundaries are smooth and thus facilitate efficient junction analysis. We extract features for all boundaries and use a boundary classifier to estimate boundary likelihoods. The likelihoods are thresholded to provide a new hypothesis of the occlusion boundaries and a new segmentation. The boundaries together with the segmentation are the input of the second iteration.

2.2. CRF model

Both boundaries and regions indicate whether an occlusion boundary exists. On the one hand, the initial boundary map contains a large number of edges, and occlusion boundaries tend to be strong edges; we calculate strength, length and other features for the boundaries. On the other hand, the initial segmentation contains many small regions, and regions with the same surface label are usually not occluded. Occlusion estimation therefore benefits from integrating boundaries and regions. In the second and third iterations we use a CRF to model the interaction of adjacent boundaries and the surfaces on both sides. The CRF performs inference over boundaries and junctions, modeling boundary strength and enforcing closure and boundary consistency. The model is defined as

P(labels | data) = (1/Z) * prod_{j=1}^{N_j} phi_j * prod_{e=1}^{N_e} gamma_e    (1)

where phi_j denotes the junction factor, gamma_e the surface factor, N_j the number of junctions, N_e the number of boundaries, and 1/Z the normalization term. The factor graph of the CRF consists of junction factors and surface factors. The junction factor models the strength and continuities of boundaries, i.e.
the likelihood of the label of each boundary according to the data, conditioned on its preceding boundaries if they exist. The junction factor consists of a unary boundary likelihood and a conditional continuity likelihood. The surface factor models the likelihood of a boundary conditioned on the region types on each side. A boundary between two regions assigned the same surface label is less likely to be an occlusion and thus receives a low occlusion likelihood. We learn to detect whether a boundary between two regions is likely to be due to occlusion. The CRF model achieves a joint inference over the two factors. Confidences for boundaries and surfaces are computed simultaneously and are expected to be more stable. The model enforces the boundary-consistency convention that the left side is the object, i.e. the left side occludes the right side. The surface evidence map also helps to guarantee the consistency of object boundaries. In addition, the model is capable of improving the surface estimation at the same time. CRF inference gives the occlusion likelihood of each boundary. Boundaries with low likelihood are removed, yielding a new probabilistic boundary map. Given a labeling of boundaries and excluding the surface factor, the CRF model decomposes into a single likelihood term for each boundary. This property allows us to learn the boundary likelihood and the conditional likelihood of the junction factor using a boosted decision tree, which is

able to perform feature selection and give probabilistic results. A boundary classifier and a boundary continuity classifier are trained to generate the potentials in the junction factor. Sum-product belief propagation is used for inference. The CRF outputs occlusion likelihoods of boundaries, and boundaries with low likelihoods are removed. The surface evidence maps used in the model are computed from a smaller set of low-level features by the algorithm in [2]. The maps indicate 5 surface types: layover, shadow, tree, grass and an unknown class. The unknown class reflects the fact that some regions in meter-resolution SAR images are hard to interpret visually. An example of a surface map is shown in Fig. 1(d). The maps allow us to infer boundaries between different object types and to penalize non-occlusion boundaries. They help to enforce consistency between the region labels and the boundary labels.

2.3. Feature extraction

Table 1. Features extracted for a boundary.

Region:
R1. Polarimetric entropy, anisotropy and α differences
R2. Sublook coherence and entropy differences
R3. Optimized coherence difference
R4. HH, VV and HV: amplitude differences
R5. Span image: amplitude difference
R6. Span histogram: Kullback-Leibler (KL) divergence
R7. Log span histogram: KL divergence
R8. Filter bank responses of span: differences
R9. Filter bank responses of log span: differences
R10. Texton histogram of span: KL divergence
R11. Texton histogram of log span: KL divergence
R12. HOG of span: KL divergence
R13. HOG of log span: KL divergence
R14. Dense SIFT of log span: KL divergence
R15. Area: area of region on each side, area ratio
R16. Lines: difference of line pixels
R17. Parallel lines: percentage difference
R18. Position: differences of bounding box coordinates
R19. Alignment: horizontal and vertical overlaps

Boundary:
B1. Strength: average Pb
B2. Length: length / (perimeter of smaller side)
B3. Smoothness: length / (endpoint distance)
B4. Orientation: directed orientation
B5. Continuity: angle difference at each junction

Surface:
S1. Surface evidences: confidences of each side
S2. Surface evidences: differences of S1

We extract a rich set of features for the regions in a segmentation. The region features are used to generate boundary features. They include polarimetric, amplitude, texture, shape and other types. We believe that comprehensive feature extraction better characterizes the different objects in the images, and more robust features are expected for the evolving regions. Besides these low-level features, we also use surface evidence maps as additional cues. We extract 204 features for each region. Polarimetric SAR data reveal more scattering physics than a single-channel image; polarimetric decomposition therefore provides informative indicators of the main scattering processes in a region. We extract the polarimetric entropy, anisotropy and α angle. Sub-aperture coherence, entropy and optimized coherence are also helpful, e.g. the most coherent scatterers are targets formed by buildings, alone or together with the ground. The amplitude of polarimetric SAR data is the most important information for discriminating different objects, since all derived products of polarimetric SAR data, e.g. coherence, are strongly influenced by intensity, i.e. reflection strength. The distribution of SAR amplitude data can be modeled by the K distribution, the log-normal distribution, and so on. For simplicity, we use features extracted from the log span image of the polarimetric SAR data. The log features are very effective for SAR image segmentation. For the span and log span images, we use a filter bank [3] to generate texton histograms. The histogram of oriented gradients (HOG) [4] is another effective feature. We also apply the scale-invariant feature transform (SIFT) descriptor [5] to SAR images. Furthermore, we represent area, small lines generated by a line detector [2], position and bounding boxes as additional region features.
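Several of the region features in Tab. 1 (R6-R14) are KL divergences between histograms computed on either side of a boundary. A minimal sketch of this distance, assuming numpy and hypothetical histogram values:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    """KL divergence between two histograms (e.g. the span histograms of
    the regions on either side of a boundary); eps avoids log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical amplitude histograms of two adjacent regions.
left = [10, 30, 40, 20]
right = [12, 28, 42, 18]
d = kl_divergence(left, right)
print(d)  # small value: similar regions, hence a weak occlusion cue
```

A large divergence between the two regions' histograms suggests dissimilar surfaces and thus a stronger occlusion cue for the boundary between them.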
Boundary features are used to learn occlusions. Occlusion boundaries often have strong amplitude gradients, so the distances between the features of two neighboring regions are effective boundary features. The probabilistic boundary map produced from the polarimetric amplitude image by the Pb algorithm [6] provides an important cue of boundary strength. An example of a Pb map is shown in Fig. 1(c). We use the mean Pb along the boundary pixels as a boundary feature. We also extract boundary length, smoothness, orientation and alignment continuity [1]. The 88 extracted boundary features are listed in Tab. 1. We expect boundary reasoning to benefit from these effective features. We calculate continuity features to describe the conditional dependency of a boundary on its preceding one. The continuity features are the concatenation of the boundary features of two adjacent boundaries, plus the relative angle between them.

3. EXPERIMENTS

3.1. Dataset

The polarimetric SAR data of Copenhagen acquired by EMISAR are used in the experiments. We extract 98 images from the data and generate ground truth occlusion boundaries for 41 of them. We use 31 for training and 10 to evaluate the estimation accuracies. The ground truth contains object labels for each region. To generate it, we first segment an image into thousands of regions and manually group them into object regions; we then manually label the occlusion types of adjacent regions.

Figure 1. (a) A polarimetric SAR image, (b) watershed segmentation (4915 segments), (c) probabilistic boundaries, (d) surface evidences.

Figure 2. Precision-recall curve for classifying whether a boundary is an occlusion boundary in the first iteration.

3.2. Training

We train three boundary classifiers and two boundary continuity classifiers in the three iterations using a logistic regression version of Adaboost [1]. In each iteration, the classifiers are trained on the segmentation result of the previous iteration, and the trained classifiers are then applied to the training data. We transfer the ground truth from the previous iteration to the current iteration in order to train new classifiers on the new regions. In the transfer process, we label each region as the object that has the most pixels in the region, and then label the occlusion types between regions. In the three iterations, we set the thresholds for removing weak boundaries to 0.08, 0.12 and 0.2, respectively. Setting the thresholds is a trade-off between retaining more segments and obtaining smoother, more sensible objects. In the second iteration, we restrict the CRF model to the junction factor; the CRF model is extended to the full factors in the third iteration. We impose a penalty (e^-0.3) for the lack of a boundary between different surface classes, for shadow occluding others, for grass occluding layover or tree, and for the unknown class occluding layover or tree.

3.3. Inference

For a test image, we apply the classifiers and the models to estimate occlusion boundaries. The image is initially over-segmented by watershed. In the first iteration, we extract features for the boundaries and apply the first boundary classifier; weak boundaries are removed and a new segmentation is formed. In the second iteration, we extract boundary features and continuity features, and inference over the junction factor terms gives boundary probabilities. We perform inference over the full model in the third iteration to obtain the final occlusion likelihoods. Fig. 3 and Fig. 4 show two examples of occlusion boundary estimation and the corresponding segmentations. Fig. 3(f) shows the segmentation result of the second iteration, which contains more segments and is slightly more accurate than the final segmentation shown in Fig. 3(e) in terms of small objects. Nonetheless, the final segmentation contains 290 fewer segments, which reduces the computational burden in further applications. This demonstrates the effectiveness of joint inference over junctions and surface evidences in the CRF model.

3.4. Evaluation

The algorithm is evaluated by measuring the accuracy of boundary classification and of the final segmentation. Fig. 2 shows the precision-recall curve for detecting whether an initial boundary is an occlusion boundary. Boundaries are weighted by length in computing the precisions and recalls. We measure the overall segmentation accuracy in terms of the best spatial support (BSS) score [7]. For each ground truth region, the BSS is the maximum overlap score across all segments; it measures how well the best segment covers the region.

Table 2. Overall segmentation accuracy (BSS) and averaged number of segments.

Normalized cuts: BSS 42.48%, 400 segments
Our method, iteration 2: BSS %, 830 segments
Our method, iteration 3: BSS %, 582 segments

The segmentation accuracy

is shown in Tab. 2. The algorithm is comparable to Normalized cuts, which segments each image into 400 segments. In the Normalized cuts segmentation, only the log span of the polarimetric SAR data is used as a feature, and the Euclidean distance is used in constructing the distance matrix.

Figure 3. An example of a boundary result: (a) original image, with RGB colors representing the HH, VV and HV channels, (b) ground truth occlusion boundaries, (c) estimated occlusion boundaries, (d) probabilistic boundaries, (e) segmentation defined by the boundaries (629 segments), (f) segmentation result of iteration 2 (919 segments).

Figure 4. Another example of a boundary result: (a) original image, (b) ground truth occlusion boundaries, (c) estimated occlusion boundaries, (d) probabilistic boundaries, (e) segmentation defined by the boundaries (594 segments), (f) segmentation result of iteration 2 (860 segments).

4. CONCLUSIONS

This paper extracts occlusion boundaries from a high-resolution SAR image of urban areas. Segmentation and boundary estimation are integrated in the framework. An iterative strategy is adopted to estimate occlusion likelihoods, which are then thresholded to generate occlusion boundaries and segmentations. The growing regions provide better spatial support, which helps us to determine whether a boundary is caused by occlusion. The algorithm jointly reasons about the boundaries and surfaces that influence occlusions in SAR images. The promising results of boundary extraction and segmentation are applicable to further applications, e.g. object detection. The occlusion boundary map is a probabilistic output, which can be integrated into statistical geometric models for urban scene analysis using SAR data. Occlusion boundaries will play an important role in urban scene understanding using SAR images.

REFERENCES

[1] Hoiem, D., Stein, A. N., Efros, A. A. & Hebert, M. (2007). Recovering Occlusion Boundaries from a Single Image.
In International Conference on Computer Vision.

[2] Hoiem, D., Efros, A. & Hebert, M. (2007). Recovering Surface Layout from an Image. International Journal of Computer Vision, 75(1).

[3] Varma, M. & Zisserman, A. (2005). A Statistical Approach to Texture Classification from Single Images. International Journal of Computer Vision, 62(1-2).

[4] Dalal, N. & Triggs, B. (2005). Histograms of Oriented Gradients for Human Detection. In IEEE Conference on Computer Vision and Pattern Recognition, 2.

[5] Lowe, D. G. (2004). Distinctive Image Features from Scale-invariant Keypoints. International Journal of Computer Vision, 60(2).

[6] Martin, D. R., Fowlkes, C. C. & Malik, J. (2003). Learning to Detect Natural Image Boundaries using Brightness and Texture. In Advances in Neural Information Processing Systems 15 (NIPS).

[7] Malisiewicz, T. & Efros, A. (2007). Improving Spatial Support for Objects via Multiple Segmentations. In British Machine Vision Conference.


Detection III: Analyzing and Debugging Detection Methods CS 1699: Intro to Computer Vision Detection III: Analyzing and Debugging Detection Methods Prof. Adriana Kovashka University of Pittsburgh November 17, 2015 Today Review: Deformable part models How can

More information

Introduction to Medical Imaging (5XSA0) Module 5

Introduction to Medical Imaging (5XSA0) Module 5 Introduction to Medical Imaging (5XSA0) Module 5 Segmentation Jungong Han, Dirk Farin, Sveta Zinger ( s.zinger@tue.nl ) 1 Outline Introduction Color Segmentation region-growing region-merging watershed

More information

Deep Tracking: Biologically Inspired Tracking with Deep Convolutional Networks

Deep Tracking: Biologically Inspired Tracking with Deep Convolutional Networks Deep Tracking: Biologically Inspired Tracking with Deep Convolutional Networks Si Chen The George Washington University sichen@gwmail.gwu.edu Meera Hahn Emory University mhahn7@emory.edu Mentor: Afshin

More information

AUTOMATIC INTERPRETATION OF HIGH RESOLUTION SAR IMAGES: FIRST RESULTS OF SAR IMAGE SIMULATION FOR SINGLE BUILDINGS

AUTOMATIC INTERPRETATION OF HIGH RESOLUTION SAR IMAGES: FIRST RESULTS OF SAR IMAGE SIMULATION FOR SINGLE BUILDINGS AUTOMATIC INTERPRETATION OF HIGH RESOLUTION SAR IMAGES: FIRST RESULTS OF SAR IMAGE SIMULATION FOR SINGLE BUILDINGS J. Tao *, G. Palubinskas, P. Reinartz German Aerospace Center DLR, 82234 Oberpfaffenhofen,

More information

Ensemble of Bayesian Filters for Loop Closure Detection

Ensemble of Bayesian Filters for Loop Closure Detection Ensemble of Bayesian Filters for Loop Closure Detection Mohammad Omar Salameh, Azizi Abdullah, Shahnorbanun Sahran Pattern Recognition Research Group Center for Artificial Intelligence Faculty of Information

More information

Multi-Camera Calibration, Object Tracking and Query Generation

Multi-Camera Calibration, Object Tracking and Query Generation MITSUBISHI ELECTRIC RESEARCH LABORATORIES http://www.merl.com Multi-Camera Calibration, Object Tracking and Query Generation Porikli, F.; Divakaran, A. TR2003-100 August 2003 Abstract An automatic object

More information

A Survey of Light Source Detection Methods

A Survey of Light Source Detection Methods A Survey of Light Source Detection Methods Nathan Funk University of Alberta Mini-Project for CMPUT 603 November 30, 2003 Abstract This paper provides an overview of the most prominent techniques for light

More information

3 October, 2013 MVA ENS Cachan. Lecture 2: Logistic regression & intro to MIL Iasonas Kokkinos

3 October, 2013 MVA ENS Cachan. Lecture 2: Logistic regression & intro to MIL Iasonas Kokkinos Machine Learning for Computer Vision 1 3 October, 2013 MVA ENS Cachan Lecture 2: Logistic regression & intro to MIL Iasonas Kokkinos Iasonas.kokkinos@ecp.fr Department of Applied Mathematics Ecole Centrale

More information

Detection of a Single Hand Shape in the Foreground of Still Images

Detection of a Single Hand Shape in the Foreground of Still Images CS229 Project Final Report Detection of a Single Hand Shape in the Foreground of Still Images Toan Tran (dtoan@stanford.edu) 1. Introduction This paper is about an image detection system that can detect

More information

Analysis: TextonBoost and Semantic Texton Forests. Daniel Munoz Februrary 9, 2009

Analysis: TextonBoost and Semantic Texton Forests. Daniel Munoz Februrary 9, 2009 Analysis: TextonBoost and Semantic Texton Forests Daniel Munoz 16-721 Februrary 9, 2009 Papers [shotton-eccv-06] J. Shotton, J. Winn, C. Rother, A. Criminisi, TextonBoost: Joint Appearance, Shape and Context

More information

Deformable Part Models

Deformable Part Models CS 1674: Intro to Computer Vision Deformable Part Models Prof. Adriana Kovashka University of Pittsburgh November 9, 2016 Today: Object category detection Window-based approaches: Last time: Viola-Jones

More information

Object Detection by 3D Aspectlets and Occlusion Reasoning

Object Detection by 3D Aspectlets and Occlusion Reasoning Object Detection by 3D Aspectlets and Occlusion Reasoning Yu Xiang University of Michigan Silvio Savarese Stanford University In the 4th International IEEE Workshop on 3D Representation and Recognition

More information

CLASSIFICATION OF EARTH TERRAIN COVERS USING THE MODIFIED FOUR- COMPONENT SCATTERING POWER DECOMPOSITION,

CLASSIFICATION OF EARTH TERRAIN COVERS USING THE MODIFIED FOUR- COMPONENT SCATTERING POWER DECOMPOSITION, CLASSIFICATION OF EARTH TERRAIN COVERS USING THE MODIFIED FOUR- COMPONENT SCATTERING POWER DECOMPOSITION, Boularbah Souissi (1), Mounira Ouarzeddine (1),, Aichouche Belhadj-Aissa (1) USTHB, F.E.I, BP N

More information

CS 231A Computer Vision (Fall 2012) Problem Set 3

CS 231A Computer Vision (Fall 2012) Problem Set 3 CS 231A Computer Vision (Fall 2012) Problem Set 3 Due: Nov. 13 th, 2012 (2:15pm) 1 Probabilistic Recursion for Tracking (20 points) In this problem you will derive a method for tracking a point of interest

More information

Detecting and Segmenting Humans in Crowded Scenes

Detecting and Segmenting Humans in Crowded Scenes Detecting and Segmenting Humans in Crowded Scenes Mikel D. Rodriguez University of Central Florida 4000 Central Florida Blvd Orlando, Florida, 32816 mikel@cs.ucf.edu Mubarak Shah University of Central

More information

3D Spatial Layout Propagation in a Video Sequence

3D Spatial Layout Propagation in a Video Sequence 3D Spatial Layout Propagation in a Video Sequence Alejandro Rituerto 1, Roberto Manduchi 2, Ana C. Murillo 1 and J. J. Guerrero 1 arituerto@unizar.es, manduchi@soe.ucsc.edu, acm@unizar.es, and josechu.guerrero@unizar.es

More information

Structured Models in. Dan Huttenlocher. June 2010

Structured Models in. Dan Huttenlocher. June 2010 Structured Models in Computer Vision i Dan Huttenlocher June 2010 Structured Models Problems where output variables are mutually dependent or constrained E.g., spatial or temporal relations Such dependencies

More information

Data-driven Depth Inference from a Single Still Image

Data-driven Depth Inference from a Single Still Image Data-driven Depth Inference from a Single Still Image Kyunghee Kim Computer Science Department Stanford University kyunghee.kim@stanford.edu Abstract Given an indoor image, how to recover its depth information

More information

Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation

Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation Large-Scale Traffic Sign Recognition based on Local Features and Color Segmentation M. Blauth, E. Kraft, F. Hirschenberger, M. Böhm Fraunhofer Institute for Industrial Mathematics, Fraunhofer-Platz 1,

More information

Estimating Human Pose in Images. Navraj Singh December 11, 2009

Estimating Human Pose in Images. Navraj Singh December 11, 2009 Estimating Human Pose in Images Navraj Singh December 11, 2009 Introduction This project attempts to improve the performance of an existing method of estimating the pose of humans in still images. Tasks

More information

A Hierarchical Compositional System for Rapid Object Detection

A Hierarchical Compositional System for Rapid Object Detection A Hierarchical Compositional System for Rapid Object Detection Long Zhu and Alan Yuille Department of Statistics University of California at Los Angeles Los Angeles, CA 90095 {lzhu,yuille}@stat.ucla.edu

More information

https://en.wikipedia.org/wiki/the_dress Recap: Viola-Jones sliding window detector Fast detection through two mechanisms Quickly eliminate unlikely windows Use features that are fast to compute Viola

More information

arxiv: v1 [cs.cv] 28 Sep 2018

arxiv: v1 [cs.cv] 28 Sep 2018 Camera Pose Estimation from Sequence of Calibrated Images arxiv:1809.11066v1 [cs.cv] 28 Sep 2018 Jacek Komorowski 1 and Przemyslaw Rokita 2 1 Maria Curie-Sklodowska University, Institute of Computer Science,

More information

Sea Turtle Identification by Matching Their Scale Patterns

Sea Turtle Identification by Matching Their Scale Patterns Sea Turtle Identification by Matching Their Scale Patterns Technical Report Rajmadhan Ekambaram and Rangachar Kasturi Department of Computer Science and Engineering, University of South Florida Abstract

More information

CS 223B Computer Vision Problem Set 3

CS 223B Computer Vision Problem Set 3 CS 223B Computer Vision Problem Set 3 Due: Feb. 22 nd, 2011 1 Probabilistic Recursion for Tracking In this problem you will derive a method for tracking a point of interest through a sequence of images.

More information

Viewpoint Invariant Features from Single Images Using 3D Geometry

Viewpoint Invariant Features from Single Images Using 3D Geometry Viewpoint Invariant Features from Single Images Using 3D Geometry Yanpeng Cao and John McDonald Department of Computer Science National University of Ireland, Maynooth, Ireland {y.cao,johnmcd}@cs.nuim.ie

More information

Supervised texture detection in images

Supervised texture detection in images Supervised texture detection in images Branislav Mičušík and Allan Hanbury Pattern Recognition and Image Processing Group, Institute of Computer Aided Automation, Vienna University of Technology Favoritenstraße

More information

Selection of Scale-Invariant Parts for Object Class Recognition

Selection of Scale-Invariant Parts for Object Class Recognition Selection of Scale-Invariant Parts for Object Class Recognition Gy. Dorkó and C. Schmid INRIA Rhône-Alpes, GRAVIR-CNRS 655, av. de l Europe, 3833 Montbonnot, France fdorko,schmidg@inrialpes.fr Abstract

More information

Part III: Affinity Functions for Image Segmentation

Part III: Affinity Functions for Image Segmentation Part III: Affinity Functions for Image Segmentation Charless Fowlkes joint work with David Martin and Jitendra Malik at University of California at Berkeley 1 Q: What measurements should we use for constructing

More information

Segmentation as Selective Search for Object Recognition in ILSVRC2011

Segmentation as Selective Search for Object Recognition in ILSVRC2011 Segmentation as Selective Search for Object Recognition in ILSVRC2011 Koen van de Sande Jasper Uijlings Arnold Smeulders Theo Gevers Nicu Sebe Cees Snoek University of Amsterdam, University of Trento ILSVRC2011

More information

Correcting User Guided Image Segmentation

Correcting User Guided Image Segmentation Correcting User Guided Image Segmentation Garrett Bernstein (gsb29) Karen Ho (ksh33) Advanced Machine Learning: CS 6780 Abstract We tackle the problem of segmenting an image into planes given user input.

More information

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy

BSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving

More information

CS 558: Computer Vision 13 th Set of Notes

CS 558: Computer Vision 13 th Set of Notes CS 558: Computer Vision 13 th Set of Notes Instructor: Philippos Mordohai Webpage: www.cs.stevens.edu/~mordohai E-mail: Philippos.Mordohai@stevens.edu Office: Lieb 215 Overview Context and Spatial Layout

More information

Closing the Loop in Scene Interpretation

Closing the Loop in Scene Interpretation Closing the Loop in Scene Interpretation Derek Hoiem Beckman Institute University of Illinois dhoiem@uiuc.edu Alexei A. Efros Robotics Institute Carnegie Mellon University efros@cs.cmu.edu Martial Hebert

More information

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION

MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION MULTI ORIENTATION PERFORMANCE OF FEATURE EXTRACTION FOR HUMAN HEAD RECOGNITION Panca Mudjirahardjo, Rahmadwati, Nanang Sulistiyanto and R. Arief Setyawan Department of Electrical Engineering, Faculty of

More information

Local invariant features

Local invariant features Local invariant features Tuesday, Oct 28 Kristen Grauman UT-Austin Today Some more Pset 2 results Pset 2 returned, pick up solutions Pset 3 is posted, due 11/11 Local invariant features Detection of interest

More information

2D Image Processing Feature Descriptors

2D Image Processing Feature Descriptors 2D Image Processing Feature Descriptors Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 Overview

More information

Local Image Features

Local Image Features Local Image Features Computer Vision CS 143, Brown Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial This section: correspondence and alignment

More information

STRUCTURAL EDGE LEARNING FOR 3-D RECONSTRUCTION FROM A SINGLE STILL IMAGE. Nan Hu. Stanford University Electrical Engineering

STRUCTURAL EDGE LEARNING FOR 3-D RECONSTRUCTION FROM A SINGLE STILL IMAGE. Nan Hu. Stanford University Electrical Engineering STRUCTURAL EDGE LEARNING FOR 3-D RECONSTRUCTION FROM A SINGLE STILL IMAGE Nan Hu Stanford University Electrical Engineering nanhu@stanford.edu ABSTRACT Learning 3-D scene structure from a single still

More information

Separating Objects and Clutter in Indoor Scenes

Separating Objects and Clutter in Indoor Scenes Separating Objects and Clutter in Indoor Scenes Salman H. Khan School of Computer Science & Software Engineering, The University of Western Australia Co-authors: Xuming He, Mohammed Bennamoun, Ferdous

More information

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching

CS 4495 Computer Vision A. Bobick. Motion and Optic Flow. Stereo Matching Stereo Matching Fundamental matrix Let p be a point in left image, p in right image l l Epipolar relation p maps to epipolar line l p maps to epipolar line l p p Epipolar mapping described by a 3x3 matrix

More information

Computer Vision for HCI. Topics of This Lecture

Computer Vision for HCI. Topics of This Lecture Computer Vision for HCI Interest Points Topics of This Lecture Local Invariant Features Motivation Requirements, Invariances Keypoint Localization Features from Accelerated Segment Test (FAST) Harris Shi-Tomasi

More information

Recognizing Apples by Piecing Together the Segmentation Puzzle

Recognizing Apples by Piecing Together the Segmentation Puzzle Recognizing Apples by Piecing Together the Segmentation Puzzle Kyle Wilshusen 1 and Stephen Nuske 2 Abstract This paper presents a system that can provide yield estimates in apple orchards. This is done

More information

Edges and Binary Images

Edges and Binary Images CS 699: Intro to Computer Vision Edges and Binary Images Prof. Adriana Kovashka University of Pittsburgh September 5, 205 Plan for today Edge detection Binary image analysis Homework Due on 9/22, :59pm

More information

Radar Target Identification Using Spatial Matched Filters. L.M. Novak, G.J. Owirka, and C.M. Netishen MIT Lincoln Laboratory

Radar Target Identification Using Spatial Matched Filters. L.M. Novak, G.J. Owirka, and C.M. Netishen MIT Lincoln Laboratory Radar Target Identification Using Spatial Matched Filters L.M. Novak, G.J. Owirka, and C.M. Netishen MIT Lincoln Laboratory Abstract The application of spatial matched filter classifiers to the synthetic

More information

Recovering Intrinsic Images from a Single Image

Recovering Intrinsic Images from a Single Image Recovering Intrinsic Images from a Single Image Marshall F Tappen William T Freeman Edward H Adelson MIT Artificial Intelligence Laboratory Cambridge, MA 02139 mtappen@ai.mit.edu, wtf@ai.mit.edu, adelson@ai.mit.edu

More information

Structured Completion Predictors Applied to Image Segmentation

Structured Completion Predictors Applied to Image Segmentation Structured Completion Predictors Applied to Image Segmentation Dmitriy Brezhnev, Raphael-Joel Lim, Anirudh Venkatesh December 16, 2011 Abstract Multi-image segmentation makes use of global and local features

More information

Histogram and watershed based segmentation of color images

Histogram and watershed based segmentation of color images Histogram and watershed based segmentation of color images O. Lezoray H. Cardot LUSAC EA 2607 IUT Saint-Lô, 120 rue de l'exode, 50000 Saint-Lô, FRANCE Abstract A novel method for color image segmentation

More information

Eye Detection by Haar wavelets and cascaded Support Vector Machine

Eye Detection by Haar wavelets and cascaded Support Vector Machine Eye Detection by Haar wavelets and cascaded Support Vector Machine Vishal Agrawal B.Tech 4th Year Guide: Simant Dubey / Amitabha Mukherjee Dept of Computer Science and Engineering IIT Kanpur - 208 016

More information

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images

Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images Applying Synthetic Images to Learning Grasping Orientation from Single Monocular Images 1 Introduction - Steve Chuang and Eric Shan - Determining object orientation in images is a well-established topic

More information

Summarization of Egocentric Moving Videos for Generating Walking Route Guidance

Summarization of Egocentric Moving Videos for Generating Walking Route Guidance Summarization of Egocentric Moving Videos for Generating Walking Route Guidance Masaya Okamoto and Keiji Yanai Department of Informatics, The University of Electro-Communications 1-5-1 Chofugaoka, Chofu-shi,

More information

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation

Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation ÖGAI Journal 24/1 11 Colour Segmentation-based Computation of Dense Optical Flow with Application to Video Object Segmentation Michael Bleyer, Margrit Gelautz, Christoph Rhemann Vienna University of Technology

More information

Chapter 9 Object Tracking an Overview

Chapter 9 Object Tracking an Overview Chapter 9 Object Tracking an Overview The output of the background subtraction algorithm, described in the previous chapter, is a classification (segmentation) of pixels into foreground pixels (those belonging

More information

Exploiting the High Dimensionality of Polarimetric Interferometric Synthetic Aperture Radar Observations

Exploiting the High Dimensionality of Polarimetric Interferometric Synthetic Aperture Radar Observations Exploiting the High Dimensionality of Polarimetric Interferometric Synthetic Aperture Radar Observations Robert Riley rriley@sandia.gov R. Derek West rdwest@sandia.gov SAND2017 11133 C This work was supported

More information

Scene Matching on Imagery

Scene Matching on Imagery Scene Matching on Imagery There are a plethora of algorithms in existence for automatic scene matching, each with particular strengths and weaknesses SAR scenic matching for interferometry applications

More information

Segmentation of Images

Segmentation of Images Segmentation of Images SEGMENTATION If an image has been preprocessed appropriately to remove noise and artifacts, segmentation is often the key step in interpreting the image. Image segmentation is a

More information

Boundaries and Sketches

Boundaries and Sketches Boundaries and Sketches Szeliski 4.2 Computer Vision James Hays Many slides from Michael Maire, Jitendra Malek Today s lecture Segmentation vs Boundary Detection Why boundaries / Grouping? Recap: Canny

More information

Motion Estimation. There are three main types (or applications) of motion estimation:

Motion Estimation. There are three main types (or applications) of motion estimation: Members: D91922016 朱威達 R93922010 林聖凱 R93922044 謝俊瑋 Motion Estimation There are three main types (or applications) of motion estimation: Parametric motion (image alignment) The main idea of parametric motion

More information

C. Premsai 1, Prof. A. Kavya 2 School of Computer Science, School of Computer Science Engineering, Engineering VIT Chennai, VIT Chennai

C. Premsai 1, Prof. A. Kavya 2 School of Computer Science, School of Computer Science Engineering, Engineering VIT Chennai, VIT Chennai Traffic Sign Detection Via Graph-Based Ranking and Segmentation Algorithm C. Premsai 1, Prof. A. Kavya 2 School of Computer Science, School of Computer Science Engineering, Engineering VIT Chennai, VIT

More information

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection

CHAPTER 3. Single-view Geometry. 1. Consequences of Projection CHAPTER 3 Single-view Geometry When we open an eye or take a photograph, we see only a flattened, two-dimensional projection of the physical underlying scene. The consequences are numerous and startling.

More information

Texton Clustering for Local Classification using Scene-Context Scale

Texton Clustering for Local Classification using Scene-Context Scale Texton Clustering for Local Classification using Scene-Context Scale Yousun Kang Tokyo Polytechnic University Atsugi, Kanakawa, Japan 243-0297 Email: yskang@cs.t-kougei.ac.jp Sugimoto Akihiro National

More information

Development in Object Detection. Junyuan Lin May 4th

Development in Object Detection. Junyuan Lin May 4th Development in Object Detection Junyuan Lin May 4th Line of Research [1] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection, CVPR 2005. HOG Feature template [2] P. Felzenszwalb,

More information

Epithelial rosette detection in microscopic images

Epithelial rosette detection in microscopic images Epithelial rosette detection in microscopic images Kun Liu,3, Sandra Ernst 2,3, Virginie Lecaudey 2,3 and Olaf Ronneberger,3 Department of Computer Science 2 Department of Developmental Biology 3 BIOSS

More information

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor

COSC160: Detection and Classification. Jeremy Bolton, PhD Assistant Teaching Professor COSC160: Detection and Classification Jeremy Bolton, PhD Assistant Teaching Professor Outline I. Problem I. Strategies II. Features for training III. Using spatial information? IV. Reducing dimensionality

More information

Topics to be Covered in the Rest of the Semester. CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester

Topics to be Covered in the Rest of the Semester. CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester Topics to be Covered in the Rest of the Semester CSci 4968 and 6270 Computational Vision Lecture 15 Overview of Remainder of the Semester Charles Stewart Department of Computer Science Rensselaer Polytechnic

More information