Efficient pictograph detection

Dietrich Buesching
TU Muenchen, Fakultaet fuer Informatik, FG Bildverstehen, Munich, Germany

1 Introduction

Pictographs are ubiquitous in many technical environments. Usually they are of high contrast and positioned so that they are visible from many directions. This facilitates their detection and makes it interesting for several tasks (e.g. automatic navigation of mobile robots, traffic sign recognition). Using pictographs for automatic navigation could make the installation of special purpose landmarks unnecessary.

In our work we allow for 6 degrees of freedom in the pose of the target pictograph. This makes simple comparisons (e.g. correlation) of model and image very time consuming. Instead we employ recognition by alignment (e.g. [5]). This method proceeds in several steps: 1) Feature points are detected in model and image. Possible features are for example bitangents (lines which are tangent to two different sections of a curve, usually to the two sides of a concavity) and corners. 2) Pose hypotheses are generated by matching model and image features. 3) Hypotheses are verified by comparing model and image edges.

We use affine transformations

$\begin{pmatrix} x_{img} \\ y_{img} \end{pmatrix} = A \begin{pmatrix} x_{mod} \\ y_{mod} \end{pmatrix} + b$

as approximations of the transformation from model to image. This is reasonable as long as differences in depth of object parts are small in comparison to the distance to the object.

We extend the alignment approach to planar object recognition by two contributions. We employ bitangents on pairs of image regions (see fig. 1, right) for feature point calculation. Constraints limit the number of possible pairs that have to be considered during matching. Our second contribution is the selection of a small set of stable model features in a training phase. Similar to [1] we call these features "focus features". During recognition all selected model focus features are matched with all appropriate image features. The selection of a small set of focus features is therefore critical for efficient recognition.

2 The recognition algorithm

These ideas were integrated in a system for pictograph detection. Recognition starts with a segmentation of the image into regions of homogeneous brightness. In the next step feature points are detected. Since an affine transformation is used, a minimum of three feature points is necessary for hypothesis generation. The features used are simple bitangents on one region (a), pairs of such bitangents (b), triples of consecutive corners on a region boundary (c) and bitangents on the convex hulls of pairs of image regions (d).
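To make the hypothesis-generation step concrete: the following is a minimal sketch (not the paper's code; plain numpy, with hypothetical function names) of recovering the six parameters of the affine model from three matched feature points. With more than three correspondences, as in the least-squares cases mentioned below, `np.linalg.lstsq` would replace `np.linalg.solve`.

```python
import numpy as np

def affine_from_3_points(model_pts, image_pts):
    """Solve x_img = A @ x_mod + b from three point correspondences.

    model_pts, image_pts: arrays of shape (3, 2). Three non-collinear
    correspondences determine the six affine parameters exactly.
    """
    M = np.zeros((6, 6))
    rhs = np.zeros(6)
    for i, ((xm, ym), (xi, yi)) in enumerate(zip(model_pts, image_pts)):
        M[2 * i]     = [xm, ym, 0, 0, 1, 0]   # a11*xm + a12*ym + b1 = xi
        M[2 * i + 1] = [0, 0, xm, ym, 0, 1]   # a21*xm + a22*ym + b2 = yi
        rhs[2 * i], rhs[2 * i + 1] = xi, yi
    p = np.linalg.solve(M, rhs)               # raises if the points are collinear
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    b = p[4:6]
    return A, b
```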

Some comments on these feature types:

a) Bitangents are detected with the algorithm described in [3]. The point with maximal distance to the bitangent on the part of the curve between the two bitangency points is the third affine invariant point (see figure 1, left).

b) Some pictographs consist of regions with more than one concavity. Often a least-squares solution for the transform, based on matching pairs of bitangents, is more robust.

c) Corners are not affine invariant features, but some salient corners can be detected over a wide range of transformations. By using three consecutive corners the number of possible pose hypotheses remains linear in the number of corners.

d) With an algorithm described in [2], bitangents on pairs of image regions can be computed in an expected time of O(log²(n)), where n is the number of vertices on the convex hulls of the regions. This improves on a previous algorithm [4] with a complexity of O(n). A focus feature consists of two bitangents with four invariant points (again a least-squares solution is used for pose computation). Figure 5 illustrates feature types a, b and d. A corner feature is used in the lower part of figure 3.

Figure 1: Left: Bitangent on a concavity. The points of tangency (P1 and P2) and the point of the curve between these two points with maximum distance to the bitangent (P3) are affine invariant. Right: Two disjoint convex regions and the four bitangents touching both regions.

2.1 Verification of pose hypotheses

The verification of the generated pose hypotheses is very similar to the method used in [5]. Predicted model edgels are verified by finding an image edgel with similar orientation and the same polarity close to the predicted position. Currently used tolerances are 5 pixels in position and 30° in orientation. The model and image edges used are the boundaries of the regions found during segmentation. As in [5], verification is accelerated by a distance transform of the binary edge image. The fraction of transformed model edgels found in the image is used as a measure of the quality of a hypothesis. A hypothesis is accepted if this fraction is above a threshold computed during the selection of focus features (section 3). A further acceleration is possible by using only a fraction (currently a fifth) of the model edgels in a preliminary verification step. If the fraction of verified edgels is less than half of the threshold for acceptance, the hypothesis is immediately rejected without checking the remaining model edgels.
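A minimal sketch of this first verification stage (an illustrative reimplementation, not the authors' code; the orientation and polarity tests are omitted). The distance transform of the binary edge image gives, for every pixel, the distance to the nearest image edgel, so each transformed model edgel is tested in constant time, and a preliminary pass over a fifth of the edgels allows early rejection:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def verify_hypothesis(model_edgels, A, b, edge_image,
                      accept_threshold, pos_tol=5.0):
    """Chamfer-style verification of an affine pose hypothesis.

    model_edgels: (N, 2) array of model edge points (x, y).
    edge_image:   boolean array, True where an image edgel was found.
    Returns the fraction of transformed model edgels lying within
    pos_tol pixels of some image edgel (0.0 on early rejection).
    """
    # Distance to the nearest edgel for every pixel; in practice this
    # is computed once per image, not once per hypothesis.
    dist = distance_transform_edt(~edge_image)

    def fraction_verified(pts):
        ij = np.round(pts[:, ::-1]).astype(int)        # (x, y) -> (row, col)
        inside = ((ij[:, 0] >= 0) & (ij[:, 0] < dist.shape[0]) &
                  (ij[:, 1] >= 0) & (ij[:, 1] < dist.shape[1]))
        ok = np.zeros(len(pts), dtype=bool)
        ok[inside] = dist[ij[inside, 0], ij[inside, 1]] <= pos_tol
        return ok.mean()

    transformed = model_edgels @ A.T + b
    # Preliminary step: check every fifth edgel and reject the hypothesis
    # immediately if fewer than half of the acceptance threshold are verified.
    if fraction_verified(transformed[::5]) < accept_threshold / 2:
        return 0.0
    return fraction_verified(transformed)
```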
For many pictographs the reliability of verification can be increased by additionally matching intensities. This second stage of verification is only invoked if the fraction of edgels found in the first stage exceeds the threshold for recognition. Intensities are matched at points perpendicular to the matched edge segments. In the present implementation points are sparsely sampled (every third edge segment, 2 pixel spacing in the direction normal to the edges) with a maximum distance of 8 pixels to the model edges. The position of matched image points is calculated relative to the matched image edges rather than to the transformed model edges. This ensures that the tolerance of chamfer matching to small positional deviations also applies to the matching of intensities. Information about the position of the matched image edges can be propagated during the calculation of the distance transform.

In order to evaluate the match of intensities, the histograms of the image pixels matched to black and to white model pixels are analyzed. Pairs of peaks in these two histograms with a plausible difference in intensity are considered. A pair of peaks is selected so that as many matched pixels as possible are similar to the intensity of the peaks (differences can be attributed to noise). The ratio of correctly matched intensity values enters the overall match quality measure. It would also be possible to use normalized cross correlation to compare the intensities of matched model and image pixels. However, for partially occluded objects there are often spurious matches of model edges with the occluding clutter.

If the intensity of nearby pixels is used for correlation, the results are difficult to predict. The analysis of histograms appears more robust because outliers are simply ignored. For pictographs with thin lines and other fine details, intensity information was not very reliable (presumably due to smoothing). In the experimental evaluation of section 4, matching of intensities was therefore only used for the recognition of partially occluded objects (which did not contain many fine structures).

Figure 2: Left: A correct object hypothesis; for white model edges a corresponding image edge was found. Right: Histogram of the intensities of image pixels corresponding to model pixels that are close to the matched edge pixels (see text).

2.2 Constraints for the matching of pairs of regions

The main problem with using pairs of regions for the generation of pose hypotheses is the quadratic growth of the number of pairs with the number of regions. Constraints on the image regions that can be matched with a given pair of model regions are used to prune this search space.

One of the constraints used for the matching of regions is based on the ratio of the areas of a pair of model regions and the corresponding pair of image regions. This ratio is invariant under affine transformations. In the current implementation some differences between model and image regions caused by instabilities in the localization of edges (up to 0.7 pixels) are accepted.

Often some prior knowledge of the pose of an object is available. This knowledge implies constraints on the possible affine transformations from model to image. A simple constraint concerns the areas of image regions that can be matched to model regions:

$s_{min}^2 \, A_{mod} \le A_{img} \le s_{max}^2 \, A_{mod}$

Here $s_{min}$ and $s_{max}$ are bounds on the lengths of the vectors $(a_{11}, a_{21})^T$ and $(a_{12}, a_{22})^T$ derived from the transformation matrix $A$. The lengths of these vectors indicate the scaling of the x and y components of the model; they are influenced by the distance of the object and its rotation out of the image plane. $A_{mod}$ and $A_{img}$ are the areas of corresponding model and image regions.

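Both constraints reduce to cheap predicates. The sketch below is illustrative only; `rel_tol` is a hypothetical relative tolerance standing in for the edge-localization allowance mentioned above:

```python
def pair_area_ratio_compatible(a_mod1, a_mod2, a_img1, a_img2, rel_tol=0.1):
    """The area ratio of the two regions of a pair is affine invariant,
    so a model pair and an image pair can only correspond if the ratios
    agree up to a tolerance for edge-localization noise."""
    r_mod = a_mod1 / a_mod2
    r_img = a_img1 / a_img2
    return abs(r_img - r_mod) <= rel_tol * r_mod

def scale_bounds_compatible(a_mod, a_img, s_min, s_max):
    """Prior bounds s_min, s_max on the column lengths of A imply
    s_min^2 * A_mod <= A_img <= s_max^2 * A_mod."""
    return s_min ** 2 * a_mod <= a_img <= s_max ** 2 * a_mod
```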
Another constraint on possible correspondences of model and image regions is based on the amount of stretching or compression that is possible for a given vector between points in the model. Bounds on this value follow from the limited rotation of the pictograph out of the image plane. They can be used to derive a constraint relating the ratio of the areas of model and image regions to the distance between the regions of a pair. The transformation matrix $A$ can be written as a composition of a rotation described by a matrix $R_{w_1}$, a scaling in x- and y-direction and another rotation with matrix $R_{w_2}$:

$A = R_{w_1} \begin{pmatrix} s_x & 0 \\ 0 & s_y \end{pmatrix} R_{w_2}$

It is assumed that the ratio between the two values $s_x$ and $s_y$ is at most a value $skew_{max}$. Without loss of generality we suppose that $s_x \le s_y$; it follows that $1 \le s_y / s_x \le skew_{max}$. For the distances between the centers of the image and model regions, $d_{img}$ and $d_{mod}$, we define $d_{rel} = d_{img} / d_{mod}$. All distances within the object are deformed by a factor that is bounded by $s_x$ and $s_y$:

$s_x \le \frac{d_{img}}{d_{mod}} \le s_y \;\Longrightarrow\; s_x^2 \le d_{rel}^2 \le s_y^2 \quad (1)$

In addition, $A_{img} / A_{mod} = s_x s_y$ (according to the definition of $s_x$ and $s_y$). Now it follows that

$\frac{A_{img}}{A_{mod}} = s_x^2 \, \frac{s_y}{s_x} \le s_x^2 \, skew_{max} \le d_{rel}^2 \, skew_{max}$

and also

$\frac{A_{img}}{A_{mod}} = s_y^2 \, \frac{s_x}{s_y} \ge \frac{s_y^2}{skew_{max}} \ge \frac{d_{rel}^2}{skew_{max}}$

If $s_y \le s_x$, and therefore $1 \le s_x / s_y \le skew_{max}$, inequality (1) is reversed and the resulting bounds remain the same.

A further constraint requires that the mean intensity of an image region is within loose bounds (currently 150) of the intensity of the corresponding model region.

For the recognition of the pictograph in the left part of figure 6 the range of possible scales was limited to between 0.2 and 4.0 and the rotation out of the image plane was assumed to be less than … . For this example the mentioned constraints reduced the number of pose hypotheses generated by pairs of image regions from … to … .
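The derived bound ties the area ratio of a matched region pair to the squared ratio of their center distances; as a predicate (again only an illustrative sketch under the assumptions above):

```python
def skew_distance_compatible(a_mod, a_img, d_mod, d_img, skew_max):
    """Check d_rel^2 / skew_max <= A_img / A_mod <= d_rel^2 * skew_max,
    the bound derived from the limited out-of-plane rotation."""
    d_rel = d_img / d_mod
    ratio = a_img / a_mod
    return d_rel ** 2 / skew_max <= ratio <= d_rel ** 2 * skew_max
```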
3 Feature selection

The major goal of the feature selection phase is to find stable focus features, which are necessary to compute good pose hypotheses. It is possible to compare features by analysing their properties; for example, the features used tend to give good pose hypotheses when the distances between their feature points are large. However, determining feature stability by analysis or heuristics has drawbacks: feature detection is a complex process with several steps, and it is difficult to describe its outcome by a few simple rules. Therefore, in this work we evaluate the stability of features empirically, using a set of training images of the given object (pictograph). Object poses in the training images should be representative of the range of object poses expected in detection tasks. In the performed experiments a high resolution image of a given pictograph was artificially transformed with different transformations in order to obtain the training images.

One of the training images is used for the generation of candidate focus features. These are all bitangents, triples of consecutive corners along a region boundary, and pairs of bitangents on two image regions found in this training image. In order to limit the number of inter-region bitangent features, only regions with an area above a given threshold are used. For every suitable pair of image regions two features are generated: the outer bitangents 1 and 2 and the inner bitangents 3 and 4. The stability of recognition with the different candidate focus features is measured on the remaining training images.

Each focus feature is used to find the object in these images. For this task the algorithms used in the production phase of recognition, described in the previous sections, are employed. The stability of recognition is measured as the fraction of transformed model edgels found close to corresponding image edgels for the correct match of the tested focus feature. This fraction is shown for two candidate focus features in figure 3.

The transformation from model to image pose is known for the training images, so it is possible to decide whether a generated pose hypothesis is correct. For every candidate focus feature and every training image the quality of the correct pose hypothesis is determined. Based on this information, one or more focus features are selected so that the minimum match quality of the best focus feature over all images,

$M_{mingood} = \min_{im \in Im} \; \max_{f \in F} \; \mathrm{matchquality}(f, im)$

(Im: test images, F: selected focus features), exceeds a given threshold. A greedy search strategy is used for this problem.

Figure 3: The evaluation of two focus features. Upper row: models with the focus features indicated. Lower row: test of the focus features on real images (verified edgels: white). Fraction in the left pictograph (bitangents on the leftmost and rightmost parts of the suitcase): 0.93; in the right pictograph (three consecutive corners on the right side of the glass): 0.73.

In order to determine the detection threshold for the production phase, the maximum match quality $M_{maxbad}$ obtained with the selected focus features on a set of training images not containing the model object is computed. The threshold is then calculated as

$\min\left(M_{mingood} - \alpha, \; \frac{M_{mingood} + M_{maxbad}}{2} + \beta\right)$

(current implementation: $\alpha = 1.0$, $\beta = 2.5$).
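The greedy selection can be sketched as follows (illustrative only; `quality[f, im]` stands for the match quality of the correct hypothesis generated by candidate feature f in training image im, with 0 where the feature was not found). Features are added until every training image is covered by at least one selected feature whose match quality reaches the threshold:

```python
import numpy as np

def select_focus_features(quality, threshold):
    """Greedy selection of focus features.

    quality: (num_features, num_images) array of match qualities of the
    correct pose hypotheses on the training images.
    Selects features until the minimum over images of the best selected
    feature's quality reaches the threshold (a set-cover greedy).
    """
    selected = []
    best = np.zeros(quality.shape[1])   # best quality per image so far
    while best.min() < threshold:
        uncovered = best < threshold
        # For each candidate, count the still-uncovered images it covers.
        covers = (quality[:, uncovered] >= threshold).sum(axis=1)
        f = int(covers.argmax())
        if covers[f] == 0:              # no candidate helps; threshold unreachable
            break
        selected.append(f)
        best = np.maximum(best, quality[f])
    return selected
```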

4 Results and discussion

The proposed recognition method was used in experiments with 24 pictographs found at airports; they are shown in fig. 4. For every pictograph the selected focus features were used to detect the unoccluded pictograph in 10 test images. 17 pictographs were detected in all 10 test images, 3 were missed once, and the remaining 4 were not detected in 3-5 images. Two false positive detections occurred, both for a single pictograph. Most of the difficulties with recognition were caused by an unstable segmentation of thin lines or small regions. The left part of fig. 6 shows a typical example of the recognition of an unoccluded pictograph. Fig. 5 illustrates the selected focus features for some pictographs. For all pictographs together, 21 bitangents on pairs of regions, 3 simple bitangents, 5 pairs of simple bitangents and 2 triples of corners were selected. This demonstrates the high stability of hypotheses generated with bitangents on pairs of regions.

Recognition time was measured on an HP K-260 workstation (180 MHz). The segmentation of the test images of size … pixels took 3-4 seconds; the subsequent matching phase took between 1 and 3 s.

Figure 4: The 24 pictographs used.

Figure 5: Examples of pictographs and the selected focus features: bitangents on pairs of image regions, except for "glass" (pair of bitangents) and "aircraft" (single bitangent).

Altogether 31 focus features were selected from the 996 candidate features of all 24 pictographs; without feature selection, matching would therefore have taken about 32 times as long. A major fraction of the candidate features were bitangents on region pairs. If these bitangents are used for complex models, a selection of focus features is almost mandatory.

In order to evaluate the recognition of partially occluded objects, views of 6 pictographs were used (altogether 72 images). The degree of occlusion was approximately 50% in these images. Occlusions were anticipated during feature selection, and different features were chosen than in the unoccluded case. The target pictograph was found in 59 cases and 22 false positives occurred. With the additional use of intensity information the results improved to 69 correct recognitions and 7 false positive detections. For the comparison of intensities, model and image points close to the matched edgels were used. The right part of figure 6 shows a typical example of a successful recognition.

Figure 6: Examples of successful detections in test images. Left: unoccluded pictograph. Middle: partially occluded object (the features used were bitangents on the regions "front light of car" and "body of key"). Right: magnified versions of the found pictographs.

The experimental results show that our approach allows reliable and fast recognition of most of the objects used. For pictographs with fine details, more reliable segmentation methods (e.g. as used in OCR) are likely to improve recognition accuracy. The entire approach could be made more powerful by using additional feature types (e.g. pairs of straight lines).

References

[1] R.C. Bolles and R.A. Cain. Recognizing and locating partially visible objects: the local-feature-focus method. International Journal of Robotics Research, 1(3):57-82, 1982.

[2] D. Buesching. Efficient pictograph detection using bitangents. Technical report, Technische Universitaet Muenchen, Forschungsgruppe Bildverstehen.

[3] D. Buesching. Efficiently finding bitangents. In International Conference on Pattern Recognition, Track A, pages 428-432, Vienna, August 1996.

[4] F.P. Preparata and S.J. Hong. Convex hulls of finite sets of points in two and three dimensions. Communications of the ACM, 20(2):87-93, February 1977.

[5] C.A. Rothwell, A. Zisserman, D.A. Forsyth, and J.L. Mundy. Planar object recognition using projective shape representation. International Journal of Computer Vision, 16:57-99, 1995.
