2D Matching of 3D Moving Objects in Color Outdoor Scenes *
Marie-Pierre Dubuisson and Anil K. Jain
Department of Computer Science, Michigan State University, East Lansing, MI
dubuisso@cps.msu.edu, jain@cps.msu.edu

Abstract

This paper describes an object matching system which is able to extract objects of interest from outdoor scenes and match them. Our application (in the domain of IVHS) involves measuring the average travel time in a road network. The extraction of the object of interest is performed by fusing multiple cues, including motion, color, edges, and model information. Two objects extracted from images captured by two independent cameras at different times are then matched to evaluate their similarity. Color indexing based on histogram matching is used to avoid matching all possible pairs of objects. To resolve ambiguities, further matching is done by measuring the Hausdorff distance between two sets of edge points. The object matching system was given 2 sets of 40 vehicles. It was able to identify 23 of the 30 correct matches, and all the false matches were rejected. Color indexing reduced the number of candidates for a match from 40 to 2. This matching accuracy is adequate to obtain a reliable estimate of the average travel time.

1 Introduction

Matching is a generic operation in image processing and computer vision which is used to determine the similarity of two entities (points, curves, subimages, objects, etc.) of the same type. A number of different operations fall under the broad term of matching. Template matching is performed by evaluating the cross-correlation of a template and the corresponding window in the image. Image matching consists of determining the correspondence between image features, such as points, lines, corners, or homogeneous regions, in two different images. We define object matching as a procedure for deciding whether two different 3D objects are identical based on their 2D images.
The two objects are sensed, possibly in different backgrounds, by two independent cameras at different times, but in approximately the same pose. An object matching system is simpler to construct than an object recognition system [8]. In an object matching system, precise models of the 3D objects are usually not available and not required. Instead, we have two (or more) sensed images of the objects available to us. The objects are observed in approximately the same aspect, are entirely contained in the image, are not occluded, but are sensed in different environments. The output of an object matching system consists of a measure indicating the similarity of two objects, independent of the backgrounds in the two images. It is important to realize that a robust extraction of the object of interest needs to be performed before any matching can occur, due to the different and complex backgrounds in the two images. Our application for object matching is in the domain of the IVHS (Intelligent Vehicle Highway Systems) program [11]. One of the two primary objectives of IVHS is to reduce travel time by assisting the traveler to avoid congested traffic situations and find the minimum travel time path through a road network. One solution to the problem of measuring the average time for vehicles to go from one point to another in a road network is to match license plates. However, the camera needs to be focused on the license plate, and the license plate has to be large enough in the sensed image so that individual characters can be recognized. These constraints are difficult to overcome in dense traffic situations, especially when the vehicles are moving at high speed.

* Research supported by a grant from the U.S. Department of Transportation through the Great Lakes Center for Truck Transportation Research.
We are studying the feasibility of the following approach: cameras are placed on the sides of the road to capture the images, and then vehicles are matched with previously observed vehicles using color and edge features. In this application, it is not necessary to find all the correct matches; it is enough to find a sufficient number of reliable matches in order to compute some statistical information about the travel time of a vehicle from one point to another in a road network. A block diagram of our object matching system is shown in Figure 1. Two cameras are placed about a mile apart from each other in a road network. Image analysis is performed on both the road image sequences to extract the objects of interest (the front vehicle in the lane closest to the camera). The object extraction procedure is described in Figure 2. An integration of several cues, including a motion segmentation mask of the moving areas in the image sequence, homogeneous color regions, and edge information, is used to identify the moving objects in the image. Given a generic model of a vehicle, incremental grouping and constrained search are then used to locate the front vehicle.

Figure 1: A block diagram of the object matching system.

Figure 2: A block diagram of the object extraction system.

The object extraction procedure gives a set of edge points both inside and on the contour of the vehicle. A small window is extracted in the vicinity of the vehicle doors to characterize the color of the vehicle. The vehicles observed by the first camera are stored in a database as a set of edge points and a color patch. Color histogram matching is used as an indexing step to reduce the number of potential matches. Edge matching is then performed on the reduced database to further refine the match and compute the similarity between the vehicles.

2 Segmentation of Moving Objects

Our method to extract moving objects from an image sequence was first published in [4]; it was then enhanced and reported in [5]. The method is based on the fusion of a motion segmentation mask obtained by image subtraction and a set of homogeneous color regions obtained by the integration of a split-and-merge algorithm and edges resulting from the Canny edge detector. The flowchart of the algorithm is given in the top half of Figure 2. Motion segmentation based on image subtraction is used to obtain an approximate location of the moving objects in the input image sequence. Three color frames f1, f2, and f3 containing the moving object are extracted at times t1, t2, and t3, respectively, from the image sequence. The difference image, defined as

    d(i,j) = max_{k=r,g,b} |f1^k(i,j) - f2^k(i,j)| x |f2^k(i,j) - f3^k(i,j)|    (1)

is thresholded to keep only those pixels where a significant change has taken place. A motion segmentation mask M_s corresponding to the moving areas is generated by filling in the gaps in the thresholded image. For color segmentation, the regions obtained by the split-and-merge algorithm [6], adapted to color images, are further merged using edge information obtained from the Canny edge detector [2], based on three criteria (boundary strength, color similarity, and connectivity measure).
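As a concrete illustration, the three-frame double difference of Eq. (1) can be sketched in numpy. This is a minimal sketch under our reading of the garbled printed equation (per-channel products of successive frame differences, maximized over the color channels); the function and variable names are ours, not the authors'.

```python
import numpy as np

def motion_mask(f1, f2, f3, thresh):
    """Threshold the per-pixel difference image
    d(i,j) = max over channels k of |f1_k - f2_k| * |f2_k - f3_k|."""
    d = np.max(np.abs(f1 - f2) * np.abs(f2 - f3), axis=2)
    return d > thresh

# Toy example: static background, object present only in the middle frame.
bg = np.zeros((8, 8, 3))
f1, f3 = bg.copy(), bg.copy()
f2 = bg.copy()
f2[3:5, 3:5, :] = 1.0
mask = motion_mask(f1, f2, f3, thresh=0.5)
print(int(mask.sum()))  # 4 pixels flagged as moving
```

The double product suppresses pixels that change between only one pair of frames (e.g., uncovered background), which is why three frames are used rather than a single subtraction.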
This procedure results in a set of regions R = {R1, R2, ..., Rp}, which are fused with the motion segmentation mask M_s to produce an accurate mask of the moving object, M_o. The motion segmentation mask M_s is first smoothed using active contours (or snakes), as described by Kass et al. [9]. We have used the dynamic programming approach proposed by Amini et al. [1] to minimize the total energy. The contour generated by the snakes algorithm is then used to decide whether a region R_i belongs to a moving object(s) and construct the new mask M_o. Figure 3 compares the accuracy of the motion segmentation mask M_s and the moving object mask M_o. Figure 4 shows a few examples of the contours of moving vehicles in different backgrounds which were extracted by our algorithm. These results appear reasonable. We have also shown that the same algorithm produces accurate contours of moving objects in different domains [5].

3 Model-based Segmentation

In the presence of multiple moving vehicles in the scene (Figures 4(e)-(f)), our segmentation algorithm groups them together to form a single moving object.
Figure 4: Contour extraction of moving vehicles.

Figure 5: A generic model of a vehicle.

Figure 6: Fitting a generic model of a vehicle to the edge points: (a) single vehicle in the scene; (b) two vehicles in the scene.

For our application, the front vehicle (closest to the camera) needs to be isolated. In the situation shown in Figure 4(e), if the velocities of the vehicles were known, they could probably be used to separate the two vehicles, because they are moving in opposite directions. In the case of Figure 4(f), however, the two vehicles are moving in the same direction, and the velocity information alone would not be precise enough to separate them. Our algorithm for the extraction of the front vehicle is presented in the bottom half of Figure 2. We use a top-down approach to find an instance of the vehicle model in the image. Of course, it is not feasible to create a model for all possible vehicles that can be found on roads and then perform object recognition; the resulting model database would be too large. Therefore, we have defined a generic and rather coarse model of a vehicle consisting of five linear features phi_i, i = 1, ..., 5, as shown in Figure 5. The edge points generated by color segmentation which are inside the mask M_o, and their line segment approximations, are used to find the model features in the image. Using the constraints on the lengths of the line segments and the angles between them in the model, we can use incremental grouping to fit the model to the data. Note that since we are looking for only the front vehicle, it is unlikely that any of its features will be occluded in the scene. Incremental grouping using constrained search for non-occluded objects can be defined by the following algorithm:

- find phi_1, the first feature in the model;
- for i = 2, ..., N, where N is the number of features, find phi_i constrained by all phi_j, j = 1, ..., i-1.
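The incremental grouping loop above can be sketched as a depth-first constrained search over candidate line segments. The perpendicularity test below is a hypothetical stand-in for the paper's actual length and angle constraints (Figure 5), which are not fully legible in the source; all names and the tolerance value are ours.

```python
import math

TOL = 15.0  # angular tolerance in degrees (an assumed value)

def angle_deg(seg):
    """Orientation of a line segment, folded into [0, 180)."""
    (x1, y1), (x2, y2) = seg
    return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

def compatible(seg, chosen):
    """Stand-in constraint: consecutive features roughly perpendicular."""
    diff = abs(angle_deg(seg) - angle_deg(chosen[-1])) % 180.0
    diff = min(diff, 180.0 - diff)
    return abs(diff - 90.0) < TOL

def incremental_grouping(segments, n_features):
    """Fix feature 1, then find each feature i constrained by the
    previously fixed features, backtracking on failure."""
    def grow(chosen, pool):
        if len(chosen) == n_features:
            return chosen
        for j, seg in enumerate(pool):
            if compatible(seg, chosen):
                found = grow(chosen + [seg], pool[:j] + pool[j + 1:])
                if found:
                    return found
        return None
    for j, seg in enumerate(segments):
        found = grow([seg], segments[:j] + segments[j + 1:])
        if found:
            return found
    return None

# Five alternating horizontal/vertical segments plus a 45-degree distractor.
s1, s2 = ((0, 0), (4, 0)), ((4, 0), (4, 2))
s3, s4 = ((4, 2), (0, 2)), ((0, 2), (0, 0))
s5 = ((0, 0), (2, 0))
distractor = ((0, 0), (3, 3))
fit = incremental_grouping([distractor, s1, s2, s3, s4, s5], n_features=5)
```

The search tries the distractor as phi_1 first, fails to extend it, backtracks, and then fits the five mutually perpendicular segments, illustrating how constraints prune the candidate set at each step.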
The five features which are extracted from the image are then joined to build a model of the vehicle. Figure 6 shows examples of the extracted model (superimposed on the edges) in an image containing only one vehicle and in an image containing two moving vehicles. This top-down approach produces a very crude mask M_c of the front vehicle. A final mask M_f of the front vehicle is obtained by combining M_o, M_c, and the set of homogeneous color regions R = {R1, ..., Rp}. Let n_i^f denote the number of pixels in region R_i that are inside the mask M_c intersected with M_o. Region R_i is decided to be part of the final mask M_f if the ratio n_i^f / n_i exceeds a threshold. A 7 x 7 averaging filter is also applied before the final mask M_f is obtained. Figure 7 shows examples of the front vehicle extraction process applied to 6 different image sequences containing one or more moving vehicles, where the contour of the final mask of the front vehicle M_f is overlaid on the input image. These contours show the robustness of our model-based segmentation scheme.

4 Color Indexing

Most object recognition techniques first generate a number of hypotheses about the identity and pose of the object, and then select the best hypothesis. Since the verification stage for selecting the best hypothesis is often very time consuming, particularly when the model database is large, indexing methods have been frequently used in object recognition to reduce the number of hypotheses generated. Indexing is also needed in our system to avoid expensive edge matching between all possible pairs of objects. In our application, the object database contains the vehicles observed by camera 1 (see Figure 1). When a vehicle is detected by the second camera, it is first compared with all the vehicles in the database using color information. More precise matching is done only if the colors of the two vehicles are sufficiently similar.
Figure 7: Extraction of the front vehicle.

Figure 8: Location of the color patch.

Swain and Ballard [10] proposed a method for comparing color images using histograms. Multi-dimensional color histograms are compared using a minimum operator. We have slightly modified their measure to handle object matching, instead of matching an isolated model and a scene containing the model. We have used RGB color features. The color histogram matching score S_c between two histograms H1(r,g,b) and H2(r,g,b) is defined as follows:

    S_c(H1, H2) = sum_{r,g,b} min(H1(r,g,b), H2(r,g,b)) / min(|H1|, |H2|)    (2)

where |.| denotes the number of pixels used to compute the histogram. Note that S_c(H1, H2) takes values between 0 and 1, and S_c(H1, H2) = 1 if and only if one of the histograms is totally included in the other one.

Initial experiments on color matching indicated that the entire vehicle should not be used to compute the 3D color histogram. Since all the vehicles typically have black wheels and grayish hubcaps and windows, the 3D color histogram always shows a peak for these colors, no matter which vehicle is being observed. Therefore, we have decided to use only a part of the vehicle to compute the 3D color histogram. A small color patch (20 x 50 pixels) is extracted near the location of the vehicle doors, as shown in Figure 8, using model information, and is used to compute the color histogram (|H1| = |H2| = 1000).

5 Edge Matching

Given two objects that have been extracted from their respective scenes, the object matching system computes the similarity between them. A number of techniques for object (2D shape) matching using moments or Fourier descriptors have been presented in the literature. The advantage of these methods is that they are invariant to size, rotation, and translation, but the main disadvantage is that they are not information-preserving. The difficulty in using these techniques for our application is that different vehicles are inherently very similar, so a comparison based on vehicle contour alone will not have enough discriminatory power. Some authors have presented matching methods based on feature points using relaxation techniques. These techniques have the disadvantage of being computationally demanding when the number of feature points is large. A different approach is to match relational structures (graphs or stars) using subgraph isomorphism. Recently, Huttenlocher et al. [7] proposed a method for comparing edge images using the Hausdorff distance. This technique is very appropriate for our application, since the two objects are already described by sets of edge points. The Hausdorff distance between two point sets A and B is defined as:

    H(A, B) = max(h(A, B), h(B, A))    (3)
    h(A, B) = max_{a in A} min_{b in B} ||a - b||    (4)

where ||.|| is some norm (e.g., Manhattan or Euclidean). The Hausdorff distance measures the similarity between two point sets at fixed positions. Huttenlocher et al. proposed an efficient algorithm to compute the Hausdorff distance between all possible relative positions of the two sets and then find the minimum distance. Let T be the set of all possible transformations t. The matching score between the two point sets is defined as:

    D_t(A, B) = min_{t in T} h(A, t(B))    (5)

For our application, the translation between the two point sets can be approximated by the translation between their centers of mass (C1 and C2). The rotation between the two images in our sensing setup is negligible. The vehicles in the sensed images captured by camera 2 were a little bit smaller than the vehicles in the images taken by camera 1. So, we define the set of possible transformations as:

    T = { (t_x, t_y, s)  s.t.  ... < s < 1,
          (C1_x - C2_x - 5) < t_x < (C1_x - C2_x + 5),    (6)
          (C1_y - C2_y - 5) < t_y < (C1_y - C2_y + 5) }
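Both matching measures can be sketched directly from their definitions. The sketch below uses a brute-force distance matrix rather than Huttenlocher et al.'s efficient algorithm, and the normalization of the color score follows our reading of Eq. (2); all names are ours.

```python
import numpy as np

def color_score(h1, h2):
    """Histogram intersection score S_c: sum of bin-wise minima,
    normalized by the smaller histogram mass."""
    return np.minimum(h1, h2).sum() / min(h1.sum(), h2.sum())

def directed_hausdorff(A, B):
    """h(A,B) = max over a in A of min over b in B of ||a - b|| (Euclidean)."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).max()

def hausdorff(A, B):
    """H(A,B) = max(h(A,B), h(B,A))."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# A histogram fully contained in another scores exactly 1.
print(color_score(np.array([1.0, 1.0]), np.array([2.0, 2.0])))  # 1.0
```

The brute-force version is O(|A| x |B|) per relative position, which is exactly the cost the distance-transform-based algorithm of Huttenlocher et al. is designed to avoid when the transformation set T is searched.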
6 Experimental Results

Road image sequences were recorded using two Sharp camcorders in Troy, Michigan on May 24, 1993, around 2:45 pm. The speeds of the vehicles were between 30 and 35 mph, and the traffic was busy enough to result in multiple moving vehicles in the same image frame. The tapes were played on a VCR in our laboratory, and a 3-frame sequence for each vehicle was digitized. The first set of 40 vehicles {V_1^1, ..., V_40^1} was obtained from the first tape (camera 1), and a second set of 40 vehicles {V_1^2, ..., V_40^2} was obtained from the second tape (camera 2). Ten of the vehicles in the first set did not have any matching vehicle in the second set, resulting in only 30 true matches out of 40 possible matches. For the IVHS application considered here, missing some of the true matches will not affect the statistical measurement of travel time in a road network, but false matches will produce wrong results. So, we have implemented a reject option that avoids false matches. The moving object extraction process was applied to all 40 3-frame sequences from set 1, and 40 color patches and edge images were stored in the database of vehicles seen by the first camera. The same processing steps were applied to the second set. Color indexing was performed first. The color patch of vehicle V_i^2 was matched with the color patches of all the vehicles V_j^1, j = 1, ..., 40, in the first set using color histogram matching. The color histogram matching measure S_c is a similarity measure; the higher its value, the more similar the two vehicles. For an indexing technique to be efficient, it must reduce the number of candidate matches. In other words, the true match should be highly ranked compared to the other matches. The rank of the true match was the highest in 18 out of 30 cases, and second highest in 9 cases. But in the remaining 13 cases, either the match did not exist or it had a lower rank.
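The indexing step can be sketched as ranking the stored color patches by S_c and keeping the top candidates. This is a toy sketch with pure-Python histograms and a keep-two cutoff as in the experiments; all names are ours.

```python
def color_score(h1, h2):
    """Histogram intersection normalized by the smaller histogram mass."""
    return sum(min(a, b) for a, b in zip(h1, h2)) / min(sum(h1), sum(h2))

def index_candidates(query_hist, database, k=2):
    """Return the ids of the k database vehicles whose color histograms
    best match the query, i.e. the candidates kept for edge matching."""
    ranked = sorted(database,
                    key=lambda vid: color_score(query_hist, database[vid]),
                    reverse=True)
    return ranked[:k]

db = {"v1": [4, 0], "v2": [2, 2], "v3": [0, 4]}  # toy 2-bin histograms
print(index_candidates([4, 1], db))  # ['v1', 'v2']
```

Only the returned candidates proceed to the much more expensive Hausdorff edge matching, which is the source of the 95% reduction in matching time reported below.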
We have decided to keep only the 2 vehicles having the highest color histogram matching scores as candidates for edge matching. This retains 27 of the 30 true matches. As a result, the number of candidate matches was reduced from 40 to 2, resulting in a 95% reduction in the edge matching time. Edge matching based on the Hausdorff distance was then used to match vehicle V_i^2 and {V_j^1 s.t. rank(S_c(V_i^2, V_j^1)) <= 2} to compute an edge matching score D_e. The total matching score is defined as S_t = S_c / D_e. Since S_c is a similarity measure and D_e is a dissimilarity measure, S_t is a similarity measure. The highest value of S_t should correspond to the true match. Two total matching scores S_t1 and S_t2 were computed for each vehicle V_i^2. A match should be accepted or rejected based on these two numbers. A match was accepted if both the highest total matching score and the difference between the two total matching scores exceeded some thresholds:

    max(S_t1, S_t2) > 0.2  and  |S_t1 - S_t2| > 0.1    (7)

This resulted in all the false matches being rejected and 23 of the 30 true matches being accepted.

7 Summary

The proposed object matching system has been shown to meet the requirements of our application in the IVHS domain. We have shown that it is possible to accurately extract the moving objects in a color image using a 3-frame sequence. A generic model of a vehicle and an incremental grouping approach using constrained search enable us to extract the front vehicle. We have also shown that the indexing process based on color histogram matching is powerful in reducing the size of the object database to be matched. The matching results are very encouraging. We were able to find 23 of the 30 true matches and eliminate all 10 false matches.

References

A. Amini, T. Weymouth, and R. Jain. Using dynamic programming for solving variational problems in vision. IEEE Trans. PAMI, 12(9):855-867, 1990.
J. Canny. A computational approach to edge detection. IEEE Trans. PAMI, 8(6):679-698, 1986.
V. Coutance.
La Couleur en Vision par Ordinateur: Application à la Robotique. PhD thesis, Laboratoire d'Automatique et d'Analyse des Systèmes du CNRS, Université Paul Sabatier, Toulouse, France.
M.-P. Dubuisson and A. K. Jain. Object contour extraction using color and motion. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, New York, NY, 1993.
M.-P. Dubuisson and A. K. Jain. Contour extraction of moving objects in complex outdoor scenes. To appear in International Journal of Computer Vision.
S. Horowitz and T. Pavlidis. Picture segmentation by a tree traversal algorithm. Journal of the ACM, 23(2):368-388, 1976.
D. P. Huttenlocher, G. A. Klanderman, and W. J. Rucklidge. Comparing images using the Hausdorff distance. IEEE Trans. PAMI, 15(9):850-863, 1993.
A. K. Jain and P. J. Flynn (eds.). 3D Object Recognition Systems. Elsevier, Amsterdam, 1993.
M. Kass, A. Witkin, and D. Terzopoulos. Snakes: Active contour models. Int. Journal of Computer Vision, 1(4):321-331, 1988.
M. J. Swain and D. H. Ballard. Color indexing. Int. Journal of Computer Vision, 7(1):11-32, 1991.
Transportation Research Board. Advanced Vehicle and Highway Technologies. Technical Report No. 232, National Research Council.
report University of Cambridge Engineering Part IIB Module 4F12 - Computer Vision and Robotics Mobile Computer Vision Web Server master database User Interface Images + labels image feature algorithm Extract
More informationLocal Features: Detection, Description & Matching
Local Features: Detection, Description & Matching Lecture 08 Computer Vision Material Citations Dr George Stockman Professor Emeritus, Michigan State University Dr David Lowe Professor, University of British
More informationHybrid Biometric Person Authentication Using Face and Voice Features
Paper presented in the Third International Conference, Audio- and Video-Based Biometric Person Authentication AVBPA 2001, Halmstad, Sweden, proceedings pages 348-353, June 2001. Hybrid Biometric Person
More informationRepositorio Institucional de la Universidad Autónoma de Madrid.
Repositorio Institucional de la Universidad Autónoma de Madrid https://repositorio.uam.es Esta es la versión de autor de la comunicación de congreso publicada en: This is an author produced version of
More informationIntegrating Intensity and Texture in Markov Random Fields Segmentation. Amer Dawoud and Anton Netchaev. {amer.dawoud*,
Integrating Intensity and Texture in Markov Random Fields Segmentation Amer Dawoud and Anton Netchaev {amer.dawoud*, anton.netchaev}@usm.edu School of Computing, University of Southern Mississippi 118
More informationSATELLITE IMAGE REGISTRATION THROUGH STRUCTURAL MATCHING 1
SATELLITE IMAGE REGISTRATION THROUGH STRUCTURAL MATCHING 1 1. I NTRODUCT I ON Leonardo Sant'Anna Bins Flavio R. Dias Velasco Instituto de Pesquisas Espaciais - INPE Caixa Postal 515-12201 - Sao Jose dos
More informationAnnouncements. Recognition (Part 3) Model-Based Vision. A Rough Recognition Spectrum. Pose consistency. Recognition by Hypothesize and Test
Announcements (Part 3) CSE 152 Lecture 16 Homework 3 is due today, 11:59 PM Homework 4 will be assigned today Due Sat, Jun 4, 11:59 PM Reading: Chapter 15: Learning to Classify Chapter 16: Classifying
More informationObject and Class Recognition I:
Object and Class Recognition I: Object Recognition Lectures 10 Sources ICCV 2005 short courses Li Fei-Fei (UIUC), Rob Fergus (Oxford-MIT), Antonio Torralba (MIT) http://people.csail.mit.edu/torralba/iccv2005
More informationEDGE BASED REGION GROWING
EDGE BASED REGION GROWING Rupinder Singh, Jarnail Singh Preetkamal Sharma, Sudhir Sharma Abstract Image segmentation is a decomposition of scene into its components. It is a key step in image analysis.
More informationTexture Segmentation by Windowed Projection
Texture Segmentation by Windowed Projection 1, 2 Fan-Chen Tseng, 2 Ching-Chi Hsu, 2 Chiou-Shann Fuh 1 Department of Electronic Engineering National I-Lan Institute of Technology e-mail : fctseng@ccmail.ilantech.edu.tw
More informationCS4670: Computer Vision
CS4670: Computer Vision Noah Snavely Lecture 6: Feature matching and alignment Szeliski: Chapter 6.1 Reading Last time: Corners and blobs Scale-space blob detector: Example Feature descriptors We know
More informationDETECTION OF CHANGES IN SURVEILLANCE VIDEOS. Longin Jan Latecki, Xiangdong Wen, and Nilesh Ghubade
DETECTION OF CHANGES IN SURVEILLANCE VIDEOS Longin Jan Latecki, Xiangdong Wen, and Nilesh Ghubade CIS Dept. Dept. of Mathematics CIS Dept. Temple University Temple University Temple University Philadelphia,
More informationRecognition (Part 4) Introduction to Computer Vision CSE 152 Lecture 17
Recognition (Part 4) CSE 152 Lecture 17 Announcements Homework 5 is due June 9, 11:59 PM Reading: Chapter 15: Learning to Classify Chapter 16: Classifying Images Chapter 17: Detecting Objects in Images
More informationSegmentation of Images
Segmentation of Images SEGMENTATION If an image has been preprocessed appropriately to remove noise and artifacts, segmentation is often the key step in interpreting the image. Image segmentation is a
More informationIMPROVING THE PERFORMANCE OF CONTENT-BASED IMAGE RETRIEVAL SYSTEMS WITH COLOR IMAGE PROCESSING TOOLS
IMPROVING THE PERFORMANCE OF CONTENT-BASED IMAGE RETRIEVAL SYSTEMS WITH COLOR IMAGE PROCESSING TOOLS Fabio Costa Advanced Technology & Strategy (CGISS) Motorola 8000 West Sunrise Blvd. Plantation, FL 33322
More informationObject Shape Recognition in Image for Machine Vision Application
Object Shape Recognition in Image for Machine Vision Application Mohd Firdaus Zakaria, Hoo Seng Choon, and Shahrel Azmin Suandi Abstract Vision is the most advanced of our senses, so it is not surprising
More informationObject Recognition in Living Creatures. Object Recognition. Goals of Object Recognition. Object Recognition with Computers. Object Recognition Issues
Object Recognition Object Recognition in Living Creatures Most important aspect of visual perception Least understood Young children can recognize large variety of objects Child can generalize from a few
More informationAutomatic Logo Detection and Removal
Automatic Logo Detection and Removal Miriam Cha, Pooya Khorrami and Matthew Wagner Electrical and Computer Engineering Carnegie Mellon University Pittsburgh, PA 15213 {mcha,pkhorrami,mwagner}@ece.cmu.edu
More informationDetection and recognition of moving objects using statistical motion detection and Fourier descriptors
Detection and recognition of moving objects using statistical motion detection and Fourier descriptors Daniel Toth and Til Aach Institute for Signal Processing, University of Luebeck, Germany toth@isip.uni-luebeck.de
More informationHAND-GESTURE BASED FILM RESTORATION
HAND-GESTURE BASED FILM RESTORATION Attila Licsár University of Veszprém, Department of Image Processing and Neurocomputing,H-8200 Veszprém, Egyetem u. 0, Hungary Email: licsara@freemail.hu Tamás Szirányi
More informationCIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS
CIRCULAR MOIRÉ PATTERNS IN 3D COMPUTER VISION APPLICATIONS Setiawan Hadi Mathematics Department, Universitas Padjadjaran e-mail : shadi@unpad.ac.id Abstract Geometric patterns generated by superimposing
More informationImage Segmentation. Ross Whitaker SCI Institute, School of Computing University of Utah
Image Segmentation Ross Whitaker SCI Institute, School of Computing University of Utah What is Segmentation? Partitioning images/volumes into meaningful pieces Partitioning problem Labels Isolating a specific
More informationMotion Detection. Final project by. Neta Sokolovsky
Motion Detection Final project by Neta Sokolovsky Introduction The goal of this project is to recognize a motion of objects found in the two given images. This functionality is useful in the video processing
More informationLane Detection using Fuzzy C-Means Clustering
Lane Detection using Fuzzy C-Means Clustering Kwang-Baek Kim, Doo Heon Song 2, Jae-Hyun Cho 3 Dept. of Computer Engineering, Silla University, Busan, Korea 2 Dept. of Computer Games, Yong-in SongDam University,
More informationImage Segmentation. Ross Whitaker SCI Institute, School of Computing University of Utah
Image Segmentation Ross Whitaker SCI Institute, School of Computing University of Utah What is Segmentation? Partitioning images/volumes into meaningful pieces Partitioning problem Labels Isolating a specific
More informationOptimization. Intelligent Scissors (see also Snakes)
Optimization We can define a cost for possible solutions Number of solutions is large (eg., exponential) Efficient search is needed Global methods: cleverly find best solution without considering all.
More informationWater-Filling: A Novel Way for Image Structural Feature Extraction
Water-Filling: A Novel Way for Image Structural Feature Extraction Xiang Sean Zhou Yong Rui Thomas S. Huang Beckman Institute for Advanced Science and Technology University of Illinois at Urbana Champaign,
More informationSnakes operating on Gradient Vector Flow
Snakes operating on Gradient Vector Flow Seminar: Image Segmentation SS 2007 Hui Sheng 1 Outline Introduction Snakes Gradient Vector Flow Implementation Conclusion 2 Introduction Snakes enable us to find
More informationCHAPTER 1 Introduction 1. CHAPTER 2 Images, Sampling and Frequency Domain Processing 37
Extended Contents List Preface... xi About the authors... xvii CHAPTER 1 Introduction 1 1.1 Overview... 1 1.2 Human and Computer Vision... 2 1.3 The Human Vision System... 4 1.3.1 The Eye... 5 1.3.2 The
More informationPART-LEVEL OBJECT RECOGNITION
PART-LEVEL OBJECT RECOGNITION Jaka Krivic and Franc Solina University of Ljubljana Faculty of Computer and Information Science Computer Vision Laboratory Tržaška 25, 1000 Ljubljana, Slovenia {jakak, franc}@lrv.fri.uni-lj.si
More informationDEFORMABLE MATCHING OF HAND SHAPES FOR USER VERIFICATION. Ani1 K. Jain and Nicolae Duta
DEFORMABLE MATCHING OF HAND SHAPES FOR USER VERIFICATION Ani1 K. Jain and Nicolae Duta Department of Computer Science and Engineering Michigan State University, East Lansing, MI 48824-1026, USA E-mail:
More informationReal Time Stereo Vision Based Pedestrian Detection Using Full Body Contours
Real Time Stereo Vision Based Pedestrian Detection Using Full Body Contours Ion Giosan, Sergiu Nedevschi, Silviu Bota Technical University of Cluj-Napoca {Ion.Giosan, Sergiu.Nedevschi, Silviu.Bota}@cs.utcluj.ro
More informationMobile Human Detection Systems based on Sliding Windows Approach-A Review
Mobile Human Detection Systems based on Sliding Windows Approach-A Review Seminar: Mobile Human detection systems Njieutcheu Tassi cedrique Rovile Department of Computer Engineering University of Heidelberg
More informationImage Segmentation. Shengnan Wang
Image Segmentation Shengnan Wang shengnan@cs.wisc.edu Contents I. Introduction to Segmentation II. Mean Shift Theory 1. What is Mean Shift? 2. Density Estimation Methods 3. Deriving the Mean Shift 4. Mean
More informationTA Section 7 Problem Set 3. SIFT (Lowe 2004) Shape Context (Belongie et al. 2002) Voxel Coloring (Seitz and Dyer 1999)
TA Section 7 Problem Set 3 SIFT (Lowe 2004) Shape Context (Belongie et al. 2002) Voxel Coloring (Seitz and Dyer 1999) Sam Corbett-Davies TA Section 7 02-13-2014 Distinctive Image Features from Scale-Invariant
More informationTHE description and representation of the shape of an object
Enhancement of Shape Description and Representation by Slope Ali Salem Bin Samma and Rosalina Abdul Salam Abstract Representation and description of object shapes by the slopes of their contours or borders
More informationEnsemble of Bayesian Filters for Loop Closure Detection
Ensemble of Bayesian Filters for Loop Closure Detection Mohammad Omar Salameh, Azizi Abdullah, Shahnorbanun Sahran Pattern Recognition Research Group Center for Artificial Intelligence Faculty of Information
More informationROBUST OBJECT TRACKING BY SIMULTANEOUS GENERATION OF AN OBJECT MODEL
ROBUST OBJECT TRACKING BY SIMULTANEOUS GENERATION OF AN OBJECT MODEL Maria Sagrebin, Daniel Caparròs Lorca, Daniel Stroh, Josef Pauli Fakultät für Ingenieurwissenschaften Abteilung für Informatik und Angewandte
More informationImproving Latent Fingerprint Matching Performance by Orientation Field Estimation using Localized Dictionaries
Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 3, Issue. 11, November 2014,
More informationObject Detection System
A Trainable View-Based Object Detection System Thesis Proposal Henry A. Rowley Thesis Committee: Takeo Kanade, Chair Shumeet Baluja Dean Pomerleau Manuela Veloso Tomaso Poggio, MIT Motivation Object detection
More informationELEC Dr Reji Mathew Electrical Engineering UNSW
ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion
More informationFeature Matching and Robust Fitting
Feature Matching and Robust Fitting Computer Vision CS 143, Brown Read Szeliski 4.1 James Hays Acknowledgment: Many slides from Derek Hoiem and Grauman&Leibe 2008 AAAI Tutorial Project 2 questions? This
More information1216 P a g e 2.1 TRANSLATION PARAMETERS ESTIMATION. If f (x, y) F(ξ,η) then. f(x,y)exp[j2π(ξx 0 +ηy 0 )/ N] ) F(ξ- ξ 0,η- η 0 ) and
An Image Stitching System using Featureless Registration and Minimal Blending Ch.Rajesh Kumar *,N.Nikhita *,Santosh Roy *,V.V.S.Murthy ** * (Student Scholar,Department of ECE, K L University, Guntur,AP,India)
More informationShape Descriptor using Polar Plot for Shape Recognition.
Shape Descriptor using Polar Plot for Shape Recognition. Brijesh Pillai ECE Graduate Student, Clemson University bpillai@clemson.edu Abstract : This paper presents my work on computing shape models that
More informationECG782: Multidimensional Digital Signal Processing
Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu ECG782: Multidimensional Digital Signal Processing Spring 2014 TTh 14:30-15:45 CBC C313 Lecture 10 Segmentation 14/02/27 http://www.ee.unlv.edu/~b1morris/ecg782/
More informationIMPROVED SIDE MATCHING FOR MATCHED-TEXTURE CODING
IMPROVED SIDE MATCHING FOR MATCHED-TEXTURE CODING Guoxin Jin 1, Thrasyvoulos N. Pappas 1 and David L. Neuhoff 2 1 EECS Department, Northwestern University, Evanston, IL 60208 2 EECS Department, University
More informationFace Recognition for Mobile Devices
Face Recognition for Mobile Devices Aditya Pabbaraju (adisrinu@umich.edu), Srujankumar Puchakayala (psrujan@umich.edu) INTRODUCTION Face recognition is an application used for identifying a person from
More informationVehicle Dimensions Estimation Scheme Using AAM on Stereoscopic Video
Workshop on Vehicle Retrieval in Surveillance (VRS) in conjunction with 2013 10th IEEE International Conference on Advanced Video and Signal Based Surveillance Vehicle Dimensions Estimation Scheme Using
More informationA Two-stage Scheme for Dynamic Hand Gesture Recognition
A Two-stage Scheme for Dynamic Hand Gesture Recognition James P. Mammen, Subhasis Chaudhuri and Tushar Agrawal (james,sc,tush)@ee.iitb.ac.in Department of Electrical Engg. Indian Institute of Technology,
More informationCS 664 Segmentation. Daniel Huttenlocher
CS 664 Segmentation Daniel Huttenlocher Grouping Perceptual Organization Structural relationships between tokens Parallelism, symmetry, alignment Similarity of token properties Often strong psychophysical
More informationDynamic skin detection in color images for sign language recognition
Dynamic skin detection in color images for sign language recognition Michal Kawulok Institute of Computer Science, Silesian University of Technology, Akademicka 16, 44-100 Gliwice, Poland michal.kawulok@polsl.pl
More informationVideo shot segmentation using late fusion technique
Video shot segmentation using late fusion technique by C. Krishna Mohan, N. Dhananjaya, B.Yegnanarayana in Proc. Seventh International Conference on Machine Learning and Applications, 2008, San Diego,
More informationRecognizing hand-drawn images using shape context
Recognizing hand-drawn images using shape context Gyozo Gidofalvi Department of Computer Science and Engineering University of California, San Diego La Jolla, CA 92037 gyozo@cs.ucsd.edu Abstract The objective
More informationCS201: Computer Vision Introduction to Tracking
CS201: Computer Vision Introduction to Tracking John Magee 18 November 2014 Slides courtesy of: Diane H. Theriault Question of the Day How can we represent and use motion in images? 1 What is Motion? Change
More information