System 1 Overview: Geometric Model-based Object Recognition
(AV: 2D Geometric Vision, Fisher — System 1)
This lecture: geometric description. Next lecture: model matching.
Slide 1: System 1 Overview
How to discriminate between these objects (example images) and also estimate their image positions.

Slide 2: System 1 Overview
Geometric model-based object recognition. This lecture: geometric description. Next lecture: model matching, pose estimation, verification.

Slide 3: Motivation — Automated Visual Inspection
- Manufacturing: high-speed product verification.
- Largest use of computer vision systems worldwide.
- Most western manufacturing has some visual quality control.

Slide 4: Introduction
Given: an isolated binary image object.
Assume:
1. Geometric shape models for the parts to be recognized, e.g. a model with corner coordinates (0,0) (12,0) (4,4) (8,4) (0,4) (12,4) (4,12) (8,12).
Slide 5: Introduction (continued)
2. Image feature positions.
The system:
1. Matches image and model features.
2. Estimates the transformation mapping the model onto the data.

Slide 6: Data Description
Goal: describe parts in the same vocabulary of boundary shapes as the model.
- Get the object pixels that lie on the boundary.
- Split the pixels into straight-line sets.
- Find corners where the lines meet.
(Here we ignore curved boundaries.)

Slide 7: Boundary Finding
1) Get the points that lie on the boundary:
     [r,c] = find( bwperim(image,4) == 1 );
2) Remove any spurs on the boundary, then track and segment:
     [sr,sc] = removespurs(r,c,h,w);
     [tr,tc] = boundarytrack(sr,sc);
     [cr,cc] = findcorners(tr,tc);

Slide 8: Removing Dangling Spurs
A spur is any boundary pixel with only one neighbour inside its 3x3 neighbourhood.
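The boundary-finding step on slide 7 can be sketched in Python as an illustrative stand-in for `bwperim(image,4)` (the function name `boundary_pixels` and the list-of-lists 0/1 image representation are my assumptions, not from the slides):

```python
def boundary_pixels(img):
    """4-connected perimeter in the spirit of bwperim(image,4): an object
    pixel is on the boundary if any of its 4-neighbours is background or
    lies outside the image. `img` is a list of lists of 0/1."""
    h, w = len(img), len(img[0])
    out = []
    for r in range(h):
        for c in range(w):
            if not img[r][c]:
                continue  # background pixel: never on the boundary
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if not (0 <= rr < h and 0 <= cc < w) or not img[rr][cc]:
                    out.append((r, c))  # touches background or image edge
                    break
    return out
```

For a solid 3x3 block, every pixel except the centre is returned as boundary.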
Slide 9: Removing Unnecessary Boundary Pixels
  changed=1;
  while changed==1
    changed = 0;
    [sr,sc] = find(work==1);             % work: boundary pixels
    for i = 1 : length(sr)               % check each boundary point
      neigh = work(sr(i)-1:sr(i)+1, sc(i)-1:sc(i)+1);
      count = sum(sum(neigh));
      if count < 3                       % only the point itself and at most
        work(sr(i),sc(i)) = 0;           % 1 neighbour, so remove it
        changed = 1;
      end
    end
  end

Slide 10: Finding Unnecessary Corners
Legend for the figure:
  *             — boundary point to keep
  c             — boundary point to remove
  shaded box    — interior or exterior pixel
  thick red box — pixel neighbourhood inspected
(The boundary passes through the marked pixels.)

Slide 11: Boundary Cleaning Results — raw boundary. (Figure.)

Slide 12: Boundary Cleaning Results — cleaned boundary. (Figure.)
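The pixel-removal loop on slide 9 translates almost directly into Python (a sketch: the function name and the list-of-lists 0/1 representation are my choices, and border rows/columns are skipped for brevity):

```python
def remove_unnecessary(work):
    """Repeatedly delete any boundary pixel whose 3x3 neighbourhood holds
    fewer than 3 set pixels (the pixel itself plus at most one neighbour),
    mirroring the Matlab while/for loop on slide 9."""
    h, w = len(work), len(work[0])
    changed = True
    while changed:
        changed = False
        for r in range(1, h - 1):
            for c in range(1, w - 1):
                if work[r][c] != 1:
                    continue
                count = sum(work[r + dr][c + dc]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1))
                if count < 3:        # pixel plus at most one neighbour
                    work[r][c] = 0   # remove it
                    changed = True
    return work
```

A dangling spur pixel erodes away, while a closed boundary ring (where every pixel has two neighbours) is left intact.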
Slide 13: Getting a Consecutive Boundary Track
Track to the first untracked boundary pixel encountered. The eight neighbour directions are numbered, and candidate moves are tried in the order NEXT = (LAST + i) MOD 8, for i = 0, 1, 2, ... (Figure: example tracking step with LAST MOVE = 3.)

Slide 14: Tracking Results
Despurred boundary (unorganized point set) versus tracked boundary (consecutive point set). (Figures.)

Slide 15: Midlecture Problem
Given the tracking sequence in the figure (pixels labelled a b c d f / X X e g), what is the order of pixels to be considered for tracking to the next pixel?

Slide 16: Recursively Splitting the Boundary into Linear Segments
1. Find the leftmost point A.
2. Find the rightmost point B.
3. Split the point sets A->B and B->A:
   (a) Find the line through the current segment endpoints X and Y.
   (b) Find the point Z furthest from the line, at distance d.
   (c) If d is less than a threshold, this segment is finished.
   (d) Otherwise, create new sets X->Z and Z->Y and recurse.
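The tracking rule on slide 13 can be sketched in Python. The direction numbering and the choice to resume the direction search just behind the last move are my assumptions; the slide only fixes the NEXT = (LAST + i) MOD 8 form:

```python
# 8-neighbour (row, col) offsets, indexed 0..7 clockwise from straight up;
# this particular numbering is an assumed convention, not from the slides
DIRS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def track_boundary(points):
    """Order an unorganised set of 8-connected boundary pixels into a
    consecutive track: from the current pixel, move to the first untracked
    boundary pixel found by trying directions NEXT = (LAST + i) mod 8."""
    remaining = set(points)
    current = min(remaining)            # deterministic start pixel
    track = [current]
    remaining.discard(current)
    last = 0
    while remaining:
        for i in range(8):
            d = (last + 5 + i) % 8      # resume search just behind last move
            nxt = (current[0] + DIRS[d][0], current[1] + DIRS[d][1])
            if nxt in remaining:
                remaining.discard(nxt)
                track.append(nxt)
                current, last = nxt, d
                break
        else:
            break                       # no untracked neighbour: stop
    return track
```

On a closed 8-pixel ring this visits every pixel once, with each step moving to an adjacent pixel.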
Slide 17: Recursive Splitting Algorithm
(Figure: chord XY, furthest point Z, distances d1 and d2.)

Slide 18: Recursive Splitting Code
  function recsplit(r,c,threshold)
  global numlines lines
  n = length(r);                      % total number of points
  vec = [c(n)-c(1), r(1)-r(n)];       % vector perpendicular to XY
  vec = vec/norm(vec);                % make it a unit vector
  % find point furthest from line
  maxdist = 0;
  for i = 1 : n
    dist = abs( [r(i)-r(1), c(i)-c(1)] * vec' );
    if dist > maxdist
      maxdist = dist;
      maxindex = i;                   % where furthest
    end
  end

Slide 19: Recursive Splitting Code (continued)
  % check for splitting by testing maximum point distance
  if maxdist < threshold
    % then it's a single line - save it
    numlines = numlines + 1;
    lines(numlines,1) = r(1);
    lines(numlines,2) = c(1);
    lines(numlines,3) = r(n);
    lines(numlines,4) = c(n);
  else
    % otherwise it needs to be split up
    recsplit(r(1:maxindex),c(1:maxindex),threshold);
    recsplit(r(maxindex:n),c(maxindex:n),threshold);
  end

Slide 20: Splitting Results — segmented boundary. (Figure.)
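The recsplit routine translates almost line for line into Python. This is a sketch: points are (row, col) tuples, and the segment list is returned rather than stored in globals:

```python
import math

def recsplit(pts, threshold, lines=None):
    """Recursive boundary splitting, as in the Matlab recsplit above: if the
    farthest point from the chord is within threshold, emit one segment,
    otherwise split at that point and recurse on both halves."""
    if lines is None:
        lines = []
    (r1, c1), (rn, cn) = pts[0], pts[-1]
    # unit vector perpendicular to the chord from first to last point
    vx, vy = cn - c1, r1 - rn
    norm = math.hypot(vx, vy) or 1.0
    vx, vy = vx / norm, vy / norm
    maxdist, maxindex = 0.0, 0
    for i, (r, c) in enumerate(pts):
        dist = abs((r - r1) * vx + (c - c1) * vy)
        if dist > maxdist:
            maxdist, maxindex = dist, i
    if maxdist < threshold:
        lines.append(((r1, c1), (rn, cn)))        # a single line: save it
    else:
        recsplit(pts[:maxindex + 1], threshold, lines)
        recsplit(pts[maxindex:], threshold, lines)
    return lines
```

An L-shaped point track splits exactly at its corner.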
Slide 21: Describing Lines — Full Result Set

  Endpoints              Length   True length
  (249,123)-(261,127)    13       -
  (261,127)-(373,289)
  (373,289)-(39,316)     32       -
  (39,316)-(388,33)      14       -
  (388,33)-(85,536)
  (85,536)-(13,437)
  (13,437)-(25,268)
  (25,268)-(186,171)
  (186,171)-(249,123)

Slide 22: Input into the matcher: extra lines, short lines, longer lines.

Slide 23: Running the Program
  >> doall
  Input model to image scale factor (float) 3.9
  Want to use live test data (0,1) 0
  Test image file stem (filestem) TESTDATA1/f
  initial_split =
  ans =
  Want to process another image (0,1) 1
  numlines = 1

Slide 24: (Result figure.)
Slide 25: Discussion
1. Simple boundary track-and-segment process.
2. Gives a compact line-based description.
3. May have some extra segments.
4. Segments may be too long or too short.
5. The description is the input into the matcher.

Slide 26: What Have We Learned
Introduction to data cleaning, boundary/curve tracking, and curve segmentation — from pixels to descriptions.
Next: matching descriptions to models.

Slide 27: System 1 Overview
How to discriminate between these objects; how to estimate object positions.

Slide 28: System 1 Overview
Geometric model-based recognition processes. Last lecture: geometric description. This lecture: model matching, pose estimation, verification.
Slide 29: Introduction
Given:
- a set of model lines {m_i} in a scene coordinate system
- a set of image lines {d_j} in an image coordinate system
- an image-to-scene scale conversion factor σ (pixels to cm)
Do:
1. Match image and model lines: {(m_i, d_j)}.
2. Estimate the transformation mapping model onto data: R, t.
3. Verify the matching and pose estimate.
Output: identity and position (R, t).

Slide 30: Interpretation Tree Matching
Goal: a correspondence between a subset of the M model features {m_i} and the D data features {d_j}.
- Complete (exhaustive, depth-first) search: if a match exists, it will be found.
- Needs a wildcard (*) data feature to match model features with no corresponding data feature (occlusion, segmentation failure).
- Can find multiple solutions.
Result: a set of matched features {(m_i, d_ji)}.

Slide 31: Search Tree
Expand by one model feature at each new level: level 1 chooses a data feature (d1 ... dD or *) for m1, level 2 for m2, and so on up to mM. Any given node in the tree represents a set of matches {(m_i, d_ji)}.

Slide 32: Reducing Search Complexity
Do we need to consider all paths in the search tree? No. Suppose the current match state has these pairs matched: {(m_i, d_ji)}, i = 1..k. Given a new pair (m_k+1, d_jk+1):
1. unary_test(m_k+1, d_jk+1) — terminates extending the search path if the new pair has incompatible properties.
2. binary_test(m_k+1, d_jk+1, m_x, d_jx) for all x = 1..k — terminates extending the search path
Slide 33: Reducing Search Complexity (continued)
if the new pair has incompatible properties with any previous pairing on this tree branch (as all parts of the same object are compatible).
3. Early success limit L — the search can stop when {(m_i, d_ji)}, i = 1..L compatible pairs have been found.
4. Early failure limit L — the search can stop when this path can never reach L pairs. If there are t non-wildcard matches on this path out of k pairings so far, then fail if t + (M − k) < L.

Slide 34: Midlecture Problem
What are good unary/binary properties to test if matching parts with sets of circular holes? (Example figure.)

Slide 35: Computational Complexity
M model-feature tree levels; D data features on each level, plus 1 wildcard.
Worst case: (D + 1)^M nodes in the tree to visit.
p_u — probability that a random model feature and a random data feature pass the unary test.
p_b — probability that any 2 random model features and any 2 random data features pass the binary test.
Then, if p_b·M·D < 2, the average-case complexity of the interpretation-tree search is O(L·D^2). Much smaller, but it can still be substantial.

Slide 36: Interpretation Tree Algorithm (Matlab)
  % interpretation tree - match model and data lines until
  % Limit are successfully paired or can never get Limit
  % model      - current model
  % numm       - number of lines in the model
  % mlevel     - last matched model feature
  % Limit      - early termination threshold
  % pairs(:,2) - paired model-data features
  % numpairs   - number of paired features
  function ok=itree(model,numm,mlevel,Limit,pairs,numpairs)
  global Models numlines datalines
  % check for termination conditions
  if numpairs >= Limit    % enough pairs to verify
Slides 37-38: Interpretation Tree Algorithm (continued)
    [theta,trans] = estimatepose(model,numpairs,pairs);
    for p = 1 : 4
      ok = verifymatch(theta(p),trans(p,:),model,numpairs,pairs);
      if ok
        return            % successful verification
      end
    end
    return                % failure to verify - continue search
  end
  % never enough pairs
  if numpairs + numm - mlevel < Limit
    ok = 0;
    return
  end
  % normal case - see if we can extend the pair list
  mlevel = mlevel+1;
  for d = 1 : numlines    % try all data lines
    % do unary test
    if unarytest(model,mlevel,d)
      % do all binary tests
      passed = 1;
      for p = 1 : numpairs
        if ~binarytest(model,mlevel,d,pairs(p,1),pairs(p,2))
          passed = 0;
          break
        end
      end
      if passed
        % passed all tests: add to matched pairs and recurse
        pairs(numpairs+1,1) = mlevel;
        pairs(numpairs+1,2) = d;
        ok = itree(model,numm,mlevel,Limit,pairs,numpairs+1);
        if ok
          return          % successful verification
        end
      end
    end
  end
  % wildcard case - go to next model feature
  ok = itree(model,numm,mlevel,Limit,pairs,numpairs);

Slide 40: Algorithm Block Diagram. (Figure.)
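The interpretation-tree search of slides 36-38 can be paraphrased in Python. This sketch omits the pose-estimation/verification call and instead accepts the first pairing list that reaches the limit; the function and parameter names are my own:

```python
def itree(model, data, limit, unary, binary, pairs=None):
    """Depth-first interpretation-tree search: extend the pairing list one
    model feature at a time, pruning with unary/binary tests, with a
    wildcard (None) branch for unmatchable model features. Returns the
    first pairing list reaching `limit` non-wildcard pairs, else None."""
    if pairs is None:
        pairs = []
    matched = [p for p in pairs if p[1] is not None]
    if len(matched) >= limit:
        return pairs                   # early success: enough pairs
    k = len(pairs)                     # model features consumed so far
    if len(matched) + (len(model) - k) < limit:
        return None                    # early failure: can never reach limit
    if k == len(model):
        return None
    m = model[k]
    used = {p[1] for p in matched}     # no duplicate use of data features
    for d in data:
        if d in used or not unary(m, d):
            continue
        if all(binary(m, d, mx, dx) for mx, dx in matched):
            result = itree(model, data, limit, unary, binary,
                           pairs + [(m, d)])
            if result is not None:
                return result
    # wildcard branch: this model feature matches no data feature
    return itree(model, data, limit, unary, binary, pairs + [(m, None)])
```

With features reduced to bare lengths and a 20% length-compatibility unary test, three model lines find their three data lines.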
Slide 41: Line Matching — Unary Test
Let l_m be a model line's length and l_d the corresponding data line's length. Pass the test if
  σ·l_m·(1 − δ_u) ≤ l_d ≤ σ·l_m·(1 + δ_u)
This allows for calibration and segmentation errors, and is a position-independent property (δ_u = 0.3 typical).

Slide 42: Line Matching — Binary Tests
Let α be the angle between two model lines and β the angle between the two corresponding data lines. Pass the test if
  |α − β| ≤ δ_b
Again this allows for calibration and segmentation errors, and is a position-independent property (δ_b = 0.2 radians typical).
Also: do not allow duplicate use of model or data lines.

Slide 43: Matching Performance
Limit L = number of model lines − 1. Tries all models; stops at the first verified model instance for each model.

Slide 44: Different Matched Models & Instances
(Table of which model — Tee, Thin L, or Thick L — was matched in each test image.)
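The two tests on slides 41-42 are one-liners. A Python sketch (function names and the convention of passing precomputed lengths and angles are my assumptions):

```python
def unary_test(model_len, data_len, sigma, delta_u=0.3):
    # length compatibility (slide 41): the data line length must lie within
    # a relative tolerance delta_u of the scaled model length
    return sigma * model_len * (1 - delta_u) <= data_len <= sigma * model_len * (1 + delta_u)

def binary_test(model_angle, data_angle, delta_b=0.2):
    # pairwise-angle compatibility (slide 42): the angle between two model
    # lines must agree with the angle between the corresponding data lines
    return abs(model_angle - data_angle) <= delta_b
```

For example, with σ = 3.9 a model line of length 10 accepts data lines between 27.3 and 50.7 pixels long.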
Slide 45: Pose Estimation
Goal: eliminate invalid matches and find the object pose.
Given a set {(m_i, d_ji)}, i = 1..L, of compatible pairs, find the rotation R and translation t that transform the model onto the data features. This is the pose, or position.
Let
  R = [ cos(θ)  −sin(θ)
        sin(θ)   cos(θ) ]
be the rotation matrix. If p is a model point, then R·p + t is the transformed model point. Usually we estimate the rotation R first and then the translation t.

Slide 46: Estimating Rotation
Given model line i with endpoints (m_i1, m_i2) and the corresponding data line with endpoints (d_i1, d_i2), define the unit direction vectors
  u_i = (m_i2 − m_i1) / ‖m_i2 − m_i1‖      (model line)
  v_i = (d_i2 − d_i1) / ‖d_i2 − d_i1‖      (data line)

Slide 47: If there were no data errors, we would want R such that v_i = ±R·u_i (± because we don't know whether the endpoints are in the same order). But as we have errors, we use a least-squares solution.
Step 1: compute the vector perpendicular to v_i. If v_i = (v_xi, v_yi), its perpendicular is (−v_yi, v_xi).
Step 2: compute the error between v_i and R·u_i as the dot product of R·u_i with the perpendicular:
  ε_i = (−v_yi, v_xi) · R·(u_xi, u_yi)
This equals the sine of the angular error, which is small, so sin(error) ≈ error.

Slide 48: Step 3: reformulate the error. Multiplying out R and grouping terms:
  ε_i = (v_xi·u_yi − v_yi·u_xi,  v_yi·u_yi + v_xi·u_xi) · (cos(θ), sin(θ))
Write this as a matrix equation ε = D·(cos(θ), sin(θ)), where each entry of the length-L vector ε is ε_i and each row of the L×2 matrix D is (v_xi·u_yi − v_yi·u_xi, v_yi·u_yi + v_xi·u_xi). The least-square error is
  εᵀε = (cos(θ), sin(θ))·DᵀD·(cos(θ), sin(θ))ᵀ
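The rotation derivation of slides 46-50 can be carried out numerically. A Python sketch (function names are mine; lines are endpoint pairs of (x, y) tuples, and the four candidate angles are scored by the residual from slide 49):

```python
import math

def _unit(p, q):
    # unit direction vector of the line from p to q
    dx, dy = q[0] - p[0], q[1] - p[1]
    n = math.hypot(dx, dy)
    return dx / n, dy / n

def estimate_rotation(model_lines, data_lines):
    """Least-squares rotation estimate: accumulate D'D = [[e,f],[g,h]] from
    rows (v_x*u_y - v_y*u_x, v_y*u_y + v_x*u_x), solve the quadratic
    (f+g)tan^2(t) + 2(e-h)tan(t) - (f+g) = 0, and return the candidate
    angle with the smallest residual e*cos^2 + (f+g)*cos*sin + h*sin^2."""
    e = f = g = h = 0.0
    for (m1, m2), (d1, d2) in zip(model_lines, data_lines):
        ux, uy = _unit(m1, m2)
        vx, vy = _unit(d1, d2)
        a = vx * uy - vy * ux
        b = vy * uy + vx * ux
        e += a * a
        f += a * b
        g += a * b
        h += b * b

    def residual(t):
        c, s = math.cos(t), math.sin(t)
        return e * c * c + (f + g) * c * s + h * s * s

    if abs(f + g) < 1e-12:              # cross term vanishes: special case
        candidates = [0.0, math.pi / 2]
    else:
        disc = math.sqrt((e - h) ** 2 + (f + g) ** 2)
        candidates = []
        for num in ((h - e) + disc, (h - e) - disc):
            t = math.atan(num / (f + g))
            candidates += [t, t + math.pi]   # tan(t) = tan(t + pi)
    return min(candidates, key=residual)
```

Rotating two perpendicular model lines by 0.5 radians recovers 0.5 (up to the unavoidable π ambiguity from unknown endpoint order).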
Slide 49: Step 4: find the rotation that minimizes the least-square error.
Let
  DᵀD = [ e  f
          g  h ]
Then we minimize
  (cos(θ), sin(θ)) · DᵀD · (cos(θ), sin(θ))ᵀ = e·cos²(θ) + (f + g)·cos(θ)sin(θ) + h·sin²(θ)
Differentiating with respect to θ and setting the result to 0 gives:
  (f + g)·cos²(θ) + 2(h − e)·cos(θ)sin(θ) − (f + g)·sin²(θ) = 0
Dividing by cos²(θ) (if cos(θ) = 0, use a special case) gives:
  (f + g)·tan²(θ) + 2(e − h)·tan(θ) − (f + g) = 0

Slide 50: Solving gives:
  tan(θ) = [ (h − e) ± sqrt((e − h)² + (f + g)²) ] / (f + g)
There are four θ solutions (2 for ±, 2 because tan(θ) = tan(π + θ)). Try to verify all 4.

Slide 51: Estimating Translation by Least Squares
Transform the model lines into place: for each m_i compute σ·R·m_i + t. Let w_i be the unit vector perpendicular to rotated model line i. The offset error is
  ε_i = (d_i1 − σ·R·m_i1 − t) · w_i
Differentiating Σ_i ε_i² with respect to t and setting the result to 0 gives:
  t = ( Σ_i w_i·w_iᵀ )⁻¹ Σ_i w_i·w_iᵀ·(d_i1 − σ·R·m_i1)

Slide 52: Verification
For each model-data line pair, apply 3 tests. (For simplicity, the notation below uses m_i instead of σ·R·m_i + t.)
Test 1: are the model and data lines parallel? If
  | (m_i2 − m_i1)/‖m_i2 − m_i1‖ · (d_i2 − d_i1)/‖d_i2 − d_i1‖ | > threshold
then OK (threshold = 0.9).
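The translation formula on slide 51 needs only 2x2 outer products and one 2x2 inverse. A Python sketch (the function name and argument layout are my choices; `perps` holds the unit normals w_i of the rotated model lines):

```python
import math

def estimate_translation(model_pts, data_pts, perps, sigma, theta):
    """Least-squares translation: with w_i the unit normal of rotated model
    line i, solve t = (sum w_i w_i')^-1 * sum w_i w_i' (d_i1 - sigma*R*m_i1).
    model_pts/data_pts are the first endpoints m_i1 and d_i1."""
    c, s = math.cos(theta), math.sin(theta)
    A = [[0.0, 0.0], [0.0, 0.0]]   # sum of outer products w_i w_i'
    b = [0.0, 0.0]                 # sum of w_i w_i' (d_i1 - sigma R m_i1)
    for (mx, my), (dx, dy), (wx, wy) in zip(model_pts, data_pts, perps):
        rx, ry = sigma * (c * mx - s * my), sigma * (s * mx + c * my)
        ex, ey = dx - rx, dy - ry
        A[0][0] += wx * wx; A[0][1] += wx * wy
        A[1][0] += wx * wy; A[1][1] += wy * wy
        b[0] += wx * wx * ex + wx * wy * ey
        b[1] += wx * wy * ex + wy * wy * ey
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # invert the 2x2 system
    return ((A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det)
```

Two lines with non-parallel normals are needed for the 2x2 sum to be invertible; a pure translation of such a pair is recovered exactly.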
Slide 53: Test 2: are the model and data lines close?
Let (r, s) = (m_i2 − m_i1)/‖m_i2 − m_i1‖ and w_i = (−s, r).
For k = i1, i2, compute ε_k = (d_k − m_i1) · w_i. If |ε_k| < threshold then OK (threshold = 15 pixels).

Slide 54: Test 3: do the model and data lines overlap?
For k = i1, i2, compute λ_k = (d_k − m_i1) · u_i. If
  −tolerance·‖m_i2 − m_i1‖ ≤ λ_k ≤ (1 + tolerance)·‖m_i2 − m_i1‖
then OK (tolerance = 0.3).

Slide 55: Confusion Matrix (Limit = number of model lines − 1)

                 Est Tee   Est Thin L   Est Thick L   No Est
  True Tee          4
  True Thin L                  3                         1
  True Thick L                               4

Image 8 had the Thin L model flipped over; the matching process can be extended to allow this.

Slide 56: Verified Position Result Examples. (Figures.)
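The three verification tests of slides 52-54 combine into one predicate. A Python sketch (the function name and tuple conventions are mine; the model line is assumed already transformed by σ·R and t):

```python
import math

def verify_pair(m1, m2, d1, d2, par_thresh=0.9, dist_thresh=15.0, tol=0.3):
    """Apply the parallel, closeness, and overlap tests to a transformed
    model line (m1, m2) and a data line (d1, d2), given as 2D tuples."""
    mx, my = m2[0] - m1[0], m2[1] - m1[1]
    dx, dy = d2[0] - d1[0], d2[1] - d1[1]
    mlen, dlen = math.hypot(mx, my), math.hypot(dx, dy)
    r, s = mx / mlen, my / mlen          # model direction u_i
    wx, wy = -s, r                       # perpendicular w_i
    # Test 1: parallel - unit direction dot product near +/-1
    if abs((r * dx + s * dy) / dlen) <= par_thresh:
        return False
    # Test 2: close - both data endpoints near the model line
    for k in (d1, d2):
        if abs((k[0] - m1[0]) * wx + (k[1] - m1[1]) * wy) >= dist_thresh:
            return False
    # Test 3: overlap - data endpoints project onto the model segment
    for k in (d1, d2):
        lam = (k[0] - m1[0]) * r + (k[1] - m1[1]) * s
        if not (-tol * mlen <= lam <= (1 + tol) * mlen):
            return False
    return True
```

A data line nearly collinear with the model line passes; a perpendicular one fails the parallel test immediately.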
Slide 57: Discussion
- Efficient if there are good unary/binary tests.
- Suitable for 50% (estimated) of flat parts.
- Similar techniques apply to shapes other than straight lines: circular arcs, corners, holes, ...
- Extendable to 3D (future lectures).
- Extensions exist for perspective projection.

Slide 58: What Have We Learned
- Introduction to geometric model-based object recognition.
- A general feature-matching algorithm.
- 2D least-squares rotation and translation estimation algorithms.
- A 2D geometric verification algorithm.
More information6-DOF Model Based Tracking via Object Coordinate Regression Supplemental Note
6-DOF Model Based Tracking via Object Coordinate Regression Supplemental Note Alexander Krull, Frank Michel, Eric Brachmann, Stefan Gumhold, Stephan Ihrke, Carsten Rother TU Dresden, Dresden, Germany The
More informationWikipedia - Mysid
Wikipedia - Mysid Erik Brynjolfsson, MIT Filtering Edges Corners Feature points Also called interest points, key points, etc. Often described as local features. Szeliski 4.1 Slides from Rick Szeliski,
More informationMulti-view Surface Inspection Using a Rotating Table
https://doi.org/10.2352/issn.2470-1173.2018.09.iriacv-278 2018, Society for Imaging Science and Technology Multi-view Surface Inspection Using a Rotating Table Tomoya Kaichi, Shohei Mori, Hideo Saito,
More informationIntroduction. Introduction. Related Research. SIFT method. SIFT method. Distinctive Image Features from Scale-Invariant. Scale.
Distinctive Image Features from Scale-Invariant Keypoints David G. Lowe presented by, Sudheendra Invariance Intensity Scale Rotation Affine View point Introduction Introduction SIFT (Scale Invariant Feature
More information(Refer Slide Time: 00:01:27 min)
Computer Aided Design Prof. Dr. Anoop Chawla Department of Mechanical engineering Indian Institute of Technology, Delhi Lecture No. # 01 An Introduction to CAD Today we are basically going to introduce
More informationGraphics (Output) Primitives. Chapters 3 & 4
Graphics (Output) Primitives Chapters 3 & 4 Graphic Output and Input Pipeline Scan conversion converts primitives such as lines, circles, etc. into pixel values geometric description a finite scene area
More informationPolar Coordinates. 2, π and ( )
Polar Coordinates Up to this point we ve dealt exclusively with the Cartesian (or Rectangular, or x-y) coordinate system. However, as we will see, this is not always the easiest coordinate system to work
More informationMorphological Image Processing
Morphological Image Processing Morphology Identification, analysis, and description of the structure of the smallest unit of words Theory and technique for the analysis and processing of geometric structures
More information3D Geometry and Camera Calibration
3D Geometr and Camera Calibration 3D Coordinate Sstems Right-handed vs. left-handed 2D Coordinate Sstems ais up vs. ais down Origin at center vs. corner Will often write (u, v) for image coordinates v
More informationAgenda. Rotations. Camera calibration. Homography. Ransac
Agenda Rotations Camera calibration Homography Ransac Geometric Transformations y x Transformation Matrix # DoF Preserves Icon translation rigid (Euclidean) similarity affine projective h I t h R t h sr
More informationCS6670: Computer Vision
CS6670: Computer Vision Noah Snavely Lecture 7: Image Alignment and Panoramas What s inside your fridge? http://www.cs.washington.edu/education/courses/cse590ss/01wi/ Projection matrix intrinsics projection
More informationCOMP 175 COMPUTER GRAPHICS. Ray Casting. COMP 175: Computer Graphics April 26, Erik Anderson 09 Ray Casting
Ray Casting COMP 175: Computer Graphics April 26, 2018 1/41 Admin } Assignment 4 posted } Picking new partners today for rest of the assignments } Demo in the works } Mac demo may require a new dylib I
More informationCPSC 425: Computer Vision
1 / 55 CPSC 425: Computer Vision Instructor: Fred Tung ftung@cs.ubc.ca Department of Computer Science University of British Columbia Lecture Notes 2015/2016 Term 2 2 / 55 Menu January 12, 2016 Topics:
More informationEdge and corner detection
Edge and corner detection Prof. Stricker Doz. G. Bleser Computer Vision: Object and People Tracking Goals Where is the information in an image? How is an object characterized? How can I find measurements
More informationExtracting Layers and Recognizing Features for Automatic Map Understanding. Yao-Yi Chiang
Extracting Layers and Recognizing Features for Automatic Map Understanding Yao-Yi Chiang 0 Outline Introduction/ Problem Motivation Map Processing Overview Map Decomposition Feature Recognition Discussion
More informationComputer Graphics. - Rasterization - Philipp Slusallek
Computer Graphics - Rasterization - Philipp Slusallek Rasterization Definition Given some geometry (point, 2D line, circle, triangle, polygon, ), specify which pixels of a raster display each primitive
More informationIntroduction to Computer Vision
Introduction to Computer Vision Michael J. Black Oct 2009 Motion estimation Goals Motion estimation Affine flow Optimization Large motions Why affine? Monday dense, smooth motion and regularization. Robust
More informationCSE 252B: Computer Vision II
CSE 252B: Computer Vision II Lecturer: Serge Belongie Scribes: Jeremy Pollock and Neil Alldrin LECTURE 14 Robust Feature Matching 14.1. Introduction Last lecture we learned how to find interest points
More informationEinführung in Visual Computing
Einführung in Visual Computing 186.822 Rasterization Werner Purgathofer Rasterization in the Rendering Pipeline scene objects in object space transformed vertices in clip space scene in normalized device
More informationLecture 7: Most Common Edge Detectors
#1 Lecture 7: Most Common Edge Detectors Saad Bedros sbedros@umn.edu Edge Detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the
More informationImage Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments
Image Processing Fundamentals Nicolas Vazquez Principal Software Engineer National Instruments Agenda Objectives and Motivations Enhancing Images Checking for Presence Locating Parts Measuring Features
More informationBiomedical Image Analysis. Point, Edge and Line Detection
Biomedical Image Analysis Point, Edge and Line Detection Contents: Point and line detection Advanced edge detection: Canny Local/regional edge processing Global processing: Hough transform BMIA 15 V. Roth
More informationGraphics and Interaction Transformation geometry and homogeneous coordinates
433-324 Graphics and Interaction Transformation geometry and homogeneous coordinates Department of Computer Science and Software Engineering The Lecture outline Introduction Vectors and matrices Translation
More informationAnnouncements, schedule. Lecture 8: Fitting. Weighted graph representation. Outline. Segmentation by Graph Cuts. Images as graphs
Announcements, schedule Lecture 8: Fitting Tuesday, Sept 25 Grad student etensions Due of term Data sets, suggestions Reminder: Midterm Tuesday 10/9 Problem set 2 out Thursday, due 10/11 Outline Review
More informationAnnouncements. Edges. Last Lecture. Gradients: Numerical Derivatives f(x) Edge Detection, Lines. Intro Computer Vision. CSE 152 Lecture 10
Announcements Assignment 2 due Tuesday, May 4. Edge Detection, Lines Midterm: Thursday, May 6. Introduction to Computer Vision CSE 152 Lecture 10 Edges Last Lecture 1. Object boundaries 2. Surface normal
More informationCOMP30019 Graphics and Interaction Transformation geometry and homogeneous coordinates
COMP30019 Graphics and Interaction Transformation geometry and homogeneous coordinates Department of Computer Science and Software Engineering The Lecture outline Introduction Vectors and matrices Translation
More information2. Data Preprocessing
2. Data Preprocessing Contents of this Chapter 2.1 Introduction 2.2 Data cleaning 2.3 Data integration 2.4 Data transformation 2.5 Data reduction Reference: [Han and Kamber 2006, Chapter 2] SFU, CMPT 459
More informationLecture 3: Art Gallery Problems and Polygon Triangulation
EECS 396/496: Computational Geometry Fall 2017 Lecture 3: Art Gallery Problems and Polygon Triangulation Lecturer: Huck Bennett In this lecture, we study the problem of guarding an art gallery (specified
More informationAgenda. Rotations. Camera models. Camera calibration. Homographies
Agenda Rotations Camera models Camera calibration Homographies D Rotations R Y = Z r r r r r r r r r Y Z Think of as change of basis where ri = r(i,:) are orthonormal basis vectors r rotated coordinate
More informationIdentifying Layout Classes for Mathematical Symbols Using Layout Context
Rochester Institute of Technology RIT Scholar Works Articles 2009 Identifying Layout Classes for Mathematical Symbols Using Layout Context Ling Ouyang Rochester Institute of Technology Richard Zanibbi
More informationCS 4495 Computer Vision. Linear Filtering 2: Templates, Edges. Aaron Bobick. School of Interactive Computing. Templates/Edges
CS 4495 Computer Vision Linear Filtering 2: Templates, Edges Aaron Bobick School of Interactive Computing Last time: Convolution Convolution: Flip the filter in both dimensions (right to left, bottom to
More informationAugmented Reality VU. Computer Vision 3D Registration (2) Prof. Vincent Lepetit
Augmented Reality VU Computer Vision 3D Registration (2) Prof. Vincent Lepetit Feature Point-Based 3D Tracking Feature Points for 3D Tracking Much less ambiguous than edges; Point-to-point reprojection
More informationEE795: Computer Vision and Intelligent Systems
EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 04 130131 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Review Histogram Equalization Image Filtering Linear
More informationChapters 7 & 8. Parallel and Perpendicular Lines/Triangles and Transformations
Chapters 7 & 8 Parallel and Perpendicular Lines/Triangles and Transformations 7-2B Lines I can identify relationships of angles formed by two parallel lines cut by a transversal. 8.G.5 Symbolic Representations
More informationElaborazione delle Immagini Informazione Multimediale. Raffaella Lanzarotti
Elaborazione delle Immagini Informazione Multimediale Raffaella Lanzarotti HOUGH TRANSFORM Paragraph 4.3.2 of the book at link: szeliski.org/book/drafts/szeliskibook_20100903_draft.pdf Thanks to Kristen
More informationSegmentation and Grouping
Segmentation and Grouping How and what do we see? Fundamental Problems ' Focus of attention, or grouping ' What subsets of pixels do we consider as possible objects? ' All connected subsets? ' Representation
More informationLecture 16: Computer Vision
CS4442/9542b: Artificial Intelligence II Prof. Olga Veksler Lecture 16: Computer Vision Motion Slides are from Steve Seitz (UW), David Jacobs (UMD) Outline Motion Estimation Motion Field Optical Flow Field
More information