MaNGA Technical Note Finding Fiber Bundles in an Image


Jeffrey W. Percival
05-May-2015

Contents

1 Introduction
2 Description of the Problem
3 Hexagon Nomenclature and Methods
  3.1 Ranks and Coordinate Systems
  3.2 Numbering Schemes
  3.3 V-Groove Blocks
4 Overview of the Analysis
  4.1 Sobel Operator
  4.2 Hough Transform
  4.3 Blob Detection

  4.4 Spike Detection
  4.5 Absolute Orientation Problem
  4.6 Motor Stage Alignment
5 Summary

1 Introduction

The University of Wisconsin Fiber Test Stand Project is an effort to build a highly automated fiber throughput measurement system to be used in the AS3 MaNGA project. MaNGA is the Mapping Nearby Galaxies at APO survey of the Sloan Digital Sky Survey III. See future/manga.php.

The items being tested are bundles of optical fibers, arranged at one end (the front end) into a packed hexagonal shape, and at the back end into a linear arrangement suitable for feeding a spectrograph. The test stand consists of a stabilized point source of light at the front end, and a sensitive photometer at the back end. A motorized XY stage at the front end, holding the bundle, brings each fiber sequentially under the point source. At the back end, a motorized XY stage holding the linear group of fibers is moved around in front of the photometer, searching for the illuminated fiber, and the photometer measures the transmitted light. Before-and-after measurements of the stabilized point source provide a baseline against which the fibers are compared. This measurement acts as an Acceptance Test Procedure (ATP) for the bundles manufactured by the C-Technologies Corporation, as well as providing calibration data for the survey measurements at the Apache Point Observatory.

At the front end, the front surface of the fiber bundle sits at the focal plane of an Ethernet camera with a 1624-pixel-wide sensor of 8-bit pixels. A beam splitter

Figure 1: Hexagonal array of polished fibers, and point source illuminating the central fiber

sends the image of the point source into the camera as well, so the camera can see the spot falling onto the packed hexagon of polished fiber ends. A light at the back end can illuminate all the fibers at once, allowing the front-end camera to see the whole bundle of fibers at once. See Figure 1.

2 Description of the Problem

The tasks necessary for front-end control are:

- Image the bundle
- Recognize the orientation of the fiber bundle (center position, rotation, scale)
- Locate each fiber to sub-pixel precision

- Calculate the deviation of each fiber from its position in an ideal hexagon
- Calculate the linear transformation between the image coordinates and the motor stage coordinates (zero point, scale, and rotation)
- Provide precise bundle movement for spot positioning
- Allow for mapping one fiber numbering scheme to another

The scheme for locating the fibers in the image must be robust in the presence of variations in fiber brightness due to differences in opacity and illumination, blemishes in the fiber cores, and perturbations of fiber position due to packing irregularities. We chose a multi-pass set of algorithms performed in a parameter space that is insensitive to these effects.

3 Hexagon Nomenclature and Methods

3.1 Ranks and Coordinate Systems

We use the term rank to describe the size of the hexagon. Table 1 shows the number of fibers for each value of rank; a rank-r hexagon contains 3r(r+1) + 1 fibers.

    Hexagon Rank    Number of Fibers
         1                  7
         2                 19
         3                 37
         4                 61
         5                 91
         6                127

Table 1: Hexagon Ranks and Number of Elements

Two coordinate systems suggest themselves when describing hexagonal arrays. See Figure 2. It is convenient to switch between them as needed. For

Figure 2: Cartesian and Hexagonal Coordinate Systems

the standard Cartesian coordinate system we use X and Y in units of fiber radius (in packing, radius refers to the total fiber, core + jacket; in the image analysis below, radius refers just to the optical core). The A-B system uses units of fiber diameter. The A-B hexagonal coordinate system is nice because each fiber position can be represented as a pair of integers. As an additional convenience, the fiber numbers falling on the +A axis correspond to the number of fibers interior to that rank. Fiber (4, 0) is the rightmost fiber in a rank=4 hexagon, while fiber (-2, 4) is at top center. In the Cartesian system, the Y coordinate is given in multiples of the algebraic irrational √3. The conversion between the two coordinate systems is

    x = 2a + b    (1)

and

    y = b √3.    (2)
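As a concrete sketch (not the project's actual code; the function name is ours), Equations 1 and 2 can be written as:

```python
import math

def ab_to_xy(a, b):
    """Convert hexagonal A-B coordinates (integers, units of fiber
    diameter) to Cartesian X-Y coordinates (units of fiber radius),
    per Equations 1 and 2."""
    return 2 * a + b, b * math.sqrt(3)

# Fiber (4, 0), the rightmost fiber of a rank-4 hexagon, lands at (8, 0);
# fiber (-2, 4), at top center, lands at (0, 4*sqrt(3)).
```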

(a) Spiral Numbering    (b) C-Tech Numbering

Figure 3: Hexagon Numbering Schemes

3.2 Numbering Schemes

We use two fiber numbering schemes. The most natural scheme designates the central fiber as 0, and sequentially numbers each layer of fibers as the rank increases. This scheme has the advantage that a fiber's numerical assignment doesn't change as more fibers are added. We call this spiral numbering (Figure 3, left).

A different scheme is convenient when manufacturing the bundle. This scheme numbers the fibers sequentially as they are added to the V-Groove Block (see next section). They are sequential in the linear array at the back end. At the front end, the fibers take on a serpentine numbering, moving back and forth across the hexagonal array. We have fiber maps for all ranks and both schemes at html/documents/software.shtml.

3.3 V-Groove Blocks

For smaller bundles, the fibers are arranged at the back end in one line, in a V-Groove Block. This is a machined piece with one groove for each fiber, with all the fibers captured by a top piece. For the larger bundles, one block is not sufficient. For those bundles, the fibers are apportioned across several blocks. For a rank=6 (127 fiber) bundle, 4 V-Groove Blocks are used, the

first with 37 fibers, and the other three with 30 fibers each. The fiber maps use colors to designate different V-Groove Blocks. The V-Groove Blocks are named according to their groove spacing and number of grooves. Table 2 lists the V-Groove Block spacings encountered in the MaNGA bundles. Table 3 gives the V-Groove Block assignments for MaNGA bundles.

    Spacing Code    Block Spacing
    A                        mm
    B                        mm
    C                        mm

Table 2: V-Groove Block Spacing Codes

    Bundle Size           V-Groove Blocks
    Rank=2 (19 fibers)    A19
    Rank=4 (61 fibers)    A30, A31
    Rank=6 (127 fibers)   A37, A30, B30, C30

Table 3: Bundle Sizes and V-Groove Block Assignments

4 Overview of the Analysis

The fundamental problem we address is determining the location of each fiber in the camera's image to sub-pixel accuracy. Figure 1 shows that the fibers present themselves as sharply defined illuminated circles. In image space, considering the brightness value v as height in the 3-dimensional space of (x, y, v), the fibers can initially be considered to be flat-topped right cylinders. In such a space, the fiber positions can be measured with a simple brightness-weighted centroid calculation:

    p̄ = Σᵢ vᵢ pᵢ / Σᵢ vᵢ,    (3)

where vᵢ is the brightness value measured in an 8-bit greyscale (0-255), pᵢ is the vector position of pixel i, and p̄ is the measured centroid.

Various real-world considerations complicate this simple view. These considerations are:

Light leaks: These can create illumination smudges in the image, or gradients across the sensor, that can bias a brightness-weighted centroid away from the true center of the fiber.

Fiber blemishes: Fibers can sport irregularities in their imaged surfaces: nicks due to polishing and handling, or irregularly-shaped regions of low or no throughput. These blemishes will weigh in during the centroid calculation. Dark blemishes will push the calculated centroid away from themselves.

Additional sources of light: The calibration spot can fall on the image, either on a fiber (see Figure 1), between two fibers, or somewhere outside the bundle but still in the field of view. This spot is not as well formed as the illuminated fiber surfaces, and will either bias the centroid calculation (in the opposite sense of a dark blemish) or masquerade as a fiber far out of position.

These considerations led us away from using the simple brightness-weighted centroid as a solution. In addition, our experience in doing geometrical calculations in the image domain (for the reasons just described, among others) drove our attention to the many delights of working in transform spaces. To that end, we use the following steps in determining the locations of fibers and the orientation of the bundle:

Sobel Transform: This is a discrete differentiation operator, essentially a 2-D first derivative of the image. This removes bright regions, brightness gradients, and other artifacts of low spatial frequency. It turns filled circles into rings. See Figure 4(b).
Hough Transform: This is a feature extraction algorithm that can be made sensitive to the objects we desire (illuminated circles of a given radius) and insensitive to undesired objects (stray spots and circles of undesired

9 radii). This essentially uses circular rings to triangulate on their center. See Figure 4(c). Blob Detection This is a form of convolutional sharpening. The Hough transform leaves us with shapes with known properties in the transform space. We use the Laplacian of Gaussian algorithm. We convolve the image with a Gaussian kernel of an appropriate extent, then apply the Laplacian operator. This produces a strong response to features produced by the fibers. See Figure 4(d). Spike Detection Finally, we end up with a transform space consisting of bright spikes representing fibers amid a general confusion of low-level noise. We seek out spikes having a significant height in units of meansubtracted standard deviations. Absolute Orientation Problem (AOP) The Absolute Orientation Problem takes two lists of points, and finds the similarity transformation parameters (rotation, translation, and scaling) that give the least mean squared error (Umeyama, complete reference below) between these two lists. In other words, given the pixel coordinates of the found fibers as imaged through the camera optics onto the sensor, and the AB-space coordinates of an ideal hexagonal bundle (see Figure 2), what is the best transformation between the camera s pixel space and the ideal AB space? Note that this solution allows us to determine the small perturbations of fibers from their ideal locations. The residuals can be computed between the actual, measured fiber positions and their position in an ideal bundle as determined by the best fit solution to all the fibers. 4.1 Sobel Operator The Sobel Operator is a discrete differentiation operator (see wikipedia.org/wiki/sobel_operator) implemented as a two-step 2-D convolution using the kernels G x = (4)

Figure 4: (a, upper left) A rank-2 (19 fiber) bundle with the point source falling on the central fiber. (b, upper right) The image after the Sobel (derivative) transform has been applied. (c, lower left) The image after the circular Hough transform has been applied. (d, lower right) The image after convolutional sharpening.

and

    Gy = | +1  +2  +1 |
         |  0   0   0 |
         | -1  -2  -1 |    (5)

with the final pixel value given by

    G = √(Gx² + Gy²).    (6)

4.2 Hough Transform

The Hough transform finds circles in the image. See wikipedia.org/wiki/Hough_transform.
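The Sobel step of Equations 4-6 can be sketched in a few lines of Python (a minimal illustration, not the project's C implementation):

```python
import math

# Sobel kernels of Equations 4 and 5
GX = [[+1, 0, -1], [+2, 0, -2], [+1, 0, -1]]
GY = [[+1, +2, +1], [0, 0, 0], [-1, -2, -1]]

def sobel(image):
    """Gradient magnitude (Equation 6) over the interior of a
    list-of-lists greyscale image; the one-pixel border stays zero."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.sqrt(gx * gx + gy * gy)
    return out
```

Run on a filled disc, this produces the ring shapes of Figure 4(b): the response vanishes in the flat interior and is large only at the edge.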

If you don't know the radii of the circles, you can find those too, but it is computationally much more expensive. We use our knowledge of the fiber radii as an input, and use the algorithm only for the centers.

We use a voting procedure. The voting table is an array of integers the same size as the image, initially zero. We move through the image, examining each pixel. If the pixel is illuminated, then we start casting votes. (You can use your own criterion for "illuminated"; we just use pixels with non-zero counts.) This illuminated pixel could be on the circumference of many circles. Each such circle would be centered one radius away from the pixel being examined. So, we vote for each such circle by incrementing all the pixels in the voting table that are one radius away from the current pixel. If (x, y) are the coordinates of the current pixel, then all pixels (a, b) satisfying

    (x - a)² + (y - b)² = r²    (7)

represent the possible centers of circles containing the pixel (x, y). For each pixel (x, y), sweep through the coordinates a = x - r to a = x + r. Then compute

    b = y ± √(r² - (x - a)²)    (8)

and increment the locations (a, b) in the voting table. See Figure 4(c). Note that this is the voting table, not actually a sensor image any more. The fuzziness of the peaks is due to non-sharp edges, fiber blemishes, and other defects in the sensor image.

4.3 Blob Detection

A standard technique for blob detection is the Laplacian of Gaussian. We do a convolution of scale t followed by summing second partial derivatives to produce a strong response for structures of scale √(2t). See wikipedia.org/wiki/Blob_detection. We use a convolutional kernel of

    K(r, t) = (1 / (2πt²)) exp(-r² / (2t²))    (9)

with

    t = r₀² / 2,    (10)

with r₀ chosen here to be 11 pixels. For the Laplacian operator, we make two passes with the differentiation operator given in Equation 4, and then two passes with the operator given in Equation 5.

4.4 Spike Detection

After doing the Sobel, Hough, and convolutional sharpening transformations, we have two kinds of detections: actual fibers, and false positives caused by the echo circles of false votes (see Figure 4(d)). Sometimes the false positives have higher peaks and fluxes than the solutions for the fainter fibers.

The false positives arise from the intersection of echo circles. Imagine two intersecting circles, as in a Venn diagram. At the two crossing points, the circles add to produce a local flux spike. We want to reject these spikes, and will do so by looking at their local structure. The false positives have extended structure from the adjacent circles. They do not fall off in all directions like a Gaussian. To detect this, we measure the flux in an annulus around the spike; too much flux in the annulus, and we reject the spike. Examining a few images gives us a heuristic: the ratio of the annulus flux to the total flux appears to be much less than about 0.25 for Gaussian-like spikes, and much greater than 0.25 for the false positives. We parameterize this value (z_crit) so we can change it as needed.

Another issue is that even after sharpening, the Hough peaks may be cratered, multi-peaked, and so on. Peak finding sometimes finds two peaks in close proximity, representing the same fiber. We try to exclude these contaminators, and as a final measure when adding a new spike, we check the list to see if the new spike is too near any spike in the list, and leave only the brighter of any pair we find. We use a Keep Clear Distance of 10 pixels for this.

We compute the mean µ and standard deviation σ for the image, and then examine each pixel with a value greater than some Nσ. We choose N =

For the annular flux, we compute two fluxes, f₁ = f(r₁) and f₂ = f(r₂), where r₁ = 11 pixels and r₂ = 9 pixels. Our measure of the annular flux is

    z = (f₁ - f₂) / f₁    (11)

and if

    z > z_crit    (12)

we reject the candidate.

4.5 Absolute Orientation Problem

For a complete description of the AOP, see Umeyama, S., 1991, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 13, No. 4. The AOP applies when you have a set of points represented in two different coordinate systems, with a mapping from one to the other consisting of a scale change, a rotation, and a shift:

    S = c R(θ) H + S₀    (13)

The AOP operates on the two lists of coordinates, and produces the estimates of c, θ, and S₀ that minimize the mean square error in the transformation. We have two lists: the ideal hexagon fibers (see Figure 2) and the detected spikes. Feeding these lists into the AOP algorithm, we extract c, θ, and S₀.

4.6 Motor Stage Alignment

We assume that the motorized X and Y axes are perpendicular to each other, as are the camera pixel axes. We do not assume, however, that the motorized axes are aligned with the camera pixel axes, have the same scale, or have the same zero-points. We use a simple linear transformation to relate displacements on the sensor to movements at the motor stages:

    M = k R(α) S + M₀    (14)

where k is the motor constant, given in mm/pixel, α is the angular misalignment, and M₀ is the position of the origin of the sensor (pixel 0) in the motor coordinate system. The Thorlabs APT stages we use provide precise motions in mm, but there is optical power in the imaging system that makes the motor constant k ≠ 1.

We measure k, α, and M₀ using our stage calibration procedure. We capture two sensor images, changing the motor stage position between them. We have

    M₁ = k R(α) S₁ + M₀    (15)

and

    M₂ = k R(α) S₂ + M₀    (16)

S₁ and S₂ represent the pixel position of some fiducial (e.g. a particular fiber, or the AOP solution for the bundle) at each of the two motor stage positions. The as-yet unknown offset disappears in the difference:

    ΔM = M₂ - M₁    (17)
       = k R(α) (S₂ - S₁)    (18)
       = k R(α) ΔS.    (19)

We estimate the motor constant k with the vector magnitudes as

    k = |ΔM| / |ΔS|    (20)

and the misalignment between the sensor coordinates and motor coordinates as

    cos(α) = ΔM̂ · ΔŜ.    (21)

Finally, we have two estimates of the offset, given by

    M₀ = M₁ - k R(α) S₁    (22)

and

    M₀ = M₂ - k R(α) S₂.    (23)

One can measure the packing errors by passing the ideal positions through the forward AOP, or by passing the detected positions through the reverse AOP, and computing the radial deviation from perfect packing.

5 Summary

These algorithms succeed in identifying fibers, computing their coordinates, and positioning the motor stage in front of the point source of light. The image reductions are performed by C programs. Each step of the chain is performed by a single-purpose C program, and the steps can be sequenced using Unix pipelines, or with file operations storing the results of individual steps. We provide a Python interface routine for each C program, allowing easy access to the results in Python. In our Fiber Test Stand implementation, the analysis is managed by a top-level Python program.
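The two-image stage calibration of Equations 15-23 can be sketched as follows. This is a hypothetical helper, not the test stand's code, and it recovers the signed angle with atan2 rather than the cosine of Equation 21:

```python
import math

def calibrate_stage(s1, m1, s2, m2):
    """Recover the motor constant k (mm/pixel), the misalignment
    angle alpha, and the offset M0 from one fiducial imaged at two
    motor-stage positions. s1, s2 are sensor (pixel) coordinates;
    m1, m2 are the corresponding motor (mm) coordinates."""
    ds = (s2[0] - s1[0], s2[1] - s1[1])   # Delta-S
    dm = (m2[0] - m1[0], m2[1] - m1[1])   # Delta-M, Equation 17
    k = math.hypot(*dm) / math.hypot(*ds)                     # Equation 20
    alpha = math.atan2(dm[1], dm[0]) - math.atan2(ds[1], ds[0])
    c, s = math.cos(alpha), math.sin(alpha)
    # Equation 22: M0 = M1 - k R(alpha) S1
    m0 = (m1[0] - k * (c * s1[0] - s * s1[1]),
          m1[1] - k * (s * s1[0] + c * s1[1]))
    return k, alpha, m0
```

Equation 23 provides an independent second estimate of M₀ from (S₂, M₂); comparing the two estimates is a useful consistency check on the calibration.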


More information

Computer Vision I. Announcement. Corners. Edges. Numerical Derivatives f(x) Edge and Corner Detection. CSE252A Lecture 11

Computer Vision I. Announcement. Corners. Edges. Numerical Derivatives f(x) Edge and Corner Detection. CSE252A Lecture 11 Announcement Edge and Corner Detection Slides are posted HW due Friday CSE5A Lecture 11 Edges Corners Edge is Where Change Occurs: 1-D Change is measured by derivative in 1D Numerical Derivatives f(x)

More information

Midterm Examination CS 534: Computational Photography

Midterm Examination CS 534: Computational Photography Midterm Examination CS 534: Computational Photography November 3, 2016 NAME: Problem Score Max Score 1 6 2 8 3 9 4 12 5 4 6 13 7 7 8 6 9 9 10 6 11 14 12 6 Total 100 1 of 8 1. [6] (a) [3] What camera setting(s)

More information

An Intuitive Explanation of Fourier Theory

An Intuitive Explanation of Fourier Theory An Intuitive Explanation of Fourier Theory Steven Lehar slehar@cns.bu.edu Fourier theory is pretty complicated mathematically. But there are some beautifully simple holistic concepts behind Fourier theory

More information

Lecture 6: Edge Detection

Lecture 6: Edge Detection #1 Lecture 6: Edge Detection Saad J Bedros sbedros@umn.edu Review From Last Lecture Options for Image Representation Introduced the concept of different representation or transformation Fourier Transform

More information

SIFT - scale-invariant feature transform Konrad Schindler

SIFT - scale-invariant feature transform Konrad Schindler SIFT - scale-invariant feature transform Konrad Schindler Institute of Geodesy and Photogrammetry Invariant interest points Goal match points between images with very different scale, orientation, projective

More information

Spectrographs. C. A. Griffith, Class Notes, PTYS 521, 2016 Not for distribution.

Spectrographs. C. A. Griffith, Class Notes, PTYS 521, 2016 Not for distribution. Spectrographs C A Griffith, Class Notes, PTYS 521, 2016 Not for distribution 1 Spectrographs and their characteristics A spectrograph is an instrument that disperses light into a frequency spectrum, which

More information

Flexible Calibration of a Portable Structured Light System through Surface Plane

Flexible Calibration of a Portable Structured Light System through Surface Plane Vol. 34, No. 11 ACTA AUTOMATICA SINICA November, 2008 Flexible Calibration of a Portable Structured Light System through Surface Plane GAO Wei 1 WANG Liang 1 HU Zhan-Yi 1 Abstract For a portable structured

More information

3 Identify shapes as two-dimensional (lying in a plane, flat ) or three-dimensional ( solid ).

3 Identify shapes as two-dimensional (lying in a plane, flat ) or three-dimensional ( solid ). Geometry Kindergarten Identify and describe shapes (squares, circles, triangles, rectangles, hexagons, cubes, cones, cylinders, and spheres). 1 Describe objects in the environment using names of shapes,

More information

Image Analysis. Edge Detection

Image Analysis. Edge Detection Image Analysis Edge Detection Christophoros Nikou cnikou@cs.uoi.gr Images taken from: Computer Vision course by Kristen Grauman, University of Texas at Austin (http://www.cs.utexas.edu/~grauman/courses/spring2011/index.html).

More information

Vector Addition. Qty Item Part Number 1 Force Table ME-9447B 1 Mass and Hanger Set ME Carpenter s level 1 String

Vector Addition. Qty Item Part Number 1 Force Table ME-9447B 1 Mass and Hanger Set ME Carpenter s level 1 String rev 05/2018 Vector Addition Equipment List Qty Item Part Number 1 Force Table ME-9447B 1 Mass and Hanger Set ME-8979 1 Carpenter s level 1 String Purpose The purpose of this lab is for the student to gain

More information

FFT-Based Astronomical Image Registration and Stacking using GPU

FFT-Based Astronomical Image Registration and Stacking using GPU M. Aurand 4.21.2010 EE552 FFT-Based Astronomical Image Registration and Stacking using GPU The productive imaging of faint astronomical targets mandates vanishingly low noise due to the small amount of

More information

Topic 4 Image Segmentation

Topic 4 Image Segmentation Topic 4 Image Segmentation What is Segmentation? Why? Segmentation important contributing factor to the success of an automated image analysis process What is Image Analysis: Processing images to derive

More information

Glossary alternate interior angles absolute value function Example alternate exterior angles Example angle of rotation Example

Glossary alternate interior angles absolute value function Example alternate exterior angles Example angle of rotation Example Glossar A absolute value function An absolute value function is a function that can be written in the form, where is an number or epression. alternate eterior angles alternate interior angles Alternate

More information

ELEC Dr Reji Mathew Electrical Engineering UNSW

ELEC Dr Reji Mathew Electrical Engineering UNSW ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion

More information

Segmentation and Grouping

Segmentation and Grouping Segmentation and Grouping How and what do we see? Fundamental Problems ' Focus of attention, or grouping ' What subsets of pixels do we consider as possible objects? ' All connected subsets? ' Representation

More information

HOUGH TRANSFORM. Plan for today. Introduction to HT. An image with linear structures. INF 4300 Digital Image Analysis

HOUGH TRANSFORM. Plan for today. Introduction to HT. An image with linear structures. INF 4300 Digital Image Analysis INF 4300 Digital Image Analysis HOUGH TRANSFORM Fritz Albregtsen 14.09.2011 Plan for today This lecture goes more in detail than G&W 10.2! Introduction to Hough transform Using gradient information to

More information

Optical design of COrE+

Optical design of COrE+ Optical design of COrE+ Karl Young November 23, 2015 The optical designs for COrE+ were made by Darragh McCarthy and Neil Trappe at Maynooth University and Karl Young and Shaul Hanany at University of

More information

Edge Detection. CS664 Computer Vision. 3. Edges. Several Causes of Edges. Detecting Edges. Finite Differences. The Gradient

Edge Detection. CS664 Computer Vision. 3. Edges. Several Causes of Edges. Detecting Edges. Finite Differences. The Gradient Edge Detection CS664 Computer Vision. Edges Convert a gray or color image into set of curves Represented as binary image Capture properties of shapes Dan Huttenlocher Several Causes of Edges Sudden changes

More information

Local Features: Detection, Description & Matching

Local Features: Detection, Description & Matching Local Features: Detection, Description & Matching Lecture 08 Computer Vision Material Citations Dr George Stockman Professor Emeritus, Michigan State University Dr David Lowe Professor, University of British

More information

Practical Image and Video Processing Using MATLAB

Practical Image and Video Processing Using MATLAB Practical Image and Video Processing Using MATLAB Chapter 14 Edge detection What will we learn? What is edge detection and why is it so important to computer vision? What are the main edge detection techniques

More information

Calibration of a portable interferometer for fiber optic connector endface measurements

Calibration of a portable interferometer for fiber optic connector endface measurements Calibration of a portable interferometer for fiber optic connector endface measurements E. Lindmark Ph.D Light Source Reference Mirror Beamsplitter Camera Calibrated parameters Interferometer Interferometer

More information

Cylinders in Vs An optomechanical methodology Yuming Shen Tutorial for Opti521 November, 2006

Cylinders in Vs An optomechanical methodology Yuming Shen Tutorial for Opti521 November, 2006 Cylinders in Vs An optomechanical methodology Yuming Shen Tutorial for Opti521 November, 2006 Introduction For rotationally symmetric optical components, a convenient optomechanical approach which is usually

More information

Study Guide - Geometry

Study Guide - Geometry Study Guide - Geometry (NOTE: This does not include every topic on the outline. Take other steps to review those.) Page 1: Rigid Motions Page 3: Constructions Page 12: Angle relationships Page 14: Angle

More information

Model Fitting. Introduction to Computer Vision CSE 152 Lecture 11

Model Fitting. Introduction to Computer Vision CSE 152 Lecture 11 Model Fitting CSE 152 Lecture 11 Announcements Homework 3 is due May 9, 11:59 PM Reading: Chapter 10: Grouping and Model Fitting What to do with edges? Segment linked edge chains into curve features (e.g.,

More information

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor

COSC579: Scene Geometry. Jeremy Bolton, PhD Assistant Teaching Professor COSC579: Scene Geometry Jeremy Bolton, PhD Assistant Teaching Professor Overview Linear Algebra Review Homogeneous vs non-homogeneous representations Projections and Transformations Scene Geometry The

More information

Edge detection. Stefano Ferrari. Università degli Studi di Milano Elaborazione delle immagini (Image processing I)

Edge detection. Stefano Ferrari. Università degli Studi di Milano Elaborazione delle immagini (Image processing I) Edge detection Stefano Ferrari Università degli Studi di Milano stefano.ferrari@unimi.it Elaborazione delle immagini (Image processing I) academic year 2011 2012 Image segmentation Several image processing

More information

Local Feature Detectors

Local Feature Detectors Local Feature Detectors Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Slides adapted from Cordelia Schmid and David Lowe, CVPR 2003 Tutorial, Matthew Brown,

More information

Make geometric constructions. (Formalize and explain processes)

Make geometric constructions. (Formalize and explain processes) Standard 5: Geometry Pre-Algebra Plus Algebra Geometry Algebra II Fourth Course Benchmark 1 - Benchmark 1 - Benchmark 1 - Part 3 Draw construct, and describe geometrical figures and describe the relationships

More information

Differential Geometry: Circle Patterns (Part 1) [Discrete Conformal Mappinngs via Circle Patterns. Kharevych, Springborn and Schröder]

Differential Geometry: Circle Patterns (Part 1) [Discrete Conformal Mappinngs via Circle Patterns. Kharevych, Springborn and Schröder] Differential Geometry: Circle Patterns (Part 1) [Discrete Conformal Mappinngs via Circle Patterns. Kharevych, Springborn and Schröder] Preliminaries Recall: Given a smooth function f:r R, the function

More information

EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline

EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT. Recap and Outline EECS150 - Digital Design Lecture 14 FIFO 2 and SIFT Oct. 15, 2013 Prof. Ronald Fearing Electrical Engineering and Computer Sciences University of California, Berkeley (slides courtesy of Prof. John Wawrzynek)

More information

Exterior Orientation Parameters

Exterior Orientation Parameters Exterior Orientation Parameters PERS 12/2001 pp 1321-1332 Karsten Jacobsen, Institute for Photogrammetry and GeoInformation, University of Hannover, Germany The georeference of any photogrammetric product

More information

ES302 Class Notes Trigonometric Applications in Geologic Problem Solving (updated Spring 2016)

ES302 Class Notes Trigonometric Applications in Geologic Problem Solving (updated Spring 2016) ES302 Class Notes Trigonometric Applications in Geologic Problem Solving (updated Spring 2016) I. Introduction a) Trigonometry study of angles and triangles i) Intersecting Lines (1) Points of intersection

More information

CS334: Digital Imaging and Multimedia Edges and Contours. Ahmed Elgammal Dept. of Computer Science Rutgers University

CS334: Digital Imaging and Multimedia Edges and Contours. Ahmed Elgammal Dept. of Computer Science Rutgers University CS334: Digital Imaging and Multimedia Edges and Contours Ahmed Elgammal Dept. of Computer Science Rutgers University Outlines What makes an edge? Gradient-based edge detection Edge Operators From Edges

More information

Lecture 4: Image Processing

Lecture 4: Image Processing Lecture 4: Image Processing Definitions Many graphics techniques that operate only on images Image processing: operations that take images as input, produce images as output In its most general form, an

More information

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors

Texture. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors. Frequency Descriptors Texture The most fundamental question is: How can we measure texture, i.e., how can we quantitatively distinguish between different textures? Of course it is not enough to look at the intensity of individual

More information

Geometry. Cluster: Experiment with transformations in the plane. G.CO.1 G.CO.2. Common Core Institute

Geometry. Cluster: Experiment with transformations in the plane. G.CO.1 G.CO.2. Common Core Institute Geometry Cluster: Experiment with transformations in the plane. G.CO.1: Know precise definitions of angle, circle, perpendicular line, parallel line, and line segment, based on the undefined notions of

More information

Image representation. 1. Introduction

Image representation. 1. Introduction Image representation Introduction Representation schemes Chain codes Polygonal approximations The skeleton of a region Boundary descriptors Some simple descriptors Shape numbers Fourier descriptors Moments

More information

EE795: Computer Vision and Intelligent Systems

EE795: Computer Vision and Intelligent Systems EE795: Computer Vision and Intelligent Systems Spring 2012 TTh 17:30-18:45 WRI C225 Lecture 02 130124 http://www.ee.unlv.edu/~b1morris/ecg795/ 2 Outline Basics Image Formation Image Processing 3 Intelligent

More information

CV: 3D to 2D mathematics. Perspective transformation; camera calibration; stereo computation; and more

CV: 3D to 2D mathematics. Perspective transformation; camera calibration; stereo computation; and more CV: 3D to 2D mathematics Perspective transformation; camera calibration; stereo computation; and more Roadmap of topics n Review perspective transformation n Camera calibration n Stereo methods n Structured

More information

Image restoration. Restoration: Enhancement:

Image restoration. Restoration: Enhancement: Image restoration Most images obtained by optical, electronic, or electro-optic means is likely to be degraded. The degradation can be due to camera misfocus, relative motion between camera and object,

More information

Geometry 10 and 11 Notes

Geometry 10 and 11 Notes Geometry 10 and 11 Notes Area and Volume Name Per Date 10.1 Area is the amount of space inside of a two dimensional object. When working with irregular shapes, we can find its area by breaking it up into

More information

Chapter 4. Clustering Core Atoms by Location

Chapter 4. Clustering Core Atoms by Location Chapter 4. Clustering Core Atoms by Location In this chapter, a process for sampling core atoms in space is developed, so that the analytic techniques in section 3C can be applied to local collections

More information

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014

SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT: SCALE INVARIANT FEATURE TRANSFORM SURF: SPEEDED UP ROBUST FEATURES BASHAR ALSADIK EOS DEPT. TOPMAP M13 3D GEOINFORMATION FROM IMAGES 2014 SIFT SIFT: Scale Invariant Feature Transform; transform image

More information

Schedule for Rest of Semester

Schedule for Rest of Semester Schedule for Rest of Semester Date Lecture Topic 11/20 24 Texture 11/27 25 Review of Statistics & Linear Algebra, Eigenvectors 11/29 26 Eigenvector expansions, Pattern Recognition 12/4 27 Cameras & calibration

More information

Chapter 5. Transforming Shapes

Chapter 5. Transforming Shapes Chapter 5 Transforming Shapes It is difficult to walk through daily life without being able to see geometric transformations in your surroundings. Notice how the leaves of plants, for example, are almost

More information

CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION

CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION 60 CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION 3.1 IMPORTANCE OF OPTIC DISC Ocular fundus images provide information about ophthalmic, retinal and even systemic diseases such as hypertension, diabetes, macular

More information

10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1.

10/5/09 1. d = 2. Range Sensors (time of flight) (2) Ultrasonic Sensor (time of flight, sound) (1) Ultrasonic Sensor (time of flight, sound) (2) 4.1. Range Sensors (time of flight) (1) Range Sensors (time of flight) (2) arge range distance measurement -> called range sensors Range information: key element for localization and environment modeling Ultrasonic

More information

Motion Tracking and Event Understanding in Video Sequences

Motion Tracking and Event Understanding in Video Sequences Motion Tracking and Event Understanding in Video Sequences Isaac Cohen Elaine Kang, Jinman Kang Institute for Robotics and Intelligent Systems University of Southern California Los Angeles, CA Objectives!

More information

Edge Detection (with a sidelight introduction to linear, associative operators). Images

Edge Detection (with a sidelight introduction to linear, associative operators). Images Images (we will, eventually, come back to imaging geometry. But, now that we know how images come from the world, we will examine operations on images). Edge Detection (with a sidelight introduction to

More information

PITSCO Math Individualized Prescriptive Lessons (IPLs)

PITSCO Math Individualized Prescriptive Lessons (IPLs) Orientation Integers 10-10 Orientation I 20-10 Speaking Math Define common math vocabulary. Explore the four basic operations and their solutions. Form equations and expressions. 20-20 Place Value Define

More information

Edges and Lines Readings: Chapter 10: better edge detectors line finding circle finding

Edges and Lines Readings: Chapter 10: better edge detectors line finding circle finding Edges and Lines Readings: Chapter 10: 10.2.3-10.3 better edge detectors line finding circle finding 1 Lines and Arcs Segmentation In some image sets, lines, curves, and circular arcs are more useful than

More information

Issues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They can be performed sequentially or simultaneou

Issues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They can be performed sequentially or simultaneou an edge image, nd line or curve segments present Given the image. in Line and Curves Detection 1 Issues with Curve Detection Grouping (e.g., the Canny hysteresis thresholding procedure) Model tting They

More information

Geometry Practice. 1. Angles located next to one another sharing a common side are called angles.

Geometry Practice. 1. Angles located next to one another sharing a common side are called angles. Geometry Practice Name 1. Angles located next to one another sharing a common side are called angles. 2. Planes that meet to form right angles are called planes. 3. Lines that cross are called lines. 4.

More information

Edges and Lines Readings: Chapter 10: better edge detectors line finding circle finding

Edges and Lines Readings: Chapter 10: better edge detectors line finding circle finding Edges and Lines Readings: Chapter 10: 10.2.3-10.3 better edge detectors line finding circle finding 1 Lines and Arcs Segmentation In some image sets, lines, curves, and circular arcs are more useful than

More information