Binary Image Proc: week 2
1 Binary Image Processing: Week 2
- Imaging devices
- Getting a binary image
- Morphological filtering
- Connected components extraction
- Extracting numerical features from regions
CSE803 Fall 2017

2 Imaging Devices
Summary of several common imaging devices (Chapter 2 of Shapiro and Stockman).
MSU/CSE Fall 2017

3 Major issues
- What radiation is sensed?
- Is motion used to scan the scene, or are all sensing elements static?
- How fast can sensing occur?
- What are the sensing elements?
- How is resolution determined -- in intensity and in space?

4 CCD-type camera
- Array of small fixed elements
- Can read faster than TV rates
- Can add refracting elements to get color in 2x2 neighborhoods
- 8-bit intensity is common

5 Color CCD: Bayer filter

6 Blooming problem with arrays
It is difficult to insulate adjacent sensing elements: charge often leaks from hot cells to neighbors, making bright regions larger.

7 8-bit intensity can be clipped
The dark grid intersections at left were actually the brightest parts of the scene; in A/D conversion the bright values were clipped to lower values.

8 Lens distortion distorts the image
Barrel distortion of a rectangular grid is common for cheap ($50) lenses; pincushion distortion is the opposite; mustache distortion combines both. Precision lenses can cost $1000 or more. Zoom lenses often show severe distortion.

9 Other important effects/problems
- Discrete pixel effect: the signal is integrated over the pixel's area
- Mixed pixel effect: materials mix when a pixel straddles region/material A and region/material B
- Thin features can be missed
- Measurements in pixels have error
10 Orbiting satellite scanner
- Views the earth one pixel at a time (through a straw)
- A prism produces a multispectral pixel
- An image row is built by scanning the boresight; all rows by the motion of the satellite in orbit
- The scanned area of earth is a parallelogram, not a rectangle

11 Human eye as a spherical camera
- 100M sensing elements in the retina: rods sense intensity, cones sense color
- The fovea has tightly packed elements, more cones; the periphery has more rods
- Focal length is about 20mm
- The pupil/iris controls light entry
- The eye scans, or saccades, to image details on the fovea
- 100M sensing cells funnel to 1M optic nerve connections to the brain

12 Surface data (2.5D) sensed by a structured light sensor
The projector projects a plane of light on the object; the camera sees bright points along an imaging ray; the 3D surface point is computed via line-plane intersection. (REF: Minolta Vivid 910 camera)

13 Structured light

14 2.5D face image from the Minolta Vivid 910 scanner in the CSE PRIP Lab

15 Freehand laser scanning using a mobile phone

16 LIDAR also senses surfaces
A single sensing element scans the scene; laser light is reflected off the surface and returned. The phase shift codes distance; the brightness change codes albedo. (Video: watch?v=ebucgxzq_xg)

17 What about human vision?
Do we compute the surfaces in our environment? Do we represent them in our memory as we move around or search? Do we save representations of familiar objects? (See David Marr, Vision, 1982; Aloimonos and Rosenfeld 1991.)
18 3D scanning technology
A 3D image of voxels is obtained, usually by a computationally expensive reconstruction of 3D from many 2D scans (CAT: computer-aided tomography). More on this later in the course.

19 Magnetic Resonance Imaging
- Senses the density of certain chemistry
- S slices x R rows x C columns; a volume element (voxel) is about 2mm per side
- At left is a shaded 2D image created by volume-rendering a 3D volume: darkness codes depth

20 Single slice through a human head
MRIs are computed structures, computed from many views. At left is an MRA (angiograph), which shows blood flow. CAT scans are computed in much the same manner from X-ray transmission data.

21 Other variations
- Microscopes, telescopes, endoscopes, X-rays: radiation passes through objects to sensor elements on the other side
- Fibers can carry an image around curves: in bodies, in machine tools
- Pressure arrays create images (fingerprints, butts)
- Sonar, stereo, focus, etc. can be used for range sensing (see Chapters 2 and 3)

22 Summary of some digital imaging problems
- Mixed pixel problem: mixed materials in the world/scene map to a single pixel
- Loss of thin/small features due to resolution
- Variations in area or perimeter due to variation in object location and thresholding
- Path problems: refraction, dispersion, etc. take radiation off the straight path (does blue light bend more or less than red going through a lens?)

23 Quick look at thresholding
- Separate objects from background
- Is it a 2-class or many-class problem?
- How to do it? Methods are discussed later.

24 Cherry image shows 3 regions
- The background is black, the healthy cherry is bright, and the bruise is medium dark
- The histogram shows the two cherry regions (the black background has been removed); use the gray value between them to separate

25 Choosing a threshold
- It is common to find the deepest valley between the two modes of a bimodal histogram
- Or, level-slice using the intensity values a and b that bracket the mode of the desired objects
- Can fit two or more Gaussian curves to the histogram
- Can do optimization on the above (Ohta et al.)
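The valley-seeking rule on this slide can be sketched in a few lines of pure Python. This is an illustrative sketch, not the course code; the histogram values are made up, and the function assumes the histogram has at least two local maxima:

```python
def valley_threshold(histogram):
    """Pick a threshold at the deepest valley between the two largest
    modes (local maxima) of a gray-level histogram."""
    # Find all local maxima, then keep the two tallest.
    peaks = [i for i in range(1, len(histogram) - 1)
             if histogram[i] >= histogram[i - 1]
             and histogram[i] >= histogram[i + 1]]
    peaks.sort(key=lambda i: histogram[i], reverse=True)
    lo, hi = sorted(peaks[:2])
    # The threshold is the lowest bin between the two modes.
    return min(range(lo, hi + 1), key=lambda i: histogram[i])

# Bimodal histogram: a dark mode near bin 2, a bright mode near bin 8.
hist = [1, 4, 9, 5, 2, 1, 3, 7, 10, 6]
print(valley_threshold(hist))  # -> 5 (the deepest bin between the modes)
```

Pixels at or below the returned bin go to one class, the rest to the other; smoothing the histogram first would make the peak search more robust on real images.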
26 Example red blood cell image
- Many blood cells are separate objects
- Many touch -- bad!
- Salt-and-pepper noise from thresholding
- How usable is this data?

27 Cleaning up thresholding results
- Can delete object pixels on the boundary to better separate parts
- Can fill small holes
- Can delete tiny objects
- (The last two are salt-and-pepper noise)

28 Removing salt-and-pepper noise
- Change pixels all (or some?) of whose neighbors are different (coercion!): see the hole filled at right
- Delete objects that are tiny relative to the targets: see some islands removed at right

29 Simple morphological cleanup
- Can be done just after thresholding -- remove salt and pepper
- Can be done after connected components are extracted -- discard regions that are too small or too large to be the target
30 Dilation & Erosion
The basic operations, dual to each other: erosion shrinks the foreground and enlarges the background; dilation enlarges the foreground and shrinks the background. (Slides contributed by Volker Krüger & Rune Andersen, University of Aalborg, Esbjerg.)
31-39 A first example: erosion
Erosion is an important morphological operation. These slides step the structuring element across the input image pixel by pixel and build up the output image: an output pixel is foreground only where the structuring element, positioned at that pixel, fits entirely inside the input foreground. (Step-by-step figures: input image, structuring element, output image.)
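A minimal pure-Python sketch of the operation stepped through above, using a 3x3 cross structuring element. The image below is illustrative, not the slides' example:

```python
def erode(img, se=((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0))):
    """Binary erosion: keep a foreground pixel only if every structuring-
    element offset from it also lands on a foreground pixel
    (pixels outside the image count as background)."""
    rows, cols = len(img), len(img[0])
    return [[1 if all(0 <= r + dr < rows and 0 <= c + dc < cols
                      and img[r + dr][c + dc] == 1 for dr, dc in se) else 0
             for c in range(cols)] for r in range(rows)]

# A 3x3 block erodes down to its single interior pixel under the cross SE.
blob = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
print(erode(blob)[2])  # -> [0, 0, 1, 0, 0]
```

Passing a different tuple of (row, column) offsets as `se` changes the shape of the structuring element.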
40 Another example of erosion
With white = 0 and black = 1 (the dual property), the image gets darker as a result of erosion.

41 Counting coins
Counting coins is difficult because they touch each other! Solution: binarization and erosion separate them.
42-50 Example: dilation
Dilation is an important morphological operation. These slides step the structuring element across the input image: an output pixel is foreground wherever the structuring element, positioned at that pixel, touches at least one foreground pixel of the input. (Step-by-step figures: input image, structuring element, output image.)
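The dual of the erosion sketch, again a hedged pure-Python illustration with a 3x3 cross structuring element (for a symmetric element like this cross, reflecting the element makes no difference):

```python
def dilate(img, se=((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0))):
    """Binary dilation: a pixel becomes foreground if any structuring-
    element offset from it lands on a foreground pixel."""
    rows, cols = len(img), len(img[0])
    return [[1 if any(0 <= r + dr < rows and 0 <= c + dc < cols
                      and img[r + dr][c + dc] == 1 for dr, dc in se) else 0
             for c in range(cols)] for r in range(rows)]

# A single foreground pixel grows into a cross under the cross-shaped SE.
dot = [[0, 0, 0],
       [0, 1, 0],
       [0, 0, 0]]
print(dilate(dot))  # -> [[0, 1, 0], [1, 1, 1], [0, 1, 0]]
```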
51 Another dilation example
The image gets lighter, with more uniform intensity.

52 Edge detection
1. Dilate the input image
2. Subtract the input image from the dilated image
3. Edges remain!

53 Opening & Closing
Important operations derived from the fundamental ones: erosion followed by dilation -> opening; dilation followed by erosion -> closing. Usually applied to binary images, but gray-value images are also possible. Opening and closing are dual operations.
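The two compositions can be sketched by noting that with a symmetric structuring element, erosion and dilation differ only in whether `all` or `any` of the covered pixels must be foreground. The helper `_morph` and the test image are illustrative assumptions, not the slides' code:

```python
def _morph(img, mode):
    """One 3x3-cross morphological pass: mode=all gives erosion,
    mode=any gives dilation (out-of-image pixels count as background)."""
    rows, cols = len(img), len(img[0])
    offs = ((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0))
    return [[1 if mode(0 <= r + dr < rows and 0 <= c + dc < cols
                       and img[r + dr][c + dc] == 1 for dr, dc in offs) else 0
             for c in range(cols)] for r in range(rows)]

def opening(img):
    """Erosion then dilation: removes specks smaller than the SE."""
    return _morph(_morph(img, all), any)

def closing(img):
    """Dilation then erosion: fills holes smaller than the SE."""
    return _morph(_morph(img, any), all)

# Opening deletes the isolated salt pixel at the top-left corner,
# while the larger blob survives (somewhat smoothed by the cross SE).
noisy = [[1, 0, 0, 0, 0],
         [0, 0, 1, 1, 1],
         [0, 0, 1, 1, 1],
         [0, 0, 1, 1, 1],
         [0, 0, 0, 0, 0]]
print(opening(noisy)[0][0], opening(noisy)[2][3])  # -> 0 1
```

This is exactly the "remove salt and pepper" cleanup of slide 29: opening clears salt, closing clears pepper.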
54 Connected components
- Assume thresholding obtained a binary image
- Aggregate the pixels of each object
- 2 different program controls; different uses of data structures
- Related to paint/search algorithms
- Compute features from each object region

55 CC analysis of red blood cells
- 63 separate objects detected
- Single cells have area about 50
- Noise spots
- Gobs of touching cells

56 More control of imaging
- More uniform objects, more uniform background
- Thresholding works
- Objects are actually separated

57 Results of pacmen analysis
- 5 objects detected; location and area known
- 3 distinct clusters among the 5 area values: 85, 45, 293

58 Results of coloring objects
- Each object is a connected set of pixels; the object label is a color
- How is this done?

59 Extracting white regions
The program learns white from a training set of sample pixels, then aggregates similar white neighbors to form regions; regions can then be classified as characters. (Left) input RGB image of a NASA sign; (right) output labeled image. (Work contributed by David Moore, CSE 803.)
60 Extracting components: Alg A
Collect connected foreground pixels into separate objects; label pixels of the same object with the same color. (A) collect by random access of pixels using a paint or fill algorithm.

61 Paint/fill algorithm
- The object region must be bounded by background
- Start at any pixel [r,c] inside the object
- Recursively color the neighbors

62 Events of the paint/fill algorithm
PP denotes the processing point.
- If PP is outside the image, return to the prior PP
- If PP is already labeled, return to the prior PP
- If PP is a background pixel, return to the prior PP
- If PP is an unlabeled object pixel, then (1) label PP with the current color code and (2) recursively label neighbors N1, ..., N8 (or N1, ..., N4)

63 Recursive paint/fill algorithm: region
Color the closed boundary with L, choose a pixel [r,c] inside the boundary, and call FILL:

  FILL(I, r, c, L):
    if [r,c] is outside the image, return
    if I[r,c] == L, return
    I[r,c] <- L            // color it
    for all neighbors [rn,cn]:
      FILL(I, rn, cn, L)

64 Recall the different neighborhoods
- 4-connected: right, up, left, down only
- 8-connected: the 45-degree diagonals also
- If the pixels [r,c] of an image region R are 4-connected, then there exists a path from any pixel [r1,c1] in R to any other pixel [r2,c2] in R using only 4-neighbors
- The coloring algorithm is guaranteed to color all connected pixels of a region. Why?

65 Correctness of recursive coloring
- If any pixel P is colored L, then all of its object neighbors must be colored L (because FILL is recursively called on each neighbor).
- If pixels P and Q are in a connected region R, then there must be a path from P to Q inside R.
- If FILL does not color Q, let X be the first uncolored pixel on such a path; the pixel N before X on the path is then colored. But if N is colored, X must be too! This contradiction proves the property: all of the path must be colored.

66 Connected components using recursive paint/fill
- Raster scan until an object pixel is found
- Assign a new color for the new object
- Search through all connected neighbors until the entire object is labeled
- Return to the raster scan to look for another object pixel
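The raster-scan-plus-fill control above can be sketched as follows. This is an illustrative Python version; it uses an explicit stack instead of recursion, since (as slide 68 warns) the runtime stack can overflow on large regions:

```python
def label_components(img):
    """Connected components by raster scan + flood fill (Alg A).
    Labels 4-connected foreground regions 2, 3, ... in a new label image;
    an explicit stack replaces the recursive FILL calls."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 2              # start at 2 so labels never collide with 0/1
    for r0 in range(rows):      # raster scan for an unlabeled object pixel
        for c0 in range(cols):
            if img[r0][c0] == 1 and labels[r0][c0] == 0:
                stack = [(r0, c0)]
                while stack:    # flood-fill the whole object
                    r, c = stack.pop()
                    if not (0 <= r < rows and 0 <= c < cols):
                        continue                     # outside the image
                    if img[r][c] != 1 or labels[r][c] != 0:
                        continue                     # background or done
                    labels[r][c] = next_label
                    stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
                next_label += 1
    return labels

img = [[1, 1, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 1, 1]]
print(label_components(img))  # -> [[2, 2, 0, 0], [2, 0, 0, 3], [0, 0, 3, 3]]
```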
67 Extracting 5 objects (example figure)

68 Outside material to cover
- Look at C++ functions for raster scanning and pixel propagation
- Study the related fill algorithm
- Discuss how the recursion works
- Prove that all pixels connected to the start pixel must be colored the same
- Note: the runtime stack can overflow if a region has many pixels

69 Alg B: raster scan control
Visit each image pixel once, going by row and then by column. Propagate color to the neighbors below and to the right. Keep track of merging colors.

70 Have caution in programming
- Note how your image tool coordinates the pixels
- Different tools and programming languages handle rows and columns differently
- The text uses EE-TV coordinates (row, col) or standard math coordinates (x, y)

71 Events controlled by the neighbors
- If all Ni are background, then PP gets a new color code
- If all Ni have the same color L, then PP gets L
- If Ni != Nj, then take the smallest code and make all the same
- See Ch. 3.4 of S&S

72 Merging connecting regions
Detect and record merges while raster scanning; use an equivalence table to recode. (In practice, one probably has to start with color = 2, not 1.)
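The events of slides 71-72 can be sketched as a two-pass labeling with an equivalence table (a simple union-find parent table). This is an illustrative sketch, assuming 4-connectivity, so only the already-visited upper and left neighbors are inspected:

```python
def two_pass_label(img):
    """Alg B: raster-scan connected components. Pass 1 assigns labels
    from the up/left neighbors and records merges in an equivalence
    table; pass 2 recodes every pixel to its root label."""
    rows, cols = len(img), len(img[0])
    labels = [[0] * cols for _ in range(rows)]
    parent = {}                        # equivalence table

    def find(a):                       # follow parents to the root label
        while parent[a] != a:
            a = parent[a]
        return a

    nxt = 2                            # start at 2, not 1 (slide 72)
    for r in range(rows):
        for c in range(cols):
            if img[r][c] != 1:
                continue
            up = labels[r - 1][c] if r > 0 else 0
            left = labels[r][c - 1] if c > 0 else 0
            neighbors = [l for l in (up, left) if l]
            if not neighbors:          # all neighbors background: new code
                parent[nxt] = nxt
                labels[r][c] = nxt
                nxt += 1
            else:                      # take the smallest code, record merge
                smallest = min(find(l) for l in neighbors)
                labels[r][c] = smallest
                for l in neighbors:
                    parent[find(l)] = smallest
    for r in range(rows):              # pass 2: recode equivalences
        for c in range(cols):
            if labels[r][c]:
                labels[r][c] = find(labels[r][c])
    return labels

img = [[1, 0, 1],
       [1, 1, 1]]
print(two_pass_label(img))  # the U shape merges into one label: [[2, 0, 2], [2, 2, 2]]
```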
73 Alg A versus Alg B
Alg A (recursive fill):
- Visits pixels more than once
- Needs the full image
- Recursion or stacking is slower than B
- No need to recolor
- Can compute features on the fly
- Can quit if the searched-for object is found (tracking?)
Alg B (raster scan):
- Visits each pixel once
- Needs only 2 rows of the image at a time
- Must merge colors and region features when regions merge
- Typically faster
- Not suited to a heuristic (smart) start pixel

74 Outside material
- More examples of raster scanning
- The union-find algorithm and parent table
- Computing features from a labeled object region
- More on recursion and C++

75 Computing features of regions
Can postprocess the results of the CC algorithm, or compute the features as the pixels are aggregated.

76 Area and centroid
The area is the number of pixels in the region; the centroid is the mean row and mean column of the region's pixels.
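A minimal sketch of these two features, assuming the region is given as a list of (row, column) pairs collected from the labeled image:

```python
def area_centroid(pixels):
    """Area = pixel count; centroid = mean row and mean column of the
    region's pixels, given as a list of (r, c) pairs."""
    area = len(pixels)
    r_bar = sum(r for r, _ in pixels) / area
    c_bar = sum(c for _, c in pixels) / area
    return area, r_bar, c_bar

# A 2x4 block of pixels: rows 2-3, columns 10-13.
square = [(r, c) for r in range(2, 4) for c in range(10, 14)]
print(area_centroid(square))  # -> (8, 2.5, 11.5)
```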
77 Bounding box: BB
- The smallest rectangle containing all of the region's pixels
- r-low is the lowest row value; c-low is the lowest column value; r-high and c-high are the highest values
- The bounding box has diagonal corners [r-low, c-low] and [r-high, c-high]
- BBs summarize where an object is; they can be used to rule out near collisions

78 Second moments
The second moments are taken about the centroid, so they are invariant to the object's location in the image.
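A sketch of the three central second moments (row-row, row-column, column-column), written to make the location invariance easy to check; the region representation as (r, c) pairs is an assumption of this example:

```python
def second_moments(pixels):
    """Central second moments mu_rr, mu_rc, mu_cc of a region, taken
    about the centroid so they ignore the object's position."""
    a = len(pixels)
    rb = sum(r for r, _ in pixels) / a
    cb = sum(c for _, c in pixels) / a
    mrr = sum((r - rb) ** 2 for r, _ in pixels) / a
    mrc = sum((r - rb) * (c - cb) for r, c in pixels) / a
    mcc = sum((c - cb) ** 2 for _, c in pixels) / a
    return mrr, mrc, mcc

# The same horizontal bar translated: the moments match, so they are
# invariant to the object's location in the image.
bar = [(0, c) for c in range(5)]
shifted = [(r + 7, c + 3) for r, c in bar]
print(second_moments(bar) == second_moments(shifted))  # -> True
```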
79 Contrast the second moments
For the letter I versus the letter O versus the underline _: the row and column moments differ characteristically.

80 Perimeter pixels and perimeter length
How do we find these boundary pixels?

81 Circularity or elongation
Computing P (the perimeter) is more trouble than computing A (the area).

82 Circularity as the variance of the radius

83 Axis of least (most) inertia
- Gives an object-oriented, or natural, coordinate system
- Passes through the centroid
- The axis of most inertia is perpendicular to it
- The concept extends to 3D and nD

84 Approach (in Cartesian/math coordinates)
- First assume the axis goes through the centroid
- Assume the axis makes an arbitrary angle Θ with the horizontal axis
- Write the formula for the angular momentum (inertia) to rotate a single unit of mass (pixel) about the axis
- Write the formula for the sum over all pixels
- Take the derivative of the formula to find the Θ of the extreme values

85 Rotational inertia of a pixel
(Figure: a pixel at [x,y] lies at perpendicular distance d from the axis of inertia, which passes through the centroid of the pixels at angle Θ; the normal direction is Θ + 90.)

86 Overall steps
- d = [x, y] projected along the normal to the axis = -x sin Θ + y cos Θ
- The inertia of a single pixel is d squared
- The inertia of the entire region is the sum over all pixels:
  I = Σ (-x sin Θ + y cos Θ)^2
    = sin^2 Θ Σ x^2 - 2 sin Θ cos Θ Σ xy + cos^2 Θ Σ y^2
    = a sin^2 Θ - 2b sin Θ cos Θ + c cos^2 Θ
  where a, b, c are the second moments.

87 Final form
- Differentiating I with respect to Θ and setting the result to 0 gives 0 = (a - c) tan 2Θ - 2b
- So tan 2Θ = 2b / (a - c) -- pretty simple
- What if all pixels [x,y] lie on a circle? What if they are all on a line?
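The final formula can be sketched directly. This illustrative version uses `atan2` so the circle case (a == c, b == 0, where every axis is an extremum) does not divide by zero; the choice of `atan2(2b, a - c)` picking the least-inertia axis is a standard convention, shown here under that assumption:

```python
import math

def axis_of_least_inertia(pixels):
    """Angle (radians, from the x axis) of the axis of least inertia,
    via tan 2Θ = 2b / (a - c), where a = Σx'^2, b = Σx'y', c = Σy'^2
    are the second moments about the centroid."""
    n = len(pixels)
    xb = sum(x for x, _ in pixels) / n
    yb = sum(y for _, y in pixels) / n
    a = sum((x - xb) ** 2 for x, _ in pixels)
    b = sum((x - xb) * (y - yb) for x, y in pixels)
    c = sum((y - yb) ** 2 for _, y in pixels)
    return 0.5 * math.atan2(2 * b, a - c)

# Pixels on the line y = x: the natural axis is at 45 degrees,
# and the inertia about that axis is zero.
line = [(t, t) for t in range(5)]
print(math.degrees(axis_of_least_inertia(line)))  # -> 45.0
```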
88 As in the text, with the centroid (formula figure)

89 Formula for the best axis
- Use least squares to derive the tangent angle θ of the axis of least inertia
- Express tan 2θ in terms of the 3 second moments
- Interpret the formula for a circle of pixels and for a straight line of pixels

90 Applications
"An iterative approach for fitting multiple connected ellipse structure to silhouette," PRL, 200.
More informationCS6670: Computer Vision
CS6670: Computer Vision Noah Snavely Lecture 21: Light, reflectance and photometric stereo Announcements Final projects Midterm reports due November 24 (next Tuesday) by 11:59pm (upload to CMS) State the
More informationComputer and Machine Vision
Computer and Machine Vision Lecture Week 10 Part-2 Skeletal Models and Face Detection March 21, 2014 Sam Siewert Outline of Week 10 Lab #4 Overview Lab #5 and #6 Extended Lab Overview SIFT and SURF High
More informationMorphological Image Processing
Morphological Image Processing Binary image processing In binary images, we conventionally take background as black (0) and foreground objects as white (1 or 255) Morphology Figure 4.1 objects on a conveyor
More informationReflection & Refraction
Reflection & Refraction Reflection Occurs when light hits a medium and bounces back towards the direction it came from Reflection is what allows us to see objects: Lights reflects off an object and travels
More informationDigital Image Processing COSC 6380/4393
Digital Image Processing COSC 6380/4393 Lecture 21 Nov 16 th, 2017 Pranav Mantini Ack: Shah. M Image Processing Geometric Transformation Point Operations Filtering (spatial, Frequency) Input Restoration/
More informationECE 172A: Introduction to Intelligent Systems: Machine Vision, Fall Midterm Examination
ECE 172A: Introduction to Intelligent Systems: Machine Vision, Fall 2008 October 29, 2008 Notes: Midterm Examination This is a closed book and closed notes examination. Please be precise and to the point.
More informationMassachusetts Institute of Technology. Department of Computer Science and Electrical Engineering /6.866 Machine Vision Quiz I
Massachusetts Institute of Technology Department of Computer Science and Electrical Engineering 6.801/6.866 Machine Vision Quiz I Handed out: 2004 Oct. 21st Due on: 2003 Oct. 28th Problem 1: Uniform reflecting
More informationLecture 6: Multimedia Information Retrieval Dr. Jian Zhang
Lecture 6: Multimedia Information Retrieval Dr. Jian Zhang NICTA & CSE UNSW COMP9314 Advanced Database S1 2007 jzhang@cse.unsw.edu.au Reference Papers and Resources Papers: Colour spaces-perceptual, historical
More informationLecture 19: Depth Cameras. Visual Computing Systems CMU , Fall 2013
Lecture 19: Depth Cameras Visual Computing Systems Continuing theme: computational photography Cameras capture light, then extensive processing produces the desired image Today: - Capturing scene depth
More informationRay Optics. Ray model Reflection Refraction, total internal reflection Color dispersion Lenses Image formation Magnification Spherical mirrors
Ray Optics Ray model Reflection Refraction, total internal reflection Color dispersion Lenses Image formation Magnification Spherical mirrors 1 Ray optics Optical imaging and color in medicine Integral
More informationChapter 12 Notes: Optics
Chapter 12 Notes: Optics How can the paths traveled by light rays be rearranged in order to form images? In this chapter we will consider just one form of electromagnetic wave: visible light. We will be
More informationBSB663 Image Processing Pinar Duygulu. Slides are adapted from Selim Aksoy
BSB663 Image Processing Pinar Duygulu Slides are adapted from Selim Aksoy Image matching Image matching is a fundamental aspect of many problems in computer vision. Object or scene recognition Solving
More informationPractice Exam Sample Solutions
CS 675 Computer Vision Instructor: Marc Pomplun Practice Exam Sample Solutions Note that in the actual exam, no calculators, no books, and no notes allowed. Question 1: out of points Question 2: out of
More informationCHAPTER 3 RETINAL OPTIC DISC SEGMENTATION
60 CHAPTER 3 RETINAL OPTIC DISC SEGMENTATION 3.1 IMPORTANCE OF OPTIC DISC Ocular fundus images provide information about ophthalmic, retinal and even systemic diseases such as hypertension, diabetes, macular
More informationImage Formation. Antonino Furnari. Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania
Image Formation Antonino Furnari Image Processing Lab Dipartimento di Matematica e Informatica Università degli Studi di Catania furnari@dmi.unict.it 18/03/2014 Outline Introduction; Geometric Primitives
More informationDigital Image Processing
Digital Image Processing Lecture # 4 Digital Image Fundamentals - II ALI JAVED Lecturer SOFTWARE ENGINEERING DEPARTMENT U.E.T TAXILA Email:: ali.javed@uettaxila.edu.pk Office Room #:: 7 Presentation Outline
More informationLecture 10: Image Descriptors and Representation
I2200: Digital Image processing Lecture 10: Image Descriptors and Representation Prof. YingLi Tian Nov. 15, 2017 Department of Electrical Engineering The City College of New York The City University of
More informationPerception. Autonomous Mobile Robots. Sensors Vision Uncertainties, Line extraction from laser scans. Autonomous Systems Lab. Zürich.
Autonomous Mobile Robots Localization "Position" Global Map Cognition Environment Model Local Map Path Perception Real World Environment Motion Control Perception Sensors Vision Uncertainties, Line extraction
More informationHome Lab 7 Refraction, Ray Tracing, and Snell s Law
Home Lab Week 7 Refraction, Ray Tracing, and Snell s Law Home Lab 7 Refraction, Ray Tracing, and Snell s Law Activity 7-1: Snell s Law Objective: Verify Snell s law Materials Included: Laser pointer Cylindrical
More informationBabu Madhav Institute of Information Technology Years Integrated M.Sc.(IT)(Semester - 7)
5 Years Integrated M.Sc.(IT)(Semester - 7) 060010707 Digital Image Processing UNIT 1 Introduction to Image Processing Q: 1 Answer in short. 1. What is digital image? 1. Define pixel or picture element?
More informationCS201 Computer Vision Lect 4 - Image Formation
CS201 Computer Vision Lect 4 - Image Formation John Magee 9 September, 2014 Slides courtesy of Diane H. Theriault Question of the Day: Why is Computer Vision hard? Something to think about from our view
More informationImage Acquisition + Histograms
Image Processing - Lesson 1 Image Acquisition + Histograms Image Characteristics Image Acquisition Image Digitization Sampling Quantization Histograms Histogram Equalization What is an Image? An image
More informationImage matching. Announcements. Harder case. Even harder case. Project 1 Out today Help session at the end of class. by Diva Sian.
Announcements Project 1 Out today Help session at the end of class Image matching by Diva Sian by swashford Harder case Even harder case How the Afghan Girl was Identified by Her Iris Patterns Read the
More information2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into
2D rendering takes a photo of the 2D scene with a virtual camera that selects an axis aligned rectangle from the scene. The photograph is placed into the viewport of the current application window. A pixel
More informationECG782: Multidimensional Digital Signal Processing
Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu ECG782: Multidimensional Digital Signal Processing Spring 2014 TTh 14:30-15:45 CBC C313 Lecture 03 Image Processing Basics 13/01/28 http://www.ee.unlv.edu/~b1morris/ecg782/
More information(Refer Slide Time 00:17) Welcome to the course on Digital Image Processing. (Refer Slide Time 00:22)
Digital Image Processing Prof. P. K. Biswas Department of Electronics and Electrical Communications Engineering Indian Institute of Technology, Kharagpur Module Number 01 Lecture Number 02 Application
More informationChapter 11 Representation & Description
Chain Codes Chain codes are used to represent a boundary by a connected sequence of straight-line segments of specified length and direction. The direction of each segment is coded by using a numbering
More informationMathematics in Orbit
Mathematics in Orbit Dan Kalman American University Slides and refs at www.dankalman.net Outline Basics: 3D geospacial models Keyhole Problem: Related Rates! GPS: space-time triangulation Sensor Diagnosis:
More informationAnnouncements. Binary Image Processing. Binary System Summary. Histogram-based Segmentation. How do we select a Threshold?
Announcements Binary Image Processing Homework is due Apr 24, :59 PM Homework 2 will be assigned this week Reading: Chapter 3 Image processing CSE 52 Lecture 8 Binary System Summary. Acquire images and
More informationChapter 26 Geometrical Optics
Chapter 26 Geometrical Optics 1 Overview of Chapter 26 The Reflection of Light Forming Images with a Plane Mirror Spherical Mirrors Ray Tracing and the Mirror Equation The Refraction of Light Ray Tracing
More informationLocal Features: Detection, Description & Matching
Local Features: Detection, Description & Matching Lecture 08 Computer Vision Material Citations Dr George Stockman Professor Emeritus, Michigan State University Dr David Lowe Professor, University of British
More informationChapter 26 Geometrical Optics
Chapter 26 Geometrical Optics 26.1 The Reflection of Light 26.2 Forming Images With a Plane Mirror 26.3 Spherical Mirrors 26.4 Ray Tracing and the Mirror Equation 26.5 The Refraction of Light 26.6 Ray
More information9-1 GCSE Maths. GCSE Mathematics has a Foundation tier (Grades 1 5) and a Higher tier (Grades 4 9).
9-1 GCSE Maths GCSE Mathematics has a Foundation tier (Grades 1 5) and a Higher tier (Grades 4 9). In each tier, there are three exams taken at the end of Year 11. Any topic may be assessed on each of
More informationImage Segmentation. Selim Aksoy. Bilkent University
Image Segmentation Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Examples of grouping in vision [http://poseidon.csd.auth.gr/lab_research/latest/imgs/s peakdepvidindex_img2.jpg]
More informationLecture 14: Refraction
Lecture 14: Refraction We know from experience that there are several transparent substances through which light can travel air, water, and glass are three examples When light passes from one such medium
More informationImage Segmentation. Selim Aksoy. Bilkent University
Image Segmentation Selim Aksoy Department of Computer Engineering Bilkent University saksoy@cs.bilkent.edu.tr Examples of grouping in vision [http://poseidon.csd.auth.gr/lab_research/latest/imgs/s peakdepvidindex_img2.jpg]
More informationHarder case. Image matching. Even harder case. Harder still? by Diva Sian. by swashford
Image matching Harder case by Diva Sian by Diva Sian by scgbt by swashford Even harder case Harder still? How the Afghan Girl was Identified by Her Iris Patterns Read the story NASA Mars Rover images Answer
More informationMinimizing Noise and Bias in 3D DIC. Correlated Solutions, Inc.
Minimizing Noise and Bias in 3D DIC Correlated Solutions, Inc. Overview Overview of Noise and Bias Digital Image Correlation Background/Tracking Function Minimizing Noise Focus Contrast/Lighting Glare
More information