Eicken, GEOS 69 - Geoscience Image Processing Applications, Lecture Notes

Spatial transforms

3.1. Geometric operations (Reading: Castleman, 1996, pp )

- a geometric operation is defined as

  g_o(x, y) = g_i(x', y') = g_i[a(x, y), b(x, y)]   (3.1)

  where a() and b() are the transformation equations that map an input image g_i onto an output image g_o

Interpolation

- as the pixel coordinates resulting from most transformations do not coincide with the grid coordinates, the grey value of transformed pixels has to be derived through pixel filling or interpolation; in forward mapping (see Fig. 3.1), the transformed pixel value in the output image is derived from the four input image pixels mapping onto it, such that each pixel's grey value G_o is given by the sum of the contributing grey values G_k weighted by the areal fractions f_k

  G_o(x, y) = Σ_k f_k G_k   (3.2)

Fig. 3.1: Interpolation schemes for forward mapping (left) and inverse mapping (right, Jähne, 1997).

- a more sophisticated approach consists of deriving the grey values of the pixels in the transformed image through appropriate interpolation in the original image (Fig. 3.1, inverse mapping)
- the simplest interpolation technique, nearest-neighbour or zero-order interpolation, consists in assigning the pixel a grey value that corresponds to that of its nearest neighbour
- bilinear (first-order) interpolation involves fitting a hyperbolic paraboloid such that

  G(x, y) = ax + by + cxy + d   (3.3)

- from the four corner coordinate pixels (see Fig. 3.2) any inlying point can be derived through sequential interpolation:

  G(x, 0) = G(0, 0) + x[G(1, 0) - G(0, 0)]
  G(x, 1) = G(0, 1) + x[G(1, 1) - G(0, 1)]
  G(x, y) = G(x, 0) + y[G(x, 1) - G(x, 0)]   (3.4)

- applications which demand continuous interpolation (also of the derivatives) require spline-based interpolation or more sophisticated algorithms implemented in the frequency domain (for details see, e.g., Jähne, 1997); another option is to represent object outlines in a classified image by splines or other continuous functions
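The sequential interpolation of Eq. 3.4 can be sketched in a few lines of NumPy. This is an illustrative implementation, not from the source: the function name `bilinear` and the row-major `G[row, col]` indexing are my own choices, and no bounds checking is done at the image margin.

```python
import numpy as np

def bilinear(G, x, y):
    """Bilinear (first-order) interpolation of grey values, following the
    sequential scheme of Eq. 3.4: interpolate along x on the two bounding
    rows, then along y between those two results."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    # grey values at the four corner pixels of the enclosing cell
    G00, G10 = G[y0, x0], G[y0, x0 + 1]
    G01, G11 = G[y0 + 1, x0], G[y0 + 1, x0 + 1]
    Gx0 = G00 + fx * (G10 - G00)   # G(x, 0)
    Gx1 = G01 + fx * (G11 - G01)   # G(x, 1)
    return Gx0 + fy * (Gx1 - Gx0)  # G(x, y)
```

For a point at the centre of a cell this reduces to the mean of the four corner grey values, as expected from Eq. 3.2 with equal areal fractions.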
Fig. 3.2: Bilinear interpolation scheme (Castleman, 1996).

Geometric transforms (Reading: Schowengerdt, 1997, pp )

- affine transforms are linear coordinate transforms that can be broken down into a sequence of basic operations: translation, rotation, scaling, stretching, and shearing; these can be expressed in matrix notation (using homogeneous coordinates in which the x-y plane is represented by z=1 in x-y-z space)
- translation by x_0, y_0 such that a(x,y) = x + x_0, b(x,y) = y + y_0:

  [a(x,y)]   [1  0  x_0] [x]
  [b(x,y)] = [0  1  y_0] [y]   (3.5)
  [  1   ]   [0  0   1 ] [1]

- stretching by c and d such that a(x,y) = x/c, b(x,y) = y/d:

  [a(x,y)]   [1/c  0   0] [x]
  [b(x,y)] = [ 0  1/d  0] [y]   (3.6)
  [  1   ]   [ 0   0   1] [1]

- rotation by the angle θ, such that a(x,y) = x cos θ - y sin θ, b(x,y) = x sin θ + y cos θ:

  [a(x,y)]   [cos θ  -sin θ  0] [x]
  [b(x,y)] = [sin θ   cos θ  0] [y]   (3.7)
  [  1   ]   [  0       0    1] [1]

- a full affine transformation can then be described as the matrix product of its component operations; e.g., a rotation about a point x_0, y_0 other than the origin is given by

  [a(x,y)]   [1  0  x_0] [cos θ  -sin θ  0] [1  0  -x_0] [x]
  [b(x,y)] = [0  1  y_0] [sin θ   cos θ  0] [0  1  -y_0] [y]   (3.8)
  [  1   ]   [0  0   1 ] [  0       0    1] [0  0    1 ] [1]

- for some applications, such as the mapping of ground control points (GCPs) from a distorted satellite image to a reference image (or map projection), non-linear distortion precludes the application of affine transforms; in the case of map projections, the transformation equations are known (see, e.g., Snyder, J. P., 1982: Map projections used by the U.S. Geological Survey. Geol. Surv. Bull. 1532, 2nd ed., 313 pp.) and can be applied accordingly
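The composition in Eq. 3.8 (rotation about an arbitrary point as translate-rotate-translate) is easy to verify numerically with homogeneous 3x3 matrices. This is a sketch with my own function names; the matrix entries follow Eqs. 3.5 and 3.7.

```python
import numpy as np

def translate(x0, y0):
    """Homogeneous translation matrix of Eq. 3.5."""
    return np.array([[1, 0, x0], [0, 1, y0], [0, 0, 1]], float)

def rotate(theta):
    """Homogeneous rotation matrix of Eq. 3.7."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], float)

def rotate_about(theta, x0, y0):
    """Eq. 3.8: shift the centre to the origin, rotate, shift back."""
    return translate(x0, y0) @ rotate(theta) @ translate(-x0, -y0)

# rotating the homogeneous point (2, 1, 1) by 90 degrees about (1, 1)
# moves it to (1, 2, 1)
p = rotate_about(np.pi / 2, 1.0, 1.0) @ np.array([2.0, 1.0, 1.0])
```

Note the rightmost matrix acts first, so the point is first shifted by (-x_0, -y_0), matching the order in Eq. 3.8.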
- rectification of distorted images for which the transformation equations are not known (e.g., a problem that might be encountered in geolocation of satellite imagery or aerial photography based on GCPs) requires a different approach; one such approach consists of deriving a generic polynomial model to relate the image coordinates x', y' to those of a reference, x_r, y_r:

  x' = Σ_{i=0..N} Σ_{j=0..N-i} a_ij x_r^i y_r^j
  y' = Σ_{i=0..N} Σ_{j=0..N-i} b_ij x_r^i y_r^j   (3.9)

- for most applications 2nd-order polynomials are sufficient, such that

  x' = a_00 + a_10 x_r + a_01 y_r + a_11 x_r y_r + a_20 x_r² + a_02 y_r²
  y' = b_00 + b_10 x_r + b_01 y_r + b_11 x_r y_r + b_20 x_r² + b_02 y_r²   (3.10)

- now, for M pairs of GCPs located in the image and the reference, equation 3.10 can be written for the M sets of equations in matrix notation,

  [x'_1]   [1  x_1  y_1  x_1 y_1  x_1²  y_1²] [a_00]
  [x'_2]   [1  x_2  y_2  x_2 y_2  x_2²  y_2²] [a_10]
  [ ...] = [              ...               ] [a_01]   (3.11)
  [x'_M]   [1  x_M  y_M  x_M y_M  x_M²  y_M²] [a_11]
                                              [a_20]
                                              [a_02]

  or X = RA, and similarly Y = RB for the y' coordinates

- if M is equal to the number of polynomial coefficients K, A and B can be obtained by inverting R:

  A = R⁻¹ X
  B = R⁻¹ Y   (3.12)

- for cases where M > K, least-squares techniques can be applied to solve for A and B (for details see Schowengerdt, 1997, p. 337)

Analysis of stereoscopic images (Reading: Castleman, 1996, pp )

- because the reconstruction of the 3-dimensional topography from image pairs requires geometric transformation of images, it will be shortly discussed at this point
- for two bore-sighted cameras (i.e. with parallel optical axes) as shown in Fig. 3.3, the image of a given point in 3-D space X_0, Y_0, Z_0 on the image plane of the left camera l with focal length f (such that Z = -f in the image plane) is given by

  X_l = -f X_0 / Z_0
  Y_l = -f Y_0 / Z_0   (3.13)

  the corresponding right camera image is then given by a shift d in the image position relative to the origin, which coincides with the lens l:
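The GCP polynomial fit of Eqs. 3.11–3.12 maps directly onto a linear least-squares solve, which also covers the over-determined case M > K mentioned in the text. A sketch under my own naming conventions (column order of R as in Eq. 3.11):

```python
import numpy as np

def design_matrix(xr, yr):
    """One row [1, x, y, xy, x^2, y^2] of the matrix R (Eq. 3.11)
    per ground control point."""
    xr, yr = np.asarray(xr, float), np.asarray(yr, float)
    return np.column_stack([np.ones_like(xr), xr, yr, xr * yr, xr**2, yr**2])

def fit_gcp_polynomial(xr, yr, xi, yi):
    """Least-squares estimate of the coefficient vectors A and B.
    For M = K this reproduces A = R^-1 X (Eq. 3.12); for M > K it is
    the least-squares generalization."""
    R = design_matrix(xr, yr)
    A, *_ = np.linalg.lstsq(R, np.asarray(xi, float), rcond=None)
    B, *_ = np.linalg.lstsq(R, np.asarray(yi, float), rcond=None)
    return A, B
```

With at least six well-distributed GCPs the 2nd-order model of Eq. 3.10 is fully determined; additional points average out GCP location errors.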
  X_r = -f (X_0 + d) / Z_0 - d
  Y_r = -f Y_0 / Z_0   (3.14)

- defining a coordinate system in the image plane that is rotated by 180° relative to the world coordinate system, such that x_l = -X_l, y_l = -Y_l, x_r = -X_r - d, y_r = -Y_r, the image coordinates for the point X_0, Y_0, Z_0 are:

  x_l = f X_0 / Z_0          y_l = f Y_0 / Z_0
  x_r = f (X_0 + d) / Z_0    y_r = f Y_0 / Z_0   (3.15)

  rearranging and solving for Z_0 yields:

  X_0 = x_l Z_0 / f
  Z_0 = f d / (x_r - x_l)   (3.16)

- this is referred to as the normal-range equation and relates the Z-coordinate (i.e. topography) of any given point to the corresponding dislocation in x-direction in the image pair; as both f and d are typically small whereas Z_0 may be quite large, determination of x_r - x_l to a high degree of accuracy is crucial
- the true-range equation is derived trigonometrically as

  R = (Z_0 / f) √(f² + x_l² + y_l²)
  R = d √(f² + x_l² + y_l²) / (x_r - x_l)   (3.17)

  which for many systems (with X_0, Y_0 << Z_0 and x_l, y_l << f) can be approximated by equation 3.16
- while the derivation of the Z-topography is fairly straightforward (at least for bore-sighted systems), the determination of x_r - x_l to a sufficient degree of accuracy can be quite a challenge; in most applications the magnitude of the dislocation cannot be derived explicitly but has to be determined for an entire image pair based on correlation techniques; some of these aspects will be covered at a later stage; however, a technique explicitly developed for stereo pair matching has been presented by Sun (Digital Image Computing: Techniques and Applications, pp , Massey University, Auckland, New Zealand, 10-12 December 1997) and the algorithm has been implemented at

Fig. 3.3: Camera arrangement for stereoscopic imaging (Castleman, 1996).
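The normal-range relation of Eq. 3.16 is simple enough to check with a worked number. A sketch with hypothetical function names; focal length, baseline, and image coordinates must share the same length unit.

```python
def normal_range(f, d, xl, xr):
    """Normal-range equation (Eq. 3.16): depth Z_0 of a point from the
    disparity xr - xl of a bore-sighted stereo pair.
    f: focal length, d: stereo baseline."""
    return f * d / (xr - xl)

def world_x(f, xl, z0):
    """First relation of Eq. 3.16: X_0 = x_l * Z_0 / f."""
    return xl * z0 / f

# example: f = 0.05 m, baseline d = 0.1 m, point at Z_0 = 10 m;
# Eq. 3.15 then predicts a disparity of f*d/Z_0 = 0.0005 m
z0 = normal_range(0.05, 0.1, 0.0, 0.0005)
```

The tiny disparity in this example (0.5 mm on the sensor for a 10 m range) illustrates the text's point that x_r - x_l must be measured very accurately.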
Spatial filtering (Reading: Gonzalez and Wintz, 1987, pp ; Castleman, 1996, pp ; Jähne, 1995, 100.)

- while the point operations discussed in Section 2 can be thought of as the simplest form of a spatial operation (operating on a neighbourhood of 1 x 1 for each pixel), in a more general context transformations or filtering in the spatial domain correspond to the transformation of an input image g_i(x,y) to an output image g_o(x,y) through convolution with a linear, position-invariant¹ operator (also referred to as mask or convolution kernel) h, such that

  g_o(x, y) = h(x, y) * g_i(x, y)   (3.18)

  (as outlined in Section 4, this spatial operation corresponds to a multiplication of the Fourier transforms H(u,v) and G(u,v) of the kernel and the image in the frequency domain)

- in its discrete form, the convolution can be represented by the summation over a kernel k of size m, n of the product k(m,n) g_i(x+m, y+n) for every x, y of the image pixel coordinates (Fig. 3.4), such that

  g_o(x, y) = (1/C) Σ_{m=-i..i} Σ_{n=-j..j} k(m, n) g_i(x + m, y + n)   (3.19)

- the scaling factor 1/C is typically derived from C = ΣΣ k(m,n), with kernels mostly odd-sized (e.g., 3x3) with the origin at the center coordinate

Fig. 3.4: Discrete convolution (Castleman, 1996).

- as will be outlined in more detail in Section 4, the three major types of linear filters can be represented both in the spatial and in the frequency domain (Fig. 3.5), with lowpass filters mostly applied to smooth images and reduce the impact of impulse noise on image processing and high-pass filters playing an important role in edge and feature detection; in general terms, a convolution can be applied to remove the

¹ An operator h is linear if its application to a combined set of images g_k (multiplied by a scalar factor a_k) yields the same result as the operation on the individual component images:

  h(Σ_k a_k g_k) = Σ_k a_k h(g_k)

which implies that linear operations can be performed on images decomposed into subunits rather than the whole image. Position- or shift-invariance requires that the result of an operation is the same regardless of the position of the image or the operator, which also aids in breaking down more complex operations into simpler component steps.
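The discrete filtering of Eq. 3.19 can be sketched directly as nested loops. This illustrative version follows the equation as written (a correlation; for the symmetric kernels used here this equals the convolution), normalizes by C = ΣΣ k(m,n) (or 1 for zero-sum kernels), and for brevity simply leaves border pixels unchanged.

```python
import numpy as np

def spatial_filter(g, k):
    """Discrete spatial filtering per Eq. 3.19 with an odd-sized
    kernel k; the kernel origin is its center coordinate."""
    hm, hn = k.shape[0] // 2, k.shape[1] // 2
    C = k.sum() if k.sum() != 0 else 1.0   # zero-sum (sharpening) kernels: C = 1
    out = g.astype(float).copy()
    for y in range(hm, g.shape[0] - hm):
        for x in range(hn, g.shape[1] - hn):
            window = g[y - hm:y + hm + 1, x - hn:x + hn + 1]
            out[y, x] = np.sum(k * window) / C
    return out
```

A 3x3 kernel of ones with C = 9 reproduces the simple mean (smoothing) filter of Fig. 3.6; in practice one would use a vectorized or frequency-domain implementation for speed.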
- effects of its inverse (determined or provided a priori) on the image, such as artifacts resulting out of errors in the imaging system

Fig. 3.5: Cross-sections of frequency-domain (top) and spatial-domain filters (Gonzalez and Woods, 1992).

Smoothing filters

- as shown in Fig. 3.5, lowpass or smoothing filters reduce the high-frequency component of the image; the range of frequencies impacted by the spatial convolution is given by the filter size, with the magnitude of the scaling factor and the shape of the kernel k(m,n) determining the amplitude of the transformation (Fig. 3.6 shows simple low-pass filter kernels of different sizes); the impact of the filter operation can be modulated by, e.g., sampling a Gaussian distribution function (Fig. 3.5a) and modifying its statistical moments

Fig. 3.6: Kernels for smoothing filters (Gonzalez and Woods, 1992).

- linear convolutions are associated with a (for large kernels substantial) loss of image information; for image enhancement purposes, so-called rank-order filters (non-linear) achieve better results; for rank-order filtering, the set of pixels sampled by a mask of size m, n is ordered and the center pixel (or origin) is transformed to the n-th element of the sorted list (e.g., median, minimum, maximum); as discussed in more detail in Section 3.3, these non-linear filters also form the basis for morphological filtering techniques

Fig. 3.7: Schematic depiction of 3x3 median (rank-order) filter (Jähne, 1997).
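The 3x3 median filter of Fig. 3.7 is a compact example of a rank-order filter: sort the nine grey values under the mask and keep the middle one. A minimal sketch (my own function name; border pixels are left unchanged):

```python
import numpy as np

def median3x3(g):
    """3x3 median (rank-order) filter: replace each interior pixel by
    the median of its 3x3 neighbourhood."""
    out = g.copy()
    for y in range(1, g.shape[0] - 1):
        for x in range(1, g.shape[1] - 1):
            out[y, x] = np.median(g[y - 1:y + 2, x - 1:x + 2])
    return out
```

Because an isolated impulse can never be the median of its neighbourhood, the filter removes salt-and-pepper noise (as in Fig. 3.8) while preserving step edges far better than a mean filter of the same size.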
Fig. 3.8: Noisy image (left) and result of 3x3 median filtering (right).

Sharpening filters

- as shown in Fig. 3.5b, the high-pass filter selectively operates on the high-frequency component of images; because the integral of the filter function over the entire interval is zero (corresponding to a zero sum of the discrete filter kernel, see Fig. 3.9), no change occurs in image regions that are of uniform grey value, whereas fine details are "sharpened", e.g., in applications where a remedy for the effects of blurry optics is required; since this operation also enhances the noise component, the weight of the central pixel in the kernel may be increased to values >8 in a 3x3 mask (Fig. 3.9), which is often referred to as high-boost filtering and represents a compromise between sharpening and maintaining image integrity

Fig. 3.9: Sharpening kernel (Gonzalez and Woods, 1992).

- derivative filters are a variant of sharpening filters that respond to spatial grey-value gradients in an image (Fig. 3.10); the gradient vector of a function f(x,y) is given by

  ∇f = [∂f/∂x, ∂f/∂y]ᵀ   (3.20)

  with derivative filters based on the determination of the magnitude of this vector

  |∇f| = √[(∂f/∂x)² + (∂f/∂y)²]   (3.21)

- implementation of gradient operators on discrete images can be achieved in several ways (see exemplary pixel neighbourhood and operators in Fig. 3.11); differencing of the absolute values is the most common approach, with, e.g., so-called Roberts cross-gradient operators (corresponding to a 2x2 kernel) based on the approximation

  |∇f| ≈ |z_5 - z_9| + |z_6 - z_8|   (3.22)

- for 3x3 kernels, this approximation can be implemented by so-called Prewitt operators (Fig. 3.11) which compute the sum of differences
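The Roberts approximation (Eq. 3.22) operates on the z_1…z_9 neighbourhood of Fig. 3.11. A sketch under the assumption of row-major numbering with z_5 as the centre pixel:

```python
import numpy as np

def roberts(z):
    """Roberts cross-gradient magnitude of Eq. 3.22 for a 3x3
    neighbourhood z1..z9 (row-major, z5 = centre pixel):
    |grad f| ~ |z5 - z9| + |z6 - z8|."""
    z = np.asarray(z, float).ravel()
    z5, z6, z8, z9 = z[4], z[5], z[7], z[8]
    return abs(z5 - z9) + abs(z6 - z8)
```

In a region of uniform grey value both differences vanish, consistent with the zero response of sharpening filters on flat regions noted above.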
  |∇f| ≈ |(z_7 + z_8 + z_9) - (z_1 + z_2 + z_3)| + |(z_3 + z_6 + z_9) - (z_1 + z_4 + z_7)|   (3.23)

- for some applications, the second-order derivative (Laplacian) of an image needs to be computed and can be approximated as shown

  ∇²f = ∂²f/∂x² + ∂²f/∂y²
  ∇²f ≈ 4 z_5 - (z_2 + z_4 + z_6 + z_8)   (3.24)

Fig. 3.10: First and second derivative across a boundary in an image (Gonzalez and Woods, 1992).

Fig. 3.11: Image segment (a) and kernels for gradient operators (b to d, for details see text; Gonzalez and Woods, 1992).
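The Laplacian approximation 4z_5 - (z_2 + z_4 + z_6 + z_8) can be evaluated for all interior pixels at once with array slicing. An illustrative vectorized sketch (border pixels are set to zero):

```python
import numpy as np

def laplacian(g):
    """Discrete Laplacian of Eq. 3.24 at every interior pixel:
    4*z5 - (z2 + z4 + z6 + z8), i.e. 4*centre minus the
    4-connected neighbours."""
    g = np.asarray(g, float)
    out = np.zeros_like(g)
    out[1:-1, 1:-1] = (4 * g[1:-1, 1:-1]
                       - g[:-2, 1:-1] - g[2:, 1:-1]    # neighbours above/below
                       - g[1:-1, :-2] - g[1:-1, 2:])   # neighbours left/right
    return out
```

On a uniform image the response is zero everywhere; a single bright pixel yields the characteristic centre peak flanked by negative lobes shown in Fig. 3.10.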
Morphological filtering (Reading: Castleman, 1996, pp ; Jähne, 1995, pp ; Serra, 1982)

- as outlined in the previous section, the convolution of an image with a linear filter mask involves the summation over the individual products of mask and image matrix elements (Fig. 3.4); for non-linear filters this product summation is replaced by a ranking (in rank-order filters such as the median filter shown in Figs. 3.7 and 3.8) or a logical operation (for morphological filters) as shown in Fig. 3.12

Fig. 3.12: Transformation of an input image with a non-linear, morphological operator (Castleman, 1996).

- rank-order filters can be very powerful operators for image enhancement and extraction of image subsets (two examples will be discussed in class, with the median filter as one of them also shown in Fig. 3.8)
- morphological filters operate on binary images, with each pixel represented by either 0 or 1 as a result of a segmentation, e.g. through an algorithm that distinguishes between a class of relevant features and a background signal; while the examples discussed here and in the class are based on square masks (mostly 3x3, as in Fig. 3.12), morphological filter masks (also called structuring elements) can be of any shape, including composite masks that are broken down into components sequentially operating on the image
- the two basic operations in morphological image processing are the erosion and the dilation; the elimination of all pixels adjacent to the outer margin of a feature in an image is referred to as erosion (assuming here and throughout the text that features are characterized by 0 and the background is 1 or 255 in binary images); in the terms of set theory, the result E of an erosion with structuring element S is the set of all points x, y such that S translated by x, y is fully contained within the feature set B

  E = B ⊖ S = {x, y | S_xy ⊆ B}   (3.25)

- an example of an erosion and other morphological processes is shown in Fig.
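The set definition of erosion in Eq. 3.25 translates almost verbatim into code: test, for each position, whether the structuring element fits entirely inside the feature set. A sketch in which feature pixels are marked True (the text uses 0 for features; the set logic is unchanged):

```python
import numpy as np

def erode(B, S):
    """Erosion per Eq. 3.25: keep (x, y) only where the boolean
    structuring element S, translated to (x, y), lies entirely
    within the feature set B. Pixels too close to the margin for
    S to fit are dropped."""
    hm, hn = S.shape[0] // 2, S.shape[1] // 2
    out = np.zeros_like(B, dtype=bool)
    for y in range(hm, B.shape[0] - hm):
        for x in range(hn, B.shape[1] - hn):
            window = B[y - hm:y + hm + 1, x - hn:x + hn + 1]
            out[y, x] = bool(np.all(window[S]))
    return out
```

Eroding a 3x3 feature with a 3x3 square element leaves only its centre pixel, which is the one-pixel "peeling" of the feature margin described in the text.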
3.13; note how connected features end up disjointed if the bridges between the features are smaller than one structuring-element diameter
- a dilation integrates into an object all pixels that border on its perimeter; this is achieved by reflecting the structuring element S about its origin and then translating this reflection S' by x, y across the image; the result D of the dilation is the set of all x, y for which S' and B overlap by at least one element:

  D = B ⊕ S = {x, y | S'_xy ∩ B ≠ ∅}   (3.26)

- sequential erosion and dilation, or vice versa, are known as opening O(B) and closing C(B), respectively:
  O(B) = B ∘ S = (B ⊖ S) ⊕ S
  C(B) = B • S = (B ⊕ S) ⊖ S   (3.27)

Fig. 3.13: Opening through sequential erosion (b,c) and dilation (d,e), and closing through sequential dilation (f,g) and erosion (h,i) of a feature (a) (Gonzalez and Woods, 1992).

- thus erosion results in the elimination of features smaller than the size of the structuring element, separation of features joined by thin bridges, and a reduction in the roughness of feature contours; sequential erosion with increasing mask sizes and determination of the corresponding change in total feature/background area is an efficient and robust method of size analysis
- morphological operators, in particular when combined with conditional criteria that, e.g., ensure connectivity between objects, are powerful instruments in the pre-processing and analysis of image data, such as in the case of skeletonization or Euclidean distance transforms (a variant of the binary morphological operation, see Fig. 3.14)

Fig. 3.14: Binarized image (left), Euclidean distance map (center, grey value denotes distance from margin of features), skeletonized image (right).
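Opening and closing as sequential erosion/dilation can be demonstrated with SciPy's ndimage morphology routines (the arrays and structuring element below are illustrative; feature pixels are True here, and `ndimage` uses zero padding beyond the image margin):

```python
import numpy as np
from scipy import ndimage

B = np.zeros((7, 7), bool)
B[2:5, 2:5] = True            # a 3x3 feature
B[0, 0] = True                # plus an isolated one-pixel noise speck
S = np.ones((3, 3), bool)     # square structuring element

# opening: erosion followed by dilation - eliminates features smaller
# than S while restoring the shape of those that survive
opened = ndimage.binary_dilation(ndimage.binary_erosion(B, S), S)

# closing: dilation followed by erosion - fills gaps and holes
# smaller than S
closed = ndimage.binary_erosion(ndimage.binary_dilation(B, S), S)
```

The opening deletes the noise speck (it cannot contain one full structuring element) but returns the 3x3 feature unchanged, exactly the behaviour sketched in Fig. 3.13.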
More informationComputer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier
Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 1. IMAGE PROCESSING Computer Vision 2 Dr. Benjamin Guthier Content of this Chapter Non-linear
More informationShort Survey on Static Hand Gesture Recognition
Short Survey on Static Hand Gesture Recognition Huu-Hung Huynh University of Science and Technology The University of Danang, Vietnam Duc-Hoang Vo University of Science and Technology The University of
More informationMorphological Image Processing
Morphological Image Processing Ranga Rodrigo October 9, 29 Outline Contents Preliminaries 2 Dilation and Erosion 3 2. Dilation.............................................. 3 2.2 Erosion..............................................
More informationDD2423 Image Analysis and Computer Vision IMAGE FORMATION. Computational Vision and Active Perception School of Computer Science and Communication
DD2423 Image Analysis and Computer Vision IMAGE FORMATION Mårten Björkman Computational Vision and Active Perception School of Computer Science and Communication November 8, 2013 1 Image formation Goal:
More informationImage Processing. Filtering. Slide 1
Image Processing Filtering Slide 1 Preliminary Image generation Original Noise Image restoration Result Slide 2 Preliminary Classic application: denoising However: Denoising is much more than a simple
More information09/11/2017. Morphological image processing. Morphological image processing. Morphological image processing. Morphological image processing (binary)
Towards image analysis Goal: Describe the contents of an image, distinguishing meaningful information from irrelevant one. Perform suitable transformations of images so as to make explicit particular shape
More informationChapter 11 Image Processing
Chapter Image Processing Low-level Image Processing Operates directly on a stored image to improve or enhance it. Stored image consists of a two-dimensional array of pixels (picture elements): Origin (0,
More informationEECS490: Digital Image Processing. Lecture #19
Lecture #19 Shading and texture analysis using morphology Gray scale reconstruction Basic image segmentation: edges v. regions Point and line locators, edge types and noise Edge operators: LoG, DoG, Canny
More informationA Quantitative Comparison of 4 Algorithms for Recovering Dense Accurate Depth
A Quantitative Comparison o 4 Algorithms or Recovering Dense Accurate Depth Baozhong Tian and John L. Barron Dept. o Computer Science University o Western Ontario London, Ontario, Canada {btian,barron}@csd.uwo.ca
More informationResearch on Image Splicing Based on Weighted POISSON Fusion
Research on Image Splicing Based on Weighted POISSO Fusion Dan Li, Ling Yuan*, Song Hu, Zeqi Wang School o Computer Science & Technology HuaZhong University o Science & Technology Wuhan, 430074, China
More informationPerception. Autonomous Mobile Robots. Sensors Vision Uncertainties, Line extraction from laser scans. Autonomous Systems Lab. Zürich.
Autonomous Mobile Robots Localization "Position" Global Map Cognition Environment Model Local Map Path Perception Real World Environment Motion Control Perception Sensors Vision Uncertainties, Line extraction
More information2: Image Display and Digital Images. EE547 Computer Vision: Lecture Slides. 2: Digital Images. 1. Introduction: EE547 Computer Vision
EE547 Computer Vision: Lecture Slides Anthony P. Reeves November 24, 1998 Lecture 2: Image Display and Digital Images 2: Image Display and Digital Images Image Display: - True Color, Grey, Pseudo Color,
More informationEE 584 MACHINE VISION
EE 584 MACHINE VISION Binary Images Analysis Geometrical & Topological Properties Connectedness Binary Algorithms Morphology Binary Images Binary (two-valued; black/white) images gives better efficiency
More informationNoise Model. Important Noise Probability Density Functions (Cont.) Important Noise Probability Density Functions
Others -- Noise Removal Techniques -- Edge Detection Techniques -- Geometric Operations -- Color Image Processing -- Color Spaces Xiaojun Qi Noise Model The principal sources of noise in digital images
More informationCoE4TN4 Image Processing
CoE4TN4 Image Processing Chapter 11 Image Representation & Description Image Representation & Description After an image is segmented into regions, the regions are represented and described in a form suitable
More informationMultimedia Computing: Algorithms, Systems, and Applications: Edge Detection
Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection By Dr. Yu Cao Department of Computer Science The University of Massachusetts Lowell Lowell, MA 01854, USA Part of the slides
More informationCHAPTER 4 TEXTURE FEATURE EXTRACTION
83 CHAPTER 4 TEXTURE FEATURE EXTRACTION This chapter deals with various feature extraction technique based on spatial, transform, edge and boundary, color, shape and texture features. A brief introduction
More informationLecture 8 Object Descriptors
Lecture 8 Object Descriptors Azadeh Fakhrzadeh Centre for Image Analysis Swedish University of Agricultural Sciences Uppsala University 2 Reading instructions Chapter 11.1 11.4 in G-W Azadeh Fakhrzadeh
More informationEdge linking. Two types of approaches. This process needs to be able to bridge gaps in detected edges due to the reason mentioned above
Edge linking Edge detection rarely finds the entire set of edges in an image. Normally there are breaks due to noise, non-uniform illumination, etc. If we want to obtain region boundaries (for segmentation)
More informationDIGITAL TERRAIN MODELS
DIGITAL TERRAIN MODELS 1 Digital Terrain Models Dr. Mohsen Mostafa Hassan Badawy Remote Sensing Center GENERAL: A Digital Terrain Models (DTM) is defined as the digital representation of the spatial distribution
More informationImage Processing
Image Processing 159.731 Canny Edge Detection Report Syed Irfanullah, Azeezullah 00297844 Danh Anh Huynh 02136047 1 Canny Edge Detection INTRODUCTION Edges Edges characterize boundaries and are therefore
More informationEE 264: Image Processing and Reconstruction. Image Motion Estimation II. EE 264: Image Processing and Reconstruction. Outline
Peman Milanar Image Motion Estimation II Peman Milanar Outline. Introduction to Motion. Wh Estimate Motion? 3. Global s. Local Motion 4. Block Motion Estimation 5. Optical Flow Estimation Basics 6. Optical
More informationSYDE 575: Introduction to Image Processing
SYDE 575: Introduction to Image Processing Image Enhancement and Restoration in Spatial Domain Chapter 3 Spatial Filtering Recall 2D discrete convolution g[m, n] = f [ m, n] h[ m, n] = f [i, j ] h[ m i,
More informationImage representation. 1. Introduction
Image representation Introduction Representation schemes Chain codes Polygonal approximations The skeleton of a region Boundary descriptors Some simple descriptors Shape numbers Fourier descriptors Moments
More informationImage Enhancement: To improve the quality of images
Image Enhancement: To improve the quality of images Examples: Noise reduction (to improve SNR or subjective quality) Change contrast, brightness, color etc. Image smoothing Image sharpening Modify image
More informationComputer Vision I - Basics of Image Processing Part 2
Computer Vision I - Basics of Image Processing Part 2 Carsten Rother 07/11/2014 Computer Vision I: Basics of Image Processing Roadmap: Basics of Digital Image Processing Computer Vision I: Basics of Image
More informationPart 3: Image Processing
Part 3: Image Processing Image Filtering and Segmentation Georgy Gimel farb COMPSCI 373 Computer Graphics and Image Processing 1 / 60 1 Image filtering 2 Median filtering 3 Mean filtering 4 Image segmentation
More informationDigital image processing
Digital image processing Morphological image analysis. Binary morphology operations Introduction The morphological transformations extract or modify the structure of the particles in an image. Such transformations
More informationME/CS 132: Introduction to Vision-based Robot Navigation! Low-level Image Processing" Larry Matthies"
ME/CS 132: Introduction to Vision-based Robot Navigation! Low-level Image Processing" Larry Matthies" lhm@jpl.nasa.gov, 818-354-3722" Announcements" First homework grading is done! Second homework is due
More informationELEC Dr Reji Mathew Electrical Engineering UNSW
ELEC 4622 Dr Reji Mathew Electrical Engineering UNSW Review of Motion Modelling and Estimation Introduction to Motion Modelling & Estimation Forward Motion Backward Motion Block Motion Estimation Motion
More informationDigital Image Processing Chapter 11: Image Description and Representation
Digital Image Processing Chapter 11: Image Description and Representation Image Representation and Description? Objective: To represent and describe information embedded in an image in other forms that
More informationComparison between Various Edge Detection Methods on Satellite Image
Comparison between Various Edge Detection Methods on Satellite Image H.S. Bhadauria 1, Annapurna Singh 2, Anuj Kumar 3 Govind Ballabh Pant Engineering College ( Pauri garhwal),computer Science and Engineering
More informationMorphological Image Processing
Morphological Image Processing Binary image processing In binary images, we conventionally take background as black (0) and foreground objects as white (1 or 255) Morphology Figure 4.1 objects on a conveyor
More informationLecture 6: Edge Detection
#1 Lecture 6: Edge Detection Saad J Bedros sbedros@umn.edu Review From Last Lecture Options for Image Representation Introduced the concept of different representation or transformation Fourier Transform
More informationAliasing. Can t draw smooth lines on discrete raster device get staircased lines ( jaggies ):
(Anti)Aliasing and Image Manipulation for (y = 0; y < Size; y++) { for (x = 0; x < Size; x++) { Image[x][y] = 7 + 8 * sin((sqr(x Size) + SQR(y Size)) / 3.0); } } // Size = Size / ; Aliasing Can t draw
More information3 Image Enhancement in the Spatial Domain
3 Image Enhancement in the Spatial Domain Chih-Wei Tang 唐之瑋 ) Department o Communication Engineering National Central Universit JhongLi, Taiwan 013 Spring Outline Gra level transormations Histogram processing
More informationLaser sensors. Transmitter. Receiver. Basilio Bona ROBOTICA 03CFIOR
Mobile & Service Robotics Sensors for Robotics 3 Laser sensors Rays are transmitted and received coaxially The target is illuminated by collimated rays The receiver measures the time of flight (back and
More informationDigital Image Processing. Image Enhancement - Filtering
Digital Image Processing Image Enhancement - Filtering Derivative Derivative is defined as a rate of change. Discrete Derivative Finite Distance Example Derivatives in 2-dimension Derivatives of Images
More informationMorphological Image Processing
Morphological Image Processing Morphology Identification, analysis, and description of the structure of the smallest unit of words Theory and technique for the analysis and processing of geometric structures
More informationDigital Image Processing, 2nd ed. Digital Image Processing, 2nd ed. The principal objective of enhancement
Chapter 3 Image Enhancement in the Spatial Domain The principal objective of enhancement to process an image so that the result is more suitable than the original image for a specific application. Enhancement
More information