Digital Image Processing. Week 2
- Kelly Greer
Geometric spatial transformations and image registration
- modify the spatial relationship between pixels in an image
- these transformations are often called rubber-sheet transformations (analogous to printing an image on a sheet of rubber and then stretching the sheet according to a predefined set of rules)
A geometric transformation consists of two basic operations:
1. a spatial transformation of coordinates
2. an intensity interpolation that assigns intensity values to the spatially transformed pixels
The coordinate system transformation: (x, y) = T[(v, w)]
(v, w) pixel coordinates in the original image, (x, y) pixel coordinates in the transformed image
The transformation (x, y) = T[(v, w)] = (v/2, w/2) shrinks the original image to half its size in both spatial directions.
Affine transform:
[x, y, 1] = [v, w, 1] T = [v, w, 1] [ t11  t12  0 ; t21  t22  0 ; t31  t32  1 ]
that is, x = t11 v + t21 w + t31 and y = t12 v + t22 w + t32   (AT)
This transform can scale, rotate, translate, or shear a set of coordinate points, depending on the elements of the matrix T. If we want to resize an image, rotate it, and move the result to some location, we simply form a 3x3 matrix equal to the product of the scaling, rotation, and translation matrices from Table 1.
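As a minimal sketch of composing such matrices, in the row-vector convention [x, y, 1] = [v, w, 1] T used above (the helper names and the numpy dependency are my own, not from the notes; the rotation sign follows one common convention):

```python
import numpy as np

def scale(cx, cy):
    # scaling matrix: shrinks/enlarges along each axis
    return np.array([[cx, 0, 0], [0, cy, 0], [0, 0, 1]], float)

def rotate(theta):
    # rotation matrix for the row-vector convention [v, w, 1] @ T
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]], float)

def translate(tx, ty):
    # translation lives in the last row in this convention
    return np.array([[1, 0, 0], [0, 1, 0], [tx, ty, 1]], float)

# compose: first scale by 1/2, then rotate 90 degrees, then translate
T = scale(0.5, 0.5) @ rotate(np.pi / 2) @ translate(100, 50)
v, w = 40.0, 20.0
x, y, _ = np.array([v, w, 1.0]) @ T   # (40, 20) -> (20, 10) -> (-10, 20) -> (90, 70)
```

Because the point is a row vector, the matrices apply left to right, so the product order matches the order of operations.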
Affine transformations
The preceding transformations relocate pixels on an image to new locations. To complete the process, we have to assign intensity values to those locations, using intensity interpolation (nearest-neighbor, bilinear, or bicubic interpolation). In practice, equation (AT) can be used in two basic ways:
forward mapping: scan the pixels of the input image and, for each location (v, w), compute the spatial location (x, y) of the corresponding pixel in the output image directly from (AT).
Problems:
- intensity assignment when two or more pixels of the input image are transformed to the same location in the output image
- some output locations may have no correspondent in the input image (no intensity assignment)
inverse mapping: scans the output pixel locations and, at each location (x, y), computes the corresponding location in the input image as (v, w) = T^{-1}[(x, y)]. It then interpolates among the nearest input pixels to determine the intensity of the output pixel. Inverse mappings are more efficient to implement than forward mappings and are used in numerous commercial implementations of spatial transformations (MATLAB, for example).
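The inverse-mapping loop with bilinear interpolation can be sketched as follows (a naive pure-numpy illustration under the row-vector affine convention above, not a production warp routine):

```python
import numpy as np

def warp_inverse(img, T):
    """Inverse mapping: for each output pixel (x, y), find (v, w) = T^{-1}(x, y)
    in the input image and interpolate bilinearly among the 4 nearest pixels."""
    Tinv = np.linalg.inv(T)
    M, N = img.shape
    out = np.zeros((M, N), dtype=float)
    for x in range(M):
        for y in range(N):
            v, w, _ = np.array([x, y, 1.0]) @ Tinv
            if 0 <= v <= M - 1 and 0 <= w <= N - 1:   # inside the input image
                v0, w0 = int(np.floor(v)), int(np.floor(w))
                v1, w1 = min(v0 + 1, M - 1), min(w0 + 1, N - 1)
                a, b = v - v0, w - w0
                # weighted average of the 4 neighbors (bilinear interpolation)
                out[x, y] = ((1 - a) * (1 - b) * img[v0, w0]
                             + (1 - a) * b * img[v0, w1]
                             + a * (1 - b) * img[v1, w0]
                             + a * b * img[v1, w1])
    return out

# sanity check: the identity transform leaves the image unchanged
img = np.arange(16.0).reshape(4, 4)
same = warp_inverse(img, np.eye(3))
```

Every output pixel receives exactly one value, which is why inverse mapping avoids the holes and collisions of forward mapping.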
Image registration: align two or more images of the same scene. In image registration, we have the input and output images available, but the specific transformation that produced the output image from the input is generally unknown. The problem is to estimate the transformation function and then use it to register the two images.
- it may be of interest to align (register) two or more images taken at approximately the same time but with different imaging systems (an MRI scanner and a PET scanner, for example)
- or to align images of a given location taken by the same instrument at different times (satellite images)
Solving the problem: use tie points (also called control points), corresponding points whose locations are known precisely in the input and reference images.
How to select tie points?
- select them interactively
- use algorithms that try to detect these points automatically
- some imaging systems have physical artifacts (small metallic objects) embedded in the imaging sensors; these objects produce a set of known points (called reseau marks) directly on all images captured by the system, which can be used as guides for establishing tie points
The problem of estimating the transformation is one of modeling. Suppose we have a set of 4 tie points in both the input image and the reference image. A simple model based on a bilinear approximation is given by:
x = c1 v + c2 w + c3 v w + c4
y = c5 v + c6 w + c7 v w + c8
where (v, w) and (x, y) are the coordinates of corresponding tie points (the 4 pairs give an 8x8 linear system for the coefficients {c_i}).
When 4 tie points are insufficient to obtain a satisfactory registration, a frequently used approach is to select a larger number of tie points and subdivide the image into rectangular regions, each marked by a group of 4 tie points. On each subregion we apply the transformation model described above. The number of tie points and the sophistication of the model required depend on the severity of the geometric distortion.
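Estimating the eight coefficients from tie points can be sketched as a least-squares fit (a small illustration, assuming numpy; with exactly 4 tie points the system is square and the fit is exact):

```python
import numpy as np

def fit_bilinear(vw, xy):
    """Estimate c1..c4 and c5..c8 in x = c1 v + c2 w + c3 v w + c4,
    y = c5 v + c6 w + c7 v w + c8, from 4 (or more) tie-point pairs."""
    v, w = vw[:, 0], vw[:, 1]
    A = np.column_stack([v, w, v * w, np.ones_like(v)])
    cx, *_ = np.linalg.lstsq(A, xy[:, 0], rcond=None)   # coefficients for x
    cy, *_ = np.linalg.lstsq(A, xy[:, 1], rcond=None)   # coefficients for y
    return cx, cy

# 4 tie points related by a pure translation (+2, +3): the model recovers it
vw = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
xy = vw + np.array([2.0, 3.0])
cx, cy = fit_bilinear(vw, xy)   # cx -> [1, 0, 0, 2], cy -> [0, 1, 0, 3]
```

With more than 4 tie points per subregion the same call returns the least-squares estimate, which is one way to use a larger tie-point set.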
a b c d (a) reference image (b) geometrically distorted image (c) registered image (d) difference between (a) and (c)
Probabilistic Methods
z_i = the values of all possible intensities in an M x N digital image, i = 0, 1, ..., L-1
p(z_k) = the probability that intensity level z_k occurs in the given image:
p(z_k) = n_k / (MN)
n_k = the number of times intensity z_k occurs in the image (MN is the total number of pixels in the image)
sum_{k=0}^{L-1} p(z_k) = 1
The mean (average) intensity of an image is given by:
m = sum_{k=0}^{L-1} z_k p(z_k)
The variance of the intensities is:
sigma^2 = sum_{k=0}^{L-1} (z_k - m)^2 p(z_k)
The variance is a measure of the spread of the values of z about the mean, so it is a measure of image contrast. Usually, the standard deviation (sigma) is used to measure image contrast.
The n-th moment of a random variable z about the mean is defined as:
mu_n(z) = sum_{k=0}^{L-1} (z_k - m)^n p(z_k)
(so mu_0(z) = 1, mu_1(z) = 0, mu_2(z) = sigma^2)
mu_3(z) > 0: the intensities are biased to values higher than the mean
mu_3(z) < 0: the intensities are biased to values lower than the mean
mu_3(z) ≈ 0: the intensities are distributed approximately equally on both sides of the mean
Fig. 1 (a) low contrast (b) medium contrast (c) high contrast
Figure 1(a): standard deviation 14.3 (variance = 204.5)
Figure 1(b): standard deviation 31.6 (variance = 998.6)
Figure 1(c): standard deviation 49.2
Intensity Transformations and Spatial Filtering
g(x, y) = T[f(x, y)]
f(x, y) input image, g(x, y) output image, T an operator on f defined over a neighborhood of (x, y).
The neighborhood of the point (x, y), denoted S_xy, usually is rectangular, centered on (x, y), and much smaller in size than the image.
In spatial filtering, the operator T (the neighborhood and the operation applied on it) is called a spatial filter (also spatial mask, kernel, template, or window).
When the neighborhood S_xy shrinks to the single point {(x, y)}, T becomes an intensity (gray-level, or mapping) transformation function:
s = T(r)
where s and r denote, respectively, the intensity of g and f at (x, y).
Fig. 2 Intensity transformation functions: left, contrast stretching; right, thresholding function.
Figure 2 left: T produces an output image of higher contrast than the original by darkening the intensity levels below k and brightening the levels above k; this technique is called contrast stretching.
Figure 2 right: T produces a binary output image; a mapping of this form is called a thresholding function.
Some Basic Intensity Transformation Functions
Image Negatives
The negative of an image with intensity levels in [0, L-1] is obtained using the function
s = T(r) = L - 1 - r
- the equivalent of a photographic negative
- a technique suited for enhancing white or gray detail embedded in dark regions of an image
Original image; negative image
Log Transformations
s = T(r) = c log(1 + r), c a constant, r >= 0
Some basic intensity transformation functions
This transformation maps a narrow range of low intensity values in the input into a wider range of output values. An operator of this type is used to expand the values of dark pixels in an image while compressing the higher-level values; the inverse log transformation does the opposite. The log function also compresses the dynamic range of images with large variations in pixel values.
Figure 4(a): intensity values in the range 0 to 1.5 x 10^6
Figure 4(b): log transformation of Figure 4(a) with c = 1; range 0 to 6.2
a b (a) Fourier spectrum (b) log transformation applied to (a), c = 1
Fig. 4
Power-Law (Gamma) Transformations
s = T(r) = c r^gamma, with c and gamma positive constants
Plots of the gamma transformation for different values of gamma (c = 1)
Power-law curves with gamma < 1 map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher input values. Curves with gamma > 1 have the opposite effect of those generated with gamma < 1. c = gamma = 1 gives the identity transformation.
A variety of devices used for image capture, printing, and display respond according to a power law. The process used to correct these power-law response phenomena is called gamma correction.
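The three point transformations above can be tabulated directly over all input intensities (a small numpy sketch; the scaling constants are my own choices so that the full range [0, 255] maps onto itself):

```python
import numpy as np

L = 256
r = np.arange(L, dtype=float)            # all possible input intensities

negative = (L - 1) - r                    # s = L - 1 - r
c_log = (L - 1) / np.log(L)               # scale so that r = L-1 maps to L-1
log_t = c_log * np.log(1.0 + r)           # s = c log(1 + r): expands dark values
gamma = (L - 1) * (r / (L - 1)) ** 0.4    # s = c r^gamma with gamma < 1: brightens
```

Because each transform depends only on the pixel value, applying it to an image is just a lookup: `out = lut[img]` with `lut` one of the arrays above, rounded to integers.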
a b c d (a) aerial image (b)-(d) results of applying the gamma transformation with c = 1 and gamma = 3.0, 4.0, and 5.0, respectively
Piecewise-Linear Transformation Functions
Contrast stretching: a process that expands the range of intensity levels in an image so that it spans the full intensity range of the recording or display device
a b c d Fig. 5
T(r) =
  (s1 / r1) r,                                          r in [0, r1]
  [s2 (r - r1) + s1 (r2 - r)] / (r2 - r1),              r in [r1, r2]
  [(L-1)(r - r2) + s2 (L - 1 - r)] / (L - 1 - r2),      r in [r2, L-1]
r1 = s1 and r2 = s2: identity transformation (no change)
r1 = r2, s1 = 0, s2 = L-1: thresholding function
Figure 5(b) shows an 8-bit image with low contrast. Figure 5(c): contrast stretching, obtained by setting (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L-1), where r_min and r_max denote the minimum and maximum gray levels in the image, respectively. The transformation function thus stretches the levels linearly from their original range to the full range [0, L-1]. Figure 5(d): the thresholding function was used, with (r1, s1) = (m, 0) and (r2, s2) = (m, L-1), where m is the mean gray level in the image. The original image on which these results are based is a scanning electron microscope image of pollen, magnified approximately 700 times.
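The three-piece transformation can be sketched as follows (a numpy illustration assuming 0 < r1 < r2 < L-1, so every denominator is nonzero; the degenerate thresholding case r1 = r2 would need separate handling):

```python
import numpy as np

def contrast_stretch(f, r1, s1, r2, s2, L=256):
    """Piecewise-linear contrast stretching through (r1, s1) and (r2, s2)."""
    f = f.astype(float)
    out = np.empty_like(f)
    lo = f <= r1
    hi = f >= r2
    mid = ~lo & ~hi
    out[lo] = s1 / r1 * f[lo]
    out[mid] = (s2 * (f[mid] - r1) + s1 * (r2 - f[mid])) / (r2 - r1)
    out[hi] = (s2 * (L - 1 - f[hi]) + (L - 1) * (f[hi] - r2)) / (L - 1 - r2)
    return out

# stretch the range [50, 200] onto the full range [0, 255]
img = np.array([50, 125, 200, 255], dtype=np.uint8)
g = contrast_stretch(img, 50, 0, 200, 255)   # -> [0, 127.5, 255, 255]
```

Choosing (r1, s1) = (r_min, 0) and (r2, s2) = (r_max, L-1), as in Figure 5(c), makes the middle segment do all the stretching.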
Intensity-level slicing: highlighting a specific range of intensities in an image.
There are two approaches to intensity-level slicing:
1. display in one value (white, for example) all the values in the range of interest, and in another (say, black) all other intensities (Figure 3.11(a))
2. brighten (or darken) the desired range of intensities but leave all other intensities in the image unchanged (Figure 3.11(b))
Highlights the intensity range [A, B] and reduces all other intensities to a lower level.
Highlights the range [A, B] and preserves all other intensities.
Figure 6 (left): aortic angiogram near the kidney. The purpose of intensity slicing is to highlight the major blood vessels, which appear brighter as a result of injecting a contrast medium. Figure 6 (middle) shows the result of applying the first technique to a band near the top of the intensity scale. This type of enhancement produces a binary image
which is useful for studying the shape of the flow of the contrast substance (to detect blockages). In Figure 6 (right) the second technique was used: a band of mid-gray intensities around the mean intensity was set to black; the other intensities remain unchanged.
Fig. 6 Aortic angiogram and intensity-sliced versions
Bit-plane slicing
For an 8-bit image, f(x, y) is a number in [0, 255], with an 8-bit representation in base 2. This technique highlights the contribution made to the whole image appearance by each of the bits. An 8-bit image may be considered as being composed of eight 1-bit planes (plane 1 holds the lowest-order bit, plane 8 the highest-order bit).
The binary image for the 8th bit plane of an 8-bit image can be obtained by processing the input image with a thresholding intensity transformation that maps all intensities between 0 and 127 to 0 and all levels between 128 and 255 to 1. Bit-plane slicing is useful for analyzing the relative importance of each bit in the image, which helps in determining the proper number of bits to use when quantizing the image. The technique is also useful for image compression.
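Extracting a bit plane is a one-line shift-and-mask (a small numpy sketch; the plane numbering 1..8 follows the notes):

```python
import numpy as np

def bit_plane(img, plane):
    """Extract bit plane `plane` (1 = lowest-order bit, 8 = highest) of an 8-bit image."""
    return (img >> (plane - 1)) & 1

img = np.array([[0, 127], [128, 255]], dtype=np.uint8)
top = bit_plane(img, 8)   # highest plane: same as thresholding at 128
```

As the text notes, plane 8 reproduces the 0/127 vs 128/255 threshold exactly, since the top bit is set precisely for intensities >= 128.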
Histogram processing
The histogram of a digital image with intensity levels in [0, L-1] is:
h(r_k) = n_k, k = 0, 1, ..., L-1
r_k = the k-th intensity level
n_k = the number of pixels in the image with intensity r_k
Normalized histogram for an M x N digital image:
p(r_k) = n_k / (MN), k = 0, 1, ..., L-1
p(r_k) = an estimate of the probability of occurrence of intensity level r_k in the image
sum_{k=0}^{L-1} p(r_k) = 1
Fig. 8 Dark, light, low-contrast, and high-contrast images and their histograms
Histogram Equalization
- determine a transformation function s = T(r), 0 <= r <= L-1, that seeks to produce an output image with a uniform histogram
Conditions:
(a) T(r) is monotonically increasing on [0, L-1]
(b) 0 <= T(r) <= L-1 for 0 <= r <= L-1
Condition (a) guarantees that the order of intensity values is preserved from input to output (no intensity reversals). Condition (b) requires that both input and output images have the same range of intensities.
Histogram equalization (or histogram linearization) transformation:
s_k = T(r_k) = (L-1) sum_{j=0}^{k} p_r(r_j) = (L-1)/(MN) sum_{j=0}^{k} n_j, k = 0, 1, ..., L-1
The output image is obtained by mapping each pixel in the input image with intensity r_k into a corresponding pixel with intensity s_k in the output image.
Consider the following example: a 3-bit image (L = 8) of size 64 x 64 (M = N = 64, MN = 4096).
Intensity distribution and histogram values for a 3-bit digital image
s_0 = T(r_0) = 7 sum_{j=0}^{0} p_r(r_j) = 7 p_r(r_0) = 1.33
s_1 = T(r_1) = 7 sum_{j=0}^{1} p_r(r_j) = 7 p_r(r_0) + 7 p_r(r_1) = 3.08
s_2 = 4.55, s_3 = 5.67, s_4 = 6.23, s_5 = 6.65, s_6 = 6.86, s_7 = 7.00
Rounding to the nearest integer: s_0 = 1, s_1 = 3, s_2 = 5, s_3 = 6, s_4 = 6, s_5 = 7, s_6 = 7, s_7 = 7.
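The transformation above is just a cumulative histogram used as a lookup table (a minimal numpy sketch with a made-up 3-bit image, not the table from the slides):

```python
import numpy as np

def equalize(img, L=256):
    """s_k = round((L-1)/(MN) * sum_{j<=k} n_j), applied as a lookup table."""
    counts = np.bincount(img.ravel(), minlength=L)   # n_k
    cdf = np.cumsum(counts) / img.size               # sum_{j<=k} p_r(r_j)
    lut = np.round((L - 1) * cdf).astype(np.uint8)   # rounded s_k
    return lut[img]

# tiny 3-bit example: dark values get spread toward the top of [0, 7]
img = np.array([0, 0, 1, 1, 1, 2, 3, 7], dtype=np.uint8)
eq = equalize(img, L=8)
```

Because the cumulative sum is monotonically increasing and ends at 1, the lookup table automatically satisfies conditions (a) and (b).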
Histogram Matching (Specification)
Sometimes it is useful to be able to specify the shape of the histogram that we wish the output image to have. The method used to generate a processed image with a specified histogram is called histogram matching or histogram specification.
Suppose {z_q; q = 0, ..., L-1} are the intensity values with the specified histogram we wish to match. Consider the histogram equalization transformation of the input image:
s_k = T(r_k) = (L-1) sum_{j=0}^{k} p_r(r_j) = (L-1)/(MN) sum_{j=0}^{k} n_j, k = 0, 1, ..., L-1   (1)
and the analogous transformation built from the specified histogram:
G(z_q) = (L-1) sum_{i=0}^{q} p_z(z_i), q = 0, 1, ..., L-1   (2)
Since T(r_k) = s_k = G(z_q) for some value of q,
z_q = G^{-1}(s_k)
Histogram-specification procedure:
1) Compute the histogram p_r(r) of the input image, and compute the histogram equalization transformation (1). Round the resulting values s_k to integers in [0, L-1].
2) Compute all values of the transformation function G using relation (2), where p_z(z_i) are the values of the specified histogram. Round the values G(z_q) to integers in the range [0, L-1] and store them in a table.
3) For every value s_k, k = 0, 1, ..., L-1, use the stored values of G to find the value of z_q so that G(z_q) is closest to s_k, and store these mappings from s to z. When more than one value of z_q satisfies the property (i.e., the mapping is not unique), choose the smallest value by convention.
4) Form the histogram-specified image by first histogram-equalizing the input image and then mapping every equalized pixel value s_k of this image to the corresponding value z_q, using the mappings found in step 3).
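Steps 1)-4) can be sketched compactly (a numpy illustration on a toy 2-bit example of my own; `np.argmin` returns the first, i.e. smallest, index on ties, which matches the convention in step 3):

```python
import numpy as np

def match_histogram(img, p_z, L=256):
    """Map img so its histogram approximates the specified histogram p_z."""
    counts = np.bincount(img.ravel(), minlength=L)
    s = np.round((L - 1) * np.cumsum(counts) / img.size)   # rounded T(r_k), step 1
    G = np.round((L - 1) * np.cumsum(p_z))                 # rounded G(z_q), step 2
    # step 3: for each s_k find the q minimizing |G(z_q) - s_k|
    z = np.array([np.argmin(np.abs(G - sk)) for sk in s])
    return z[img].astype(np.uint8)                         # step 4, as one lookup

# toy example, L = 4: push mass toward levels 0 and 3
img = np.array([0, 0, 1, 3], dtype=np.uint8)
p_z = np.array([0.5, 0.0, 0.0, 0.5])
out = match_histogram(img, p_z, L=4)
```

The composed lookup `z[lut_equalize[...]]` is exactly the shortcut mentioned next in the notes: equalization never has to be applied as a separate image pass.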
The intermediate step of equalizing the input image can be skipped by combining the two transformation functions T and G^{-1}. Reconsider the above example: Fig. 9
Figure 9(a) shows the histogram of the original image; Figure 9(b) is the new histogram to achieve. The first step is to obtain the scaled histogram-equalized values:
s_0 = 1, s_1 = 3, s_2 = 5, s_3 = 6, s_4 = 6, s_5 = 7, s_6 = 7, s_7 = 7
Then we compute the values of G from G(z_q) = 7 sum_{i=0}^{q} p_z(z_i):
G(z_0) = 0.00, G(z_1) = 0.00, G(z_2) = 0.00, G(z_3) = 1.05,
G(z_4) = 2.45, G(z_5) = 4.55, G(z_6) = 5.95, G(z_7) = 7.00
The results of performing step 3) of the procedure are summarized in the next table. In the last step of the algorithm, we use the mappings in the table to map every pixel in the histogram-equalized image into a corresponding pixel in the newly created histogram-specified image. The values of the resulting histogram are listed in the third column of Table 3.2, and the histogram is sketched in Figure 9(d).
The mappings are:
s_0 = 1 -> z_3, s_1 = 3 -> z_4, s_2 = 5 -> z_5, s_3 = s_4 = 6 -> z_6, s_5 = s_6 = s_7 = 7 -> z_7
Local Histogram Processing
The histogram processing techniques described previously are easily adaptable to local enhancement. The procedure is to define a square or rectangular neighborhood and move its center from pixel to pixel. At each location, the histogram of the points in the neighborhood is computed, and either a histogram equalization or a histogram specification transformation function is obtained. This function is then used to map the gray level of the pixel at the center of the neighborhood. The center of the neighborhood is then moved to an adjacent pixel location and the procedure is repeated. It is possible to update the histogram obtained at the previous location with the new data introduced at each step, rather than recomputing it.
Using Histogram Statistics for Image Enhancement
Let r denote a discrete random variable representing the discrete gray levels in [0, L-1], and let p(r_i) denote the normalized histogram component corresponding to the i-th value of r. The n-th moment of r about its mean is defined as:
mu_n(r) = sum_{i=0}^{L-1} (r_i - m)^n p(r_i)
m is the mean (average intensity) value of r, a measure of average intensity:
m = sum_{i=0}^{L-1} r_i p(r_i)
The variance, a measure of contrast:
sigma^2(r) = sum_{i=0}^{L-1} (r_i - m)^2 p(r_i)
Sample mean and sample variance:
m = (1/MN) sum_{x=0}^{M-1} sum_{y=0}^{N-1} f(x, y)
sigma^2 = (1/MN) sum_{x=0}^{M-1} sum_{y=0}^{N-1} [f(x, y) - m]^2
Spatial Filtering
The name filter is borrowed from frequency-domain processing, where filtering means accepting (passing) or rejecting certain frequency components. Filters that pass low frequencies are called lowpass filters; a lowpass filter has the effect of blurring (smoothing) an image. Filters are also called masks, kernels, templates, or windows.
The Mechanics of Spatial Filtering
A spatial filter consists of:
1) a neighborhood (usually a small rectangle)
2) a predefined operation performed on the pixels in the neighborhood
Filtering creates a new pixel with the same coordinates as the pixel at the center of the neighborhood, whose intensity value is the result of the filtering operation.
If the operation performed on the image pixels is linear, the filter is called a linear spatial filter; otherwise the filter is nonlinear.
Fig. 10 Linear spatial filtering with a 3 x 3 filter mask
Figure 10 pictures a 3 x 3 linear filter:
g(x, y) = w(-1, -1) f(x-1, y-1) + w(-1, 0) f(x-1, y) + ... + w(0, 0) f(x, y) + ... + w(1, 1) f(x+1, y+1)
For a mask of size m x n, we assume m = 2a + 1 and n = 2b + 1, where a and b are positive integers. The general expression for linear spatial filtering of an image of size M x N with a filter of size m x n is:
g(x, y) = sum_{s=-a}^{a} sum_{t=-b}^{b} w(s, t) f(x+s, y+t)
Spatial Correlation and Convolution
Correlation is the process of moving a filter mask over the image and computing the sum of products at each location. Convolution is similar, except that the filter is first rotated by 180 degrees.
Correlation:
w(x, y) ☆ f(x, y) = sum_{s=-a}^{a} sum_{t=-b}^{b} w(s, t) f(x+s, y+t)
Convolution:
w(x, y) * f(x, y) = sum_{s=-a}^{a} sum_{t=-b}^{b} w(s, t) f(x-s, y-t)
A function that contains a single 1 with the rest being 0s is called a discrete unit impulse. Correlating a filter with a discrete unit impulse produces a rotated version of the filter at the location of the impulse.
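The two sums above differ only by the 180-degree rotation of the mask, which a direct numpy sketch makes visible (zero padding at the borders is my own choice; other padding schemes are equally valid):

```python
import numpy as np

def correlate(f, w):
    """Sum of products of the (zero-padded) image and the mask at each location."""
    a, b = w.shape[0] // 2, w.shape[1] // 2
    fp = np.pad(f, ((a, a), (b, b)))
    g = np.zeros(f.shape, dtype=float)
    for x in range(f.shape[0]):
        for y in range(f.shape[1]):
            g[x, y] = np.sum(w * fp[x:x + w.shape[0], y:y + w.shape[1]])
    return g

def convolve(f, w):
    """Convolution = correlation with the mask rotated by 180 degrees."""
    return correlate(f, np.rot90(w, 2))

# a discrete unit impulse: correlation yields the rotated mask,
# convolution reproduces the mask itself
f = np.zeros((5, 5)); f[2, 2] = 1.0
w = np.arange(9.0).reshape(3, 3)
```

Running both on the impulse shows the textbook property directly: `correlate(f, w)` plants `rot180(w)` around the impulse, while `convolve(f, w)` plants `w`.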
Linear filters also appear in the DIP literature as convolution filters, convolution masks, or convolution kernels.
Vector Representation of Linear Filtering
R = w_1 z_1 + w_2 z_2 + ... + w_mn z_mn = sum_{k=1}^{mn} w_k z_k = w^T z
where the w's are the coefficients of an m x n filter and the z's are the corresponding image intensities encompassed by the filter. For a 3 x 3 filter:
R = w_1 z_1 + w_2 z_2 + ... + w_9 z_9 = sum_{k=1}^{9} w_k z_k = w^T z
Smoothing Linear Filters
A smoothing linear filter computes the average of the pixels contained in the neighborhood of the filter mask. These filters are sometimes called averaging filters or lowpass filters. Replacing the value of every pixel in an image by the average of the intensity levels in the neighborhood defined by the filter mask produces an image with reduced sharp transitions in intensities. Because random noise is usually characterized by such sharp transitions in intensity levels, smoothing linear filters are applied for noise reduction. The problem is that edges are also characterized by sharp intensity transitions, so averaging filters have the undesirable side effect of blurring edges. A major use of averaging filters is the reduction of irrelevant detail in an image (pixel regions that are small with respect to the size of the filter mask).
It is also possible to use a weighted average: the pixels are multiplied by different coefficients, thus giving more importance (weight) to some pixels at the expense of others. A general weighted averaging filter of size m x n (m and n odd) applied to an M x N image is given by:
g(x, y) = [sum_{s=-a}^{a} sum_{t=-b}^{b} w(s, t) f(x+s, y+t)] / [sum_{s=-a}^{a} sum_{t=-b}^{b} w(s, t)]
for x = 0, 1, ..., M-1 and y = 0, 1, ..., N-1.
a b c d e f (a) original image (b)-(f) results of smoothing with square averaging filters of size m = 3, 5, 9, 15, and 35, respectively.
The black squares at the top are of size 3, 5, 9, 15, 25, 35, 45, and 55 pixels. The letters at the bottom range in size from 10 to 24 points. The vertical bars are 5 pixels wide and 100 pixels high, separated by 20 pixels. The diameter of the circles is 25 pixels, and their borders are 15 pixels apart. The noisy rectangles are 50 x 120 pixels.
An important application of spatial averaging is to blur an image in order to get a gross representation of the objects of interest, so that the intensity of smaller objects blends with the background and larger objects become blob-like and easy to detect. The size of the mask establishes the relative size of the objects that will disappear into the background.
Left: image from the Hubble Space Telescope; middle: image filtered with an averaging mask; right: result of thresholding the middle image.
Order-Statistic (Nonlinear) Filters
Order-statistic filters are nonlinear spatial filters based on ordering (ranking) the pixels contained in the image area defined by the selected neighborhood, replacing the value of the center pixel with the value determined by the ranking result. The best-known filter in this class is the median filter, which replaces the value of a pixel by the median of the intensity values in the neighborhood of that pixel (the original value of the pixel is included in the computation of the median). Median filters provide excellent noise-reduction capabilities and blur less than linear smoothing filters of similar size. They are particularly effective against impulse noise (also called salt-and-pepper noise).
The median, ξ, of a set of values is such that half the values in the set are less than or equal to ξ and half are greater than or equal to ξ. For a 3 x 3 neighborhood with intensity values (10, 15, 20, 20, 30, 20, 20, 25, 100) the median is 20.
The effect of the median filter is to force points with distinct intensity levels to be more like their neighbors. Isolated clusters of pixels that are light or dark with respect to their neighbors, and whose area is less than m^2/2 (one-half the filter area), are eliminated by an m x m median filter (eliminated meaning forced to the median intensity of the neighbors).
The max/min filter replaces the intensity value of the pixel with the max/min value of the pixels in the neighborhood. The max/min filter is useful for finding the brightest/darkest points in an image.
Min filter: 0th percentile filter. Median filter: 50th percentile filter.
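All three order-statistic filters are one ranking function with a different rank (a naive numpy sketch using replicate padding at the borders, which is my own choice):

```python
import numpy as np

def rank_filter(img, size, rank):
    """Order-statistic filter: rank 0 -> min, rank size*size-1 -> max,
    rank (size*size)//2 -> median."""
    k = size // 2
    fp = np.pad(img, k, mode='edge')            # replicate border pixels
    out = np.empty_like(img)
    for x in range(img.shape[0]):
        for y in range(img.shape[1]):
            window = np.sort(fp[x:x + size, y:y + size], axis=None)
            out[x, y] = window[rank]
    return out

# the neighborhood from the text: the median at the center is 20,
# so the 100 (a salt-noise spike) would be forced to 20
img = np.array([[10, 15, 20], [20, 30, 20], [20, 25, 100]], dtype=np.uint8)
med = rank_filter(img, 3, 4)
```

The 0th/50th/100th percentile naming above corresponds to `rank` 0, 4, and 8 for a 3 x 3 window.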
Max filter: 100th percentile filter.
(a) X-ray image of a circuit board corrupted by salt-and-pepper noise (b) noise reduction with a 3 x 3 averaging filter (c) noise reduction with a 3 x 3 median filter
Sharpening Spatial Filters
The principal objective of sharpening is to highlight transitions in intensity. Sharpening filters are applied in electronic printing, medical imaging, industrial inspection, and autonomous guidance in military systems.
Averaging is analogous to integration; sharpening is analogous to spatial differentiation. Image differentiation enhances edges and other discontinuities (noise, for example) and deemphasizes areas with slowly varying intensities.
For digital images, discrete approximations of the derivatives are used:
∂f/∂x = f(x+1, y) - f(x, y)
∂²f/∂x² = f(x+1, y) - 2 f(x, y) + f(x-1, y)
Illustration of the first and second derivatives of a 1-D digital function
Using the Second Derivative for Image Sharpening: the Laplacian operator
Isotropic filters: the response of the filter is independent of the direction of the discontinuities in the image. Isotropic filters are rotation invariant, in the sense that rotating the image and then applying the filter gives the same result as applying the filter to the image and then rotating the result. The simplest isotropic derivative operator is the Laplacian:
∇²f = ∂²f/∂x² + ∂²f/∂y²
This operator is linear.
∂²f/∂x² = f(x+1, y) - 2 f(x, y) + f(x-1, y)
∂²f/∂y² = f(x, y+1) - 2 f(x, y) + f(x, y-1)
∇²f(x, y) = f(x+1, y) + f(x-1, y) + f(x, y+1) + f(x, y-1) - 4 f(x, y)
Filter masks that approximate the Laplacian
The Laplacian, being a derivative operator, highlights gray-level discontinuities in an image and deemphasizes regions with slowly varying gray levels. This tends to produce images with grayish edge lines and other discontinuities, all superimposed on a dark, featureless background. Background features can be recovered, while still preserving the sharpening effect of the Laplacian, simply by adding the original and Laplacian images. The basic way to use the Laplacian for image sharpening is given by:
g(x, y) = f(x, y) + c ∇²f(x, y)
The (discrete) Laplacian can contain both negative and positive values, so for display it needs to be scaled.
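A minimal sketch of g = f + c ∇²f with the five-point Laplacian above (numpy, zero padding at the borders; c = -1 here because the discrete mask used has a negative center coefficient, so subtracting the Laplacian adds the edge detail back):

```python
import numpy as np

def sharpen(f):
    """g(x, y) = f(x, y) + c * lap f(x, y) with c = -1 for the
    center -4 Laplacian; borders are zero-padded."""
    f = f.astype(float)
    fp = np.pad(f, 1)
    lap = (fp[2:, 1:-1] + fp[:-2, 1:-1]       # f(x+1, y) + f(x-1, y)
           + fp[1:-1, 2:] + fp[1:-1, :-2]     # f(x, y+1) + f(x, y-1)
           - 4.0 * f)
    return f - lap   # c = -1

# a step edge gets overshoot/undershoot on either side -> perceived sharpening
f = np.array([[10, 10, 80, 80]] * 4, dtype=float)
g = sharpen(f)
```

Note the result contains values outside [0, 255] (including negatives), which is exactly why the notes say the Laplacian output must be scaled before display.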
Blurred image of the North Pole of the Moon; Laplacian-filtered image; sharpening with c = 1 and c = 2
Unsharp Masking and Highboost Filtering
- a process used in the printing and publishing industry to sharpen images
- subtract an unsharp (smoothed) version of an image from the original image:
1. Blur the original image.
2. Subtract the blurred image from the original (the resulting difference is called the mask).
3. Add the mask to the original.
Let f̄(x, y) be the blurred image. The mask is given by:
g_mask(x, y) = f(x, y) - f̄(x, y)
g(x, y) = f(x, y) + k g_mask(x, y)
k = 1: unsharp masking
k > 1: highboost filtering
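The three steps can be sketched directly (numpy; a 3 x 3 box blur with replicate padding stands in for the Gaussian smoother, which is my simplification):

```python
import numpy as np

def box_blur(f):
    """3 x 3 averaging filter with replicate padding (stand-in for any smoother)."""
    fp = np.pad(f, 1, mode='edge')
    return sum(fp[i:i + f.shape[0], j:j + f.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def unsharp(f, k=1.0):
    """g = f + k * (f - blurred): k = 1 unsharp masking, k > 1 highboost."""
    f = f.astype(float)
    return f + k * (f - box_blur(f))   # f - box_blur(f) is the mask

f = np.array([[10.0, 10.0, 80.0, 80.0]] * 4)
g1 = unsharp(f, k=1.0)
g2 = unsharp(f, k=4.5)     # highboost: the edge is exaggerated more strongly
flat = np.full((3, 3), 7.0)  # a constant image has a zero mask
```

On a constant region the mask is zero, so smooth areas pass through unchanged; only transitions are boosted.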
original image; blurred image (Gaussian filter 5 x 5, sigma = 3); mask (difference between the above images); unsharp masking result; highboost filtering result (k = 4.5)
The Gradient for (Nonlinear) Image Sharpening
∇f = grad(f) = [g_x, g_y]^T = [∂f/∂x, ∂f/∂y]^T
- the gradient points in the direction of the greatest rate of change of f at location (x, y).
The magnitude (length) of the gradient is defined as:
M(x, y) = mag(∇f) = sqrt(g_x² + g_y²)
M(x, y) is an image of the same size as the original, called the gradient image (or simply the gradient). M(x, y) is rotation invariant (isotropic); the gradient vector ∇f itself is not. In some applications, the following formula is used:
M(x, y) ≈ |g_x| + |g_y|   (not isotropic)
Different ways of approximating g_x and g_y produce different filter operators.
Roberts cross-gradient operators (1965):
g_x = f(x+1, y+1) - f(x, y)
g_y = f(x+1, y) - f(x, y+1)
M(x, y) ≈ |g_x| + |g_y|
Sobel operators:
g_x = [f(x+1, y-1) + 2 f(x+1, y) + f(x+1, y+1)] - [f(x-1, y-1) + 2 f(x-1, y) + f(x-1, y+1)]
g_y = [f(x-1, y+1) + 2 f(x, y+1) + f(x+1, y+1)] - [f(x-1, y-1) + 2 f(x, y-1) + f(x+1, y-1)]
Roberts cross-gradient operators; Sobel operators
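The Sobel sums translate into array slices as follows (a numpy sketch using the |g_x| + |g_y| approximation from above; border pixels are simply left at 0, my own simplification):

```python
import numpy as np

def sobel_magnitude(f):
    """|g_x| + |g_y| with the Sobel differences; borders stay 0."""
    f = f.astype(float)
    gx = np.zeros_like(f)
    gy = np.zeros_like(f)
    # row difference: (row below, weighted 1-2-1) minus (row above)
    gx[1:-1, 1:-1] = (f[2:, :-2] + 2 * f[2:, 1:-1] + f[2:, 2:]
                      - f[:-2, :-2] - 2 * f[:-2, 1:-1] - f[:-2, 2:])
    # column difference: (column right, weighted 1-2-1) minus (column left)
    gy[1:-1, 1:-1] = (f[:-2, 2:] + 2 * f[1:-1, 2:] + f[2:, 2:]
                      - f[:-2, :-2] - 2 * f[1:-1, :-2] - f[2:, :-2])
    return np.abs(gx) + np.abs(gy)

# a vertical step edge responds in g_y only
f = np.array([[10, 10, 80, 80]] * 5, dtype=float)
M = sobel_magnitude(f)
```

The 2x center weighting is what distinguishes Sobel from a plain difference: it smooths slightly along the edge direction while differentiating across it.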
Filtering in the Frequency Domain
Filter: a device or material for suppressing or minimizing waves or oscillations of certain frequencies.
Frequency: the number of times that a periodic function repeats the same sequence of values during a unit variation of the independent variable.
Fourier Series and Transform
Fourier, in a memoir in 1807 (published in 1822 in his book Théorie analytique de la chaleur), stated that any periodic function can be expressed as the sum of sines and/or cosines of different frequencies, each multiplied by a different coefficient (what we now call a Fourier series). Even functions that are not periodic (but whose area under the curve is finite) can be expressed as the integral of sines and/or cosines multiplied by a weighting function: the Fourier transform. Both representations share the characteristic that a function expressed as either a Fourier series or a Fourier transform can be reconstructed (recovered) completely via an inverse process, with no loss of information. This allows us to work in the Fourier domain and then return to the original domain of the function without losing any information.
Complex Numbers
C = R + iI, with R and I real numbers and i = sqrt(-1); R is the real part and I the imaginary part of C.
C* = R - iI, the conjugate of the complex number C
|C| = sqrt(R² + I²); C = |C|(cos θ + i sin θ), the complex number in polar coordinates
e^{iθ} = cos θ + i sin θ (Euler's formula)
C = |C| e^{iθ}
Fourier Series
For f(t) a periodic function with period T (f(t + T) = f(t) for all t):
f(t) = sum_{n=-inf}^{inf} c_n e^{i 2 pi n t / T}
c_n = (1/T) integral_{-T/2}^{T/2} f(t) e^{-i 2 pi n t / T} dt, n = 0, ±1, ±2, ...
Impulses and the Sifting Property
A unit impulse located at t = 0, denoted δ(t), is defined as:
δ(t) = infinity if t = 0, and 0 if t ≠ 0, satisfying integral_{-inf}^{inf} δ(t) dt = 1
Physically, an impulse may be interpreted as a spike of infinite amplitude and zero duration, having unit area. An impulse has the sifting property with respect to integration:
integral f(t) δ(t) dt = f(0), for f continuous at t = 0
integral f(t) δ(t - t_0) dt = f(t_0), for f continuous at t_0
The unit discrete impulse δ(x) is defined as:
δ(x) = 1 if x = 0, and 0 if x ≠ 0, satisfying sum_x δ(x) = 1
The sifting property:
sum_x f(x) δ(x) = f(0)
sum_x f(x) δ(x - x_0) = f(x_0)
The impulse train s_{ΔT}(t):
s_{ΔT}(t) = sum_{n=-inf}^{inf} δ(t - n ΔT)
The Fourier Transform of a Function of One Continuous Variable
The Fourier transform of a continuous function f(t) of a continuous variable t is:
F(μ) = F{f(t)} = integral_{-inf}^{inf} f(t) e^{-i 2 pi μ t} dt
Conversely, given F(μ), we can obtain f(t) back using the inverse Fourier transform, f(t) = F^{-1}{F(μ)}, given by:
f(t) = integral_{-inf}^{inf} F(μ) e^{i 2 pi μ t} dμ
Equivalently,
F(μ) = integral_{-inf}^{inf} f(t) [cos(2 pi μ t) - i sin(2 pi μ t)] dt
The sinc function: sinc(x) = sin(pi x)/(pi x), with sinc(0) = 1
The Fourier transform of the unit impulse:
F{δ(t)} = integral δ(t) e^{-i 2 pi μ t} dt = 1
F{δ(t - t_0)} = integral δ(t - t_0) e^{-i 2 pi μ t} dt = e^{-i 2 pi μ t_0} = cos(2 pi μ t_0) - i sin(2 pi μ t_0)
The Fourier series for the impulse train s_{ΔT}(t):
s_{ΔT}(t) = (1/ΔT) sum_{n=-inf}^{inf} e^{i 2 pi n t / ΔT}
The Fourier transform of the periodic impulse train, S(μ), is also an impulse train:
S(μ) = (1/ΔT) sum_{n=-inf}^{inf} δ(μ - n/ΔT)
Convolution (for continuous functions f and h):
f(t) ⋆ h(t) = integral_{-inf}^{inf} f(τ) h(t - τ) dτ
The convolution theorem:
f(t) ⋆ h(t) <-> H(μ) F(μ)
f(t) h(t) <-> H(μ) ⋆ F(μ)
Convolution in one domain is equivalent to multiplication in the other domain. The convolution theorem is the foundation for filtering in the frequency domain.
Sampling and the Fourier Transform of Sampled Functions
Continuous functions have to be converted into a sequence of discrete values before they can be used by a computer. Consider a continuous function f(t) that we wish to sample at uniform intervals ΔT; we assume the function extends from -infinity to infinity. One way to model sampling is to multiply by an impulse train function:
f~(t) = f(t) s_{ΔT}(t) = sum_{n=-inf}^{inf} f(t) δ(t - n ΔT)   (the sampled function)
The value f_k of an arbitrary sample in the sequence is given by:
f_k = integral f(t) δ(t - k ΔT) dt = f(k ΔT)
The Fourier Transform of a Sampled Function
Let F(μ) be the Fourier transform of a continuous function f(t), and let f~(t) be the sampled function. The Fourier transform of the sampled function is:
F~(μ) = F{f~(t)} = F{f(t) s_{ΔT}(t)} = F(μ) ⋆ S(μ) = (1/ΔT) sum_{n=-inf}^{inf} F(μ - n/ΔT)
The Fourier transform F~(μ) of the sampled function f~(t) is an infinite, periodic sequence of copies of F(μ); the period is 1/ΔT.
The Sampling Theorem
Consider the problem of establishing the conditions under which a continuous function can be recovered uniquely from a set of its samples. A function f(t) is called band-limited if its Fourier transform is 0 outside the interval [-μ_max, μ_max].
We can recover f(t) from its sampled version if we can isolate a single copy of F(μ) from the periodic sequence of copies contained in F~(μ), the transform of the sampled function f~(t).
Recall that F~(μ) is continuous and periodic with period 1/ΔT, so all we need is one complete period to characterize the entire transform; we can then recover f(t) from that single period by using the inverse Fourier transform. Extracting from F~(μ) a single period equal to F(μ) is possible if the separation between copies is sufficient, i.e.,
1/(2ΔT) > μ_max
96 Sampling Theorem. A continuous, band-limited function can be recovered completely from a set of its samples if the samples are acquired at a rate exceeding twice the highest frequency content of the function, i.e., $1/\Delta T > 2\mu_{max}$. The number $2\mu_{max}$ is called the Nyquist rate.
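The sampling theorem can be illustrated numerically: a cosine sampled above its Nyquist rate is recovered at its true frequency, while sampling below the Nyquist rate folds it to an alias. This is a minimal sketch, assuming NumPy; the helper `dominant_freq` is a name introduced here for illustration.

```python
import numpy as np

def dominant_freq(signal_freq, fs, n=1000):
    """Sample a cosine of `signal_freq` Hz at rate `fs` Hz and return the
    frequency (in Hz) of the largest spectral peak in [0, fs/2]."""
    t = np.arange(n) / fs
    x = np.cos(2 * np.pi * signal_freq * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    return freqs[np.argmax(spectrum)]

# Sampling a 5 Hz cosine above the Nyquist rate (2 * 5 = 10 Hz):
f_true = dominant_freq(5.0, fs=50.0)   # close to 5 Hz: faithfully recovered
# Sampling it below the Nyquist rate aliases it to |5 - 8| = 3 Hz:
f_alias = dominant_freq(5.0, fs=8.0)   # close to 3 Hz: aliasing
```

The aliased peak at 3 Hz is exactly the folding of 5 Hz about the half-sampling frequency of 4 Hz.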
98 To see how the recovery of F(μ) from $\tilde F(\mu)$ is possible, we proceed as follows (see Figure 4.8). Define
$H(\mu) = \begin{cases} \Delta T & |\mu| \le \mu_{max} \\ 0 & \text{otherwise} \end{cases}$
Then
$F(\mu) = H(\mu)\,\tilde F(\mu)$ and $f(t) = \int_{-\infty}^{\infty} F(\mu)\,e^{i2\pi\mu t}\,d\mu$
99 Function H(μ) is called a lowpass filter because it passes frequencies at the low end of the frequency range but eliminates (filters out) all higher frequencies. It is also called an ideal lowpass filter.
100 The Discrete Fourier Transform (DFT) of One Variable
Obtaining the DFT from the Continuous Transform of a Sampled Function
The Fourier transform of a sampled, band-limited function is continuous and periodic with period $1/\Delta T$. In practice, we work with a finite number of samples, and the objective is to derive the DFT corresponding to such sample sets.
$\tilde F(\mu) = \int_{-\infty}^{\infty} \tilde f(t)\,e^{-i2\pi\mu t}\,dt = \int_{-\infty}^{\infty} \sum_{n=-\infty}^{\infty} f(t)\,\delta(t - n\Delta T)\,e^{-i2\pi\mu t}\,dt = \sum_{n=-\infty}^{\infty} f_n\,e^{-i2\pi\mu n\Delta T}$ (1)
101 What is the discrete version of $\tilde F(\mu)$? All we need to characterize $\tilde F(\mu)$ is one period, and sampling one period is the basis of the DFT. Suppose that we want to obtain M equally spaced samples of $\tilde F(\mu)$ taken over the period $[0, 1/\Delta T]$. Consider the frequencies
$\mu_m = \frac{m}{M\,\Delta T}, \quad m = 0, 1, \ldots, M-1$
Substituting them in (1) gives
$F_m = \sum_{n=0}^{M-1} f_n\,e^{-i2\pi mn/M}, \quad m = 0, 1, \ldots, M-1$ (2)
102 This expression is the discrete Fourier transform. Given a set {f_n} of M samples of f(t), equation (2) yields a set {F_m} of M complex discrete values, the discrete Fourier transform of the input samples. Conversely, given {F_m}, we can recover the sample set {f_n} by using the inverse discrete Fourier transform (IDFT):
$f_n = \frac{1}{M}\sum_{m=0}^{M-1} F_m\,e^{i2\pi mn/M}, \quad n = 0, 1, \ldots, M-1$
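The DFT/IDFT pair above can be implemented directly from equation (2) and checked against NumPy's FFT, which computes the same sums. A minimal sketch:

```python
import numpy as np

def dft(f):
    """Direct DFT, equation (2): F_m = sum_n f_n e^{-i 2π mn / M}."""
    M = len(f)
    n = np.arange(M)
    m = n.reshape(-1, 1)          # column of output indices
    return np.exp(-2j * np.pi * m * n / M) @ f

def idft(F):
    """Inverse DFT: f_n = (1/M) sum_m F_m e^{+i 2π mn / M}."""
    M = len(F)
    m = np.arange(M)
    n = m.reshape(-1, 1)
    return (np.exp(2j * np.pi * n * m / M) @ F) / M

samples = np.array([1.0, 2.0, 0.5, -1.0])
F = dft(samples)
assert np.allclose(F, np.fft.fft(samples))   # matches the FFT result
assert np.allclose(idft(F), samples)         # the round trip recovers f_n
```

The direct evaluation costs O(M²) operations; the FFT computes the identical values in O(M log M), which is why it is used in practice.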
104 Extension to Functions of Two Variables
The 2-D Impulse and Its Sifting Property
Continuous case:
$\delta(t,z) = \begin{cases} \infty & t = z = 0 \\ 0 & \text{otherwise} \end{cases}, \qquad \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \delta(t,z)\,dt\,dz = 1$
Sifting property:
$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(t,z)\,\delta(t,z)\,dt\,dz = f(0,0)$
$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(t,z)\,\delta(t - t_0,\, z - z_0)\,dt\,dz = f(t_0, z_0)$
105 Discrete case:
$\delta(x,y) = \begin{cases} 1 & x = y = 0 \\ 0 & \text{otherwise} \end{cases}$
$\sum_{x=-\infty}^{\infty}\sum_{y=-\infty}^{\infty} f(x,y)\,\delta(x,y) = f(0,0)$
$\sum_{x=-\infty}^{\infty}\sum_{y=-\infty}^{\infty} f(x,y)\,\delta(x - x_0,\, y - y_0) = f(x_0, y_0)$
106 The 2-D Continuous Fourier Transform Pair
$F(\mu,\nu) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f(t,z)\,e^{-i2\pi(\mu t + \nu z)}\,dt\,dz$
$f(t,z) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} F(\mu,\nu)\,e^{i2\pi(\mu t + \nu z)}\,d\mu\,d\nu$
Two-Dimensional Sampling and the 2-D Sampling Theorem
2-D impulse train:
$s_{\Delta T \Delta Z}(t,z) = \sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty}\delta(t - m\Delta T,\; z - n\Delta Z)$
f(t,z) is band-limited if its Fourier transform is 0 outside the rectangle defined by the intervals
107 $[-\mu_{max}, \mu_{max}]$ and $[-\nu_{max}, \nu_{max}]$, i.e.,
$F(\mu,\nu) = 0$ for $|\mu| \ge \mu_{max}$ and $|\nu| \ge \nu_{max}$
The two-dimensional sampling theorem states that a continuous, band-limited function f(t,z) can be recovered with no error from a set of its samples if the sampling intervals satisfy:
$\Delta T < \frac{1}{2\mu_{max}} \quad \text{and} \quad \Delta Z < \frac{1}{2\nu_{max}}$
108 The 2-D Discrete Fourier Transform and Its Inverse
$F(u,v) = \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,e^{-i2\pi(ux/M + vy/N)}$
where f(x,y) is a digital image of size M × N. Given the transform F(u,v), we can obtain f(x,y) by using the inverse discrete Fourier transform (IDFT):
$f(x,y) = \frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1} F(u,v)\,e^{i2\pi(ux/M + vy/N)}, \quad x = 0,1,\ldots,M-1,\; y = 0,1,\ldots,N-1$
109 Some Properties of the 2-D Discrete Fourier Transform
Relationships Between Spatial and Frequency Intervals
A digital image f(x,y) consists of M × N samples taken at intervals ΔT and ΔZ. The separations between the corresponding discrete, frequency-domain variables are given by:
$\Delta u = \frac{1}{M\,\Delta T}, \qquad \Delta v = \frac{1}{N\,\Delta Z}$
110 Translation and Rotation
$f(x,y)\,e^{i2\pi(u_0 x/M + v_0 y/N)} \Leftrightarrow F(u - u_0,\, v - v_0)$
$f(x - x_0,\, y - y_0) \Leftrightarrow F(u,v)\,e^{-i2\pi(u x_0/M + v y_0/N)}$
Using polar coordinates $x = r\cos\theta$, $y = r\sin\theta$, $u = \omega\cos\varphi$, $v = \omega\sin\varphi$, rotating f(x,y) by an angle θ₀ rotates its Fourier transform F by the same angle:
$f(r,\, \theta + \theta_0) \Leftrightarrow F(\omega,\, \varphi + \theta_0)$
111 Periodicity
$F(u,v) = F(u + k_1 M,\, v) = F(u,\, v + k_2 N) = F(u + k_1 M,\, v + k_2 N)$
$f(x,y) = f(x + k_1 M,\, y) = f(x,\, y + k_2 N) = f(x + k_1 M,\, y + k_2 N)$
for any integers $k_1, k_2$. Moreover,
$f(x,y)\,(-1)^{x+y} \Leftrightarrow F(u - M/2,\, v - N/2)$
This last relation shifts the data so that F(0,0) is at the center of the frequency rectangle defined by the intervals [0, M−1] and [0, N−1].
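The centering relation can be verified numerically: multiplying the image by (−1)^{x+y} before the DFT gives the same result as shifting the transform's quadrants so the dc term sits at the center (for even M and N). A minimal sketch with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((8, 8))          # M and N must be even for this identity

x, y = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
centered = np.fft.fft2(f * (-1.0) ** (x + y))   # transform of f(x,y)(-1)^{x+y}
shifted = np.fft.fftshift(np.fft.fft2(f))       # F with (0,0) moved to center

assert np.allclose(centered, shifted)
```

In practice libraries use the quadrant swap (`fftshift`) rather than the (−1)^{x+y} premultiplication, but the two are equivalent.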
112 Symmetry Properties
Even and odd parts of a function:
$w(x,y) = w_e(x,y) + w_o(x,y)$
$w_e(x,y) = \frac{w(x,y) + w(-x,-y)}{2}, \qquad w_o(x,y) = \frac{w(x,y) - w(-x,-y)}{2}$
$w_e(-x,-y) = w_e(x,y)$ (symmetric), $\qquad w_o(-x,-y) = -w_o(x,y)$ (antisymmetric)
113 For digital images, evenness and oddness become:
$w_e(x,y) = w_e(M - x,\, N - y), \qquad w_o(x,y) = -w_o(M - x,\, N - y)$
and
$\sum_{x=0}^{M-1}\sum_{y=0}^{N-1} w_e(x,y)\,w_o(x,y) = 0$
115 Fourier Spectrum and Phase Angle
Express the Fourier transform in polar form:
$F(u,v) = |F(u,v)|\,e^{i\phi(u,v)}$
$|F(u,v)| = \left[R^2(u,v) + I^2(u,v)\right]^{1/2}$ is called the Fourier (or frequency) spectrum,
$\phi(u,v) = \arctan\!\frac{I(u,v)}{R(u,v)}$ is the phase angle,
$P(u,v) = |F(u,v)|^2 = R^2(u,v) + I^2(u,v)$ is the power spectrum.
For a real f(x,y), the transform is conjugate symmetric, so $|F(u,v)| = |F(-u,-v)|$ and $\phi(u,v) = -\phi(-u,-v)$.
116 $F(0,0) = \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y) = MN\,\bar f$
where $\bar f$ is the average intensity of the image. Because MN usually is large, F(0,0) is the largest component of the spectrum, by a factor that can be several orders of magnitude larger than other terms. F(0,0) is sometimes called the dc component of the transform (dc = direct current, i.e., current of zero frequency).
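The dc relation is easy to confirm on a small array: the (0,0) term of the DFT equals the pixel sum, i.e., MN times the average intensity. A minimal sketch:

```python
import numpy as np

f = np.arange(12, dtype=float).reshape(3, 4)   # a tiny 3x4 "image"
F = np.fft.fft2(f)

M, N = f.shape
# F(0,0) is the sum of all pixel values, i.e. MN times the mean intensity.
assert np.isclose(F[0, 0].real, f.sum())
assert np.isclose(F[0, 0].real, M * N * f.mean())
assert np.isclose(F[0, 0].imag, 0.0)           # real for a real-valued image
```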
117 The 2-D Convolution Theorem
2-D circular convolution:
$f(x,y)\star h(x,y) = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} f(m,n)\,h(x - m,\, y - n), \quad x = 0,1,\ldots,M-1,\; y = 0,1,\ldots,N-1$
The 2-D convolution theorem:
$f(x,y)\star h(x,y) \Leftrightarrow F(u,v)\,H(u,v)$
$f(x,y)\,h(x,y) \Leftrightarrow F(u,v)\star H(u,v)$
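The circular nature of this convolution (the displaced indices wrap around modulo M and N) can be checked by comparing a direct evaluation of the sum against the DFT product. A minimal sketch:

```python
import numpy as np

def circular_convolve(f, h):
    """Direct 2-D circular convolution over an M x N period."""
    M, N = f.shape
    g = np.zeros((M, N))
    for x in range(M):
        for y in range(N):
            for m in range(M):
                for n in range(N):
                    # indices wrap around: this is what makes it *circular*
                    g[x, y] += f[m, n] * h[(x - m) % M, (y - n) % N]
    return g

rng = np.random.default_rng(1)
f = rng.random((4, 5))
h = rng.random((4, 5))

direct = circular_convolve(f, h)
via_dft = np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)).real
assert np.allclose(direct, via_dft)      # the convolution theorem holds
```

The wraparound visible in the index arithmetic is exactly the source of the wraparound error discussed below, which zero padding removes.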
122 Filtering in the Frequency Domain
Let f(x,y) be a digital image of size M × N and F(u,v) its (discrete) Fourier transform:
$F(u,v) = \sum_{x=0}^{M-1}\sum_{y=0}^{N-1} f(x,y)\,e^{-i2\pi(ux/M + vy/N)}$
Given the transform F(u,v), we can obtain f(x,y) by using the inverse discrete Fourier transform (IDFT):
123 $f(x,y) = \frac{1}{MN}\sum_{u=0}^{M-1}\sum_{v=0}^{N-1} F(u,v)\,e^{i2\pi(ux/M + vy/N)}, \quad x = 0,1,\ldots,M-1,\; y = 0,1,\ldots,N-1$
The 2-D Convolution Theorem
2-D circular convolution:
$f(x,y)\star h(x,y) = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1} f(m,n)\,h(x - m,\, y - n), \quad x = 0,1,\ldots,M-1,\; y = 0,1,\ldots,N-1$
$f(x,y)\star h(x,y) \Leftrightarrow F(u,v)\,H(u,v)$
$f(x,y)\,h(x,y) \Leftrightarrow F(u,v)\star H(u,v)$
125 If we use the DFT and the convolution theorem to obtain the same result as in the left column of Figure 4.28, we must take into account the periodicity inherent in the expression for the DFT. The problem that appears in Figure 4.28 is commonly referred to as wraparound error. The solution is simple. Consider two functions f and h composed of A and B samples, respectively. It can be shown that wraparound is avoided if we append zeros to both functions so that they have the same length P, choosing:
126 $P \ge A + B - 1$
This process is called zero padding. Let f(x,y) and h(x,y) be two image arrays of sizes A × B and C × D pixels, respectively. Wraparound error in their circular convolution can be avoided by padding these functions with zeros:
$f_p(x,y) = \begin{cases} f(x,y) & 0 \le x \le A-1 \text{ and } 0 \le y \le B-1 \\ 0 & A \le x \le P \text{ or } B \le y \le Q \end{cases}$
127 $h_p(x,y) = \begin{cases} h(x,y) & 0 \le x \le C-1 \text{ and } 0 \le y \le D-1 \\ 0 & C \le x \le P \text{ or } D \le y \le Q \end{cases}$
with $P \ge A + C - 1$ and $Q \ge B + D - 1$; for two arrays of the same size M × N this gives $P \ge 2M - 1$ and $Q \ge 2N - 1$.
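Zero padding to length P ≥ A + B − 1 makes the circular convolution computed through the DFT equal to the ordinary linear convolution. A minimal 1-D sketch, checked against NumPy's direct linear convolution:

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0])    # A = 4 samples
h = np.array([1.0, -1.0, 0.5])        # B = 3 samples

# Pad both sequences to a common length P >= A + B - 1 = 6.
P = len(f) + len(h) - 1
fp = np.concatenate([f, np.zeros(P - len(f))])
hp = np.concatenate([h, np.zeros(P - len(h))])

# DFT product of the *padded* sequences: no wraparound error.
padded = np.fft.ifft(np.fft.fft(fp) * np.fft.fft(hp)).real

assert np.allclose(padded, np.convolve(f, h))   # equals linear convolution
```

Without the padding (transforming f and h at length 4 after truncating h, say), the tail of the convolution would wrap around and corrupt the first samples.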
128 Frequency Domain Filtering Fundamentals
Given a digital image f(x,y) of size M × N, the basic filtering equation has the form:
$g(x,y) = \mathcal{F}^{-1}\{H(u,v)\,F(u,v)\}$ (1)
where $\mathcal{F}^{-1}$ is the inverse discrete Fourier transform (IDFT), F(u,v) is the discrete Fourier transform (DFT) of the input image, H(u,v) is a filter function (also called the filter or the filter transfer function), and g(x,y) is the filtered (output) image. F, H, and g are arrays of the same size as f, i.e., M × N.
129 Using an H(u,v) that is symmetric about its center simplifies the computations; it also requires F(u,v) to be centered. To obtain a centered F(u,v), the image f(x,y) is multiplied by $(-1)^{x+y}$ before computing its transform. Consider the filter
$H(u,v) = \begin{cases} 0 & (u,v) = (M/2,\, N/2) \\ 1 & \text{elsewhere} \end{cases}$
This filter rejects the dc term (responsible for the average intensity of an image) and passes all other terms of F(u,v).
130 This filter will reduce the average intensity of the output image to zero. Low frequencies in the transform are related to slowly varying intensity components in an image (such as the walls of a room or a cloudless sky), while high frequencies are caused by sharp transitions in intensity, such as edges and noise. A filter H(u,v) that attenuates high frequencies while passing low frequencies (i.e., a lowpass filter) blurs an image, while a filter with the opposite property (a highpass filter) enhances sharp detail but causes a reduction of contrast in the image.
131 [Figure: image of a damaged integrated circuit, its Fourier spectrum, and the result of setting F(0,0) = 0]
133 The DFT is a complex array of the form:
$F(u,v) = R(u,v) + i\,I(u,v)$
so the filtered image is
$g(x,y) = \mathcal{F}^{-1}\{H(u,v)\,R(u,v) + i\,H(u,v)\,I(u,v)\}$
The phase angle is not altered by filtering in this way. Filters that affect the real and imaginary parts equally, and thus have no effect on the phase, are called zero-phase-shift filters. Even small changes in the phase angle can have undesirable effects on the filtered output.
135 Main Steps for Filtering in the Frequency Domain
1. Given an input image f(x,y) of size M × N, obtain the padding parameters P and Q (usually P = 2M, Q = 2N).
2. Form a padded image $f_p(x,y)$ of size P × Q by appending the necessary number of zeros to f(x,y) (f sits in the upper-left corner of $f_p$).
3. Multiply $f_p(x,y)$ by $(-1)^{x+y}$ to center its transform.
4. Compute the DFT, F(u,v), of the image obtained in step 3.
136 5. Generate a real, symmetric filter function H(u,v) of size P × Q with center at coordinates (P/2, Q/2), and compute the array product
$G(u,v) = H(u,v)\,F(u,v)$
6. Obtain the processed image:
$g_p(x,y) = \left\{\mathrm{real}\!\left[\mathcal{F}^{-1}\{G(u,v)\}\right]\right\}(-1)^{x+y}$
The real part is selected in order to ignore parasitic complex components resulting from computational inaccuracies.
137 7. Obtain the output, filtered image g(x,y) by extracting the M × N region from the top-left corner of $g_p(x,y)$.
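The seven steps above can be sketched end to end. This minimal implementation uses a Gaussian lowpass H (introduced later in these notes) as the step-5 filter; the function name `freq_filter` is chosen here for illustration.

```python
import numpy as np

def freq_filter(f, D0):
    """Frequency-domain lowpass filtering following steps 1-7."""
    M, N = f.shape
    P, Q = 2 * M, 2 * N                              # step 1: padding sizes
    fp = np.zeros((P, Q))
    fp[:M, :N] = f                                   # step 2: f in upper-left corner
    x, y = np.meshgrid(np.arange(P), np.arange(Q), indexing="ij")
    fp = fp * (-1.0) ** (x + y)                      # step 3: center the transform
    F = np.fft.fft2(fp)                              # step 4: DFT
    D2 = (x - P / 2) ** 2 + (y - Q / 2) ** 2
    H = np.exp(-D2 / (2 * D0 ** 2))                  # step 5: Gaussian lowpass, centered
    G = H * F
    gp = np.fft.ifft2(G).real * (-1.0) ** (x + y)    # step 6: real part, un-center
    return gp[:M, :N]                                # step 7: crop to M x N

rng = np.random.default_rng(2)
img = rng.random((32, 32))
smoothed = freq_filter(img, D0=8.0)
assert smoothed.shape == img.shape
assert smoothed.var() < img.var()    # lowpass filtering smooths the noise
```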
138 Correspondence between Filtering in the Spatial and Frequency Domains
The link between filtering in the spatial domain and the frequency domain is the convolution theorem. Given a filter H(u,v), suppose that we want to find its equivalent representation in the spatial domain. Because
$f(x,y) = \delta(x,y) \Leftrightarrow F(u,v) = 1$
substituting in
$g(x,y) = \mathcal{F}^{-1}\{H(u,v)\,F(u,v)\}$ gives $h(x,y) = \mathcal{F}^{-1}\{H(u,v)\}$
139 The inverse transform of the frequency domain filter, h(x,y), is the corresponding filter in the spatial domain. Conversely, given a spatial filter h(x,y), we obtain its frequency domain representation by taking the Fourier transform of the spatial filter:
$h(x,y) \Leftrightarrow H(u,v)$
h(x,y) is sometimes called the (finite) impulse response (FIR) of H(u,v).
140 One way to take advantage of the properties of both domains is to specify a filter in the frequency domain, compute its IDFT, and then use the resulting full-size spatial filter as a guide for constructing smaller spatial filter masks. Let H(u) denote the 1-D frequency domain Gaussian filter:
$H(u) = A\,e^{-u^2/2\sigma^2}$
where σ is the standard deviation. The corresponding filter in the spatial domain is obtained by taking the inverse Fourier transform of H(u):
141 $h(x) = \sqrt{2\pi}\,\sigma A\,e^{-2\pi^2\sigma^2 x^2}$
which is also a Gaussian filter. When H(u) has a broad profile (large value of σ), h(x) has a narrow profile, and vice versa. As σ approaches infinity, H(u) tends toward a constant function and h(x) tends toward an impulse, which implies no filtering in either the frequency or the spatial domain.
142 Image Smoothing Using Frequency Domain Filters
Smoothing (blurring) is achieved in the frequency domain by high-frequency attenuation, that is, by lowpass filtering. We consider three types of lowpass filters: ideal, Butterworth, and Gaussian. The Butterworth filter has a parameter called the filter order: for high order values it approaches the ideal filter, and for low values it is closer to a Gaussian filter.
143 All filters and images in these sections are considered padded with zeros, so they are of size P × Q. The Butterworth filter may be viewed as providing a transition between the other two filters.
Ideal Lowpass Filters (ILPF)
$H(u,v) = \begin{cases} 1 & D(u,v) \le D_0 \\ 0 & D(u,v) > D_0 \end{cases}$
where $D_0 > 0$ is a positive constant and D(u,v) is the distance between (u,v) and the center of the frequency rectangle:
144 $D(u,v) = \left[(u - P/2)^2 + (v - Q/2)^2\right]^{1/2}$ (DUV)
The name ideal indicates that all frequencies on or inside the circle of radius $D_0$ are passed without attenuation, whereas all frequencies outside the circle are completely eliminated (filtered out). For an ILPF cross section, the point of transition between H(u,v) = 1 and H(u,v) = 0 is called the cutoff frequency. The sharp cutoff of an ILPF cannot be realized with electronic components, but it can be simulated in a computer.
145 We can compare lowpass filters by studying their behavior as a function of the cutoff frequency.
147 Butterworth Lowpass Filter (BLPF)
The transfer function of a Butterworth lowpass filter of order n, with cutoff frequency at distance $D_0$ from the origin, is:
$H(u,v) = \frac{1}{1 + \left[D(u,v)/D_0\right]^{2n}}$
where D(u,v) is given by relation (DUV).
148 The BLPF transfer function does not have a sharp discontinuity giving a clear cutoff between passed and filtered frequencies. For filters with smooth transfer functions, the cutoff frequency locus is defined as the set of points at which H(u,v) is down to a certain fraction of its maximum value.
150 Gaussian Lowpass Filter (GLPF)
$H(u,v) = e^{-D^2(u,v)/2D_0^2}$
where $D_0$ is the cutoff frequency. When $D(u,v) = D_0$, the GLPF is down to $e^{-1/2} \approx 0.607$ of its maximum value.
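The three lowpass transfer functions are easy to compare numerically at a few distances from the center. A minimal sketch that also confirms the characteristic values at the cutoff D₀ (1 → 0 jump for the ILPF, 0.5 for the BLPF, ≈ 0.607 for the GLPF):

```python
import numpy as np

D = np.array([0.0, 25.0, 50.0, 100.0])   # distances D(u,v) from the center
D0 = 50.0                                 # cutoff frequency
n = 2                                     # Butterworth order

ilpf = (D <= D0).astype(float)                    # ideal lowpass
blpf = 1.0 / (1.0 + (D / D0) ** (2 * n))          # Butterworth lowpass
glpf = np.exp(-D ** 2 / (2 * D0 ** 2))            # Gaussian lowpass

assert np.allclose(ilpf, [1, 1, 1, 0])            # sharp cutoff at D0
assert np.isclose(blpf[2], 0.5)                   # BLPF is 0.5 at D = D0
assert np.isclose(glpf[2], np.exp(-0.5))          # GLPF is ~0.607 at D = D0
```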
152 Image Sharpening Using Frequency Domain Filters
Edges and other abrupt changes in intensity are associated with high-frequency components, so image sharpening can be achieved in the frequency domain by highpass filters, which
153 attenuate the low-frequency components without changing the high-frequency information in the Fourier transform. A highpass filter is obtained from a given lowpass filter using the equation:
$H_{HP}(u,v) = 1 - H_{LP}(u,v)$
where $H_{LP}(u,v)$ is the transfer function of a lowpass filter.
155 Ideal Highpass Filter
A 2-D ideal highpass filter (IHPF) is defined as:
$H(u,v) = \begin{cases} 0 & D(u,v) \le D_0 \\ 1 & D(u,v) > D_0 \end{cases}$
where $D_0$ is the cutoff frequency and D(u,v) is given by equation (DUV). As with the ILPF, the IHPF is not physically realizable.
156 Butterworth Highpass Filter (BHPF)
The transfer function of a Butterworth highpass filter of order n, with cutoff frequency at distance $D_0$ from the origin, is:
157 $H(u,v) = \frac{1}{1 + \left[D_0/D(u,v)\right]^{2n}}$
158 Gaussian Highpass Filter (GHPF)
$H(u,v) = 1 - e^{-D^2(u,v)/2D_0^2}$
160 Figure 4.57(a) is an image of a thumb print in which smudges are present. A key step in automated fingerprint recognition is the enhancement of print ridges and the reduction of smudges. In this example a highpass filter was used to enhance the ridges and reduce the effects of smudging. Enhancement of the ridges is possible because they contain high frequencies, which are unchanged by a highpass filter. The filter reduces the low-frequency components,
161 which correspond to slowly varying intensities in the image, such as the background and smudges. Figure 4.57(b) is the result of using a BHPF of order n = 4 with a cutoff frequency $D_0 = 50$. Figure 4.57(c) is the result of setting to black all negative values and to white all positive values in Figure 4.57(b) (a thresholding intensity transformation).
162 The Laplacian in the Frequency Domain
The Laplacian can be implemented in the frequency domain using the filter:
$H(u,v) = -4\pi^2(u^2 + v^2)$
The centered Laplacian is:
$H(u,v) = -4\pi^2\left[(u - P/2)^2 + (v - Q/2)^2\right] = -4\pi^2 D^2(u,v)$
The Laplacian image is obtained as:
$\nabla^2 f(x,y) = \mathcal{F}^{-1}\{H(u,v)\,F(u,v)\}$
163 Enhancement is obtained with the equation:
$g(x,y) = f(x,y) - \nabla^2 f(x,y)$ (1)
Computing $\nabla^2 f(x,y)$ with the above relation introduces DFT scaling factors that can be several orders of magnitude larger than the maximum value of f. To fix this problem, we normalize the values of f(x,y) to the range [0,1] (before computing its DFT) and divide $\nabla^2 f(x,y)$ by its maximum value, which brings it to the range [−1,1].
164 Alternatively, the enhancement can be computed in a single step:
$g(x,y) = \mathcal{F}^{-1}\{F(u,v) - H(u,v)\,F(u,v)\} = \mathcal{F}^{-1}\!\left\{\left[1 + 4\pi^2 D^2(u,v)\right]F(u,v)\right\}$ (2)
This formula is simpler but has the same scaling problems as those mentioned above. Between (1) and (2), the former is preferred.
165 Unsharp Masking, Highboost Filtering and High-Frequency-Emphasis Filtering
$g_{mask}(x,y) = f(x,y) - f_{LP}(x,y)$, where $f_{LP}(x,y) = \mathcal{F}^{-1}\{H_{LP}(u,v)\,F(u,v)\}$
and $H_{LP}(u,v)$ is a lowpass filter. Here $f_{LP}(x,y)$ is a smoothed image, analogous to $\bar f(x,y)$ from the spatial domain. Then
$g(x,y) = f(x,y) + k\,g_{mask}(x,y)$
with k = 1 giving unsharp masking and k > 1 giving highboost filtering. Equivalently,
$g(x,y) = \mathcal{F}^{-1}\!\left\{\left[1 + k\,H_{HP}(u,v)\right]F(u,v)\right\}$
166 The factor $1 + k\,H_{HP}(u,v)$ is called a high-frequency-emphasis filter. Highpass filters set the dc term to zero, thus reducing the average intensity in the filtered image to 0; the high-frequency-emphasis filter does not have this problem. The constant k gives control over the proportion of high frequencies that influence the final result. A more general high-frequency-emphasis filter is:
$g(x,y) = \mathcal{F}^{-1}\!\left\{\left[k_1 + k_2\,H_{HP}(u,v)\right]F(u,v)\right\}$
where $k_1 \ge 0$ controls the offset from the origin and $k_2 \ge 0$ controls the contribution of high frequencies.
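Highboost filtering can be sketched directly from the formula g = IDFT{[1 + k·H_HP]F}, using a Gaussian highpass as H_HP. This is a minimal illustration, not the full padded pipeline (padding is skipped for brevity); the function name `highboost` is chosen here.

```python
import numpy as np

def highboost(f, D0, k):
    """g = IDFT{ [1 + k * H_HP(u,v)] F(u,v) } with a Gaussian highpass H_HP.
    k = 1 gives unsharp masking, k > 1 highboost filtering."""
    M, N = f.shape
    F = np.fft.fftshift(np.fft.fft2(f))              # center the transform
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    D2 = (u - M / 2) ** 2 + (v - N / 2) ** 2
    H_hp = 1.0 - np.exp(-D2 / (2 * D0 ** 2))         # Gaussian highpass
    G = (1.0 + k * H_hp) * F                         # high-frequency emphasis
    return np.fft.ifft2(np.fft.ifftshift(G)).real

rng = np.random.default_rng(3)
img = rng.random((16, 16))
sharpened = highboost(img, D0=4.0, k=2.0)
assert sharpened.shape == img.shape
assert np.isclose(sharpened.mean(), img.mean())   # the dc term passes unchanged
```

Note that the mean is preserved: at the center of the rectangle H_HP = 0, so the factor multiplying the dc term is exactly 1, which is precisely the advantage over a plain highpass filter.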
167 Homomorphic Filtering
An image can be expressed as the product of its illumination i(x,y) and reflectance r(x,y):
$f(x,y) = i(x,y)\,r(x,y)$
Because $\mathcal{F}\{f(x,y)\} \ne \mathcal{F}\{i(x,y)\}\,\mathcal{F}\{r(x,y)\}$, consider instead:
$z(x,y) = \ln f(x,y) = \ln i(x,y) + \ln r(x,y)$
Taking the Fourier transform of this relation, we have:
$Z(u,v) = F_i(u,v) + F_r(u,v)$
168 where Z, $F_i$, $F_r$ are the Fourier transforms of z(x,y), ln i(x,y), and ln r(x,y), respectively. We can filter Z(u,v) using a filter H(u,v) so that
$S(u,v) = H(u,v)\,Z(u,v) = H(u,v)\,F_i(u,v) + H(u,v)\,F_r(u,v)$
The filtered result in the spatial domain is:
$s(x,y) = \mathcal{F}^{-1}\{S(u,v)\} = \mathcal{F}^{-1}\{H(u,v)\,F_i(u,v)\} + \mathcal{F}^{-1}\{H(u,v)\,F_r(u,v)\}$
Define:
$i'(x,y) = \mathcal{F}^{-1}\{H(u,v)\,F_i(u,v)\}, \qquad r'(x,y) = \mathcal{F}^{-1}\{H(u,v)\,F_r(u,v)\}$
169 Because z(x,y) = ln f(x,y), we reverse the process to produce the output (filtered) image:
$g(x,y) = e^{s(x,y)} = e^{i'(x,y)}\,e^{r'(x,y)} = i_0(x,y)\,r_0(x,y)$
where $i_0(x,y) = e^{i'(x,y)}$ is the illumination of the output image and $r_0(x,y) = e^{r'(x,y)}$ is its reflectance.
170 The illumination component of an image generally is characterized by slow spatial variations, while the reflectance component tends to vary abruptly, particularly at the junction of dissimilar objects. These characteristics lead to associating the low frequencies of the Fourier transform of the logarithm of an image with illumination and the high frequencies with reflectance.
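The log–filter–exponentiate chain above can be sketched as follows. The specific transfer function used here, H = (γ_H − γ_L)(1 − e^{−c·D²/D₀²}) + γ_L, is an assumption: it is one common high-frequency-emphasis shape that attenuates low frequencies (illumination) by γ_L < 1 while amplifying high frequencies (reflectance) by γ_H > 1; the parameter values are also illustrative.

```python
import numpy as np

def homomorphic(f, D0=30.0, gamma_l=0.5, gamma_h=2.0, c=1.0):
    """Homomorphic filtering sketch: log, DFT, high-frequency-emphasis
    filter (assumed shape, see lead-in), inverse DFT, exponentiate."""
    z = np.log1p(f)                                  # z = ln(1 + f), avoids ln 0
    Z = np.fft.fftshift(np.fft.fft2(z))
    M, N = f.shape
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    D2 = (u - M / 2) ** 2 + (v - N / 2) ** 2
    # Attenuate lows toward gamma_l, boost highs toward gamma_h (assumed H):
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * D2 / D0 ** 2)) + gamma_l
    s = np.fft.ifft2(np.fft.ifftshift(H * Z)).real
    return np.expm1(s)                               # g = e^s - 1, undoes log1p

rng = np.random.default_rng(4)
img = rng.random((16, 16)) + 0.1
out = homomorphic(img)
assert out.shape == img.shape
assert np.all(np.isfinite(out))
```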
171 Selective Filtering
There are applications in which it is of interest to process specific bands of frequencies (bandreject or bandpass filters) or small regions of the frequency rectangle (notch filters).
Bandreject and Bandpass Filters
Ideal bandreject filter:
$H(u,v) = \begin{cases} 0 & D_0 - \frac{W}{2} \le D(u,v) \le D_0 + \frac{W}{2} \\ 1 & \text{otherwise} \end{cases}$
172 Butterworth bandreject filter:
$H(u,v) = \frac{1}{1 + \left[\dfrac{W\,D(u,v)}{D^2(u,v) - D_0^2}\right]^{2n}}$
Gaussian bandreject filter:
$H(u,v) = 1 - e^{-\left[\frac{D^2(u,v) - D_0^2}{W\,D(u,v)}\right]^2}$
In the above bandreject filters (ideal, Butterworth, and Gaussian), D(u,v) is the distance from the center of the
173 frequency rectangle given by (DUV), $D_0$ is the radial center of the band, and W is the width of the band. A bandpass filter is obtained from a bandreject filter using the formula:
$H_{BP}(u,v) = 1 - H_{BR}(u,v)$
174 Notch Filters
A notch filter rejects (or passes) frequencies in a predefined neighborhood about the center of the frequency rectangle. Zero-phase-shift filters must be symmetric about the origin, so a notch filter with center at $(u_0, v_0)$ must have a corresponding notch at location $(-u_0, -v_0)$. Notch reject filters are constructed as products of highpass filters whose centers have been translated to the centers of the notches. The general form is:
175 $H_{NR}(u,v) = \prod_{k=1}^{Q} H_k(u,v)\,H_{-k}(u,v)$
where $H_k(u,v)$ and $H_{-k}(u,v)$ are highpass filters whose centers are at $(u_k, v_k)$ and $(-u_k, -v_k)$, respectively. These centers are specified with respect to the center of the frequency rectangle, (M/2, N/2). The distance computations for each filter are made using the expressions:
176 $D_k(u,v) = \left[(u - M/2 - u_k)^2 + (v - N/2 - v_k)^2\right]^{1/2}$
$D_{-k}(u,v) = \left[(u - M/2 + u_k)^2 + (v - N/2 + v_k)^2\right]^{1/2}$
A Butterworth notch reject filter of order n with 3 notch pairs:
$H_{NR}(u,v) = \prod_{k=1}^{3}\left[\frac{1}{1 + \left[D_{0k}/D_k(u,v)\right]^{2n}}\right]\left[\frac{1}{1 + \left[D_{0k}/D_{-k}(u,v)\right]^{2n}}\right]$
177 A notch pass filter is obtained from a notch reject filter using the expression:
$H_{NP}(u,v) = 1 - H_{NR}(u,v)$
One of the applications of notch filtering is selectively modifying local regions of the DFT. This type of processing is done interactively, working directly on DFTs obtained without padding.
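A Butterworth notch reject transfer function can be built directly from the product formula above. A minimal sketch (the small epsilon guard against division by zero at the exact notch centers is an implementation detail added here):

```python
import numpy as np

def butterworth_notch_reject(M, N, centers, D0, n):
    """Product of Butterworth highpass factors, one per notch (u_k, v_k)
    and one per symmetric pair (-u_k, -v_k), both offsets from (M/2, N/2)."""
    u, v = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
    H = np.ones((M, N))
    for (uk, vk) in centers:
        Dk = np.sqrt((u - M / 2 - uk) ** 2 + (v - N / 2 - vk) ** 2)
        Dmk = np.sqrt((u - M / 2 + uk) ** 2 + (v - N / 2 + vk) ** 2)
        # Guard against division by zero exactly at a notch center:
        H *= 1.0 / (1.0 + (D0 / np.maximum(Dk, 1e-8)) ** (2 * n))
        H *= 1.0 / (1.0 + (D0 / np.maximum(Dmk, 1e-8)) ** (2 * n))
    return H

H = butterworth_notch_reject(64, 64, centers=[(10, 0)], D0=4.0, n=2)
assert H.shape == (64, 64)
assert H[32 + 10, 32] < 0.01      # ~0 at the notch center...
assert H[32 - 10, 32] < 0.01      # ...and at its symmetric pair
assert H[32, 32 + 20] > 0.9       # nearly 1 far away from both notches
```

Multiplying a centered DFT by this H and inverting suppresses, for example, the spectral spikes produced by periodic interference at ±(u_k, v_k).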
Digital Image Processing Jen-Hui Chuang Department of Computer Science National Chiao Tung University 2 3 Image Enhancement in the Spatial Domain 3.1 Background 3.4 Enhancement Using Arithmetic/Logic Operations
More informationInterpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating, and geometric corrections.
Image Interpolation 48 Interpolation is a basic tool used extensively in tasks such as zooming, shrinking, rotating, and geometric corrections. Fundamentally, interpolation is the process of using known
More informationCoE4TN3 Medical Image Processing
CoE4TN3 Medical Image Processing Image Restoration Noise Image sensor might produce noise because of environmental conditions or quality of sensing elements. Interference in the image transmission channel.
More informationLecture 2 Image Processing and Filtering
Lecture 2 Image Processing and Filtering UW CSE vision faculty What s on our plate today? Image formation Image sampling and quantization Image interpolation Domain transformations Affine image transformations
More informationIMAGE ENHANCEMENT in SPATIAL DOMAIN by Intensity Transformations
It makes all the difference whether one sees darkness through the light or brightness through the shadows David Lindsay IMAGE ENHANCEMENT in SPATIAL DOMAIN by Intensity Transformations Kalyan Kumar Barik
More informationECG782: Multidimensional Digital Signal Processing
Professor Brendan Morris, SEB 3216, brendan.morris@unlv.edu ECG782: Multidimensional Digital Signal Processing Spring 2014 TTh 14:30-15:45 CBC C313 Lecture 03 Image Processing Basics 13/01/28 http://www.ee.unlv.edu/~b1morris/ecg782/
More informationFiltering Images. Contents
Image Processing and Data Visualization with MATLAB Filtering Images Hansrudi Noser June 8-9, 010 UZH, Multimedia and Robotics Summer School Noise Smoothing Filters Sigmoid Filters Gradient Filters Contents
More informationNoise Model. Important Noise Probability Density Functions (Cont.) Important Noise Probability Density Functions
Others -- Noise Removal Techniques -- Edge Detection Techniques -- Geometric Operations -- Color Image Processing -- Color Spaces Xiaojun Qi Noise Model The principal sources of noise in digital images
More informationComputer Vision and Graphics (ee2031) Digital Image Processing I
Computer Vision and Graphics (ee203) Digital Image Processing I Dr John Collomosse J.Collomosse@surrey.ac.uk Centre for Vision, Speech and Signal Processing University of Surrey Learning Outcomes After
More informationCS4442/9542b Artificial Intelligence II prof. Olga Veksler
CS4442/9542b Artificial Intelligence II prof. Olga Veksler Lecture 2 Computer Vision Introduction, Filtering Some slides from: D. Jacobs, D. Lowe, S. Seitz, A.Efros, X. Li, R. Fergus, J. Hayes, S. Lazebnik,
More informationIntroduction to Digital Image Processing
Introduction to Digital Image Processing Ranga Rodrigo June 9, 29 Outline Contents Introduction 2 Point Operations 2 Histogram Processing 5 Introduction We can process images either in spatial domain or
More informationDigital Image Processing. Image Enhancement - Filtering
Digital Image Processing Image Enhancement - Filtering Derivative Derivative is defined as a rate of change. Discrete Derivative Finite Distance Example Derivatives in 2-dimension Derivatives of Images
More informationWhat will we learn? Neighborhood processing. Convolution and correlation. Neighborhood processing. Chapter 10 Neighborhood Processing
What will we learn? Lecture Slides ME 4060 Machine Vision and Vision-based Control Chapter 10 Neighborhood Processing By Dr. Debao Zhou 1 What is neighborhood processing and how does it differ from point
More informationCS4442/9542b Artificial Intelligence II prof. Olga Veksler
CS4442/9542b Artificial Intelligence II prof. Olga Veksler Lecture 8 Computer Vision Introduction, Filtering Some slides from: D. Jacobs, D. Lowe, S. Seitz, A.Efros, X. Li, R. Fergus, J. Hayes, S. Lazebnik,
More informationChapter 3 Image Enhancement in the Spatial Domain
Chapter 3 Image Enhancement in the Spatial Domain Yinghua He School o Computer Science and Technology Tianjin University Image enhancement approaches Spatial domain image plane itsel Spatial domain methods
More informationEdge and local feature detection - 2. Importance of edge detection in computer vision
Edge and local feature detection Gradient based edge detection Edge detection by function fitting Second derivative edge detectors Edge linking and the construction of the chain graph Edge and local feature
More informationUnit - I Computer vision Fundamentals
Unit - I Computer vision Fundamentals It is an area which concentrates on mimicking human vision systems. As a scientific discipline, computer vision is concerned with the theory behind artificial systems
More informationLecture 7: Most Common Edge Detectors
#1 Lecture 7: Most Common Edge Detectors Saad Bedros sbedros@umn.edu Edge Detection Goal: Identify sudden changes (discontinuities) in an image Intuitively, most semantic and shape information from the
More informationSYDE 575: Introduction to Image Processing
SYDE 575: Introduction to Image Processing Image Enhancement and Restoration in Spatial Domain Chapter 3 Spatial Filtering Recall 2D discrete convolution g[m, n] = f [ m, n] h[ m, n] = f [i, j ] h[ m i,
More informationIn this lecture. Background. Background. Background. PAM3012 Digital Image Processing for Radiographers
PAM3012 Digital Image Processing for Radiographers Image Enhancement in the Spatial Domain (Part I) In this lecture Image Enhancement Introduction to spatial domain Information Greyscale transformations
More information3.4& Fundamentals& mechanics of spatial filtering(page 166) Spatial filter(mask) Filter coefficients Filter response
Image enhancement in the spatial domain(3.4-3.7) SLIDE 1/21 3.4& 3.4.1 Fundamentals& mechanics of spatial filtering(page 166) Spatial filter(mask) Filter coefficients Filter response Example: 3 3mask Linear
More informationSharpening through spatial filtering
Sharpening through spatial filtering Stefano Ferrari Università degli Studi di Milano stefano.ferrari@unimi.it Methods for Image Processing academic year 2017 2018 Sharpening The term sharpening is referred
More informationFall 2015 Dr. Michael J. Reale
CS 49: Computer Vision MIDTERM REVIEW Fall 25 Dr. Michael J. Reale Midterm Review The Midterm will cover: REVIEW slide decks (inclusive) Quizzes through 4 (inclusive) REVIEW : INTRODUCTION Basic Terms
More informationCS4733 Class Notes, Computer Vision
CS4733 Class Notes, Computer Vision Sources for online computer vision tutorials and demos - http://www.dai.ed.ac.uk/hipr and Computer Vision resources online - http://www.dai.ed.ac.uk/cvonline Vision
More informationIMAGING. Images are stored by capturing the binary data using some electronic devices (SENSORS)
IMAGING Film photography Digital photography Images are stored by capturing the binary data using some electronic devices (SENSORS) Sensors: Charge Coupled Device (CCD) Photo multiplier tube (PMT) The
More information9 length of contour = no. of horizontal and vertical components + ( 2 no. of diagonal components) diameter of boundary B
8. Boundary Descriptor 8.. Some Simple Descriptors length of contour : simplest descriptor - chain-coded curve 9 length of contour no. of horiontal and vertical components ( no. of diagonal components
More informationDigital Image Processing, 3rd ed. Gonzalez & Woods
Last time: Affine transforms (linear spatial transforms) [ x y 1 ]=[ v w 1 ] xy t 11 t 12 0 t 21 t 22 0 t 31 t 32 1 IMTRANSFORM Apply 2-D spatial transformation to image. B = IMTRANSFORM(A,TFORM) transforms
More informationChapter 10: Image Segmentation. Office room : 841
Chapter 10: Image Segmentation Lecturer: Jianbing Shen Email : shenjianbing@bit.edu.cn Office room : 841 http://cs.bit.edu.cn/shenjianbing cn/shenjianbing Contents Definition and methods classification
More information2D Image Processing INFORMATIK. Kaiserlautern University. DFKI Deutsches Forschungszentrum für Künstliche Intelligenz
2D Image Processing - Filtering Prof. Didier Stricker Kaiserlautern University http://ags.cs.uni-kl.de/ DFKI Deutsches Forschungszentrum für Künstliche Intelligenz http://av.dfki.de 1 What is image filtering?
More informationDigital Image Processing. Lecture # 3 Image Enhancement
Digital Image Processing Lecture # 3 Image Enhancement 1 Image Enhancement Image Enhancement 3 Image Enhancement 4 Image Enhancement Process an image so that the result is more suitable than the original
More informationDigital Image Analysis and Processing
Digital Image Analysis and Processing CPE 0907544 Image Enhancement Part I Intensity Transformation Chapter 3 Sections: 3.1 3.3 Dr. Iyad Jafar Outline What is Image Enhancement? Background Intensity Transformation
More informationImage restoration. Lecture 14. Milan Gavrilovic Centre for Image Analysis Uppsala University
Image restoration Lecture 14 Milan Gavrilovic milan@cb.uu.se Centre for Image Analysis Uppsala University Computer Assisted Image Analysis 2009-05-08 M. Gavrilovic (Uppsala University) L14 Image restoration
More informationf(x,y) is the original image H is the degradation process (or function) n(x,y) represents noise g(x,y) is the obtained degraded image p q
Image Restoration Image Restoration G&W Chapter 5 5.1 The Degradation Model 5.2 5.105.10 browse through the contents 5.11 Geometric Transformations Goal: Reconstruct an image that has been degraded in
More informationComputer Vision 2. SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung. Computer Vision 2 Dr. Benjamin Guthier
Computer Vision 2 SS 18 Dr. Benjamin Guthier Professur für Bildverarbeitung Computer Vision 2 Dr. Benjamin Guthier 1. IMAGE PROCESSING Computer Vision 2 Dr. Benjamin Guthier Content of this Chapter Non-linear
More informationAnno accademico 2006/2007. Davide Migliore
Robotica Anno accademico 6/7 Davide Migliore migliore@elet.polimi.it Today What is a feature? Some useful information The world of features: Detectors Edges detection Corners/Points detection Descriptors?!?!?
More informationMotivation. Gray Levels
Motivation Image Intensity and Point Operations Dr. Edmund Lam Department of Electrical and Electronic Engineering The University of Hong ong A digital image is a matrix of numbers, each corresponding
More informationImage Processing Lecture 10
Image Restoration Image restoration attempts to reconstruct or recover an image that has been degraded by a degradation phenomenon. Thus, restoration techniques are oriented toward modeling the degradation
More informationDigital Image Fundamentals
Digital Image Fundamentals Image Quality Objective/ subjective Machine/human beings Mathematical and Probabilistic/ human intuition and perception 6 Structure of the Human Eye photoreceptor cells 75~50
More informationImage Processing Fundamentals. Nicolas Vazquez Principal Software Engineer National Instruments
Image Processing Fundamentals Nicolas Vazquez Principal Software Engineer National Instruments Agenda Objectives and Motivations Enhancing Images Checking for Presence Locating Parts Measuring Features
More informationBiomedical Image Analysis. Point, Edge and Line Detection
Biomedical Image Analysis Point, Edge and Line Detection Contents: Point and line detection Advanced edge detection: Canny Local/regional edge processing Global processing: Hough transform BMIA 15 V. Roth
More informationDigital Image Processing
Digital Image Processing Lecture # 6 Image Enhancement in Spatial Domain- II ALI JAVED Lecturer SOFTWARE ENGINEERING DEPARTMENT U.E.T TAXILA Email:: ali.javed@uettaxila.edu.pk Office Room #:: 7 Local/
More informationPerception. Autonomous Mobile Robots. Sensors Vision Uncertainties, Line extraction from laser scans. Autonomous Systems Lab. Zürich.
Autonomous Mobile Robots Localization "Position" Global Map Cognition Environment Model Local Map Path Perception Real World Environment Motion Control Perception Sensors Vision Uncertainties, Line extraction
More informationINTENSITY TRANSFORMATION AND SPATIAL FILTERING
1 INTENSITY TRANSFORMATION AND SPATIAL FILTERING Lecture 3 Image Domains 2 Spatial domain Refers to the image plane itself Image processing methods are based and directly applied to image pixels Transform
More informationImage Processing. BITS Pilani. Dr Jagadish Nayak. Dubai Campus
Image Processing BITS Pilani Dubai Campus Dr Jagadish Nayak Image Segmentation BITS Pilani Dubai Campus Fundamentals Let R be the entire spatial region occupied by an image Process that partitions R into
More informationLecture Image Enhancement and Spatial Filtering
Lecture Image Enhancement and Spatial Filtering Harvey Rhody Chester F. Carlson Center for Imaging Science Rochester Institute of Technology rhody@cis.rit.edu September 29, 2005 Abstract Applications of
More informationLecture 2: 2D Fourier transforms and applications
Lecture 2: 2D Fourier transforms and applications B14 Image Analysis Michaelmas 2017 Dr. M. Fallon Fourier transforms and spatial frequencies in 2D Definition and meaning The Convolution Theorem Applications
More informationAn introduction to image enhancement in the spatial domain.
University of Antwerp Department of Mathematics and Computer Science An introduction to image enhancement in the spatial domain. Sven Maerivoet November, 17th 2000 Contents 1 Introduction 1 1.1 Spatial
More informationIslamic University of Gaza Faculty of Engineering Computer Engineering Department
Islamic University of Gaza Faculty of Engineering Computer Engineering Department EELE 5310: Digital Image Processing Spring 2011 Date: May 29, 2011 Time : 120 minutes Final Exam Student Name: Student
More informationComputer Vision I. Announcements. Fourier Tansform. Efficient Implementation. Edge and Corner Detection. CSE252A Lecture 13.
Announcements Edge and Corner Detection HW3 assigned CSE252A Lecture 13 Efficient Implementation Both, the Box filter and the Gaussian filter are separable: First convolve each row of input image I with
More informationMultimedia Computing: Algorithms, Systems, and Applications: Edge Detection
Multimedia Computing: Algorithms, Systems, and Applications: Edge Detection By Dr. Yu Cao Department of Computer Science The University of Massachusetts Lowell Lowell, MA 01854, USA Part of the slides
More informationMotivation. Intensity Levels
Motivation Image Intensity and Point Operations Dr. Edmund Lam Department of Electrical and Electronic Engineering The University of Hong ong A digital image is a matrix of numbers, each corresponding
More information
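The standard remedy for the forward-mapping problems above (collisions and holes) is backward (inverse) mapping: scan the *output* pixels (x, y), compute the source location (v, w) = [x, y, 1] T⁻¹, and interpolate there. The following is a minimal NumPy sketch of this idea using nearest-neighbor interpolation and the row-vector convention of equation (AT); the function name `affine_warp` is illustrative, not taken from the notes.

```python
import numpy as np

def affine_warp(img, T):
    """Backward-map an affine warp.

    For each output pixel (x, y), compute its source location
    (v, w) = [x, y, 1] @ inv(T) and sample the input image with
    nearest-neighbor interpolation. Unlike forward mapping, every
    output pixel gets exactly one intensity assignment, so there
    are no holes or collisions.
    """
    H, W = img.shape                       # rows (w-axis), columns (v-axis)
    T_inv = np.linalg.inv(T)
    out = np.zeros_like(img)
    for y in range(H):
        for x in range(W):
            v, w, _ = np.array([x, y, 1.0]) @ T_inv
            vi, wi = int(round(v)), int(round(w))
            if 0 <= vi < W and 0 <= wi < H:
                out[y, x] = img[wi, vi]    # img indexed as [row, col]
            # else: source falls outside the input; leave background value
    return out
```

With the identity matrix the warp returns the image unchanged; with T = [[1,0,0],[0,1,0],[t31,t32,1]] it translates by (t31, t32), matching Table 1. Replacing the `round` step with a weighted average of the four surrounding pixels would give bilinear instead of nearest-neighbor interpolation.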