Chapter 11 Image Processing


1 Image Processing

Low-level Image Processing
Operates directly on a stored image to improve or enhance it and to help human and computer recognition of the image. The stored image consists of a two-dimensional array of pixels (picture elements), with the origin (0, 0) at the top left corner and picture element (pixel) p(i, j) at row i, column j.

Figure 11.1 Pixmap.

Many low-level image-processing operations assume monochrome images and refer to pixels as having gray level values or intensities. The gray level has a range from a minimum value to a maximum value (the grayscale). Typically, the grayscale is normalized to run from 0 for black to 255 for white, which requires an 8-bit pixel. Color images typically use three values, one for each of the primary colors (red, green, and blue), or use the pixel value to point to a look-up table.

2 Computational Requirements

Suppose a pixmap has 1024 x 1024 pixels and 8-bit pixels. The storage requirement is then 2^20 bytes (1 Mbyte).

Suppose each pixel must be operated upon just once. Then 2^20 operations are needed in the time of one frame. At 10^-8 second/operation (10 ns/operation), this would take 10 ms.

In real-time applications, the speed of computation must be at the frame rate (typically 60-85 frames/second). All pixels in the image must be processed in the time of one frame; that is, in 12-16 ms. However, typically many high-complexity operations must be performed, not just one operation.

3 Point Processing

Image-processing operations that produce an output based upon the value of a single pixel.

Thresholding
Pixels with values above a predetermined threshold value are kept and the others below the threshold are reduced to 0. Given a pixel, xi, the operation on each pixel is

    if (xi < threshold) xi = 0; else xi = 1;

Contrast Stretching
The range of gray level values is extended to make the details more visible. Given a pixel of value xi within the range xl to xh, the contrast is stretched to the range xL to xH by

    xi' = (xi - xl) * (xH - xL) / (xh - xl) + xL

Gray Level Reduction
The number of bits used to represent the gray level is reduced, perhaps to reduce the storage requirements. A simple method is to truncate the less significant bits. This method only works well, and maintains sufficient information, if the full range of gray levels is well represented in the image.

4 Histogram

Shows the number of pixels in the image at each gray level.

Figure 11.2 Image histogram (number of pixels versus gray level).

Sequential code:

    for (i = 0; i < height_max; i++)
      for (j = 0; j < width_max; j++)
        hist[p[i][j]] = hist[p[i][j]] + 1;

where the pixels are contained in the array p[][] and hist[k] will hold the number of pixels having the kth gray level. Computing a histogram is similar to adding numbers to an accumulating sum, and similar parallel solutions can be used for computing histograms.

5 Smoothing, Sharpening, and Noise Reduction

Smoothing suppresses large fluctuations in intensity over the image area and can be achieved by reducing the high-frequency content. Sharpening accentuates the transitions, enhancing the detail. Noise reduction suppresses a noise signal present in the image.

These operations often require a local operation with access to a group of pixels around the pixel to be updated. A common group size is 3 x 3:

    x0 x1 x2
    x3 x4 x5
    x6 x7 x8

Figure 11.3 Pixel values for a 3 x 3 group.

6 Mean

A simple smoothing technique is to take the mean or average of a group of pixels as the new value of the central pixel. Given a 3 x 3 group, the computation is

    x4' = (x0 + x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8) / 9

where x4' is the new value for x4.

Sequential Code
Nine steps to compute the average for each pixel (eight additions and one division), or 9n steps for n pixels: a sequential time complexity of O(n).

7 Parallel Code

Can take advantage of each processor being able to compute a partial sum for its neighboring processors as well as itself. The number of steps can be reduced by separating the computation into four data transfer steps in lock-step fashion:

    Step 1   Each pixel adds pixel from left
    Step 2   Each pixel adds pixel from right
    Step 3   Each pixel adds pixel from above
    Step 4   Each pixel adds pixel from below

Figure 11.4 Four-step data transfer for the computation of mean.

8

Step 1   Each processor receives the value of the pixel from its left and adds that value to the value of its own pixel to form an accumulating sum. The center processor (x4) forms the accumulating sum x3 + x4.

Step 2   Each processor receives the value of the pixel from its right and adds that value to its accumulating sum. The center processor forms the accumulating sum x3 + x4 + x5.

Step 3   Each processor receives the accumulated value at step 2 from above it and adds that value to its accumulating sum. It must also retain the original accumulated value for the next step. The center processor forms the accumulating sum x0 + x1 + x2 + x3 + x4 + x5, and retains x3 + x4 + x5 separately.

Step 4   Each processor receives the accumulated value at step 2 from below it and adds that value to its accumulating sum. The center processor forms the accumulating sum x0 + x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8.

Figure 11.5 Parallel mean data accumulation: steps 1 and 2 form the row sums x0 + x1 + x2, x3 + x4 + x5, x6 + x7 + x8; steps 3 and 4 accumulate the row sums from above and below.

9 Median

Sequential Code
The median can be found by ordering the pixel values from smallest to largest and choosing the center pixel value (assuming an odd number of pixels). With a 3 x 3 group, suppose the values in ascending order are y0, y1, y2, y3, y4, y5, y6, y7, and y8. The median is y4.

This suggests that all the values in the group must first be sorted, and then the fifth element taken to replace the original value of the pixel. Using a sequential sorting algorithm such as bubble sort, in which the lesser values are found first in order, sorting can in fact be terminated after the fifth lowest value is obtained. The number of steps for finding each median is then given by

    8 + 7 + 6 + 5 + 4 = 30 steps

or 30n for n pixels.

10 Parallel Code

An Approximate Sorting Algorithm
First, a compare-and-exchange operation is performed on each of the rows, requiring three steps. For the ith row, we have

    p(i,j-1) <-> p(i,j)    p(i,j) <-> p(i,j+1)    p(i,j-1) <-> p(i,j)

where <-> means compare and exchange if the left gray level is greater than the right gray level. Then the columns are sorted using three compare-and-exchange operations. For the jth column, we have

    p(i-1,j) <-> p(i,j)    p(i,j) <-> p(i+1,j)    p(i-1,j) <-> p(i,j)

The value in p(i,j) is taken to be the fifth largest pixel value. This algorithm does not always select the fifth largest value, but it is a reasonable approximation and only requires six steps.

Figure 11.6 Approximate median algorithm requiring six steps (largest in row, next largest in row, next largest in column).

11 Weighted Masks

The simple mean method can be described by a weighted 3 x 3 mask that gives the amount of each pixel value used in the averaging. Suppose the weights are w0, w1, w2, w3, w4, w5, w6, w7, and w8, and the pixel values are x0, x1, x2, x3, x4, x5, x6, x7, and x8. The new center pixel value, x4', is given by

    x4' = (w0 x0 + w1 x1 + w2 x2 + w3 x3 + w4 x4 + w5 x5 + w6 x6 + w7 x7 + w8 x8) / k

The scale factor, 1/k, is set to maintain the correct grayscale balance in the image after the operation. Often, k is given by w0 + w1 + w2 + w3 + w4 + w5 + w6 + w7 + w8. The summation of products, wi xi, from the two functions w and x is the (discrete) cross-correlation of x with w (written x (*) w).

    Mask              Pixels            Result
    w0 w1 w2          x0 x1 x2
    w3 w4 w5          x3 x4 x5    =    x4'
    w6 w7 w8          x6 x7 x8

Figure 11.7 Using a 3 x 3 weighted mask.

12

The mask to compute the mean has all nine weights equal to 1, with k = 9:

    1 1 1
    1 1 1
    1 1 1

Figure 11.8 Mask to compute mean (k = 9).

13

A noise-reduction mask weights the center pixel more heavily than its neighbors (center weight 8, other weights 1), with k = 16:

    1 1 1
    1 8 1
    1 1 1

Figure 11.9 A noise reduction mask (k = 16).

14

A high-pass sharpening filter mask has a center weight of 8 and -1 elsewhere, with k = 9:

    -1 -1 -1
    -1  8 -1
    -1 -1 -1

Figure 11.10 High-pass sharpening filter mask. The computation done with this mask is

    x4' = (8 x4 - x0 - x1 - x2 - x3 - x5 - x6 - x7 - x8) / 9

15 Edge Detection

Highlighting the edges of an object, where an edge is a significant change in the gray level intensity.

Gradient and Magnitude
Let us first consider a one-dimensional gray level function, f(x) (say along a row). If the function f(x) is differentiated, the first derivative, df/dx, measures the gradient of the transition and will be a positive-going or negative-going spike at a transition. The direction of the transition (either increasing or decreasing) can be identified by the polarity of the first derivative.

Figure 11.11 Edge detection using differentiation (intensity transition, first derivative, second derivative).

16 Image Function

A two-dimensional discretized gray level function, f(x, y).

Gradient (magnitude), given by

    |grad f| = sqrt( (df/dx)^2 + (df/dy)^2 )

Gradient direction, given as an angle by

    phi(x, y) = tan^-1( (df/dy) / (df/dx) )

where phi is the angle with respect to the y-axis. The gradient can be approximated to

    |grad f| ~ |df/dy| + |df/dx|

for reduced computational effort.

Figure 11.12 Gray level gradient and direction (image f(x, y); the gradient is perpendicular to the contour of constant intensity).

17 Edge Detection Masks

For discrete functions, the derivatives are approximated by differences. The term df/dx is the difference in the x-direction and df/dy is the difference in the y-direction. We might consider computing the approximate gradient using pixel values x5 and x3 (to get df/dx) and x7 and x1 (to get df/dy); i.e.,

    df/dx ~ x5 - x3
    df/dy ~ x7 - x1

so that

    |grad f| ~ |x7 - x1| + |x5 - x3|

Two masks are needed, one to obtain x7 - x1 and one to obtain x5 - x3. The absolute values of the results of each mask are added together.

18 Prewitt Operator

The approximate gradient is obtained from

    df/dy ~ (x6 - x0) + (x7 - x1) + (x8 - x2)
    df/dx ~ (x2 - x0) + (x5 - x3) + (x8 - x6)

Then

    |grad f| ~ |x6 - x0 + x7 - x1 + x8 - x2| + |x2 - x0 + x5 - x3 + x8 - x6|

which requires using the two 3 x 3 masks:

    -1 -1 -1        -1  0  1
     0  0  0        -1  0  1
     1  1  1        -1  0  1

Figure 11.13 Prewitt operator.

19 Sobel Operator

Derivatives are approximated to

    df/dy ~ (x6 + 2 x7 + x8) - (x0 + 2 x1 + x2)
    df/dx ~ (x2 + 2 x5 + x8) - (x0 + 2 x3 + x6)

Operators implementing first derivatives tend to enhance noise. However, the Sobel operator also has a smoothing action:

    -1 -2 -1        -1  0  1
     0  0  0        -2  0  2
     1  2  1        -1  0  1

Figure 11.14 Sobel operator.

20

Figure 11.15 Edge detection with Sobel operator: (a) original image (Annabel); (b) effect of Sobel operator.

21 Laplace Operator

The Laplace second-order derivative is defined as

    laplacian f = d2f/dx2 + d2f/dy2

approximated to

    laplacian f = 4 x4 - (x1 + x3 + x5 + x7)

which can be obtained with the single mask:

     0 -1  0
    -1  4 -1
     0 -1  0

Figure 11.16 Laplace operator.

22

Figure 11.17 Pixels used in the Laplace operator: the upper pixel x1, left pixel x3, center pixel x4, right pixel x5, and lower pixel x7.

23 Figure.8 Effect of Laplace operator. 363

24 The Hough Transform

The purpose is to find the parameters of equations of lines that most likely fit sets of pixels in an image. A line is described by the equation

    y = ax + b

where the parameters a and b uniquely describe the particular line: a the slope and b the intercept on the y-axis. A direct search for those lines with the most pixels mapped onto them would be computationally prohibitively expensive [an O(n^3) algorithm].

Suppose the equation of the line is rearranged as:

    b = -xa + y

Every point that lies on a specific line in the x-y space will map into the same point in the a-b space (parameter space). Therefore, we can find the number of points on each possible line in the x-y space by simply counting whenever a mapping maps into a point in the a-b space.

Figure 11.19 Mapping a line into (a, b) space: (a) a pixel (x1, y1) on the line y = ax + b in the (x, y) plane; (b) the corresponding line b = -x1 a + y1 in parameter space, passing through the point (a, b).

25 Finding the Most Likely Lines

A single point (x1, y1) can be mapped into the points of a line in the a-b space:

    b = -x1 a + y1

In the mapping process, discrete values will be used to a coarse prescribed precision, and the computation is rounded to the nearest possible a-b coordinates. The mapping process is done for every point in the x-y space. A record is kept of those a-b points that have been obtained by incrementing the corresponding accumulator. Each accumulator will hold the number of pixels that map into a single point in the parameter space. The points in the parameter space with locally maximum numbers of pixels are chosen as lines.

26

The method will fail for vertical lines (i.e., with the slope, a, infinite and the y-intercept, b, infinite) and for lines that approach this extreme. To avoid the problem, the line equation is rearranged into polar coordinates (the normal representation):

    r = x cos(theta) + y sin(theta)

where r is the perpendicular distance of the line to the origin in the original (x, y) coordinate system and theta is the angle between r and the x-axis.

Figure 11.20 Mapping a line into (r, theta) space.

The theta value will be in degrees and, very conveniently, will also be the gradient angle of the line (with respect to the x-axis). Each line will map to a single point in the (r, theta) space. A vertical line will simply map into a point where theta = 0 if it has a positive intercept with the x-axis, or theta = 180 if it has a negative intercept with the x-axis. A horizontal line has theta = 90. Each point in the x-y space will map into a curve in the (r, theta) space; i.e., (x1, y1) maps into

    r = x1 cos(theta) + y1 sin(theta)

27 Implementation

Assume the normal representation using the image coordinate system, with the origin at the top left corner (x across, y down).

Figure 11.21 Normal representation using image coordinate system.

28

The parameter space is divided into small rectangular regions. One accumulator is provided for each region. The accumulators of those regions that a pixel maps into are incremented. This process must be done for all the pixels in the image.

If all values of theta were tried (i.e., incrementing theta through all its values), the computational effort would be given by the number of discrete values of theta, say k intervals. With n pixels the complexity is O(kn). The computational effort can be reduced significantly by limiting the range of lines considered for individual pixels using some criteria. A single value of theta could be selected based upon the gradient of the line.

Figure 11.22 Accumulators, acc[r][theta], for the Hough transform.

29 Sequential Code

Could be of the form

    for (x = 0; x < xmax; x++)                 /* for each pixel */
      for (y = 0; y < ymax; y++) {
        sobel(&x, &y, dx, dy);                 /* find x and y gradients */
        magnitude = grad_mag(dx, dy);          /* find magnitude if needed */
        if (magnitude > threshold) {
          theta = grad_dir(dx, dy);            /* atan2() fn */
          theta = theta_quantize(theta);
          r = x * cos(theta) + y * sin(theta);
          r = r_quantize(r);
          acc[r][theta]++;                     /* increment accumulator */
          append(r, theta, x, y);              /* append point to line */
        }
      }

Finally, when all pixels have been considered, the values held in each accumulator will give the number of pixels that can map onto the corresponding line. The most likely lines are those with the most pixels, and accumulators with the local maxima are chosen. An algorithm for choosing the local maxima must be devised.

Parallel Code
Since the computation for each accumulator is independent of the other accumulations, it could be performed simultaneously, although each requires read access to the whole image.

30 Transformation into the Frequency Domain

Fourier Transform
There are many applications for the Fourier transform in science and engineering. In the area of image processing, the Fourier transform is used in image enhancement, restoration, and compression. The image is a two-dimensional discretized function, f(x, y), but let us first start with the one-dimensional case. For completeness, let us first review the results of the Fourier series and Fourier transform concepts from first principles.

Fourier Series
The Fourier series is a summation of sine and cosine terms and can be written as

    x(t) = a0/2 + sum_{j=1..inf} [ aj cos(2 pi j t / T) + bj sin(2 pi j t / T) ]

T is the period (1/T = f, where f is the frequency). By some mathematical manipulation, we can get the more convenient description of the series:

    x(t) = sum_{j=-inf..inf} Xj e^(2 pi i j t / T)

where Xj is the jth Fourier coefficient in complex form and i = sqrt(-1). The Fourier coefficients can also be computed from specific integrals.

31 Fourier Transform

Continuous Functions
The previous summation develops into an integral:

    x(t) = integral_{-inf..inf} X(f) e^(2 pi i f t) df

where X(f) is a continuous function of frequency. The function X(f) can be obtained from

    X(f) = integral_{-inf..inf} x(t) e^(-2 pi i f t) dt

X(f) is the spectrum of x(t), or more simply the Fourier transform of x(t). The original function, x(t), can be obtained from X(f) using the first integral given, which is the inverse Fourier transform.

Discrete Functions
For functions having a set of N discrete values, the integral is replaced by a summation, leading to the discrete Fourier transform (DFT):

    Xk = (1/N) sum_{j=0..N-1} xj e^(-2 pi i j k / N)

and the inverse discrete Fourier transform:

    xk = sum_{j=0..N-1} Xj e^(2 pi i j k / N)

for 0 <= k <= N - 1. The N (real) input values, x0, x1, x2, ..., x(N-1), produce N (complex) transform values, X0, X1, X2, ..., X(N-1).

32 Fourier Transforms in Image Processing

A two-dimensional Fourier transform is

    X(l,m) = sum_{j=0..N-1} sum_{k=0..M-1} x(j,k) e^(-2 pi i (j l / N + k m / M))

where j and k are row and column coordinates, 0 <= j <= N - 1 and 0 <= k <= M - 1. For convenience, let us assume that the image is square, where N = M. The equation can be rearranged into

    X(l,m) = sum_{j=0..N-1} [ sum_{k=0..N-1} x(j,k) e^(-2 pi i k m / N) ] e^(-2 pi i j l / N)

The inner summation is a one-dimensional DFT operating on the N points of a row to produce a transformed row. The outer summation is a one-dimensional DFT operating on the N points of a column. We might write

    X(l,m) = sum_{j=0..N-1} X'(j,m) e^(-2 pi i j l / N)

where X'(j,m) is the row-transformed image. Hence, the two-dimensional DFT can be divided into two sequential phases, one operating on rows of elements and one operating on columns of (transformed) elements.

Figure 11.23 Two-dimensional DFT: transform rows, x(j,k) -> X'(j,m); then transform columns, X'(j,m) -> X(l,m).

33 Applications

A very wide range of image processing and analysis can be done in the frequency domain. One application for the DFT is frequency filtering, which is applicable to both smoothing and edge detection (low-pass and high-pass filters). Frequency filtering can be described by the convolution operation:

    h(j, k) = g(j, k) * f(j, k)

the same operation as the cross-correlation operation for symmetrical masks, where g(j, k) describes the weighted mask (filter) and f(j, k) describes the image.

It can be shown that the Fourier transform of the convolution of two functions is given by the product of the transforms of the individual functions. Hence, the convolution of two functions can be obtained by taking the Fourier transform of each function, multiplying the transforms,

    H(j, k) = G(j, k) x F(j, k)

(element-by-element multiplication), where F(j, k) is the Fourier transform of f(j, k) and G(j, k) is the Fourier transform of g(j, k), and then taking the inverse transform to return the result to the original spatial domain.

Figure 11.24 Convolution using Fourier transforms: (a) direct convolution of image f(j, k) with filter g(j, k); (b) transform both, multiply F(j, k) by G(j, k), and inverse-transform to obtain h(j, k).

34 Parallelizing the Discrete Fourier Transform

Starting from

    Xk = (1/N) sum_{j=0..N-1} xj e^(-2 pi i j k / N)

and using the notation w = e^(-2 pi i / N),

    Xk = (1/N) sum_{j=0..N-1} xj w^(jk)

The w terms are called the twiddle factors. Each input value has to be multiplied by a twiddle factor. The inverse transform can be obtained by replacing w with w^(-1).

Sequential Code
Could have the form

    for (k = 0; k < N; k++) {         /* for every point */
      X[k] = 0;
      for (j = 0; j < N; j++)         /* compute summation */
        X[k] = X[k] + w^(j*k) * x[j];
    }

where X[k] is the kth transformed point, x[k] is the kth input, for N points, and w = e^(-2 pi i / N). The summation step requires complex number arithmetic. Since each summation step uses the power of w of the previous step multiplied by w^k (i.e., w^((j-1)k) * w^k = w^(jk)), the code can be rewritten as

    for (k = 0; k < N; k++) {
      X[k] = 0;
      a = 1;
      for (j = 0; j < N; j++) {
        X[k] = X[k] + a * x[j];
        a = a * w^k;
      }
    }

where a is a temporary (complex) variable.

35 Elementary Master-Slave Implementation

One slave process of N slave processes could be assigned to produce one of the transformed values; i.e., the kth slave process produces X[k]. The required values of w^k could be precomputed for each process by the master process. The parallel time complexity with N (slave) processes is O(N), which is cost-optimal given an O(N^2) sequential algorithm implemented directly.

Figure 11.25 Master-slave approach for implementing the DFT directly: the master sends w^0, w^1, ..., w^(N-1) to the slave processes computing X[0], X[1], ..., X[N-1].

36 Pipeline Implementation

Unfolding the inner loop for X[k], we have

    X[k] = 0; a = 1;
    X[k] = X[k] + a * x[0]; a = a * w^k;
    X[k] = X[k] + a * x[1]; a = a * w^k;
    X[k] = X[k] + a * x[2]; a = a * w^k;
    X[k] = X[k] + a * x[3]; a = a * w^k;
    ...

Each pair of statements

    X[k] = X[k] + a * x[j]; a = a * w^k;

could be performed by a separate pipeline stage.

Figure 11.26 One stage of a pipeline implementation of the DFT algorithm: process j holds x[j], takes in X[k], a, and w^k, adds a * x[j] to X[k], multiplies a by w^k, and passes the values on to the next stage.

37

Figure 11.27 Discrete Fourier transform with a pipeline: (a) pipeline structure, stages P0, P1, P2, P3, ..., P(N-1) holding x[0], x[1], x[2], x[3], ..., x[N-1], fed with X[k] = 0, a = 1, and w^k; (b) timing diagram, with the output sequence X[0], X[1], X[2], X[3], ... emerging from the final stage over time.

38 DFT as a Matrix-Vector Product

The kth element of the discrete Fourier transform is given by

    Xk = (1/N) ( x0 w^0 + x1 w^k + x2 w^(2k) + x3 w^(3k) + ... + x(N-1) w^((N-1)k) )

and the whole transform can be described by a matrix-vector product:

    [ X0     ]         [ 1  1        1          1          ...  1              ] [ x0     ]
    [ X1     ]         [ 1  w        w^2        w^3        ...  w^(N-1)        ] [ x1     ]
    [ X2     ]         [ 1  w^2      w^4        w^6        ...  w^(2(N-1))     ] [ x2     ]
    [ X3     ] = (1/N) [ 1  w^3      w^6        w^9        ...  w^(3(N-1))     ] [ x3     ]
    [ ...    ]         [ ...                                                   ] [ ...    ]
    [ Xk     ]         [ 1  w^k      w^(2k)     w^(3k)     ...  w^((N-1)k)     ] [ xk     ]
    [ ...    ]         [ ...                                                   ] [ ...    ]
    [ X(N-1) ]         [ 1  w^(N-1)  w^(2(N-1)) w^(3(N-1)) ...  w^((N-1)(N-1)) ] [ x(N-1) ]

(Note that w^0 = 1.) Hence, the parallel methods of producing a matrix-vector product as described in Chapter 10 can be used for the discrete Fourier transform.

39 Fast Fourier Transform

The FFT is a fast method of obtaining the discrete Fourier transform, with a time complexity of O(N log N) instead of O(N^2). Let us start with the discrete Fourier transform equation, keeping the scale factor to demonstrate that it does not affect the derivation of the algorithm:

    Xk = (1/N) sum_{j=0..N-1} xj w^(jk)

where w = e^(-2 pi i / N).

There are many formulations. Generally, the summation is successively divided into parts by a divide-and-conquer approach. We will describe the version that starts by dividing the summation into the following two parts:

    Xk = (1/N) [ sum_{j=0..N/2-1} x(2j) w^(2jk) + sum_{j=0..N/2-1} x(2j+1) w^((2j+1)k) ]

where the first summation processes the x's with even indices and the second summation processes the x's with odd indices. Rearranging, we get

    Xk = (1/2) [ (1/(N/2)) sum_{j=0..N/2-1} x(2j) w^(2jk) + w^k (1/(N/2)) sum_{j=0..N/2-1} x(2j+1) w^(2jk) ]

or, since w^2 = e^(-2 pi i / (N/2)),

    Xk = (1/2) [ (1/(N/2)) sum_{j=0..N/2-1} x(2j) e^(-2 pi i j k / (N/2)) + w^k (1/(N/2)) sum_{j=0..N/2-1} x(2j+1) e^(-2 pi i j k / (N/2)) ]

40

Each summation can now be recognized as an N/2-point discrete Fourier transform operating on the N/2 even points and the N/2 odd points, respectively. Therefore

    Xk = (1/2) [ Xeven,k + w^k Xodd,k ]

for k = 0, 1, ..., N - 1, where Xeven is the N/2-point DFT of the numbers with even indices, x0, x2, x4, ..., and Xodd is the N/2-point DFT of the numbers with odd indices, x1, x3, x5, ....

Now suppose k is limited to 0, 1, ..., N/2 - 1, the first N/2 values of the total N values. The complete sequence can be divided into two parts:

    Xk       = (1/2) [ Xeven,k + w^k Xodd,k ]
    Xk+N/2   = (1/2) [ Xeven,k + w^(k+N/2) Xodd,k ] = (1/2) [ Xeven,k - w^k Xodd,k ]

since w^(k+N/2) = -w^k, where 0 <= k < N/2. Hence, we could compute Xk and Xk+N/2 using two N/2-point transforms:

Figure 11.28 Decomposition of an N-point DFT into two N/2-point DFTs (even-indexed and odd-indexed inputs feed the two half-size transforms; outputs are combined with the twiddle factor w^k).

41

Each of the N/2-point DFTs can be decomposed into two N/4-point DFTs, and the decomposition can be continued until single points are to be transformed. A 1-point DFT is simply the value of the point. The computation has to be done for each transformed point and is often depicted in a form such as:

Figure 11.29 Four-point discrete Fourier transform (inputs x0, x1, x2, x3; outputs X0, X1, X2, X3).

42

    Xk = Sum(0,2,4,6,8,10,12,14) + w^k Sum(1,3,5,7,9,11,13,15)
       = {Sum(0,4,8,12) + w^k Sum(2,6,10,14)} + w^k {Sum(1,5,9,13) + w^k Sum(3,7,11,15)}
       = {[Sum(0,8) + w^k Sum(4,12)] + w^k [Sum(2,10) + w^k Sum(6,14)]}
         + w^k {[Sum(1,9) + w^k Sum(5,13)] + w^k [Sum(3,11) + w^k Sum(7,15)]}

where Sum(...) denotes the transform over the points with the listed indices, and each level uses the twiddle factor appropriate to its transform size.

Figure 11.30 Sixteen-point DFT decomposition (input order x0, x8, x4, x12, x2, x10, x6, x14, x1, x9, x5, x13, x3, x11, x7, x15).

43 Sequential Code

The sequential time complexity is essentially O(N log N), since there are log N steps and each step requires a computation proportional to N, where there are N numbers. The algorithm can be implemented recursively or iteratively.

44 Parallelizing the FFT Algorithm

Binary Exchange Algorithm
Suppose one processor is allocated to each data point (x[j] for the jth process), that is, one row in Figure 11.31. Each process finally produces one transformed point. The connection pattern of Figure 11.31 is the so-called butterfly pattern.

Figure 11.31 Sixteen-point FFT computational flow (inputs x0 ... x15, outputs X0 ... X15).

45

Figure 11.32 Mapping processors onto the 16-point FFT computation: with p = 4 processors and N/p = 4 rows per processor, P0 handles rows 0000-0011 (inputs x0-x3, outputs X0-X3), P1 rows 0100-0111, P2 rows 1000-1011, and P3 rows 1100-1111.

46 Analysis

Computation. Given p processors and N data points, at each step each processor computes N/p points, and each point requires one multiplication and one addition. With log N steps, the parallel computation time complexity is given by

    tcomp = O((N/p) log N)

Communication. If p = N, communication occurs at each step: there is one data exchange between pairs of processors at each of the log N steps. Suppose a hypercube or another interconnection network that allows simultaneous exchanges is used. In that case, the communication time complexity is given by

    tcomm = O(log N)

If p < N, communication between processors only occurs during the first log p steps. In the first step, all p processors make exchanges. In the next step, only half of the processors make data exchanges; in the next, a quarter of the processors; and so on. If an interconnection network that allows simultaneous exchanges is used, the communication time complexity is simply given by

    tcomm = O(log p)

47 Transpose Algorithm

If the processors are organized as a two-dimensional array, say in row major order, communications would first take place between processors in each column, and then communication would take place between processors in each row.

Figure 11.33 FFT using transpose algorithm, first two steps: elements x0 ... x15 are distributed by columns across processors P0, P1, P2, and P3.

48

During the first two steps, all communication would be within a processor, as shown. During the last two steps, the communication would be between processors. In the transpose algorithm, between the first two steps and the last two steps, the array elements are transposed; i.e., the elements down each column are moved across a row, as shown in Figure 11.34.

Figure 11.34 Transposing the array for the transpose algorithm (processors P0, P1, P2, P3 exchange column elements across rows).

49

After the transpose, the last two steps proceed but now involve communication only within the processors, as shown in Figure 11.35. The only communication between processors is to transpose the array.

Figure 11.35 FFT using transpose algorithm, last two steps: after the transpose, P0 holds x0, x4, x8, x12; P1 holds x1, x5, x9, x13; P2 holds x2, x6, x10, x14; P3 holds x3, x7, x11, x15.

50 PROBLEMS

Scientific/Numerical

11-1. Determine the parallel time complexity for computing the mean according to the algorithm described in Section 11.4.1 for p processors and n image pixels.

11-2. Determine the circumstances under which the median algorithm described in Section 11.4.2 will find the median and the circumstances under which it will not. What is the worst-case position inaccuracy in finding the median?

11-3. Figure 11.36 shows an image that has changes in intensity. A 3 x 3 mask is moved over the image. Work through the results of the Sobel operator masks moving across this image to find the gradient magnitude and gradient angle at various places. Assume that the gray area has a value of 0 and the white area has a value of 255, and that at least 50% of the pixel area must be highlighted to be recognized.

Figure 11.36 Image for Problem 11-3.

11-4. Though all of the masks in this chapter are 3 x 3 masks, other sizes are possible, such as 5 x 5, 7 x 7, 9 x 9, 11 x 11, and 13 x 13 masks. Write a program to implement weighted mask filtering that copes with any m x m mask, in which the size of the mask (m odd) and its contents are entered as input. Apply this program to an image and experiment with different masks.

11-5. Write a parallel program to perform edge detection of an image file stored in a standard uncompressed format such as the PPM (portable pix map) or PGM (portable gray map) file formats. For viewing, it may be necessary to convert the image into a format such as GIF; such conversions can be done with system utilities. (See also system man pages and Born, 1995.)

11-6. Read in a PPM file in P6 format (full color) and write a parallel program to reduce the image to grayscale by taking the average of the red, green, and blue values for each pixel:

    pixel value = (red value + green value + blue value) / 3

51

11-7. Write a program to implement the Laplace operator. Experiment with subsequent thresholding to obtain edge detection. In addition to using the left-right-up-down neighboring points, experiment with points that are neighbors to these points.

11-8. Suppose edge detection has already been done on an image, creating a binary image (1 at an edge and 0 elsewhere). Develop an algorithm and write a parallel program that fills the object, as delineated by the enclosed edges, with 1s. One way of doing this is to start with a single seed placed within the object and fill adjacent pixels recursively until the edges are reached.

11-9. Write a parallel program for the Hough transform using the normal representation for lines, and evaluate the program on sample images.

11-10. In the Hough transform, as an alternative to simply incrementing the accumulators, having the gradient operator also provide the gradient magnitude allows the magnitude to be added to the accumulator instead. Investigate this method empirically and compare it to simply incrementing the accumulators.

11-11. Read the paper by Krishnaswamy and Banerjee (1997) describing the use of the slope-intercept form of the equation for lines in images for the Hough transform, and make a detailed comparison of that method with using the normal representation of Duda and Hart (1972).

11-12. Determine the computation time of the pipelined discrete Fourier transform implementation described in Section 11.7.3.

11-13. Show that the result of a one-point discrete Fourier transform is simply the value of the point.

11-14. Since the fast Fourier transform is a divide-and-conquer algorithm, it can be considered as a tree. Develop the connections in the form of a tree.

11-15.
Prove that the matrix-vector formulation for the discrete Fourier transform described in Section 11.7.4, for N = 4, is

    [ X0 ]         [ 1  1    1    1   ] [ x0 ]
    [ X1 ] = (1/4) [ 1  w    w^2  w^3 ] [ x1 ]
    [ X2 ]         [ 1  w^2  1    w^2 ] [ x2 ]
    [ X3 ]         [ 1  w^3  w^2  w   ] [ x3 ]

Write parallel programs for N = 4 using the matrix-vector formulation and using the FFT formulation, and measure the execution speeds.

52 Real Life

11-16. You have been commissioned by a major film studio to develop a really fast morphing package that will change one image into another image. You come up with the idea of having two images, the original image and the final image, and changing each pixel of the original image to become closer and closer to the pixels of the final image in a lock-step SIMD fashion. This method is certainly embarrassingly parallel, although it may not create a very smoothly changing shape. Experiment with the method and demonstrate it to the studio using pictures of actors (or your friends).

11-17. The CIA has given you the task of writing a really fast image recognition program (fast enough to be effective on board a missile) for detecting airport runways from images collected by CIA spy planes over a nonfriendly country. It is known that the runways are at least 1/2 mile long, 25 ft wide, and completely straight. The images cover a 5-mile-by-5-mile area. Develop a parallel program that can detect these runways. Create sample nonperfect images. The CIA will provide the actual images once you have signed your life away. For background information, read the paper on detecting runways by Huertas, Cole, and Nevatia (1990).

11-18. A manufacturer of printed circuit cards is having significant problems with broken printed circuit tracks on its boards and has commissioned you, as an independent programmer, to find a way to automatically inspect images of the boards as they leave the manufacturing line and recognize broken tracks. You know that all the tracks are straight in one direction or can turn 90 degrees to that direction. Develop a strategy for recognizing breaks in the tracks and write a parallel program using realistic test images. For interest, read Mandeville (1985).


More information

Outlines. Medical Image Processing Using Transforms. 4. Transform in image space

Outlines. Medical Image Processing Using Transforms. 4. Transform in image space Medical Image Processing Using Transforms Hongmei Zhu, Ph.D Department of Mathematics & Statistics York University hmzhu@yorku.ca Outlines Image Quality Gray value transforms Histogram processing Transforms

More information

Introduction to Digital Image Processing

Introduction to Digital Image Processing Fall 2005 Image Enhancement in the Spatial Domain: Histograms, Arithmetic/Logic Operators, Basics of Spatial Filtering, Smoothing Spatial Filters Tuesday, February 7 2006, Overview (1): Before We Begin

More information

Image Transformation Techniques Dr. Rajeev Srivastava Dept. of Computer Engineering, ITBHU, Varanasi

Image Transformation Techniques Dr. Rajeev Srivastava Dept. of Computer Engineering, ITBHU, Varanasi Image Transformation Techniques Dr. Rajeev Srivastava Dept. of Computer Engineering, ITBHU, Varanasi 1. Introduction The choice of a particular transform in a given application depends on the amount of

More information

Graphics and Interaction Transformation geometry and homogeneous coordinates

Graphics and Interaction Transformation geometry and homogeneous coordinates 433-324 Graphics and Interaction Transformation geometry and homogeneous coordinates Department of Computer Science and Software Engineering The Lecture outline Introduction Vectors and matrices Translation

More information