Polytechnic Institute of NYU Fall 2012 EL5123/BE DIGITAL IMAGE PROCESSING


Yao Wang, Midterm Exam (3:00-5:30 PM). Closed book; one sheet of notes (double sided) allowed. No peeking at neighbors' work or unauthorized notes. Cheating will result in an F for the course. Write your answers on this problem sheet for problems where space is provided; otherwise write your answer in the blue book. Submit both the blue book and this problem sheet. Your name: SOLUTION

1. (pt) Briefly answer the following questions:
a) How does the human visual system perceive color? (3pt)
b) What does the color mixing theory tell us? (pt) How do we use this theory to capture color images (pt), display color images (pt), and print color images (pt), respectively?
c) What are the three attributes of a color? (pt)

(a) Humans perceive color through different types of cones in the retina, each sensitive to a different part of the visible spectrum, corresponding primarily to red, green, and blue. The brain fuses the signals received from these cones and produces the sensation of different colors depending on the combination of the cone responses.

(b) The color mixing theory tells us that any color can be obtained by mixing three primary colors in an appropriate proportion. To capture a color image, we use sensors that are each sensitive to one of the three primary colors. To display a color image, we excite three types of phosphors at each screen location, each emitting one primary color. To print a color image, we use three types of color inks. For capture and display we use the primary colors of illuminating light sources, namely red, green, and blue; for printing we use the primary colors of reflecting light sources, namely cyan, magenta, and yellow.

(c) The three attributes of a color are intensity (or luminance), hue, and saturation.

2. (5 pt) A conventional color image using the RGB coordinates requires 8 bits per color component, or 24 bits per pixel.
One way to reduce the bit requirement is to convert RGB to the YCbCr representation and to represent the Cb and Cr components with fewer bits than the Y component, because the human eye is less sensitive to chrominance. Suppose we use 6 bits for Y and 3 bits each for the Cb and Cr components, and use a uniform quantizer over the range 0-256 for each component.
(a) What is the total number of bits per pixel? (pt)
(b) Illustrate the quantizer function for the Y and Cb components, respectively. (pt+pt)
(c) Assume a pixel has the following RGB values: R5, G3, B8. What are the corresponding Y, Cb, and Cr values? (pt)
(d) What are the quantized Y, Cb, and Cr values? (++pt)
(e) What are the reconstructed R, G, B values? The RGB to YCbCr conversion matrix and the inverse conversion matrix are given below. (pt)

(a) Total number of bits per pixel = 6 + 3 + 3 = 12 bits. (pt)

(b) A 6-bit uniform quantizer has 2^6 = 64 reconstruction levels. The quantization interval is q = 256/64 = 4, so
    Q(Y) = floor(Y/4) * 4 + 2. (pt)
A 3-bit uniform quantizer has 2^3 = 8 reconstruction levels. The quantization interval is q = 256/8 = 32, so
    Q(Cb) = floor(Cb/32) * 32 + 16. (pt)

(c) The YCbCr values corresponding to the given RGB values are obtained from
    [Y; Cb; Cr] = [0.257 0.504 0.098; -0.148 -0.291 0.439; 0.439 -0.368 -0.071] [R; G; B] + [16; 128; 128] = [133.575; 108.336; 170.66]. (pt)

(d) The quantized YCbCr values are
    Y_q = floor(133.575/4) * 4 + 2 = 134,
    Cb_q = floor(108.336/32) * 32 + 16 = 112,
    Cr_q = floor(170.66/32) * 32 + 16 = 176. (++pt)
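The quantizer and color conversion above can be checked numerically. A minimal sketch in Python (the matrices are the standard BT.601 studio-range pair used in the solution; the course itself uses MATLAB):

```python
import numpy as np

def uniform_quantize(x, bits, lo=0.0, hi=256.0):
    """Uniform quantizer: floor(x/q)*q + q/2 with q = (hi - lo)/2^bits."""
    q = (hi - lo) / (2 ** bits)
    return np.floor((x - lo) / q) * q + lo + q / 2

# RGB -> YCbCr conversion (BT.601 studio range), as in the solution
A = np.array([[ 0.257,  0.504,  0.098],
              [-0.148, -0.291,  0.439],
              [ 0.439, -0.368, -0.071]])
offset = np.array([16.0, 128.0, 128.0])

# Inverse conversion matrix
B = np.array([[1.164,  0.0,    1.596],
              [1.164, -0.392, -0.813],
              [1.164,  2.017,  0.0  ]])

ycbcr = np.array([133.575, 108.336, 170.66])      # values from part (c)
yq  = uniform_quantize(ycbcr[0], 6)               # 6 bits for Y  -> q = 4
cbq = uniform_quantize(ycbcr[1], 3)               # 3 bits for Cb -> q = 32
crq = uniform_quantize(ycbcr[2], 3)               # 3 bits for Cr -> q = 32
print(yq, cbq, crq)                               # 134.0 112.0 176.0

# part (e): reconstruct RGB from the quantized YCbCr values
rgb_rec = B @ (np.array([yq, cbq, crq]) - offset)
print(np.round(rgb_rec).astype(int))              # [214 105 105]
```

This reproduces the quantized values in (d) and the rounded RGB values in (e).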

(e) The reconstructed RGB values from the quantized YCbCr values, before rounding to integers, are
    [R; G; B] = [1.164 0 1.596; 1.164 -0.392 -0.813; 1.164 2.017 0] [134 - 16; 112 - 128; 176 - 128] = [213.96; 104.6; 105.08]. (pt)
After rounding, the integer RGB values are R = 214, G = 105, B = 105. (No points were taken off whether or not you rounded.)

3. (pt) The histograms of two images are illustrated below. Sketch a transformation function for each image that will give the image better contrast. Use the axes provided below to sketch your transformation functions. (5pt+5pt)
If the shape is correct but the specifics (vertical values, horizontal transitions) are wrong, deduct a small amount. The vertical value of 255/2 should be 128, since it needs to be an integer; 127 or 255/2 is also acceptable. If you used 1 and 1/2 on the vertical axis, deduct a point.

4. (pt) For the image shown in Fig. A, find a transformation function (i.e., a look-up table) that will change its histogram to match the one shown in Table A. Draw the transformed image in Fig. B. Assume that the processed image can only take integer values between 0 and 3 (including 0 and 3), and give the histograms of the original and processed images in Table B and Table C, respectively.

[Fig. A: original image; Fig. B: transformed image (pt)]

Table A: Desired histogram
Gray level f:    0     1    2    3
Histogram h(f):  5/15  0    0    10/15

Table B: Original image histogram (pt)
Gray level f:    0     1     2     3
Histogram h(f):  5/15  1/15  6/15  3/15
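The look-up table in Problem 4 follows the standard histogram-specification recipe: compute both CDFs, then send each input level to the output level whose desired CDF is closest. A small sketch in Python, with the table values from the solution hard-coded:

```python
import numpy as np

def match_histogram(h_orig, h_desired):
    """Histogram specification: map each input gray level to the output
    level whose desired CDF is closest to the input level's CDF."""
    cdf_f = np.cumsum(h_orig)
    cdf_z = np.cumsum(h_desired)
    # for each input level f, pick z = argmin |CDF_f(f) - CDF_z(z)|
    return [int(np.argmin(np.abs(cdf_z - c))) for c in cdf_f]

h_orig    = np.array([5, 1, 6, 3]) / 15.0     # Table B
h_desired = np.array([5, 0, 0, 10]) / 15.0    # Table A

lut = match_histogram(h_orig, h_desired)
print(lut)   # [0, 0, 3, 3]

# resulting histogram: levels 0 and 1 merge into 0, levels 2 and 3 into 3
h_out = np.zeros(4)
for f, z in enumerate(lut):
    h_out[z] += h_orig[f]
print(h_out)  # 6/15, 0, 0, 9/15 -> Table C
```

Note that `argmin` breaks ties toward the lower output level, which matches the mapping in the derivation table below.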

Table C: Transformed image histogram (pt)
Gray level f:    0     1    2    3
Histogram h(f):  6/15  0    0    9/15

Process of deriving the solution (4pt):

f | Histogram of f | CDF of f | CDF of z | Desired histogram of z | Transformed value z
0 | 5/15           | 5/15     | 5/15     | 5/15                   | 0 (map to 0)
1 | 1/15           | 6/15     | 5/15     | 0                      | 0 (map to 0)
2 | 6/15           | 12/15    | 5/15     | 0                      | 3 (map to 3)
3 | 3/15           | 15/15    | 15/15    | 10/15                  | 3 (map to 3)

5. (5pt) For each filter given below, answer the following questions:
a) Is it a separable filter? If yes, give the horizontal and vertical filters.
b) What is the function of the filter? Explain your reasoning.

H1 = [as given on the exam sheet],   H2 = [1 3 1; 0 0 0; -1 -3 -1]

H1: non-separable (pt); it is a high-emphasis filter, because its coefficients sum to 1 but include both positive and negative values. (pt)
H2: separable (pt); it factors into the vertical filter [1, 0, -1]^T and the horizontal filter [1, 3, 1]. It is an edge detector because its coefficients sum to 0 (pt): it smooths horizontally and differentiates vertically, so it detects horizontal edges. (pt)

6. (pt) For the H2 filter in the previous problem,
a) Determine the DTFT H(u, v) of the filter (assuming the origin is at the center) (3pt), and sketch the one-dimensional profiles H(u, 0) (pt) and H(0, v) (pt). Note: you should assume u represents the vertical frequency and v the horizontal frequency.
b) What is the function of this filter based on its frequency response?

a) Horizontal filter: Hh(v) = e^{j2πv} + 3 + e^{-j2πv} = 3 + 2 cos(2πv).
Vertical filter: Hv(u) = e^{j2πu} - e^{-j2πu} = 2j sin(2πu).
Total: H(u, v) = 2j sin(2πu) (3 + 2 cos(2πv)). (3pt)
H(0, v) = Hh(v) Hv(0) = 0; H(u, 0) = Hh(0) Hv(u) = 10j sin(2πu), so |H(u, 0)| = 10 |sin(2πu)|.
[Sketches: H(0, v) is identically 0 for -0.5 <= v <= 0.5 (pt); |H(u, 0)| rises from 0 at u = 0 to a peak of 10 at u = ±0.25 and falls back to 0 at u = ±0.5 (pt).]
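The separability claim and the frequency-response values above can be checked numerically. A sketch in Python, with H2 and its factors taken from the solution:

```python
import numpy as np

# Separability check: H2 is the outer product of [1, 0, -1]^T and [1, 3, 1]
h_vert = np.array([1.0, 0.0, -1.0])
h_horz = np.array([1.0, 3.0, 1.0])
H2 = np.outer(h_vert, h_horz)       # [[1 3 1], [0 0 0], [-1 -3 -1]]

def dtft3(h, f):
    """DTFT of a length-3 filter with its origin at the center (n = -1, 0, 1)."""
    n = np.array([-1, 0, 1])
    return np.sum(h * np.exp(-2j * np.pi * f * n))

# H(u, v) = Hv(u) * Hh(v): zero along u = 0, peak magnitude 10 at (u, v) = (0.25, 0)
print(abs(dtft3(h_vert, 0.0)))                        # 0.0  -> H(0, v) = 0 for all v
print(abs(dtft3(h_vert, 0.25) * dtft3(h_horz, 0.0)))  # ~10.0 -> |H(0.25, 0)|
```

The zero response along u = 0 (no vertical variation) and the band-pass peak at u = 0.25 match the profiles sketched above.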

b) Horizontally the filter is low pass, vertically band pass (pt each); overall it detects horizontal edges. (pt)

7. (pt) Consider linear convolution of an image of size N1 x N1 with a filter of size N2 x N2. We can either do the convolution directly, or use 2D fast DFTs (FFTs) of size N3 x N3.
(a) What is the number of multiplications needed if we do the convolution directly? Express your result in terms of N1 and N2; you should not assume the filter satisfies any symmetry property.
(b) How should N3 be related to N1 and N2 for the result obtained using the FFT to be the same as linear convolution?
(c) What is the number of multiplications required using the FFT? Express your result in terms of N3, which is in turn expressed in terms of N1 and N2.
(d) Assume N2 = a*N1, where a is a fraction between 0 and 1. For what range of a will the FFT approach take less computation?

(a) C1 = (N1 + N2 - 1)^2 * N2^2. (3pt) (An answer of N3^4 received partial credit.)

(b) N3 >= N1 + N2 - 1. (pt)

(c) If we assume the DFT of the filter has been precalculated, we need to compute the DFT of the image (separable FFT, complexity 2 N3^2 log N3), the point-wise multiplication of the DFT of the image and the DFT of the filter (complexity N3^2), and the inverse DFT (complexity 2 N3^2 log N3). The total complexity is
    C2 = 2 N3^2 log N3 (FFT) + N3^2 (multiplication) + 2 N3^2 log N3 (IFFT) = 4 N3^2 log N3 + N3^2. (3pt)
If you assume the DFT of the filter must also be computed, then C2 = 6 N3^2 log N3 + N3^2.

(d) Let N3 = (1 + a)N1 - 1 ≈ (1 + a)N (writing N1 = N). Then
    C1 = (1 + a)^2 N^2 * a^2 N^2 = a^2 (1 + a)^2 N^4,
    C2 = 4 (1 + a)^2 N^2 log((1 + a)N) + (1 + a)^2 N^2 ≈ 4 (1 + a)^2 N^2 log((1 + a)N).
Requiring C2 < C1:
    4 (1 + a)^2 N^2 log((1 + a)N) < a^2 (1 + a)^2 N^4  =>  4 log((1 + a)N) < a^2 N^2.
With the further approximation log(1 + a) ≈ 0, this becomes 4 log N < a^2 N^2, i.e., a > 2 (log N)^(1/2) / N.
E.g., for N = 1024 (log base 2), a > 2 sqrt(10) / 1024 ≈ 0.006. (3pt)

8. (pt) You are given the following basis images for 2x2 image patterns:
    H11 = (1/2)[1 1; 1 1],  H12 = (1/2)[1 -1; 1 -1],  H21 = (1/2)[1 1; -1 -1],  H22 = (1/2)[1 -1; -1 1]
(a) Show that they form orthonormal basis images. (3pt)
(b) Calculate the transform coefficients of the image F = [4 7; 3 5] using these basis images. (3pt)
(c) Find the reconstructed image F̂ obtained with the largest two coefficients (in magnitude). (pt)
(d) By observing the appearance of the original and reconstructed images, explain the effect of not using the two coefficients with the smallest values.

Solution:
(a) The inner product of any two different H_ij is 0, and the norm of each H_ij is 1. (3pt)
(b) T11 = <H11, F> = 19/2, T12 = <H12, F> = -5/2, T21 = <H21, F> = 3/2, T22 = <H22, F> = -1/2. (3pt)
(c) The two largest coefficients in magnitude are T11 and T12. The reconstructed image is
    F̂ = T11 H11 + T12 H12 = [7/2 6; 7/2 6]. (pt)
(Some students used a different pair of coefficients; if the resulting reconstructed image was correct for that pair, only a point was deducted here, and (d) was not penalized again when consistent with that choice.)
(d) Because the coefficients T21 and T22 are discarded, the reconstructed image only contains the changes in the horizontal direction; the changes in the vertical and diagonal directions are removed. (pt) (If you only said that keeping the two largest coefficients minimizes the error, you received partial credit.)
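The orthonormality check and the coefficients in Problem 8 can be verified directly. A sketch in Python, using the basis and image given above:

```python
import numpy as np

# 2x2 orthonormal basis images (separable Hadamard-type basis)
H11 = 0.5 * np.array([[1, 1], [1, 1]])
H12 = 0.5 * np.array([[1, -1], [1, -1]])
H21 = 0.5 * np.array([[1, 1], [-1, -1]])
H22 = 0.5 * np.array([[1, -1], [-1, 1]])
basis = [H11, H12, H21, H22]

# Orthonormality: the Gram matrix of inner products <Hi, Hj> is the identity
gram = np.array([[np.sum(a * b) for b in basis] for a in basis])

F = np.array([[4.0, 7.0], [3.0, 5.0]])
T = [np.sum(H * F) for H in basis]   # transform coefficients <Hij, F>
print(T)                             # [9.5, -2.5, 1.5, -0.5] = 19/2, -5/2, 3/2, -1/2

# Reconstruction from the two largest-magnitude coefficients (T11 and T12)
F_hat = T[0] * H11 + T[1] * H12
print(F_hat)                         # rows identical: only horizontal variation kept
```

Note that the reconstruction [[3.5, 6], [3.5, 6]] has identical rows, which is exactly the observation used in part (d): discarding T21 and T22 removes the vertical and diagonal variation.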

9. (pt) Write MATLAB code to implement the following edge detection algorithm using the Sobel filters given below. Your program should
i) Read in an image;
ii) Perform filtering with the Hx and Hy filters, respectively, to obtain the gradient images Gx and Gy;
iii) Do edge detection using the gradient images. Specifically, calculate the gradient magnitude image Gm = |Gx| + |Gy|. For any pixel, if the gradient magnitude is greater than a threshold T, the pixel is considered an edge pixel and assigned the value 255 in the edge map; otherwise the pixel is assigned 0;
iv) Display the original image, the filtered images Gx and Gy, the gradient magnitude image Gm, and the edge map Eimg;
v) Save the edge map image as an image file.
You should write a MATLAB function for filtering with a given filter. Your function can assume that the filter is non-zero over the region (-W, W) both horizontally and vertically. For simplicity, you only need to do filtering and edge detection over the image region that does not involve pixels outside the boundary. Your main program should call this function. You can assign the values of T and W in the main program.

    Hx = [-1 -2 -1; 0 0 0; 1 2 1],   Hy = [-1 0 1; -2 0 2; -1 0 1]

Note that the origins of the filters Hx and Hy are both at the center.

Grading: filtering function (pt); all other steps share the remaining points (each missing or wrong step costs up to a point):
1. Read the image.
2. Perform filtering with Hx and Hy to obtain Gx and Gy.
3. Do edge detection: Gm = |Gx| + |Gy|, thresholded with T.
4. Display the original image, Gx, Gy, Gm, and the edge map Eimg.
5. Save the edge map as an image file.
In the filter function: range of convolution (pt); making it a function (pt); determining W (pt); double and uint8 conversion (pt); rotating the filter (pt).

Sample program:

% Step 1
input_img = imread('image.jpg');
% Step 2
Hx = [-1 -2 -1; 0 0 0; 1 2 1];
Hy = [-1 0 1; -2 0 2; -1 0 1];
W = 1;
Gx = filterd(input_img, Hx, W);
Gy = filterd(input_img, Hy, W);
% Step 3
T = 8;
Gm = abs(Gx) + abs(Gy);
edge_img = uint8(zeros(size(Gm)));
edge_img(Gm > T) = 255;
% Step 4
figure(1);
subplot(2,2,1); imshow(Gx);
subplot(2,2,2); imshow(Gy);

subplot(2,2,3); imshow(Gm);
subplot(2,2,4); imshow(edge_img);
% Step 5
imwrite(edge_img, 'result.jpg', 'jpg');

function [ output_image ] = filterd( input_image, filter, W )
%FILTERD Filter input_image with the given filter (origin at the center),
% over the valid region only.
input_image = double(input_image);   % pt
sizeimage  = size(input_image);
sizefilter = size(filter);
sizeoutput = sizeimage - sizefilter + 1;
flt_ctr = ceil(sizefilter/2);
% rotate the filter by 180 degrees for convolution, 3 pt
filter_inv = filter(flt_ctr(1)+W:-1:flt_ctr(1)-W, flt_ctr(2)+W:-1:flt_ctr(2)-W);
output_image = zeros(sizeoutput);
for m = 1:sizeoutput(1)   % 5 pt
    for n = 1:sizeoutput(2)
        output_image(m,n) = sum(sum(input_image(m:m+sizefilter(1)-1, n:n+sizefilter(2)-1) .* filter_inv));
    end
end
end

10. (Bonus problem, pt) Suppose an image has the probability density function shown on the left. We would like to modify it so that it has the probability density function shown on the right. Derive the transformation function g(f) that will accomplish this. For simplicity, assume both the original image and the modified image can take gray levels in the continuous range (0, 255).

Equate the CDF of f (pt) to the CDF of g (pt) and solve (6pt):
    P_f(f) = ∫ p_f(f) df = ∫ p_g(g) dg = P_g(g).
With p_f(f) = 2(255 - f)/255^2 and p_g(g) = 2g/255^2,
    P_f(f) = (510 f - f^2)/255^2,   P_g(g) = g^2/255^2.
Setting P_g(g) = P_f(f) gives
    g^2 = 510 f - f^2  =>  g = ± sqrt(510 f - f^2).
Since g should lie in the range 0 to 255, the transformation is
    g(f) = sqrt(510 f - f^2). (6pt)
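The derived mapping can be sanity-checked numerically: sampling from the triangular density p_f assumed in the solution (via inverse-CDF sampling) and applying g should produce samples whose CDF matches the target P_g. A sketch in Python:

```python
import numpy as np

def g(f):
    """Derived transformation: g(f) = sqrt(510 f - f^2)."""
    return np.sqrt(510.0 * f - f * f)

# endpoints: g maps 0 -> 0 and 255 -> 255
print(g(0.0), g(255.0))        # 0.0 255.0

# inverse-CDF sampling from p_f(f) = 2(255 - f)/255^2:
# solving P_f(f) = (510 f - f^2)/255^2 = u gives f = 255 (1 - sqrt(1 - u))
rng = np.random.default_rng(0)
u = rng.random(200000)
f = 255.0 * (1.0 - np.sqrt(1.0 - u))
x = g(f)

# empirical CDF of the transformed samples vs the target P_g(x) = (x/255)^2
for x0 in (64.0, 128.0, 192.0):
    print(round(np.mean(x <= x0), 3), round((x0 / 255.0) ** 2, 3))
```

The empirical and target CDF values agree to within Monte Carlo noise, confirming that g(f) converts the decreasing triangular density into the increasing one.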