DEPARTMENT OF ELECTRONICS AND COMMUNICATION ENGINEERING
DS7201 ADVANCED DIGITAL IMAGE PROCESSING
II M.E (C.S) QUESTION BANK

UNIT I

1. Write the differences between photopic and scotopic vision.
2. What is the Mach band effect?
3. Write the properties of the cosine transform.
4. Write the properties of the KL transform.
5. Define SVD.
6. Define brightness and contrast.
7. Define hue and saturation.
8. Justify that the KLT is an optimal transform.
9. Give the conditions for a perfect transform.
10. Give two properties of the SVD transform.
11. What is image enhancement? Explain the two categories of image enhancement.
12. Define histogram. What is histogram equalization (or histogram linearization)?
13. What are the drawbacks of histogram equalization?
14. Define histogram specification.
15. What is contrast stretching?
16. What is an image negative?
17. What is bit-plane slicing?
18. Why is noise always considered to be additive in images?
19. Define dilation and erosion.
20. Define morphology.

1. Describe the elements of visual perception. (16)
2. (i) Explain the properties of the 2D Fourier transform. (8)
   (ii) Explain singular value decomposition and specify its properties. (8)
3. Explain the discrete cosine transform and specify its properties. (16)
4. Find the DCT and its inverse for the given 2 x 2 image [3 6; 6 4]. (16)
5. Obtain the forward KL transform for the given vectors X1 = [1 0 0], X2 = [1 0 1], X3 = [1 1 0] (transpose these vectors) and analyze how the principal components are used in remote sensing applications. (16)
6. Determine the convolution of the given arrays f(x,y) = and h(x,y) = . (16)
7. (i) Write notes on homomorphic filtering. (8)
   (ii) Explain the K-L transform in detail. (8)
8. How do you enhance a monochrome image by the histogram equalization and histogram specification techniques? How do you assess the quality of the enhancement? (16)
9. Describe image enhancement in the spatial and frequency domains. (16)
10. Obtain the histogram and histogram equalization for a given 4 x 4 image, 4 bits per pixel, given by
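For the histogram equalization questions above (including the 4 x 4, 4-bit image exercise), a minimal NumPy sketch of the standard mapping s_k = round((L - 1) * CDF(r_k)) follows. The pixel values are arbitrary stand-ins, since the original image matrix did not survive transcription.

```python
import numpy as np

# Histogram equalization of a 4 x 4, 4-bit image.
# The pixel values below are arbitrary stand-ins, not the values
# from the original question paper.
img = np.array([[2, 3, 3, 2],
                [4, 2, 4, 3],
                [3, 2, 3, 5],
                [2, 4, 2, 4]])
levels = 16  # 4 bits per pixel -> gray levels 0..15

hist = np.bincount(img.ravel(), minlength=levels)   # histogram h(r_k)
pdf = hist / img.size                               # p(r_k) = n_k / MN
cdf = np.cumsum(pdf)                                # cumulative distribution
# Equalization mapping: s_k = round((L - 1) * CDF(r_k))
mapping = np.round((levels - 1) * cdf).astype(int)
equalized = mapping[img]

print("Histogram:", hist)
print("Mapping r_k -> s_k:", mapping)
print("Equalized image:\n", equalized)
```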

UNIT II

1. Name the different types of derivative filters.
2. What are the two principal steps involved in marker selection?
3. What is segmentation?
4. What are the three types of discontinuity in a digital image?
5. What are global, local, and dynamic (adaptive) thresholds?
6. Define region growing.
7. Define region splitting and merging.
8. What is meant by markers?
9. What are the various methods of thresholding in image segmentation?
10. Define catchment basin.
11. Write the Sobel horizontal and vertical edge detection masks (or define the Sobel operator).
12. How do you know that an image is getting over-segmented?
13. What are the two general approaches to performing texture segmentation?
14. What are the advantages and disadvantages of the region-based approach to texture-feature-based segmentation?
15. Define clustering.
16. What are the limitations of the snake-based active contour model?
17. What are the advantages of the level-set-based active contour model?
18. What are the advantages and disadvantages of atlas-based segmentation?
19. What are the applications of atlas-based segmentation?
20. What are the applications of active-contour-based segmentation?

1. Discuss how (i) region growing (8) and (ii) region splitting and merging (8) approaches are used for image segmentation.
2. Describe in detail segmentation by morphological watersheds. (16)
3. Discuss in detail threshold-based segmentation. (16)
4. Explain texture-feature-based segmentation. (16)
5. Explain the methods of model-based segmentation. (16)
6. Explain wavelet-based segmentation. (16)
7. Explain atlas-based segmentation. (16)
8. Explain snake-based active contour model segmentation. (16)
9. Explain fuzzy-clustering-based segmentation. (16)
10. Explain level-set-based active contour model segmentation. (16)
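As an illustration of the region growing questions above, a minimal seeded region-growing sketch follows. The intensity-difference predicate, the threshold value, and the 4-connectivity are illustrative choices, not anything prescribed by the questions.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, thresh=20):
    """Seeded region growing: starting from `seed`, add 4-connected
    neighbours whose intensity differs from the seed value by at most
    `thresh`. Threshold and connectivity are illustrative choices."""
    h, w = img.shape
    region = np.zeros((h, w), dtype=bool)
    seed_val = float(img[seed])
    region[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not region[ny, nx]:
                if abs(float(img[ny, nx]) - seed_val) <= thresh:
                    region[ny, nx] = True
                    queue.append((ny, nx))
    return region

# Toy example: a bright square on a dark background.
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200
print(region_grow(img, seed=(3, 3)).astype(int))
```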

UNIT III

1. What are the advantages of Canny edge detection?
2. What are the advantages and disadvantages of the Laplacian operator?
3. Define the edge magnitude and direction of the Prewitt operator.
4. Define the Marr-Hildreth operator.
5. Define phase congruency.
6. What are the advantages of the Hough transform?
7. Define the polar HT for lines.
8. What are the advantages of the polar HT for lines?
9. Define the Hough transform for ellipses.
10. Define the distance weighting function of the discrete symmetry operator.
11. Define the direction and phase weighting functions of the discrete symmetry operator.
12. Define descriptors.
13. Define boundary descriptors.
14. What are the advantages of Fourier descriptors?
15. Define region descriptors.
16. Define compactness.
17. Define irregularity.
18. Define invariant moments.
19. Define the Gabor filter.
20. Define fractal dimension.

1. Explain second-order edge detection operators. (16)
2. Explain the Sobel and Canny edge detection operators. (16)
3. Explain feature extraction using phase congruency. (16)
4. Explain feature extraction using different Hough transform techniques. (16)
5. How do you estimate and analyze image curvature? (16)
6. Explain the types of shape skeletonization. (16)
7. (i) Explain chain codes. (8)
   (ii) Explain Fourier descriptors. (8)
8. (i) Explain region descriptors for feature extraction. (8)
   (ii) Explain texture description for image analysis. (8)
9. Explain fractal-model-based feature extraction. (16)
10. Explain Gabor filter and wavelet features for the classification of images. (16)
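For the Sobel and edge magnitude/direction questions above, the sketch below lists the standard Sobel masks and computes the gradient magnitude and direction on a tiny synthetic step edge. The test image and the naive sliding-window filtering are purely illustrative.

```python
import numpy as np

# Standard Sobel masks; edge magnitude and direction follow the usual
# definitions |G| = sqrt(Gx^2 + Gy^2), theta = atan2(Gy, Gx).
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # responds to vertical edges
SOBEL_Y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)  # responds to horizontal edges

def apply_mask(img, mask):
    """Slide the 3 x 3 mask over the image (valid positions only) and
    return the response at every position; a plain correlation kept
    dependency-free for clarity."""
    mh, mw = mask.shape
    h, w = img.shape
    out = np.zeros((h - mh + 1, w - mw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(img[y:y + mh, x:x + mw] * mask)
    return out

img = np.zeros((6, 6), dtype=float)
img[:, 3:] = 1.0                          # a single vertical step edge
gx = apply_mask(img, SOBEL_X)             # horizontal gradient component
gy = apply_mask(img, SOBEL_Y)             # vertical gradient component
magnitude = np.hypot(gx, gy)              # |G|
direction = np.degrees(np.arctan2(gy, gx))
print(magnitude)
print(direction)
```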

UNIT IV

1. What are the steps followed in image registration?
2. Define image registration.
3. Define the inverse filter. What are its uses?
4. Define cornerness measure.
5. Define Hausdorff distance.
6. What is the principle used to relate points in the sensed and reference images?
7. Define projective transformation.
8. What are the uses of invariant moments?
9. Define relaxation labeling.
10. Define shape matrices.
11. Define image fusion.
12. Define multisensor fusion.
13. What are the potential advantages of image fusion?
14. What are the applications of image fusion?
15. What are the methods of image fusion?
16. What are multiscale decomposition methods?
17. What are non-multiscale decomposition methods?
18. Define overall cross entropy.
19. Define universal quality index.
20. Define mutual information.

1. (i) Explain how feature points are used to determine the parameters of a transformation function that registers the images. (8)
   (ii) Explain how feature lines are used to determine the parameters of a transformation function that registers the images. (8)
2. (i) Explain the steps in image registration. (8)
   (ii) Write the scene-coherence matching algorithm to find the correspondence between the points of the sensed and reference images. (8)
3. (i) Explain the algorithm to find the correspondence between the lines of the sensed and reference images. (8)
   (ii) Explain region matching by shape to find the correspondence between the sensed and reference images. (8)
4. (i) Explain region matching by relaxation labeling to find the correspondence between the sensed and reference images. (8)
   (ii) Write the algorithm to find the correspondence between the lines of the sensed and reference images. (8)
5. Explain template matching using similarity measures to find the correspondence between the sensed and reference images. (8)
6. (i) Explain the methods for determining a transformation function that maps the sensed image to the reference image. (8)
   (ii) Explain the methods by which the sensed image is resampled point by point to the geometry of the reference image. (8)
7. (i) Explain multiscale-decomposition-based fusion methods. (8)
   (ii) Explain non-multiscale-decomposition-based fusion methods. (8)
8. (i) Explain the performance evaluation of objective evaluation measures that use a reference image. (8)
   (ii) Explain the performance evaluation of objective evaluation measures that do not require a reference image. (8)
9. Explain multiresolution-based fusion using the discrete wavelet transform and the curvelet transform. (16)
10. (i) Explain region-based fusion. (8)
    (ii) Explain hybrid image registration for image fusion. (8)
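For the template matching question above (matching using similarity measures), a minimal sketch follows. Normalized cross-correlation is used here as one possible similarity measure; the toy image and template are invented for illustration.

```python
import numpy as np

def ncc_match(image, template):
    """Slide `template` over `image`, compute the normalized
    cross-correlation score at every position, and return the score
    map plus the (row, col) of the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    scores = np.zeros((ih - th + 1, iw - tw + 1))
    for y in range(scores.shape[0]):
        for x in range(scores.shape[1]):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            scores[y, x] = (p * t).sum() / denom if denom > 0 else 0.0
    best = np.unravel_index(np.argmax(scores), scores.shape)
    return scores, best

# Toy example: plant a small pattern in an empty "sensed" image and
# locate it again with the template.
image = np.zeros((10, 10))
pattern = np.array([[1., 2., 1.],
                    [2., 4., 2.],
                    [1., 2., 1.]])
image[4:7, 5:8] = pattern
scores, best = ncc_match(image, pattern)
print("Best match at (row, col):", best)   # expected (4, 5)
```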

UNIT V

1. Define absorption rule.
2. Define voxel.
3. What are the differences between a pixel and a voxel?
4. How are 3D data sets stored?
5. How are 3D data sets obtained by different imaging techniques?
6. What is the use of slicing the data set?
7. What is the use of color?
8. What are the advantages and disadvantages of volumetric display?
9. How are stereo views of an image constructed?
10. Define ray tracing.
11. What is the use of reflection in 3D images?
12. Define dull surface.
13. Define shiny surface.
14. Write about feature-specific improvement.
15. Name the various sources of 3D data. How is a 3D image represented?
16. What are the surfaces in 3D image visualization?
17. What are the reflections in 3D image visualization?
18. Define volumetric display.
19. What are the classifications of measurements on 3D images?
20. How is the position of features determined in 3D images?

1. Explain image processing in 3D. (16)
2. Explain slicing and data sets in 3D image visualization. (16)
3. Explain arbitrary section planes. (16)
4. Explain the volumetric display of 3D image visualization. (16)
5. Explain ray tracing. (16)
6. Explain surfaces in 3D image visualization. (16)
7. Explain reflection in 3D image visualization. (16)
8. Explain measurements on 3D images. (16)
9. Explain multiply connected surfaces in 3D image visualization. (16)
10. Explain stereo viewing in 3D image visualization. (16)
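For the voxel, slicing, and ray tracing questions above, a small sketch that builds a synthetic voxel volume, extracts axis-aligned slices, and computes a maximum-intensity projection (the simplest axis-aligned form of ray casting). The volume contents and sizes are made up for illustration.

```python
import numpy as np

# Synthetic 3D data set: a soft Gaussian blob stored as a voxel array.
z, y, x = np.indices((32, 32, 32))
volume = np.exp(-((x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2) / 50.0)

# Slicing the data set: axis-aligned section planes through the volume.
slice_z = volume[16, :, :]   # plane perpendicular to the z axis
slice_y = volume[:, 16, :]   # plane perpendicular to the y axis
slice_x = volume[:, :, 16]   # plane perpendicular to the x axis

# Maximum-intensity projection: for every (y, x) ray, keep the brightest
# voxel encountered along z, i.e. one ray cast per output pixel.
mip = volume.max(axis=0)

print("Volume shape:", volume.shape)
print("Slice shape:", slice_z.shape, "MIP shape:", mip.shape)
```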