Digital Image Processing (EI424)


Scheme of Evaluation: Digital Image Processing (EI424), Eighth Semester, April 2017.

IV/IV B.Tech (Regular) Degree Examinations
Electronics and Instrumentation Engineering, April 2017
Digital Image Processing, Eighth Semester. Max Marks: 60
Scheme of evaluation. All questions carry equal marks: 1 x 12 = 12M

1.(a) An image is a two-dimensional function f(x,y), where x and y are the spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity of the image at that point.

(b) Photopic vision is the vision of the eye under well-lit conditions (luminance level 10 to 10^8 cd/m²). In humans and many other animals, photopic vision allows color perception, mediated by cone cells, and a significantly higher visual acuity and temporal resolution than are available with scotopic vision.

(c) Weber ratio: ΔIc/I, where I is the light source intensity and ΔIc is the increment in illumination. A small value of the Weber ratio means good brightness discrimination; a large value means poor brightness discrimination.

(d) The principal objective of image enhancement techniques is to process an image so that the result is more suitable than the original image for a specific application.

(e) The log transformation is applied to compress the dynamic range of gray levels in an image: s = c log(1 + r), where c is a constant and it is assumed that r >= 0 (see the sketch after item (g) below).

(f) Enhancement techniques that use arithmetic operators are (i) image subtraction and (ii) image averaging.

(g) Image restoration is a method of improving an image in some predefined sense.
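As a brief illustration of the log transformation in (e), a minimal Python/NumPy sketch (the function name and the choice of c to map the maximum input value to 255 are illustrative assumptions, not part of the original answer):

import numpy as np

def log_transform(image):
    """Apply s = c * log(1 + r) to an 8-bit grayscale image."""
    r = image.astype(np.float64)
    c = 255.0 / np.log(1.0 + r.max())   # assumed scaling: map max input to 255
    s = c * np.log(1.0 + r)
    return s.astype(np.uint8)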

(h) The compression ratio is defined as C_R = n1/n2, where n1 is the number of information-carrying units (e.g., bits) in the original image and n2 is the number in the compressed image.

(i) Variable-length coding is the simplest approach to error-free compression. It reduces only the coding redundancy and assigns the shortest possible codeword to the most probable gray levels. Variable-length codes allow sources to be compressed and decompressed with zero error (lossless data compression) and still be read back symbol by symbol. With the right coding strategy, an independent and identically distributed source may be compressed almost arbitrarily close to its entropy.

(j) There are three general approaches to segmentation, termed thresholding, edge-based methods, and region-based methods. In thresholding, pixels are allocated to categories according to the range of values in which a pixel lies: for example, pixels with values less than 128 are placed in one category and the rest in the other, and the boundaries between adjacent pixels in different categories can be superimposed in white on the original image (see the sketch after item (l) below).

(k) The three principal approaches used in image processing to describe the texture of a region are statistical, structural, and spectral. Statistical approaches yield characterizations of textures as smooth, coarse, grainy, and so on. Structural techniques deal with the arrangement of image primitives, such as the description of texture based on regularly spaced parallel lines. Spectral techniques are based on properties of the Fourier spectrum and are used primarily to detect global periodicity in an image by identifying high-energy, narrow peaks in the spectrum.

(l) Various representation schemes are chain codes, polygonal approximations, signatures, boundary segments, and the skeleton of a region.
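A minimal sketch of the thresholding idea in (j), in Python (the threshold 128 follows the example above; the function name is an assumption):

import numpy as np

def threshold_segment(image, t=128):
    """Allocate pixels to two categories by comparing against t."""
    return (image >= t).astype(np.uint8)   # 0: below threshold, 1: the rest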

UNIT-I

2.(a) Block diagram of the different steps in digital image processing: 4M. Explanation of each step: 4M.

2.(a) Fundamental Steps in Digital Image Processing:

[Figure: block diagram of the fundamental steps in digital image processing. From the problem domain: image acquisition, image enhancement, image restoration, colour image processing, wavelets and multiresolution processing, image compression, morphological processing, segmentation, representation and description, and object recognition, all connected to a knowledge base. The outputs of the earlier processes are generally images; the outputs of the later processes are generally image attributes.]

Step 1: Image Acquisition
The image is captured by a sensor (e.g. a camera) and digitized, if the output of the camera or sensor is not already in digital form, using an analogue-to-digital converter.

Step 2: Image Enhancement
The process of manipulating an image so that the result is more suitable than the original for a specific application. The idea behind enhancement techniques is to bring out details that are hidden, or simply to highlight certain features of interest in an image.

Step 3: Image Restoration
Improving the appearance of an image. Restoration techniques tend to be based on mathematical or probabilistic models; enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a good enhancement result.

Step 4: Colour Image Processing

Use the colour of the image to extract features of interest in an image.

Step 5: Wavelets
Wavelets are the foundation for representing images at various degrees of resolution. They are used for image data compression.

Step 6: Compression
Techniques for reducing the storage required to save an image or the bandwidth required to transmit it.

Step 7: Morphological Processing
Tools for extracting image components that are useful in the representation and description of shape. At this step there is a transition from processes that output images to processes that output image attributes.

Step 8: Image Segmentation
Segmentation procedures partition an image into its constituent parts or objects.

Step 9: Recognition and Interpretation
Recognition is the process that assigns a label to an object based on the information provided by its description.

Step 10: Knowledge Base
Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database.

2.(b) Image sampling: 2M. Image quantization: 2M.

2.(b) Image Sampling and Quantization
Image sampling: discretize an image in the spatial domain.
Spatial resolution / image resolution: pixel size or number of pixels.
Nyquist rate:

The spatial resolution must be less than or equal to half of the minimum period of the image; equivalently, the sampling frequency must be greater than or equal to twice the maximum frequency.

Image Quantization
Image quantization: discretize continuous pixel values into discrete numbers.
Color resolution / color depth / levels:
- the number of colors or gray levels, or
- the number of bits representing each pixel value.
The number of colors or gray levels is N_c = 2^b, where b is the number of bits per pixel (a minimal quantization sketch follows the colour-model list below).

(OR)

3.(a) Names of different color models: 2M. Explanation of the models: 6M.

The different color models are:
- RGB: color monitor, color camera, color scanner
- CMY: color printer, color copier
- YIQ: color TV; Y (luminance), I (in-phase), Q (quadrature)
- HSI, HSV
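A minimal sketch of uniform requantization in Python, illustrating N_c = 2^b (function and parameter names are assumptions):

import numpy as np

def quantize(image, bits):
    """Requantize an 8-bit grayscale image to 2**bits gray levels (bits <= 8)."""
    step = 256 // (2 ** bits)        # width of each quantization bin
    return (image // step) * step    # map each pixel to its bin's base level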

The CMY model is obtained from RGB as [C, M, Y]^T = [1, 1, 1]^T - [R, G, B]^T.

YIQ models:

3.(b) Complete solution: 4M

Y = 0.299 R + 0.587 G + 0.114 B
I = 0.596 R - 0.275 G - 0.321 B
Q = 0.212 R - 0.523 G + 0.311 B

and the inverse transform:

R = Y + 0.956 I + 0.620 Q
G = Y - 0.272 I - 0.647 Q
B = Y - 1.108 I + 1.705 Q
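A minimal Python sketch of the RGB-to-YIQ conversion using the forward matrix above (the helper name is an assumption):

import numpy as np

# Rows of the matrix correspond to Y, I, Q.
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.275, -0.321],
                    [0.212, -0.523,  0.311]])

def rgb_to_yiq(rgb):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to YIQ."""
    return rgb @ RGB2YIQ.T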

UNIT-II

4.(a) Different steps in frequency domain filtering: 2M. Explanation of all steps: 4M.

[Figure: basic steps for filtering in the frequency domain. The input image f(x,y) is preprocessed (zero phase shift filters), Fourier transformed to F(u,v), multiplied by the filter function H(u,v), inverse Fourier transformed, and postprocessed to yield the enhanced image g(x,y).]

Low frequencies in the Fourier transform are responsible for the general gray-level appearance of an image over smooth areas, while high frequencies are responsible for details such as edges and noise.

4.(b) Smoothing in the frequency domain: 2M. Explanation of the different filters: 4M.

4.(b) Smoothing in the Frequency Domain
G(u,v) = H(u,v) F(u,v)
- Ideal lowpass filter
- Butterworth lowpass filter (parameter: filter order)
- Gaussian lowpass filter
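A minimal NumPy sketch of frequency-domain smoothing with a Gaussian lowpass filter, implementing G(u,v) = H(u,v) F(u,v) (the cutoff d0 and function name are illustrative assumptions):

import numpy as np

def gaussian_lowpass(image, d0=30.0):
    """Smooth an image with a Gaussian lowpass filter in the frequency domain."""
    M, N = image.shape
    F = np.fft.fftshift(np.fft.fft2(image))          # centered spectrum F(u,v)
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2           # squared distance from center
    H = np.exp(-D2 / (2.0 * d0 ** 2))                # Gaussian lowpass H(u,v)
    return np.fft.ifft2(np.fft.ifftshift(H * F)).real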

(OR)

5.(a) Concept of histogram processing: 2M. Histogram equalization and specification: 4M.

5.(a)

Histogram equalization method:
- Generates only one result: an image with an approximately uniform histogram, without any flexibility.
- The desired enhancement may therefore not be achieved.

Histogram specification:
- Transforms an image according to a specified gray-level histogram.
- Involves specifying a particular histogram shape p_z(z) capable of highlighting certain gray-level ranges, and obtaining the transformation function that maps r to z.
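A minimal NumPy sketch of histogram equalization for an 8-bit image, where the CDF of the gray levels serves as the transformation s = T(r) (the function name is an assumption):

import numpy as np

def equalize_histogram(image):
    """Histogram-equalize an 8-bit grayscale image."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    cdf = np.cumsum(hist) / image.size        # cumulative distribution of levels
    s = np.round(255 * cdf).astype(np.uint8)  # s = T(r): scale CDF to [0, 255]
    return s[image]                           # remap every pixel through T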

5.(b) Block diagram of homomorphic filtering: 2M. Explanation: 4M.

5.(b) [Figure: homomorphic filtering block diagram. In the standard formulation, the input f(x,y) is modeled as the product of illumination and reflectance; a natural logarithm separates the two components, which are filtered in the frequency domain by H(u,v), inverse transformed, and exponentiated to produce the enhanced image g(x,y).]
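A minimal NumPy sketch of the standard homomorphic pipeline described above (the gains gamma_l and gamma_h, the cutoff d0, and the function name are illustrative assumptions):

import numpy as np

def homomorphic_filter(image, gamma_l=0.5, gamma_h=2.0, d0=30.0):
    """ln -> DFT -> high-frequency emphasis -> IDFT -> exp."""
    z = np.log1p(image.astype(np.float64))           # ln(1 + f) separates i(x,y)*r(x,y)
    Z = np.fft.fftshift(np.fft.fft2(z))
    M, N = image.shape
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    # High-emphasis filter: gamma_l at low frequencies, gamma_h at high ones.
    H = gamma_l + (gamma_h - gamma_l) * (1.0 - np.exp(-D2 / (2.0 * d0 ** 2)))
    s = np.fft.ifft2(np.fft.ifftshift(H * Z)).real
    return np.expm1(s)                               # undo the log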

UNIT-III

6.(a) Estimation of the degradation function: 2M. Explanation: 4M.

6.(a) Estimation of the degradation function for use in image restoration:
- The parameters of periodic noise are typically estimated by inspecting the image's Fourier spectrum.
- The parameters of noise PDFs may be known partially from sensor specifications, but it is often necessary to estimate them for a particular imaging arrangement, e.g., by capturing a set of images of flat environments.
- When only images already generated by a sensor are available, the parameters can be estimated from small patches of reasonably constant background intensity, e.g., vertical strips of 150x20 pixels.
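A rough sketch of the patch-based estimate in NumPy (the patch layout and helper name are assumptions):

import numpy as np

def estimate_noise_stats(image, patch):
    """Estimate noise mean and variance from a flat background patch.

    `patch` is (row, col, height, width) of a region of roughly
    constant intensity, e.g. a vertical strip of 150x20 pixels.
    """
    r, c, h, w = patch
    region = image[r:r + h, c:c + w].astype(np.float64)
    return region.mean(), region.var()   # sample mean and variance of the patch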

6.(b) Names of different restoration filters: 2M. Explanation: 4M.

6.(b) Spatial filtering is suitable when only additive random noise is present.

Mean filters:
- Arithmetic mean filter
- Geometric mean filter
- Harmonic mean filter
- Contraharmonic mean filter

Order-statistic filters:
- Median filter
- Max and min filters
- Midpoint filter
- Alpha-trimmed mean filter

Adaptive filters:
- Adaptive, local noise reduction filter
- Adaptive median filter
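A minimal NumPy sketch of the median filter, the most common order-statistic filter (a didactic loop version, not optimized; the function name is an assumption):

import numpy as np

def median_filter(image, k=3):
    """Order-statistic (median) filter over a k x k neighborhood."""
    pad = k // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out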

(OR)

7.(a) General compression models: 2M. Explanation of encoder and decoder: 6M.

7.(a) A compression system consists of two distinct structural blocks: an encoder and a decoder. An input image f(x, y) is fed into the encoder, which creates a set of symbols from the input data. After transmission over the channel, the encoded representation is fed to the decoder, where a reconstructed output image f^(x, y) is generated. In general, f^(x, y) may or may not be an exact replica of f(x, y). If it is, the system is error free or information preserving; if not, some level of distortion is present in the reconstructed image. Both the encoder and decoder shown in Fig. 3.1 consist of two relatively independent functions or subblocks. The encoder is made up of a source encoder, which removes input redundancies, and a channel encoder, which increases the noise immunity of the source encoder's output. As would be expected, the decoder includes a channel decoder followed by a source decoder. If the channel between the encoder and decoder is noise free (not prone to error), the channel encoder and decoder are omitted, and the general encoder and decoder become the source encoder and decoder, respectively.

The Source Encoder and Decoder: The source encoder is responsible for reducing or eliminating any coding, interpixel, or psychovisual redundancies in the input image. The specific application and associated fidelity requirements dictate the best encoding approach to use in any given situation. Normally, the approach can be modeled by a series of three independent operations. As Fig. 3.2(a) shows, each operation is designed to reduce one of the three redundancies. Figure 3.2(b) depicts the corresponding source decoder.

In the first stage of the source encoding process, the mapper transforms the input data into a (usually nonvisual) format designed to reduce interpixel redundancies in the input image. This operation generally is reversible and may or may not reduce directly the amount of data required to represent the image. Run-length coding is an example of a mapping that directly results in data compression in this initial stage of the overall source encoding process. The representation of an image by a set of transform coefficients is an example of the opposite case. Here, the mapper transforms the image into an array of coefficients, making its interpixel redundancies more accessible for compression in later stages of the encoding process.

The second stage, or quantizer block in Fig. 3.2(a), reduces the accuracy of the mapper's output in accordance with some preestablished fidelity criterion. This stage reduces the psychovisual redundancies of the input image. This operation is irreversible; thus it must be omitted when error-free compression is desired.

In the third and final stage of the source encoding process, the symbol coder creates a fixed- or variable-length code to represent the quantizer output and maps the output in accordance with the code. The term symbol coder distinguishes this coding operation from the overall source encoding process. In most cases, a variable-length code is used to represent the mapped and quantized data set.

It assigns the shortest code words to the most frequently occurring output values and thus reduces coding redundancy. The operation, of course, is reversible. Upon completion of the symbol coding step, the input image has been processed to remove each of the three redundancies.

Figure 3.2(a) shows the source encoding process as three successive operations, but all three operations are not necessarily included in every compression system. Recall, for example, that the quantizer must be omitted when error-free compression is desired. In addition, some compression techniques normally are modeled by merging blocks that are physically separate in Fig. 3.2(a). In predictive compression systems, for instance, the mapper and quantizer are often represented by a single block, which simultaneously performs both operations. The source decoder shown in Fig. 3.2(b) contains only two components: a symbol decoder and an inverse mapper. These blocks perform, in reverse order, the inverse operations of the source encoder's symbol encoder and mapper blocks. Because quantization results in irreversible information loss, an inverse quantizer block is not included in the general source decoder model shown in Fig. 3.2(b).

The Channel Encoder and Decoder: The channel encoder and decoder play an important role in the overall encoding-decoding process when the channel of Fig. 3.1 is noisy or prone to error. They are designed to reduce the impact of channel noise by inserting a controlled form of redundancy into the source encoded data. As the output of the source encoder contains little redundancy, it would be highly sensitive to transmission noise without the addition of this "controlled redundancy." One of the most useful channel encoding techniques was devised by R. W. Hamming (Hamming [1950]). It is based on appending enough bits to the data being encoded to ensure that some minimum number of bits must change between valid code words. Hamming showed, for example, that if 3 bits of redundancy are added to a 4-bit word, so that the distance between any two valid code words is 3, all single-bit errors can be detected and corrected. (By appending additional bits of redundancy, multiple-bit errors can be detected and corrected.) The 7-bit Hamming (7,4) code word h1, h2, ..., h7 associated with a 4-bit binary number b3b2b1b0 places the data bits at h3 = b3, h5 = b2, h6 = b1, h7 = b0, and computes the even-parity bits h1 = b3 XOR b2 XOR b0, h2 = b3 XOR b1 XOR b0, and h4 = b2 XOR b1 XOR b0.
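A minimal Python sketch of the Hamming (7,4) encoder following the bit assignments above (the function name is an assumption):

def hamming74_encode(b3, b2, b1, b0):
    """Encode 4 data bits into a 7-bit Hamming (7,4) code word."""
    h1 = b3 ^ b2 ^ b0   # even parity over code word positions 1, 3, 5, 7
    h2 = b3 ^ b1 ^ b0   # even parity over positions 2, 3, 6, 7
    h4 = b2 ^ b1 ^ b0   # even parity over positions 4, 5, 6, 7
    return [h1, h2, b3, h4, b2, b1, b0]

Any single-bit error in the resulting word can then be located by recomputing the three parity checks at the decoder.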

7.(b) Names of the different fidelity criteria: 2M. Explanation: 2M.

7.(b) The removal of psychovisually redundant data results in a loss of real or quantitative visual information. Because information of interest may be lost, a repeatable or reproducible means of quantifying the nature and extent of information loss is highly desirable. Two general classes of criteria are used as the basis for such an assessment: (A) objective fidelity criteria and (B) subjective fidelity criteria.

When the level of information loss can be expressed as a function of the original or input image and the compressed and subsequently decompressed output image, it is said to be based on an objective fidelity criterion. A good example is the root-mean-square (rms) error between an input and output image. Let f(x, y) represent an input image and let f^(x, y) denote an estimate or approximation of f(x, y) that results from compressing and subsequently decompressing the input. For any value of x and y, the error e(x, y) between f(x, y) and f^(x, y) can be defined as

e(x, y) = f^(x, y) - f(x, y)

so that the rms error over an M x N image is

e_rms = [ (1/MN) * sum over x,y of ( f^(x, y) - f(x, y) )^2 ]^(1/2)

and the mean-square signal-to-noise ratio of the output image is

SNR_ms = sum over x,y of f^(x, y)^2 / sum over x,y of ( f^(x, y) - f(x, y) )^2
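A minimal NumPy sketch of the objective criterion e_rms above (the function name is an assumption):

import numpy as np

def rms_error(f, f_hat):
    """Root-mean-square error between input f and decompressed f_hat."""
    e = f_hat.astype(np.float64) - f.astype(np.float64)
    return float(np.sqrt(np.mean(e ** 2)))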

The rms value of the signal-to-noise ratio, denoted SNR_rms, is obtained by taking the square root of SNR_ms above. Although objective fidelity criteria offer a simple and convenient mechanism for evaluating information loss, most decompressed images ultimately are viewed by humans. Consequently, measuring image quality by the subjective evaluations of a human observer often is more appropriate. This can be accomplished by showing a "typical" decompressed image to an appropriate cross section of viewers and averaging their evaluations. The evaluations may be made using an absolute rating scale or by means of side-by-side comparisons of f(x, y) and f^(x, y).

UNIT-IV

8.(a) Names of the different discontinuities in images: 2M. Explanation: 4M.

8.(a) Detection of Discontinuities: Point Detection:
An isolated point is detected at a location where the absolute response of the standard point detection mask

-1 -1 -1
-1  8 -1
-1 -1 -1

exceeds a threshold T.

Detection of Discontinuities: Line Detection:
The standard 3x3 line detection masks are

Horizontal:        +45 degrees:       Vertical:
-1 -1 -1           -1 -1  2           -1  2 -1
 2  2  2           -1  2 -1           -1  2 -1
-1 -1 -1            2 -1 -1           -1  2 -1
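A minimal NumPy sketch applying these masks (the helper correlate3 and the mask names are illustrative assumptions; the mask with the largest absolute response at a pixel indicates the likely line direction):

import numpy as np

HORIZONTAL = np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]])
PLUS_45    = np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]])
VERTICAL   = np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]])

def correlate3(image, mask):
    """Correlate a 3x3 mask with the image (didactic loop version)."""
    padded = np.pad(image.astype(np.float64), 1, mode='edge')
    out = np.empty(image.shape)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * mask)
    return out

# e.g. horizontal-line strength at each pixel: np.abs(correlate3(img, HORIZONTAL))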

Detection of Discontinuities: Edge Detection:

8.(b) Sketch the gradient (Sobel operator): 3M. Sketch the Laplacian of the image: 3M.

8.(b) Sobel operator masks:

Gx:              Gy:
-1 -2 -1         -1  0  1
 0  0  0         -2  0  2
 1  2  1         -1  0  1

A standard Laplacian operator mask:

 0  1  0
 1 -4  1
 0  1  0
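A minimal NumPy sketch of the Sobel gradient, reusing the correlate3 helper from the line-detection sketch above (the function and mask names are assumptions):

import numpy as np

SOBEL_X = np.array([[-1, -2, -1], [ 0,  0,  0], [ 1,  2,  1]])  # horizontal edges
SOBEL_Y = np.array([[-1,  0,  1], [-2,  0,  2], [-1,  0,  1]])  # vertical edges

def sobel_gradient(image):
    """Approximate gradient magnitude as |Gx| + |Gy|."""
    gx = correlate3(image, SOBEL_X)   # correlate3 as defined in the sketch above
    gy = correlate3(image, SOBEL_Y)
    return np.abs(gx) + np.abs(gy)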

(OR)

9.(a) Names of different boundary descriptors: 2M. Explanation: 4M.

9.(a) The result of segmentation is a set of regions, which must then be represented and described. There are two main ways of representing a region:
- by its external characteristics (its boundary), focusing on shape;
- by its internal characteristics (its internal pixels), focusing on color and texture.

The next step is description. For example, a region may be represented by its boundary, and the boundary described by features such as length and regularity. Features should be insensitive to translation, rotation, and scaling. Boundary and regional descriptors are often used together.

In order to represent a boundary, it is useful to compact the raw data (the list of boundary pixels). Chain codes represent a boundary as a list of segments with defined length and direction:
- 4-directional chain codes
- 8-directional chain codes

9.(b) Determination of the chain code: 2M. Determination of the shape number: 4M.

9.(b) Shape numbers
The order n of a shape number is defined as the number of digits in its representation; n is even for a closed boundary, and its value limits the number of possible different shapes.

The shape number of a boundary is defined as the first difference of smallest magnitude. To compute it:
- compute the chain code of the boundary;
- compute the first difference of the chain code;
- circularly re-order the difference to form the minimum integer; this is the shape number.

Chain code:      000332123211
Difference code: 003033113303
Shape number:    003033113303
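A minimal Python sketch of this procedure for a 4-directional chain code (function names are assumptions):

def first_difference(chain, directions=4):
    """Circular first difference of a chain code string."""
    c = [int(d) for d in chain]
    return [(c[i] - c[i - 1]) % directions for i in range(len(c))]

def shape_number(chain, directions=4):
    """Rotation of the first difference that forms the minimum integer."""
    d = first_difference(chain, directions)
    rotations = [d[i:] + d[:i] for i in range(len(d))]
    return ''.join(map(str, min(rotations)))

For the example above, shape_number("000332123211") returns "003033113303", matching the difference code and shape number given.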