Scheme of Evaluation: Digital Image Processing (EI424), Eighth Semester, April 2017
IV/IV B.Tech (Regular) Degree Examinations, Electronics and Instrumentation Engineering, April 2017
Digital Image Processing, Eighth Semester. Max. Marks: 60
Scheme of evaluation. All questions carry equal marks. 1 x 12 = 12M

1.(a) An image is a two-dimensional function f(x,y), where x and y are the spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity of the image at that point.
(b) Photopic vision is the vision of the eye under well-lit conditions (luminance level 10 to 10^8 cd/m²). In humans and many other animals, photopic vision allows colour perception, mediated by cone cells, and significantly higher visual acuity and temporal resolution than scotopic vision.
(c) Weber ratio: ΔIc/I, where I is the light-source intensity and ΔIc is the just-noticeable increment in illumination. A small Weber ratio means good brightness discrimination; a large Weber ratio means poor brightness discrimination.
(d) The principal objective of image enhancement techniques is to process an image so that the result is more suitable than the original for a specific application.
(e) The log transformation is applied to compress the dynamic range of gray levels in an image: s = c log(1 + r), where c is a constant and it is assumed that r >= 0.
(f) Enhancement techniques that use arithmetic operators are (i) image subtraction and (ii) image averaging.
(g) Image restoration is a method to improve an image in some predefined sense.
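The log transformation in 1.(e) can be illustrated numerically. The scaling constant c below is chosen (one common convention, not fixed by the definition) so that the largest input maps to 255; the sample values are illustrative.

```python
import numpy as np

def log_transform(img, max_out=255.0):
    """Dynamic-range compression: s = c * log(1 + r), with c chosen
    so that the largest input value maps to max_out."""
    img = np.asarray(img, dtype=np.float64)
    c = max_out / np.log1p(img.max())
    return c * np.log1p(img)

# A row of pixel values spanning a huge dynamic range: the transform
# spreads the small values apart and compresses the large ones.
row = np.array([0.0, 1.0, 10.0, 100.0, 1000.0, 10000.0])
s = log_transform(row)
```

Note that the transform is monotonic, so gray-level ordering is preserved while the range is compressed.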
(h) The compression ratio is defined as C_R = n1/n2, where n1 and n2 denote the number of information-carrying units (e.g., bits) in the original and compressed images, respectively.
(i) Variable-length coding is the simplest approach to error-free compression. It reduces only the coding redundancy, assigning the shortest possible codeword to the most probable gray levels. Variable-length codes allow sources to be compressed and decompressed with zero error (lossless data compression) and still be read back symbol by symbol. With the right coding strategy, an independent and identically distributed source may be compressed almost arbitrarily close to its entropy.
(j) There are three general approaches to segmentation, termed thresholding, edge-based methods and region-based methods. In thresholding, pixels are allocated to categories according to the range of values in which a pixel lies; for example, pixels with values less than 128 are placed in one category and the rest in the other, and the boundaries between adjacent pixels in different categories can be superimposed in white on the original image.
(k) The three principal approaches used in image processing to describe the texture of a region are statistical, structural, and spectral. Statistical approaches yield characterizations of textures as smooth, coarse, grainy, and so on. Structural techniques deal with the arrangement of image primitives, such as the description of texture based on regularly spaced parallel lines. Spectral techniques are based on properties of the Fourier spectrum and are used primarily to detect global periodicity in an image by identifying high-energy, narrow peaks in the spectrum.
(l) Various representation schemes are chain codes, polygonal approximations, signatures, boundary segments, and the skeleton of a region.
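Variable-length coding as in 1.(i) can be sketched with a minimal Huffman coder; the gray levels and probabilities below are illustrative assumptions, not taken from the paper.

```python
import heapq

def huffman_codes(freq):
    """Build a Huffman code: the most probable symbols receive the
    shortest codewords, and the code is prefix-free by construction."""
    # Heap entries are (weight, tiebreak, tree); a tree is either a
    # symbol or a (left, right) pair of subtrees.
    heap = [(w, i, sym) for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)   # two least probable trees
        w2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (w1 + w2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix or "0"
    walk(heap[0][2], "")
    return codes

# Illustrative gray-level probabilities (assumed for this sketch)
codes = huffman_codes({90: 0.5, 128: 0.25, 180: 0.125, 255: 0.125})
```

For these probabilities the average code length equals the source entropy (1.75 bits/symbol), the limiting case mentioned above.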
UNIT-I
2.(a) Block diagram of different steps in digital image processing - 4M
Explanation of each step - 4M

2.(a) Fundamental Steps in Digital Image Processing:
[Figure: block diagram of the fundamental steps - image acquisition, image enhancement, image restoration, colour image processing, wavelets and multiresolution processing, image compression, morphological processing, segmentation, representation and description, and object recognition, all built on a common knowledge base. The outputs of the early processes are generally images; the outputs of the later processes are generally image attributes.]

Step 1: Image Acquisition. The image is captured by a sensor (e.g. a camera) and digitized, using an analogue-to-digital converter, if the output of the camera or sensor is not already in digital form.
Step 2: Image Enhancement. The process of manipulating an image so that the result is more suitable than the original for a specific application. The idea behind enhancement techniques is to bring out details that are hidden, or simply to highlight certain features of interest in an image.
Step 3: Image Restoration. Also improves the appearance of an image, but restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a good enhancement result.
Step 4: Colour Image Processing.
Uses the colour of the image to extract features of interest.
Step 5: Wavelets. The foundation of representing images in various degrees of resolution; used for image data compression.
Step 6: Compression. Techniques for reducing the storage required to save an image or the bandwidth required to transmit it.
Step 7: Morphological Processing. Tools for extracting image components that are useful in the representation and description of shape. At this step there is a transition from processes that output images to processes that output image attributes.
Step 8: Image Segmentation. Segmentation procedures partition an image into its constituent parts or objects.
Step 9: Recognition and Interpretation. Recognition is the process that assigns a label to an object based on the information provided by its description.
Step 10: Knowledge Base. Knowledge about a problem domain is coded into an image processing system in the form of a knowledge database.

2.(b) Image sampling - 2M
Image quantization - 2M

2.(b) Image Sampling and Quantization
Image sampling: discretizing an image in the spatial domain.
Spatial resolution / image resolution: pixel size or number of pixels.
Nyquist rate:
The sampling interval must be less than or equal to half of the minimum period of the image; equivalently, the sampling frequency must be greater than or equal to twice the maximum frequency.
Image quantization: discretizing continuous pixel values into discrete numbers.
Color resolution / color depth / levels:
- number of colors or gray levels, or
- number of bits representing each pixel value.
The number of colors or gray levels is N_c = 2^b, where b = number of bits.

(OR)
3.(a) Names of different color models - 2M
Explanation of models - 6M

The different color models are:
RGB: color monitor, color camera, color scanner
CMY: color printer, color copier
YIQ: color TV; Y (luminance), I (in-phase), Q (quadrature)
HSI, HSV
YIQ model:

3.(b) Complete solution - 4M

The CMY model is obtained from RGB as

[C]   [1]   [R]
[M] = [1] - [G]
[Y]   [1]   [B]

The YIQ model is obtained from RGB as

[Y]   [0.299  0.587  0.114] [R]
[I] = [0.596 -0.275 -0.321] [G]
[Q]   [0.212 -0.523  0.311] [B]

and the inverse transformation is

[R]   [1  0.956  0.620] [Y]
[G] = [1 -0.272 -0.647] [I]
[B]   [1 -1.108  1.705] [Q]
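The conversion matrices above can be checked numerically: a gray pixel (R = G = B) should map to Y equal to its intensity with I = Q = 0, and the inverse matrix should undo the forward transform.

```python
import numpy as np

# RGB -> YIQ forward matrix (rows: Y, I, Q), from the equations above
RGB2YIQ = np.array([[0.299,  0.587,  0.114],
                    [0.596, -0.275, -0.321],
                    [0.212, -0.523,  0.311]])

# YIQ -> RGB: the (approximate) inverse transformation
YIQ2RGB = np.array([[1.0,  0.956,  0.620],
                    [1.0, -0.272, -0.647],
                    [1.0, -1.108,  1.705]])

white = np.array([1.0, 1.0, 1.0])   # pure white pixel
yiq = RGB2YIQ @ white               # expect Y = 1, I = Q = 0
rgb = YIQ2RGB @ yiq                 # round trip back to RGB
```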
UNIT-II
4.(a) Different steps in frequency-domain filtering - 2M
Explanation of all steps - 4M

[Figure: basic steps for filtering in the frequency domain - input image f(x,y) → preprocessing → Fourier transform F(u,v) → filter function H(u,v) → inverse Fourier transform → postprocessing → enhanced image g(x,y); zero-phase-shift filters are used.]

Low frequencies in the Fourier transform are responsible for the general gray-level appearance of an image over smooth areas, while high frequencies are responsible for details such as edges and noise.
4.(b) Smoothing in the frequency domain - 2M
Explanation of different filters - 4M

4.(b) Smoothing in the Frequency Domain: G(u,v) = H(u,v) F(u,v)
- Ideal lowpass filter: H(u,v) = 1 if D(u,v) <= D0, and 0 otherwise, where D(u,v) is the distance from the centre of the (shifted) spectrum and D0 is the cutoff frequency.
- Butterworth lowpass filter (parameter: filter order n): H(u,v) = 1 / [1 + (D(u,v)/D0)^(2n)].
- Gaussian lowpass filter: H(u,v) = exp(-D²(u,v) / (2 D0²)).
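The smoothing relation G(u,v) = H(u,v) F(u,v) can be sketched with a Gaussian lowpass filter; the cutoff d0 and the noisy test image below are assumed values for illustration.

```python
import numpy as np

def gaussian_lowpass(img, d0=20.0):
    """Smooth an image in the frequency domain: G = H * F, with the
    Gaussian transfer function H(u,v) = exp(-D^2 / (2 * D0^2))."""
    M, N = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))      # centred spectrum
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2     # squared distance from centre
    H = np.exp(-D2 / (2.0 * d0 ** 2))
    g = np.fft.ifft2(np.fft.ifftshift(H * F))
    return np.real(g)

# Lowpass filtering a noisy flat image pulls pixels toward the mean:
# the DC term passes unchanged (H = 1 there), high frequencies are cut.
rng = np.random.default_rng(0)
noisy = 100.0 + rng.normal(0.0, 10.0, (64, 64))
smooth = gaussian_lowpass(noisy, d0=8.0)
```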
(OR) 5.(a) Concept of histogram processing 2M Histogram Equalization and specification 4M 5.(a)
Histogram equalization method: generates only one result, an image with an approximately uniform histogram, without any flexibility, so the desired enhancement may not be achieved.
Histogram specification: transforms an image according to a specified gray-level histogram. It involves
- specifying a particular histogram shape p_z(z) capable of highlighting certain gray-level ranges, and
- obtaining the transformation function that maps r to z.
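Histogram equalization can be sketched directly from its definition, using the cumulative distribution of gray levels as the transformation function (a minimal sketch for 8-bit images):

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization: s = (L - 1) * CDF(r)."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size           # cumulative distribution
    return np.round((levels - 1) * cdf[img]).astype(np.uint8)

# A low-contrast image (values squeezed into 100..131) gets spread
# over (nearly) the full 0..255 range.
rng = np.random.default_rng(1)
dark = rng.integers(100, 132, size=(32, 32)).astype(np.uint8)
eq = equalize(dark)
```

This illustrates the point above: the method yields one fixed result (an approximately uniform histogram) with no user control.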
5.(b) Block diagram of homomorphic filtering - 2M
Explanation - 4M

5.(b) Homomorphic filtering separates the illumination and reflectance components of an image by operating on its logarithm. The standard pipeline is:
f(x,y) → ln → DFT → H(u,v) → inverse DFT → exp → g(x,y)
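A minimal sketch of the standard homomorphic pipeline (ln → DFT → H(u,v) → inverse DFT → exp); the Gaussian-shaped high-emphasis filter and the parameters gamma_l, gamma_h and d0 are assumed illustrative choices, not values from the paper.

```python
import numpy as np

def homomorphic(img, d0=30.0, gamma_l=0.5, gamma_h=2.0):
    """Homomorphic filtering: take the log, filter in the frequency
    domain with H that attenuates low frequencies (illumination,
    gain gamma_l < 1) and boosts high frequencies (reflectance,
    gain gamma_h > 1), then exponentiate."""
    M, N = img.shape
    z = np.log1p(np.asarray(img, dtype=np.float64))   # ln(1 + f)
    Z = np.fft.fftshift(np.fft.fft2(z))
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-D2 / (2.0 * d0 ** 2))) + gamma_l
    s = np.real(np.fft.ifft2(np.fft.ifftshift(H * Z)))
    return np.expm1(s)                                # exp(s) - 1

# A perfectly flat image is pure illumination: its DC term is scaled
# by gamma_l, so a constant 255 maps to (1 + 255)**0.5 - 1 = 15.
flat = np.full((16, 16), 255.0)
out = homomorphic(flat)
```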
UNIT-III
6.(a) Estimation of degradation function - 2M
Explanation - 4M

6.(a) Estimation of the degradation function for use in image restoration:
- The parameters of periodic noise are typically estimated by inspecting the image's Fourier spectrum.
- The parameters of noise PDFs may be known partially from sensor specifications, but it is often necessary to estimate them for a particular imaging arrangement, e.g. by capturing a set of images of flat environments.
- When only images already generated by a sensor are available, the parameters can be estimated from small patches of reasonably constant background intensity, e.g. vertical strips of 150 x 20 pixels.
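Estimating noise-PDF parameters from a patch of constant background, as described above, amounts to computing the sample mean and variance; the patch below is simulated for illustration.

```python
import numpy as np

def estimate_noise(patch):
    """Estimate the mean and variance of additive noise from a patch
    of (assumed) constant background intensity."""
    patch = np.asarray(patch, dtype=np.float64)
    return patch.mean(), patch.var()

# Simulated flat 150x20 strip corrupted by Gaussian noise with
# true parameters mu = 100, sigma = 15.
rng = np.random.default_rng(42)
patch = 100.0 + rng.normal(0.0, 15.0, (150, 20))
mu, var = estimate_noise(patch)
```

With 3000 samples the estimates land close to the true parameters, which is why small constant-background strips suffice in practice.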
6.(b) Names of different restoration filters - 2M
Explanation - 4M

6.(b) Spatial filtering is suitable when only additive random noise is present.
Mean filters: arithmetic mean, geometric mean, harmonic mean, contraharmonic mean.
Order-statistic filters: median filter, max and min filters, midpoint filter, alpha-trimmed mean filter.
Adaptive filters: adaptive local noise-reduction filter, adaptive median filter.
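As a sketch of the order-statistic family, a 3x3 median filter (a straightforward, unoptimized implementation) removes impulse (salt-and-pepper) noise while preserving flat regions:

```python
import numpy as np

def median_filter(img, k=3):
    """Median filter: replace each pixel with the median of its k x k
    neighbourhood (edges handled by replicating border pixels)."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    M, N = img.shape
    for i in range(M):
        for j in range(N):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A flat image with a single 'salt' impulse: the median removes the
# outlier because it is never the middle value of any 3x3 window.
img = np.full((7, 7), 50, dtype=np.uint8)
img[3, 3] = 255
clean = median_filter(img)
```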
(OR) 7.(a) General compression models 2M Explanation of encoder and decoder 6M 7.(a). A compression system consists of two distinct structural blocks: an encoder and a decoder. An input image f(x, y) is fed into the encoder, which creates a set of symbols from the input data. After transmission over the channel, the encoded representation is fed to the decoder, where a reconstructed output image f^(x, y) is generated. In general, f^(x, y) may or may not be an exact replica of f(x, y). If it is, the system is error free or information preserving; if not, some level of distortion is present in the reconstructed image. Both the encoder and decoder shown in Fig. 3.1 consist of two relatively independent functions or subblocks. The encoder is made up of a source encoder, which removes input redundancies, and a channel encoder, which increases the noise immunity of the source encoder's output. As would be expected, the decoder includes a channel decoder followed by a source decoder. If the channel between the encoder and decoder is noise free (not prone to error), the channel encoder and decoder are omitted, and the general encoder and decoder become the source encoder and decoder, respectively.
The Source Encoder and Decoder: The source encoder is responsible for reducing or eliminating any coding, interpixel, or psychovisual redundancies in the input image. The specific application and associated fidelity requirements dictate the best encoding approach to use in any given situation. Normally, the approach can be modeled by a series of three independent operations. As Fig. 3.2 (a) shows, each operation is designed to reduce one of the three redundancies. Figure 3.2 (b) depicts the corresponding source decoder. In the first stage of the source encoding process, the mapper transforms the input data into a (usually nonvisual) format designed to reduce interpixel redundancies in the input image. This operation generally is reversible and may or may not reduce directly the amount of data required to represent the image. Run-length coding is an example of a mapping that directly results in data compression in this initial stage of the overall source encoding process. The representation of an image by a set of transform coefficients is an example of the opposite case. Here, the mapper transforms the image into an array of coefficients, making its interpixel redundancies more accessible for compression in later stages of the encoding process. The second stage, or quantizer block in Fig. 3.2 (a), reduces the accuracy of the mapper's output in accordance with some preestablished fidelity criterion. This stage reduces the psychovisual redundancies of the input image. This operation is irreversible. Thus it must be omitted when error-free compression is desired. In the third and final stage of the source encoding process, the symbol coder creates a fixed- or variable-length code to represent the quantizer output and maps the output in accordance with the code. The term symbol coder distinguishes this coding operation from the overall source encoding process. In most cases, a variable-length code is used to represent the
mapped and quantized data set. It assigns the shortest code words to the most frequently occurring output values and thus reduces coding redundancy. The operation, of course, is reversible. Upon completion of the symbol coding step, the input image has been processed to remove each of the three redundancies. Figure 3.2(a) shows the source encoding process as three successive operations, but all three operations are not necessarily included in every compression system. Recall, for example, that the quantizer must be omitted when error-free compression is desired. In addition, some compression techniques normally are modeled by merging blocks that are physically separate in Fig. 3.2(a). In predictive compression systems, for instance, the mapper and quantizer are often represented by a single block, which simultaneously performs both operations. The source decoder shown in Fig. 3.2(b) contains only two components: a symbol decoder and an inverse mapper. These blocks perform, in reverse order, the inverse operations of the source encoder's symbol encoder and mapper blocks. Because quantization results in irreversible information loss, an inverse quantizer block is not included in the general source decoder model shown in Fig. 3.2(b). The Channel Encoder and Decoder: The channel encoder and decoder play an important role in the overall encoding-decoding process when the channel of Fig. 3.1 is noisy or prone to error. They are designed to reduce the impact of channel noise by inserting a controlled form of redundancy into the source-encoded data. As the output of the source encoder contains little redundancy, it would be highly sensitive to transmission noise without the addition of this "controlled redundancy." One of the most useful channel encoding techniques was devised by R. W. Hamming (Hamming [1950]). It is based on appending enough bits to the data being encoded to ensure that some minimum number of bits must change between valid code words.
Hamming showed, for example, that if 3 bits of redundancy are added to a 4-bit word, so that the distance between any two valid code words is 3, all single-bit errors can be detected and corrected. (By appending additional bits of redundancy, multiple-bit errors can be detected and corrected.) The 7-bit Hamming (7, 4) code word h1h2h3h4h5h6h7 associated with a 4-bit binary number b3b2b1b0 carries the data bits in positions h3, h5, h6 and h7, with even-parity check bits in positions h1, h2 and h4.
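A sketch of the Hamming (7, 4) encoder, assuming the usual bit placement (parity bits in positions 1, 2 and 4), which lets us verify the distance-3 property claimed above:

```python
def hamming74_encode(b3, b2, b1, b0):
    """Hamming (7,4) encoder: data bits go to positions h3, h5, h6, h7;
    even-parity check bits h1, h2, h4 are computed over them, so any
    two valid code words differ in at least 3 bit positions."""
    h3, h5, h6, h7 = b3, b2, b1, b0
    h1 = h3 ^ h5 ^ h7   # parity over positions 1, 3, 5, 7
    h2 = h3 ^ h6 ^ h7   # parity over positions 2, 3, 6, 7
    h4 = h5 ^ h6 ^ h7   # parity over positions 4, 5, 6, 7
    return [h1, h2, h3, h4, h5, h6, h7]

def hamming_distance(a, b):
    return sum(x != y for x, y in zip(a, b))

# All 16 valid code words, one per 4-bit input
codes = [hamming74_encode((n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1)
         for n in range(16)]
```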
7.(b) Names of different fidelity criteria - 2M
Explanation - 2M

7.(b) The removal of psychovisually redundant data results in a loss of real or quantitative visual information. Because information of interest may be lost, a repeatable or reproducible means of quantifying the nature and extent of information loss is highly desirable. Two general classes of criteria are used as the basis for such an assessment: A) objective fidelity criteria and B) subjective fidelity criteria. When the level of information loss can be expressed as a function of the original or input image and the compressed and subsequently decompressed output image, it is said to be based on an objective fidelity criterion. A good example is the root-mean-square (rms) error between an input and output image. Let f(x, y) represent an input image and let f^(x, y) denote an estimate or approximation of f(x, y) that results from compressing and subsequently decompressing the input. For any value of x and y, the error e(x, y) between f(x, y) and f^(x, y) can be defined as
e(x, y) = f^(x, y) - f(x, y),
so that for an M x N image the root-mean-square error is
e_rms = [ (1/MN) ΣΣ (f^(x, y) - f(x, y))² ]^(1/2),
and the mean-square signal-to-noise ratio of the output image is
SNR_ms = ΣΣ f^(x, y)² / ΣΣ (f^(x, y) - f(x, y))².
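The objective fidelity criteria can be computed directly; a small numerical check of the rms error between an input image and its decompressed approximation:

```python
import numpy as np

def rms_error(f, f_hat):
    """Objective fidelity: root-mean-square error between the input
    image f and the decompressed approximation f_hat."""
    f = np.asarray(f, dtype=np.float64)
    f_hat = np.asarray(f_hat, dtype=np.float64)
    return np.sqrt(np.mean((f_hat - f) ** 2))

# Tiny 2x2 example: two pixels off by 2, two exact
f = np.array([[10.0, 20.0], [30.0, 40.0]])
f_hat = np.array([[12.0, 18.0], [30.0, 40.0]])
err = rms_error(f, f_hat)
```

A lossless (error-free) system gives an rms error of exactly zero.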
The rms value of the signal-to-noise ratio, denoted SNR_rms, is obtained by taking the square root of the equation above. Although objective fidelity criteria offer a simple and convenient mechanism for evaluating information loss, most decompressed images ultimately are viewed by humans. Consequently, measuring image quality by the subjective evaluations of a human observer often is more appropriate. This can be accomplished by showing a "typical" decompressed image to an appropriate cross section of viewers and averaging their evaluations. The evaluations may be made using an absolute rating scale or by means of side-by-side comparisons of f(x, y) and f^(x, y).

UNIT-IV
8.(a) Names of different discontinuities in images - 2M
Explanation - 4M

8.(a) Detection of Discontinuities - Point Detection:
An isolated point is detected where the magnitude of the response of the mask
-1 -1 -1
-1  8 -1
-1 -1 -1
exceeds a chosen threshold T.
Detection of Discontinuities - Line Detection:
The line-detection masks are

Horizontal        +45 degrees       Vertical
-1 -1 -1          -1 -1  2          -1  2 -1
 2  2  2          -1  2 -1          -1  2 -1
-1 -1 -1           2 -1 -1          -1  2 -1

Detection of Discontinuities - Edge Detection:
8.(b) Sketch gradient of Sobel operator - 3M
Sketch the Laplacian of the image - 3M

8.(b) Sobel operator:

Gx                Gy
-1  0  1          -1 -2 -1
-2  0  2           0  0  0
-1  0  1           1  2  1

Laplacian operator:

 0  1  0
 1 -4  1
 0  1  0
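The Sobel masks can be applied to a 3x3 neighbourhood to get the gradient components at its centre; a vertical step edge illustrates the responses:

```python
import numpy as np

# Standard Sobel masks for the x- and y-gradient components
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
SOBEL_Y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]])

def sobel_response(patch):
    """Gradient components (gx, gy) at the centre of a 3x3 patch."""
    return np.sum(SOBEL_X * patch), np.sum(SOBEL_Y * patch)

# A vertical step edge (dark left, bright right): strong horizontal
# gradient gx, zero vertical gradient gy.
patch = np.array([[0, 0, 100],
                  [0, 0, 100],
                  [0, 0, 100]])
gx, gy = sobel_response(patch)
```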
(OR)
9.(a) Names of different boundary descriptors - 2M
Explanation - 4M

9.(a) The result of segmentation is a set of regions. Regions then have to be represented and described. There are two main ways of representing a region:
- external characteristics (its boundary): focus on shape;
- internal characteristics (its internal pixels): focus on colour and texture.
The next step is description. E.g., a region may be represented by its boundary, and its boundary described by features such as length and regularity. Features should be insensitive to translation, rotation, and scaling. Boundary and regional descriptors are often used together.
In order to represent a boundary, it is useful to compact the raw data (the list of boundary pixels). Chain codes represent a boundary as a list of segments with defined length and direction:
- 4-directional chain codes
- 8-directional chain codes
9.(b) Determination of chain code - 2M
Determination of shape number - 4M

9.(b) Shape numbers
The order n of a shape number is defined as the number of digits in its representation; n is even for a closed boundary, and its value limits the number of possible different shapes.
Chain code: 000332123211
Difference code: 003033113303
The shape number of a boundary is defined as the first difference of smallest magnitude: compute the chain-code first difference, then rotate it circularly so that it forms the minimum integer.
Shape number: 003033113303
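The chain-code difference and shape number above can be verified programmatically; this sketch assumes the first difference is taken circularly, with the wraparound transition placed last (the convention that reproduces the digits above).

```python
def first_difference(chain):
    """First difference of a 4-directional chain code: the number of
    counterclockwise direction changes between successive digits,
    taken circularly, each mod 4."""
    n = len(chain)
    return [(chain[(i + 1) % n] - chain[i]) % 4 for i in range(n)]

def shape_number(chain):
    """Shape number: the circular rotation of the first difference
    that forms the smallest integer."""
    diff = first_difference(chain)
    rotations = [diff[i:] + diff[:i] for i in range(len(diff))]
    return min(rotations)

chain = [0, 0, 0, 3, 3, 2, 1, 2, 3, 2, 1, 1]   # chain code 000332123211
```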