Spatial Enhancement Definition


Spatial Enhancement

Nickolas Faust, The Electro-Optics, Environment, and Materials Laboratory, Georgia Tech Research Institute, Georgia Institute of Technology

Definition

Spectral enhancement relies on changing the gray-scale representation of pixels to give an image with more contrast for interpretation. It applies the same spectral transformation to all pixels with a given gray-scale value in an image. Although it may allow better interpretation of an image by a user, it does not take full advantage of human recognition capabilities. Interpretation of an image includes the use of brightness information and the identification of features in the image. Several examples will demonstrate the value of spatial characteristics in image interpretation.

Spatial enhancement is the mathematical processing of image pixel data to emphasize spatial relationships. This process defines homogeneous regions based on linear edges. Spatial enhancement techniques use the concept of spatial frequency within an image. Spatial frequency is the manner in which gray-scale values change relative to their neighbors within an image. If there is a slowly varying change in gray scale from one side of the image to the other, the image is said to have a low spatial frequency. If pixel values vary radically between adjacent pixels, the image is said to have a high spatial frequency. Figure 1(a-b) shows examples of high and low spatial frequencies.
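As a numerical illustration of these two cases (a sketch, not part of the original text), a smooth horizontal ramp stands in for a low spatial frequency image and a checkerboard for a high spatial frequency one; comparing adjacent-pixel differences makes the distinction concrete:

```python
import numpy as np

# Hypothetical illustration: a smooth ramp has low spatial frequency,
# a checkerboard (values alternating at every pixel) has high spatial
# frequency. Both are 8x8 gray-scale images in the 0-255 range.
ramp = np.tile(np.linspace(0, 255, 8), (8, 1))
checker = 255.0 * (np.indices((8, 8)).sum(axis=0) % 2)

# Mean absolute difference between horizontally adjacent pixels.
ramp_diff = np.abs(np.diff(ramp, axis=1)).mean()        # small: slow variation
checker_diff = np.abs(np.diff(checker, axis=1)).mean()  # large: radical variation
```

The ramp changes by only a few gray levels per pixel, while the checkerboard swings over the full gray-scale range at every pixel.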

Many natural and manmade features in images have high spatial frequency:

- Geologic faults
- Edges of lakes
- Roads
- Airports

Spatial enhancement involves the enhancement of either low or high frequency information within an image. Algorithms that enhance low frequency image information employ a "blurring" filter (commonly called a low-pass filter) that emphasizes the low frequency parts of an image while de-emphasizing the high frequency components. The enhancement of high frequency information within an image is often called edge enhancement. It emphasizes edges in the image while retaining overall image quality.

Objectives or Purposes

There are three main purposes that underlie spatial enhancement techniques:

- To improve interpretability of image data
- To aid in automated feature extraction
- To remove and/or reduce sensor degradation

Methods

The two major methods commonly used in spatial enhancement are:

- Convolution
- Fourier transform

Convolution

Convolution involves passing a moving window over an image and creating a new image in which each pixel is a function of the original pixel values within the moving window and the coefficients of the window as specified by the user. The window, a convolution operator, may be considered as a matrix (or mask) of coefficients that are multiplied by image pixel values to derive a new pixel value for a resultant enhanced image. This matrix may be of any size in pixels and does not necessarily have to be square.

Examples

As an example of the convolution methodology, take a 3 by 3 matrix of coefficients and observe its effect on an example image subset. A set of coefficients used for image smoothing and noise removal is:

1/9 * | 1 1 1 |
      | 1 1 1 |
      | 1 1 1 |

Take the sample image below, which has a low, smoothly varying gray scale except for the bottom right region, which exhibits a sharp brightness change:

3 3 4  4  5  6
2 3 3  4  4  5
1 2 2  3  3  4
1 1 2  4  4  7
1 2 4 20 20 20
2 3 6 20 20 20
2 3 4 20 20 20

We can see the effects of the convolution filter on a pixel-by-pixel basis. Because we do not wish to consider edge effects, we start the overlay of the moving window at the x=2, y=2 pixel of the input image and end at the x=5, y=6 position, so that the window always lies entirely within the image. The first pixel of the output image, p(1,1), would then be

p(1,1) = 1/9 * (3*1 + 3*1 + 4*1 + 2*1 + 3*1 + 3*1 + 1*1 + 2*1 + 2*1) = 23/9 = 2.556

Because the output image, like the input image, normally holds whole-number (integer) values, we truncate each result to an integer, giving p(1,1) = 2. Similarly,

p(1,2) = 1/9 * (3*1 + 4*1 + 4*1 + 3*1 + 3*1 + 4*1 + 2*1 + 2*1 + 3*1) = 28/9 = 3.111

so p(1,2) = 3, and

p(1,3) = 1/9 * (4*1 + 4*1 + 5*1 + 3*1 + 4*1 + 4*1 + 2*1 + 3*1 + 3*1) = 32/9 = 3.556

so p(1,3) = 3. Continued application of the same window (or filter kernel) will result in an output image given by:

2 3  3  4
1 2  3  4
1 4  6  9
2 6 11 15
3 9 14 20
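The smoothing pass above can be reproduced in a few lines. This is a sketch of the procedure (not code from the original document), using integer floor division to match the truncation used in the worked example:

```python
import numpy as np

# Sample image from the text.
image = np.array([
    [3, 3, 4,  4,  5,  6],
    [2, 3, 3,  4,  4,  5],
    [1, 2, 2,  3,  3,  4],
    [1, 1, 2,  4,  4,  7],
    [1, 2, 4, 20, 20, 20],
    [2, 3, 6, 20, 20, 20],
    [2, 3, 4, 20, 20, 20],
])

# Apply the 3x3 averaging kernel at every position where the window
# fits entirely inside the image, truncating the division by 9.
rows, cols = image.shape
out = np.zeros((rows - 2, cols - 2), dtype=int)
for y in range(rows - 2):
    for x in range(cols - 2):
        out[y, x] = image[y:y+3, x:x+3].sum() // 9

print(out)  # reproduces the output table above, e.g. first row [2 3 3 4]
```

Using exact integer arithmetic (sum first, then floor-divide by 9) avoids floating-point round-off changing a borderline value such as 135/9 = 15.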

This should be compared to the original data values for those pixel locations:

3 3  4  4
2 2  3  3
1 2  4  4
2 4 20 20
3 6 20 20

where there is a sharp discontinuity in the image. The moving-window filter, in effect, smoothed out the sharp discontinuity in the original pixel imagery.

A sample edge-detection mask might be given as

| -1 -1 -1 |
| -1  8 -1 |
| -1 -1 -1 |

and a value for p(1,1) would be

p(1,1) = (-1*3) + (-1*3) + (-1*4) + (-1*2) + (8*3) + (-1*3) + (-1*1) + (-1*2) + (-1*2) = 4

The resulting image after application of the mask is:

 4  -1   4  -2
 1  -6  -2 -11
-7 -22 -26 -49
-4 -26  80  45
 0 -28  46   0

Assuming that only positive values are allowed in an image file, all values are offset by the absolute value of the minimum image element (in this case +49).
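The edge-mask computation can be sketched the same way (again illustrative code, not the original implementation):

```python
import numpy as np

# Same sample image as in the smoothing example.
image = np.array([
    [3, 3, 4,  4,  5,  6],
    [2, 3, 3,  4,  4,  5],
    [1, 2, 2,  3,  3,  4],
    [1, 1, 2,  4,  4,  7],
    [1, 2, 4, 20, 20, 20],
    [2, 3, 6, 20, 20, 20],
    [2, 3, 4, 20, 20, 20],
])

# Edge-detection mask: -1 everywhere, 8 at the center.
mask = np.array([
    [-1, -1, -1],
    [-1,  8, -1],
    [-1, -1, -1],
])

# Apply the mask at every valid (fully interior) position.
rows, cols = image.shape
edges = np.zeros((rows - 2, cols - 2), dtype=int)
for y in range(rows - 2):
    for x in range(cols - 2):
        edges[y, x] = (image[y:y+3, x:x+3] * mask).sum()

# Two ways to remove negative values, as discussed in the text:
offset = edges - edges.min()        # shift by |minimum| (here +49)
clipped = np.clip(edges, 0, None)   # or set negatives to 0
```

The `edges` array reproduces the signed values above; `offset` and `clipped` correspond to the two resultant images discussed next.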

The resultant image would then be:

53 48  53 47
50 43  47 38
42 27  23  0
45 23 129 94
49 21  95 49

Values greater than 90 are present in the output image and represent the edge of the bright region in the original image. Alternatively, the negative values could be set to 0, giving an output image of:

4 0  4  0
1 0  0  0
0 0  0  0
0 0 80 45
0 0 46  0

Again, the output images may be compared to the original pixel values:

3 3  4  4
2 2  3  3
1 2  4  4
2 4 20 20
3 6 20 20

One of the most widely used convolution kernels for edge enhancement of images was given by Chavez. The kernel is specified as:

1/9 * | -1 -1 -1 |
      | -1 17 -1 |
      | -1 -1 -1 |

Chavez originally derived the above kernel for enhancement of high frequency information in an ERTS MSS image. For a particular image pixel location and channel number, a low-pass filter may be used to evaluate the average value in a 3 by 3 window. The convolution kernel would be given by:

avg = 1/9 * | 1 1 1 |
            | 1 1 1 |
            | 1 1 1 |

The high frequency (HF) component at any given pixel will then be given by

HF = pixel - avg

Represented in terms of convolution kernels, this is

HF = | 0 0 0 |         | 1 1 1 |
     | 0 1 0 | - 1/9 * | 1 1 1 |
     | 0 0 0 |         | 1 1 1 |

which means that

HF = 1/9 * | -1 -1 -1 |
           | -1  8 -1 |
           | -1 -1 -1 |

By adding the high frequency part, HF, back to the original pixel, a high frequency enhancement will be achieved: New value = pixel value + HF. This may be accomplished by:

New value = | 0 0 0 |         | -1 -1 -1 |
            | 0 1 0 | + 1/9 * | -1  8 -1 |
            | 0 0 0 |         | -1 -1 -1 |

or

New value = 1/9 * | -1 -1 -1 |
                  | -1 17 -1 |
                  | -1 -1 -1 |

Figure 2(a-b) shows two Landsat Thematic Mapper (TM) images of downtown Savannah, Georgia. Figure 2(a) shows the area before enhancement, and Figure 2(b) shows the result of applying the above convolution kernel to the area depicted in the image.

Fourier Transform Theory

Definition

Fourier transforms are used extensively in information theory, signal processing, and image processing. In Fourier transform theory, any one-dimensional function f(x) may be fully represented by some superposition of trigonometric sine and cosine terms. Calculating the Fourier transform involves estimating the coefficients and frequencies of each term necessary for a full representation of the original function. Fourier transform theory assumes that the signal to be transformed is continuous with infinite extent, but images are often discontinuous along a line or column and end unceremoniously at the edges of the image. To handle discrete image pixel data, a discrete version of the Fourier transform was developed; the efficient algorithm for computing it is the fast Fourier transform (FFT) (ref. Cooley, Tukey).

Performing a two-dimensional Fourier transform on an image is equivalent to independently processing each line of image data with a one-dimensional Fourier transform and then processing each column of those line-oriented results through another one-dimensional Fourier transform. This separability is a key factor in the implementation of two-dimensional transforms.

Fourier transforms rely heavily on the theory of complex numbers and are often hard to visualize. Recalling the examples at the beginning of the moving-window convolution section may make the interpretation of two-dimensional Fourier transforms easier. Any image may be represented by a two-dimensional Fourier transform, which may be considered as an image with a real and an imaginary part. The two-dimensional FFT is a mapping of image pixel values into spatial frequency space: by performing a two-dimensional FFT on an image, we create a two-dimensional map of all spatial frequencies within the image. As a result of the FFT, every output pixel has a real and an imaginary number associated with it. The real pixels form an image that may be thought of as the magnitude of the spatial frequencies present in the image, and the imaginary pixels form an image representing the phase of the spatial frequencies. As shown above, the highest spatial frequency that can be present in an image corresponds to every other pixel alternating between black and white values. Therefore, if x and y axes are used to represent spatial frequencies on a plot, the width of the plot will, at most, be the total width of the image divided by 2. A useful way to display the spatial frequencies within an image is a star diagram representation of the magnitude of the complex two-dimensional FFT.
In such a diagram, the lowest frequency component within an image (the average value, or albedo, of the image) is shown at the center of the diagram, and spatial frequencies increase pixel by pixel away from the center. The brightness of the pixel at each x and y position relates to the relative occurrence of that spatial frequency in the original image. Spatial frequencies only exist up to the Nyquist frequency in the x and y directions, so the display is reflected about the center of the diagram. Furthermore, information in the +x and +y direction from the diagram center duplicates information in the -x and -y direction.

Figure 3(a) is the same image shown in Figure 1(b), and Figure 3(b) shows the magnitude of its two-dimensional FFT. Note that the majority of the spatial information (bright values) in the two-dimensional FFT is in the lower frequencies, as expected from the original image.

Figure 3

Figure 4(a) depicts the checkerboard image shown in Figure 1(a), and Figure 4(b) shows the two-dimensional FFT for that image. The magnitude image has high values along two lines crossing at the center of the star diagram. The outside edge of the star diagram also has high values, showing an abundance of high frequency information.
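A star-diagram display of this kind can be produced with a shifted FFT magnitude. The sketch below is illustrative (a 64 by 64 checkerboard stands in for the figure's image); it moves the zero-frequency term to the center as described:

```python
import numpy as np

# High-frequency test image: a 64x64 checkerboard.
checker = 255.0 * (np.indices((64, 64)).sum(axis=0) % 2)

# Forward 2D FFT, then shift the zero-frequency (average brightness)
# component from the corner to the center of the array.
spectrum = np.fft.fftshift(np.fft.fft2(checker))
magnitude = np.abs(spectrum)

# Log-compress the dynamic range for display, as is common practice.
display = np.log1p(magnitude)

# The center pixel now holds the image total; dividing by the pixel
# count recovers the average gray level (the "albedo" term above).
center_value = magnitude[32, 32] / checker.size
```

For a real image the bright values would cluster near the center (low frequencies); for the checkerboard the energy sits at the outer, high-frequency part of the diagram.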

Figure 4

Fourier Transforms and Image Enhancement

A two-dimensional FFT image may be useful in itself for developing an understanding of individual images, but Fourier transform theory lends itself to image enhancement techniques as well. Producing a two-dimensional FFT star diagram is known as running a forward FFT. This process can also be thought of as transforming an image from the normal spatial domain to the frequency domain. The resulting frequency domain image may be transformed back to the spatial domain by performing an inverse two-dimensional FFT. If no changes are made to the complex spatial frequency image, the inverse two-dimensional FFT will reproduce exactly the image we began with. Fourier theory, however, tells us that we may perform certain operations, called convolutions, in the frequency domain that may enhance the image after the inverse two-dimensional FFT. These frequency convolutions are not to be confused with the kernel convolutions discussed above. A convolution in the frequency domain is a simple multiplication of an image mask, which may be arbitrarily designed by a user, by the complex frequency domain image. The resultant frequency domain image is then run through the inverse two-dimensional FFT to yield a transformed image.

This process of convolution in the frequency domain is extremely valuable in the spatial enhancement of image data. We may perform the operations discussed earlier with kernel convolution in a more complete and flexible manner. In addition, some functions can be done by frequency convolution that have not yet been achieved by kernel convolution, such as noise removal from an image and image restoration.

An example of high-pass filtering using the Savannah dataset is shown in Figure 5(a-d). To create a high-frequency enhanced image, the high spatial frequency components of the image are extracted and added back to the original image. This is easily done using frequency convolution. First, a TM image (Figure 5(a)) is transformed into the frequency domain (Figure 5(b)). Next, a mask is developed in the frequency domain that is 0 for all spatial frequencies less than a selected value and 1 for all spatial frequencies greater than that value (Figure 5(c)). Thus, only the high frequency parts of the complex spatial frequency image are retained. When the inverse two-dimensional FFT is performed, the resultant image represents a high-pass filter of the original image (Figure 5(d)). It is simple to define masks to be used in the frequency domain, but one must be careful to know what types of effects to expect in the spatial domain.
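The high-pass procedure just described can be sketched as follows. This is illustrative code with an arbitrary random image and cutoff radius, not the original processing chain:

```python
import numpy as np

# Stand-in for the TM image: random gray levels in [0, 255).
rng = np.random.default_rng(0)
image = 255.0 * rng.random((64, 64))

# Forward 2D FFT with zero frequency shifted to the center.
spectrum = np.fft.fftshift(np.fft.fft2(image))

# Frequency-domain mask: 0 for spatial frequencies below the selected
# cutoff radius, 1 above it (a cutoff of 8 is an arbitrary choice).
y, x = np.indices(image.shape)
mask = (np.hypot(y - 32, x - 32) > 8).astype(float)

# Multiply (the frequency-domain "convolution"), shift back, invert.
high_pass = np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real

# With no mask at all, the inverse FFT returns the original image
# exactly, as stated in the text.
roundtrip = np.fft.ifft2(np.fft.ifftshift(spectrum)).real
```

Because the mask removes the zero-frequency (average brightness) term, the high-pass result has mean gray level near zero; adding it back to the original image gives the high-frequency enhancement described earlier.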

An example of low-pass filtering using Fourier transforms is shown below, in Figure 6(a-d).

Notes

P.S. Chavez, Jr., "Atmospheric, Solar, and MTF Corrections for ERTS Digital Imagery," in Proceedings of the American Society of Photogrammetry Fall Meeting, October 1975, p. 68.