DCT Image Compression for Color Images

Priya Kapoor 1, Sunaina Patyal 2
1 M.Tech Research Scholar, GIMET, Amritsar, e-mail: priyakapoor91290@gmail.com
2 M.Tech Research Scholar, GIMET, Amritsar

Abstract--Image compression attempts to reduce the number of bits required to digitally represent an image while maintaining its perceived visual quality. It is a process that is very widely used for the reliable and efficient transfer of data: it not only reduces the size of the file to be transferred, but at the same time reduces the storage requirements, the cost of the data transferred and the time required for the transfer. It makes the transmission process faster and provides better bandwidth utilization and security against illegitimate use of data. Image compression is of two types, lossless and lossy. In lossless image compression there is no loss of data, so the original multimedia object is retained exactly; lossy compression discards some information to achieve higher compression. The main objective of this research work is to implement 1DCT, 2DCT and true compression in MATLAB using grey-scale images. A comparison among the selected algorithms is also drawn in order to obtain better results.

Keywords--1DCT, 2DCT, True Compression

1. Introduction
In the case of video, compression causes some information to be lost; some information at a detail level is considered not essential for a reasonable reproduction of the scene. This type of compression is called lossy compression. Audio, on the other hand, can be compressed without losing any information; this is called lossless compression.

Types of Compression
The various types of compression are as follows.

Lossless Compression
Lossless techniques compress data without destroying or losing anything during the process. When the original document is decompressed, it is bit-for-bit identical to the original. Lossless is a term applied to image data compression techniques in which none of the original data is lost. It is typically used by the photographic and print media, where high-resolution imagery is required and larger file sizes are not a problem. In lossless compression schemes, the reconstructed image, after compression, is numerically identical to the original image. However, lossless compression can only achieve a modest amount of compression.

Lossy Compression
Lossy is a term applied to data compression techniques in which some amount of the original data is lost during the compression process. Lossy image compression applications attempt to eliminate redundant or unnecessary information in terms of what the human eye can perceive. As the amount of data is reduced in the compressed image, the file size is smaller than the original. Lossy schemes are capable of achieving much higher compression, and under normal viewing conditions no visible loss is perceived (visually lossless). Lossy image data compression is useful for World Wide Web images, allowing quicker transmission across the Internet. An image reconstructed following lossy compression contains degradation relative to the original, often because the compression scheme completely discards redundant information.

Components of a Typical Image Coder
A typical lossy image compression system consists of three closely connected components, namely (a) source encoder, (b) quantizer and (c) entropy encoder.
Compression is accomplished by applying a linear transform to decorrelate the image data, quantizing the resulting transform coefficients and entropy coding the quantized values.

a. Source Encoder (or Linear Transformer)
Over the years, a variety of linear transforms have been developed, including the Discrete Fourier Transform (DFT), the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT) and many more, each with its own advantages and disadvantages.

b. Quantizer
A quantizer simply reduces the number of bits needed to store the transformed coefficients by reducing the precision of those values. Since this is a many-to-one mapping, it is a lossy process and is the main source of compression in an encoder. Quantization can be performed on each individual coefficient, which is known as Scalar Quantization (SQ), or on a group of coefficients together, which is known as Vector Quantization (VQ). Both uniform and non-uniform quantizers can be used depending on the problem at hand.
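As a concrete illustration of the transform-and-quantize idea described above, the following MATLAB sketch applies a 2-D DCT to a single 8x8 block and quantizes the coefficients with a uniform step. This is only a minimal sketch: the block contents, the step size q, and the use of dct2/idct2 from the Image Processing Toolbox are assumptions made here for illustration, not settings taken from this work.

% Minimal sketch of transform + uniform scalar quantization on one 8x8 block.
% Assumes the Image Processing Toolbox (dct2/idct2) is available.
block = double(magic(8));              % hypothetical 8x8 block of pixel values
C     = dct2(block);                   % decorrelating linear transform (2-D DCT)
q     = 16;                            % assumed uniform quantization step size
Cq    = round(C / q);                  % many-to-one mapping: the lossy step
Crec  = Cq * q;                        % dequantization restores the scale, not the precision
rec   = idct2(Crec);                   % reconstructed block after the inverse DCT
mse   = mean((block(:) - rec(:)).^2)   % distortion introduced by the quantizer

A larger value of q produces fewer distinct quantized levels (and hence higher compression) at the cost of a larger reconstruction error.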

c. Entropy Encoder
An entropy encoder further compresses the quantized values losslessly to give better overall compression. It uses a model to accurately determine the probabilities of each quantized value and produces an appropriate code based on these probabilities, so that the resultant output code stream is smaller than the input stream. The most commonly used entropy encoders are the Huffman encoder and the arithmetic encoder, although for applications requiring fast execution, simple run-length encoding (RLE) has proven very effective.

A discrete cosine transform (DCT) expresses a finite sequence of data points in terms of a sum of cosine functions oscillating at different frequencies. DCTs are important to numerous applications in science and engineering, from lossy compression of audio (e.g. MP3) and images (e.g. JPEG), where small high-frequency components can be discarded, to spectral methods for the numerical solution of partial differential equations. The use of cosine rather than sine functions is critical in these applications: for compression, cosine functions are much more efficient (fewer functions are needed to approximate a typical signal), whereas for differential equations the cosines express a particular choice of boundary conditions.

VARIOUS FORMS OF DCT
In this project we explain various forms of the DCT:
a) 1DCT
b) 2DCT
c) True compression

1 LEVEL DCT
The discrete cosine transform (DCT) helps separate the image into parts (or spectral sub-bands) of differing importance with respect to the image's visual quality. The DCT is similar to the discrete Fourier transform: it transforms a signal or image from the spatial domain to the frequency domain.

Fig 1: 1 Level DCT [9]

In this case we first select the input image by clicking the browse button and then apply 1DCT to the input image with a compression factor of 2, 4 or 8 to get the compressed form of the image. With a compression factor of 2 we get a compressed image, with a compression factor of 4 a more compressed image, and with a compression factor of 8 an extremely compressed image; however, the quality of the image obtained with compression factor 2 is better than that obtained with compression factors 4 and 8. This type of compression is obtained by applying the DCT to each row, which gives the compressed form of the image (a MATLAB sketch of this row-wise idea is given at the end of this subsection).

Fig 1.2: Compression and decompression [9] (the image is divided into 8x8 blocks and passed through the DCT, quantiser, zigzag reordering and entropy encoder to the channel or storage; decoding applies the entropy decoder, reverse zigzag, dequantiser and IDCT)

The coefficients are then reordered into a one-dimensional array in a zigzag manner before further entropy encoding. The compression is achieved in two stages: the first during quantisation and the second during the entropy coding process. JPEG decoding is the reverse process of coding. Factoring reduces the problem to a series of 1-D DCTs: apply a 1-D DCT vertically to the columns, then a 1-D DCT horizontally to the resulting vertical DCT, or alternatively horizontal then vertical.
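The paper does not list its 1-level DCT code, so the sketch below is only one plausible reading of the description above, in which a 1-D DCT is applied to each row of a grey-scale image and, for a compression factor CF, only the first 1/CF of each row's coefficients are kept before the inverse transform. The function names dct/idct (Signal Processing Toolbox), the test image and the variable names are assumptions made here for illustration.

% Hypothetical sketch of 1-level DCT with a compression factor CF.
% Assumes the Signal Processing Toolbox (dct/idct) and a grey-scale input image.
img  = im2double(imread('cameraman.tif'));  % example image shipped with MATLAB
CF   = 4;                                   % compression factor (2, 4 or 8 in the paper)
cols = size(img, 2);
keep = floor(cols / CF);                    % number of coefficients retained per row

coeff = dct(img.').';                       % 1-D DCT of each row (dct works column-wise)
coeff(:, keep+1:end) = 0;                   % discard the high-frequency row coefficients
recon = idct(coeff.').';                    % approximate reconstruction

imshowpair(img, recon, 'montage');          % original vs row-wise compressed image

A larger CF zeroes more coefficients, which matches the behaviour reported above: the image is compressed more strongly while the visual quality drops.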

2 LEVEL DCT

Fig 1.3: 2 Level DCT [9]

In this case we apply 2DCT to the image with a compression factor of 2*2, 4*4 or 8*8 to get the compressed form of the image. With a compression factor of 2*2 we get a compressed image, with a compression factor of 4*4 a more compressed image, and with a compression factor of 8*8 an extremely compressed image; however, the quality of the image obtained with compression factor 2*2 is better than that obtained with compression factors 4*4 and 8*8. This type of compression is obtained by applying the 2-D DCT to each row and column, which gives the compressed form of the image. One of the properties of the 2-D DCT is that it is separable, meaning that it can be separated into a pair of 1-D DCTs. To obtain the 2-D DCT of a block, a 1-D DCT is first performed on the rows of the block and then a 1-D DCT is performed on the columns of the resulting block; the same applies to the IDCT. This process is illustrated in Fig 1.3 (a MATLAB sketch of the separability property is given after the next subsection).

TRUE COMPRESSION
In this type of compression we compress the image using two methods, referred to here as compression by 2 color values and by 3 color values. In the first case we remove one color component of the image, and in the second case we remove two color components, and so obtain the compressed form of the image; the image obtained in the second case is the more strongly compressed. True compression is mainly used in medical imaging.
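The separability just described can be checked directly in MATLAB. The sketch below is a minimal illustration, assuming the Signal Processing Toolbox (dct) and the Image Processing Toolbox (dct2) are available; it is not code from the paper.

% Minimal check that the 2-D DCT factors into two passes of 1-D DCTs.
B = rand(8);                                  % hypothetical 8x8 image block

rowPass = dct(B.').';                         % 1-D DCT applied to each row (dct works on columns)
twoPass = dct(rowPass);                       % 1-D DCT applied to each column of the result

direct  = dct2(B);                            % built-in 2-D DCT for comparison
maxDiff = max(abs(twoPass(:) - direct(:)))    % differs only at floating-point precision

The same factorization applies to the inverse transform, which is why a 2-D block codec can be built entirely from 1-D DCT passes.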
2. Related work
The authors design a lossy image compression algorithm dedicated to color still images. After a preprocessing step (mean removal and RGB to YCbCr transformation), the DCT transform is applied, followed by an iterative phase (using the bisection method) including thresholding, quantization, dequantization, the inverse DCT, YCbCr to RGB transformation and mean recovery. This is done in order to guarantee that a desired quality, fixed in advance using the well-known PSNR metric, is reached. To obtain the best possible compression ratio CR, the next step is the application of a proposed adaptive scanning, providing for each (n, n) DCT block a corresponding (n x n) vector containing the maximum possible run of zeros at its end. The efficiency of the proposed scheme is demonstrated by the results, especially when the method used is block truncation with a filtering principle.

As society has become increasingly reliant upon digital images to communicate visual information, a number of forensic techniques have been developed to verify the authenticity of digital images. Amongst the most successful of these are techniques that make use of an image's compression history and its associated compression fingerprints. Little consideration has been given, however, to anti-forensic techniques capable of fooling forensic algorithms. The authors present a set of anti-forensic techniques designed to remove forensically significant indicators of compression from an image. They do this by first developing a generalized framework for the design of anti-forensic techniques to remove compression fingerprints from an image's transform coefficients. This framework operates by estimating the distribution of an image's transform coefficients before compression, then adding anti-forensic dither to the transform coefficients of a compressed image so that their distribution matches the estimated one. The authors then use this framework to develop anti-forensic techniques specifically targeted at erasing compression fingerprints left by both JPEG and wavelet-based coders. Additionally, they propose a technique to remove statistical traces of the blocking artifacts left by image compression algorithms that divide an image into segments during processing. Through a series of experiments, they demonstrate that anti-forensic techniques are capable of removing forensically detectable traces of image compression without significantly impacting an image's visual quality.

A novel reduced-reference (RR) image quality assessment (IQA) method is proposed based on statistical modeling of the discrete cosine transform (DCT) coefficient distributions. In order to reduce the RR data rate and further exploit the identical nature of the coefficient distributions between adjacent DCT subbands, the DCT coefficients are reorganized into a three-level coefficient tree. Subsequently, a generalized Gaussian density (GGD) is employed to model the coefficient distribution of each reorganized DCT subband.

Image compression is a method through which we can reduce the storage space of images and videos, which helps to improve the performance of storage and transmission. In image compression we concentrate not only on reducing the size but also on doing so without losing the quality and information of the image. The authors describe two image compression techniques: the first is based on the Discrete Cosine Transform (DCT) and the second on the Discrete Wavelet Transform (DWT). The simulation results are shown, and different quality parameters are compared by applying both techniques to various images.

Image compression is now essential for applications such as transmission and storage in databases. The authors review and discuss image compression, the need for compression, its principles, the classes of compression and various image compression algorithms, and attempt to give a recipe for selecting one of the popular image compression algorithms based on wavelet, JPEG/DCT, VQ and fractal approaches.

A DCT-based image watermarking technique is proposed. To improve the robustness of the watermark against JPEG compression, the most recently proposed techniques embed the watermark into the low-frequency components of the image. However, these components hold significant information of the image, and directly replacing the low-frequency components with the watermark may introduce undesirable degradation of image quality. To preserve acceptable visual quality for watermarked images, the authors propose a watermarking technique that adjusts the DCT low-frequency coefficients using the concept of the mathematical remainder. Simulation results demonstrate that the embedded watermarks can be almost fully extracted from JPEG-compressed images even at very high compression ratios.

3. Problem Formulation
A. Problems in existing work
The DCT method is a type of transform method. Rather than simply trying to compress the pixel values directly, the image is first transformed into the frequency domain. Compression can then be achieved by more coarsely quantizing the large number of high-frequency components usually present. The JPEG standard algorithm for full-color and grey-scale image compression is a DCT-based compression standard that uses 8x8 blocks. It was not designed for graphics or line drawings and is not suited to these image types. The discrete cosine method uses continuous cosine waves, such as cos(x), of increasing frequencies to represent the image pixels.

B. Experimental set-up
In order to implement the proposed algorithm, the design and implementation are done in MATLAB using the Image Processing Toolbox. For cross-validation, the proposed algorithm is compared with the existing standard median filter and relaxed median filter. Table 1 shows the various images used in this research work, along with their format and size. All the images are of different kinds, and the filtering evaluation is different for each image.

Table 1 Test images
Sequence No   Name    Format   Size
1             PIC 1   .JPG     73 KB
2             PIC 2   .JPG     128 KB
3             PIC 3   .JPG     147 KB
4             PIC 4   .JPG     117 KB

4. Experimental Setup and Proposed Algorithm
A. Proposed Algorithm
This section presents the steps required to implement the proposed algorithm; an illustrative MATLAB sketch of these steps is given after the decoding steps below.
Encoding system:
Step 1. The image is broken into N*N blocks of pixels. Here N may be 4, 8, 16, etc.
Step 2. Working from left to right, top to bottom, the DCT is applied to each block.
Step 3. Each block's elements are compressed through quantization, i.e. divided by some specific value.
Step 4. The array of compressed blocks that constitute the image is stored in a drastically reduced amount of space.
Decoding system:
Step 1. Load the compressed image from disk.
Step 2. The image is broken into N*N blocks of pixels.
Step 3. Each block is de-quantized by applying the reverse process of quantization.
Step 4. The inverse DCT is applied to each block, and the blocks are combined back into an image.
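The paper does not reproduce its MATLAB source, so the following block-wise sketch is only an illustrative reading of the encoding and decoding steps above. It assumes 8x8 blocks, a single uniform quantization divisor Q, a grey-scale test image and the Image Processing Toolbox functions blockproc, dct2 and idct2; the function and variable names are choices made here, not the authors'.

% Illustrative sketch of the listed encoding/decoding steps (not the authors' code).
N   = 8;                                   % block size (4, 8, 16, ... per Step 1)
Q   = 16;                                  % assumed quantization divisor (Step 3)
img = im2double(imread('cameraman.tif'));  % example grey-scale image shipped with MATLAB

% Encoding: per-block DCT followed by quantization (encoding Steps 1-4).
encodeFcn = @(blk) round(dct2(blk.data) / Q);
coded = blockproc(img, [N N], encodeFcn);

% Decoding: de-quantize each block, apply the inverse DCT and reassemble (decoding Steps 1-4).
decodeFcn = @(blk) idct2(blk.data * Q);
recon = blockproc(coded, [N N], decodeFcn);

imshowpair(img, recon, 'montage');         % compare the original and the reconstruction

In a full codec the quantized blocks would additionally be entropy coded before storage, as described in the introduction; the sketch stops at the quantized representation to keep the listed steps visible.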

5. Experimental results
Figure 2 shows the input image that is passed to the simulation.

Figure 2 Input image

Figure 3 shows image compression using 1DCT, where 1DCT is applied to the input image with compression factors of 2, 4 and 8 to get the compressed forms of the image.

Figure 3 Image Compression Using 1DCT

Figure 4 shows image compression using 2DCT.

Figure 4 Image Compression Using 2DCT

Figure 5 shows true compression, applied to the input image using 2 color values and 3 color values. This type of compression is used in medical imaging.

Figure 5 True compression

6. Performance evaluation
Table 2 and Figure 6 show the comparative analysis of image compression using 1DCT.

Table 2 Image Compression Using 1DCT
Serial No   Size of image   1DCT CF 2   1DCT CF 4   1DCT CF 8
1           73 KB           36 KB       31 KB       26 KB
2           128 KB          26 KB       25 KB       22 KB
3           147 KB          12 KB       10 KB       9 KB
4           117 KB          93 KB       78 KB       65 KB

Figure 6 Image Compression Using 1DCT (bar graph of the original size and the compressed sizes at CF 2, 4 and 8 for PIC 1 to PIC 4, in KB)

Table 3 and Figure 7 show the comparative analysis of image compression using 2DCT.

Table 3 Image Compression Using 2DCT
Serial No   Size of image   2DCT CF 2*2   2DCT CF 4*4   2DCT CF 8*8
1           73 KB           33 KB         23 KB         19 KB
2           128 KB          26 KB         24 KB         18 KB
3           147 KB          11 KB         8 KB          6 KB
4           117 KB          83 KB         61 KB         54 KB

Figure 7 Image Compression Using 2DCT (bar graph of the original size and the compressed sizes at CF 2*2, 4*4 and 8*8 for PIC 1 to PIC 4, in KB)
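The tables report absolute sizes only; the implied compression ratios can be read off directly. The short MATLAB sketch below computes them for the highest compression-factor columns of Tables 2 and 3 (the variable names are chosen here for illustration; the numbers are the reported table values).

% Compression ratios (original size / compressed size) from Tables 2 and 3, sizes in KB.
orig      = [73 128 147 117];          % original sizes for PIC 1..PIC 4
oneDctCF8 = [26  22   9  65];          % 1DCT, compression factor 8 (Table 2)
twoDctCF8 = [19  18   6  54];          % 2DCT, compression factor 8*8 (Table 3)

ratio1D = orig ./ oneDctCF8            % e.g. 73/26 is roughly 2.8 for PIC 1
ratio2D = orig ./ twoDctCF8            % e.g. 73/19 is roughly 3.8 for PIC 1

For every test image the 2DCT column gives the higher ratio, which is consistent with the comparison drawn in the conclusion.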

Table 4 and Figure 8 show the comparative analysis of image compression using true compression.

Table 4 True compression
Serial No   Size of image   2 Color Values   3 Color Values
1           73 KB           12 KB            11 KB
2           128 KB          10 KB            8 KB
3           147 KB          4.9 KB           4.5 KB
4           117 KB          26 KB            24 KB

Figure 8 Image Compression Using True Compression (bar graph of the original size and the compressed sizes for 2 and 3 color values for PIC 1 to PIC 4, in KB)

7. Conclusion and Future work
This project work has focused on image compression techniques. The survey has shown that DCT-based techniques are quite effective in terms of compression ratio. DCT-based techniques can preserve the maximum amount of information while doing the compression, so the resultant images are not much blurred and do not lose their information after compression. Different DCT-based compression algorithms are designed and implemented in MATLAB using the Image Processing Toolbox, with a user-friendly GUI. Different compression factors are selected for comparison purposes. It has been shown that 2DCT with a 4*4 compression factor provides considerably better results than the others. However, if the resultant images need to be converted to grey scale, as in medical applications, then true compression is quite effective. In the near future we will extend this work by integrating 2DCT-based compression with run-length coding to reduce the size of digital images in an efficient manner. In the work discussed here the compression levels are selected manually; in the near future we will also try to find a way in which the compression factor can be selected automatically.

References
[1] S. Esakkirajan, T. Veerakumar, Adabala N. Subramanyam, and C. H. PremChand, "Removal of High Density Salt and Pepper Noise Through Modified Decision Based Unsymmetric Trimmed Median Filter," IEEE Signal Processing Letters, vol. 18, no. 5, 2011.
[2] Priyanka Kamboj and Versha Rani, "Image Enhancement Using Hybrid Filtering Techniques," International Journal of Science and Research, vol. 2, no. 6, June 2013.
[3] Shanmugavadivu and Eliahim Jeevaraj, "Laplace Equation based Adaptive Median Filter for Highly Corrupted Images," International Conference on Computer Communication and Informatics, 2012.
[4] P. Shanmugavadivu and P. S. Eliahim Jeevaraj, "Fixed Value Impulse Noise Suppression for Images using PDE based Adaptive Two-Stage Median Filter," ICCCET-11 (IEEE Xplore), pp. 290-295, 2011.
[5] K. S. Srinivasan and D. Ebenezer, "A new fast and efficient decision based algorithm for removal of high density impulse noise," IEEE Signal Processing Letters, vol. 14, no. 3, pp. 189-192, 2007.
[6] V. Jayaraj and D. Ebenezer, "A new switching-based median filtering scheme and algorithm for removal of high-density salt and pepper noise in images," EURASIP Journal on Advances in Signal Processing, 2010.
[7] K. Aiswarya, V. Jayaraj, and D. Ebenezer, "A new and efficient algorithm for the removal of high density salt and pepper noise in images and videos," Second International Conference on Computer Modeling and Simulation, pp. 409-413, 2010.
[8] Gnanambal Ilango and R. Marudhachalam, "New hybrid filtering techniques for removal of Gaussian noise from medical images," ARPN Journal of Engineering and Applied Sciences, 2011.
[9] P. E. Ng and K. K. Ma, "A switching median filter with boundary discriminative noise detection for extremely corrupted images," IEEE Transactions on Image Processing, vol. 15, no. 6, pp. 1506-1516, 2006.
[10] Afrose, "Relaxed Median Filter: A Better Noise Removal Filter for Compound Images," International Journal on Computer Science and Engineering (IJCSE), vol. 4, no. 07, 2011.
[11] Rafael C. Gonzalez et al., Digital Image Processing using MATLAB, 2nd ed., Pearson Education, India, 2005.
[12] Median Filtering, www.mathworks.com [Online; last visited 18 June 2013].
[13] R. H. Chan, "Salt-and-pepper noise removal by median-type noise detectors and detail-preserving regularization," IEEE Transactions on Image Processing, vol. 14, no. 10, pp. 1479-1485, 2005.
[14] H. Hwang and R. A. Haddad, "Adaptive median filters: New algorithms and results," IEEE Transactions on Image Processing, vol. 4, no. 4, pp. 499-502, Apr. 1995.
[15] J. Astola and P. Kuosmanen, Fundamentals of Nonlinear Digital Filtering, Boca Raton, FL: CRC Press, 1997.