
CHAPTER 6

Huffman Coding Based Image Compression Using Complex Wavelet Transform

                                                             Page No.
6.1 Introduction                                                 103
6.2 Compression Techniques                                       104
    6.2.1 Lossless compression                                   105
    6.2.2 Lossy compression                                      105
    6.2.3 Entropy                                                105
6.3 Wavelet Transform based compression technique                106
    6.3.1 Dual Tree Complex Wavelet Transform                    107
    6.3.2 Thresholding in Image Compression                      107
6.4 Introduction to Huffman coding                               108
    6.4.1 Huffman Coding                                         108
    6.4.2 Huffman Decoding                                       110
6.5 DT-CWT Based Compression Algorithm                           111
6.6 Results and Discussion                                       112
    6.6.1 Performance Metrics                                    112
6.7 Summary                                                      120

CHAPTER 6

HUFFMAN CODING BASED IMAGE COMPRESSION USING COMPLEX WAVELET TRANSFORM

6.1 INTRODUCTION

Image compression is one of the most significant applications of the wavelet transform. Capon was among the first to compress natural images, using run-length encoding [97]. The basic methodology of compression has not changed since, but much higher compression ratios can now be attained. In this chapter the investigator discusses the need for compression in section 6.1, the types of compression techniques in section 6.2, and wavelet transform based compression in section 6.3. Huffman coding is explained in section 6.4, and the proposed complex wavelet transform based image compression algorithm using Huffman coding is presented, with results and discussion, in sections 6.5 and 6.6 respectively.

The goal of a compression algorithm is to eliminate redundancy in the data: the algorithm determines which data must be kept to recreate the original image and which data can be removed [98]. By eliminating this redundant (duplicate) information, the image is compressed. The three types of redundancy are coding redundancy, inter-pixel redundancy and psycho-visual redundancy. Coding redundancy is reduced by choosing the best possible code words; correlation among neighbouring pixels gives rise to inter-pixel redundancy; and visually unimportant information leads to psycho-visual redundancy. The purpose of a compression system is to shrink the number of bits as far as possible while keeping the visual quality of the reconstructed image as close

to the original image. Fig 6.1 shows the basic block diagram of a general image compression system, which consists of an encoder block and a decoder block.

[Fig 6.1 General image compression system. Encoder: original image f(x, y) → mapper → quantizer → symbol coder → compressed data for storage and transmission. Decoder: compressed data → symbol decoder → inverse mapper → reconstructed image f'(x, y).]

In fig 6.1, the encoder block reduces the different redundancies of the input image. In the first stage, the mapper converts the input image into another representation designed to eliminate inter-pixel redundancy. In the second stage, a quantizer reduces the precision of the first-stage output in accordance with a predefined fidelity criterion. In the final stage, a symbol encoder assigns a code word to each quantized output value. In the decoder block, a symbol decoder and an inverse mapper perform the inverse operations of the encoder. An inverse quantizer is not included, since quantization is irreversible [1].

6.2 COMPRESSION TECHNIQUES

Depending on whether an exact copy of the original image can be reconstructed, compression techniques are classified into lossless and lossy techniques.

6.2.1 Lossless compression

In a lossless technique, the original image can be perfectly recovered from the compressed (encoded) image; every bit of information matters. Lossless image compression is used in applications where no loss of data can be tolerated, such as medical imaging.

6.2.2 Lossy compression

Higher compression ratios are possible with lossy techniques, which discard further redundancy. Lossy techniques are widely used in applications such as internet browsing and video processing, where a perfect restoration of the image is not essential [99].

6.2.3 Entropy

To compare various compression methods, it is essential to know the entropy of the image. A low-frequency, highly correlated image has low entropy and can be compressed substantially by any compression technique. A compression algorithm designed for a particular application may not perform well for other applications. The entropy H can be calculated as

    H = − Σ_{k=0}^{G−1} P(k) log₂[P(k)]                      (6.1)

where G is the number of gray levels and P(k) is the probability of gray level k. P(k) is obtained from the frequency h(k) of gray level k in an image of size M × N as

    P(k) = h(k) / (M·N)                                      (6.2)

The investigator concentrates on wavelet based transform coding image compression algorithms, in which transform coefficients that contribute little information are eliminated. In the Discrete

Cosine Transform (DCT) based compression method, the input image is split into 8×8 blocks and each block is transformed separately. This method introduces blocking artifacts, and higher compression ratios are not achieved. The wavelet transform, in contrast, produces no such artifacts and can be applied to the entire image rather than to blocks.

6.3 WAVELET TRANSFORM BASED COMPRESSION TECHNIQUE

In the DWT, the most significant information corresponds to coefficients of high amplitude and the less significant information to coefficients of low amplitude, so compression can be accomplished by neglecting the least significant coefficients. Wavelet transforms make it possible to attain high compression ratios together with high reconstruction quality. Wavelets are also used in mobile applications, denoising, edge detection, speech recognition, feature extraction, real-time audio-video applications, biomedical imaging and orthogonal frequency division multiplexing. The popularity of the wavelet transform keeps growing because of its ability to reduce distortion in the reconstructed signal.

[Fig 6.2a: Transform based compression procedure — original image → forward transform → encode transform values → compressed image. Fig 6.2b: Transform based decompression procedure — compressed image → decode transform values → inverse transform → reconstructed image.]

Fig 6.2 shows the general transform based image compression scheme, with the forward and inverse transforms and the encoding and decoding processes. The DWT and IDWT are the appropriate transforms used in wavelet

transform based image compression [76]; the Discrete Cosine Transform (DCT) is used in the JPEG algorithm.

6.3.1 Dual Tree Complex Wavelet Transform (DT-CWT)

Complex wavelets were long not widely used in image processing owing to the difficulty of designing complex wavelet filters. Dr. Nick Kingsbury of Cambridge University proposed the dual-tree arrangement of the CWT to overcome the limitations of the standard DWT. The DT-CWT employs two trees of real filters to produce the real and imaginary parts of the wavelet coefficients separately, which makes the practical use of complex wavelets in image processing feasible [19][20][38][100]. The DT-CWT can be used in a variety of applications such as image compression, denoising, enhancement, inpainting and image restoration. It is a form of DWT that produces complex coefficients by means of two sets of wavelet filters yielding the real and imaginary parts. The motivation for using complex wavelets is that they offer a high degree of shift invariance and good directionality. Applied to the rows and columns of the image, the DT-CWT yields six complex high-pass (detail) sub-images and two low-pass sub-images at each level; the subsequent stage iterates only on the low-pass sub-image. In the 2-D DWT, the three high-pass sub-bands are oriented at 0°, 45° and 90°, whereas the DT-CWT has six high-pass sub-bands at each level, oriented at ±15°, ±45° and ±75°.

6.3.2 Thresholding in Image Compression

For some kinds of signals the wavelet coefficients are close to zero. Thresholding can modify these coefficients to generate more zeros; with hard thresholding, many coefficients are set exactly to zero. Actual compression of the signal is not

achieved by wavelet analysis and thresholding alone. Standard entropy coding methods such as Huffman coding then compress the data by assigning short code words to frequently occurring symbols and longer code words to the remaining symbols. With proper encoding, the signal requires less memory for storage and less time for transmission. A good threshold value balances energy preservation against the number of zeros: a higher threshold loses more energy but achieves higher compression. Thresholding can be applied globally or locally; a global threshold applies the same value to every subband, whereas local thresholding uses a different threshold value for each subband [68].

6.4 INTRODUCTION TO HUFFMAN CODING

Huffman codes, developed by D. A. Huffman in 1952, are optimal codes that map one symbol to one code word. Huffman coding is one of the most popular techniques for eliminating coding redundancy. It is a variable-length coding scheme used in lossless compression: more frequently used symbols receive shorter codes and less frequently used symbols receive longer codes [101]. The variable-length coding table is constructed from the probability of occurrence of each source symbol. The outcome is a prefix code that expresses the most common source symbols with the shortest sequences of bits; the more probable a symbol, the shorter its bit representation.

6.4.1 Huffman Coding

Huffman coding generates the smallest possible number of code symbols per source symbol. The source symbols are typically either the intensities of an image or the output of an intensity mapping operation. The first step of the Huffman procedure is a chain of source reductions: the probabilities are arranged in descending order and

the two least probable symbols are merged into a single compound symbol that replaces them in the next source reduction. This procedure is repeated until only two probabilities remain [1]. The process is illustrated with an example in Table 6.1a, where the symbols and their probabilities are listed in descending order. A compound symbol with probability 0.1 is obtained by merging the two lowest probabilities, 0.06 and 0.04, and is placed in source reduction column 1. The probabilities are again arranged in descending order and the procedure continues until only two probabilities are left; these, shown at the far right of Table 6.1a, are 0.6 and 0.4.
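This source-reduction procedure can be sketched in Python with a priority queue that repeatedly merges the two least probable symbols. The sketch is a hypothetical illustration, not the investigator's implementation; the symbol names follow the decoding example of section 6.4.2, and tie-breaking may yield code words that differ from Table 6.1b while keeping the same optimal average length.

```python
import heapq

def huffman_codes(probs):
    """Huffman's procedure: repeatedly merge the two least probable
    entries, prepending one bit to every code word in each branch."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    counter = len(heap)                      # unique tie-breaker
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, counter, merged))
        counter += 1
    return heap[0][2]

# Probabilities of the example in Tables 6.1a/6.1b
probs = {"a2": 0.4, "a6": 0.3, "a1": 0.1, "a4": 0.1, "a3": 0.06, "a5": 0.04}
codes = huffman_codes(probs)
avg_length = sum(probs[s] * len(codes[s]) for s in probs)
print(round(avg_length, 2))  # → 2.2 (bits per symbol, as computed in the text)
```

Any Huffman tree for these probabilities assigns a 1-bit code to the 0.4 symbol and averages 2.2 bits per symbol, even though individual code words depend on how ties are broken.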

In the second step, the Huffman procedure prepares a code for each reduced source, starting with the smallest source and working back to the original source. As shown in Table 6.1b, the code symbols 0 and 1 are assigned to the two symbols with probabilities 0.6 and 0.4 on the right. Since the symbol with probability 0.6 was generated by merging two symbols of probability 0.3 in the reduced source, the 0 used to code it is appended with a 0 and a 1 to distinguish those two symbols from each other. This procedure is repeated for each reduced source until the original source is reached [99]. The final code appears at the far left of Table 6.1b. The average code length is the sum, over all symbols, of the symbol probability times the number of bits used to represent that symbol:

    L_average = (0.4)(1) + (0.3)(2) + (0.1)(3) + (0.1)(4) + (0.06)(5) + (0.04)(5)
              = 2.2 bits/symbol.

The source entropy is

    H = − Σ_k P(k) log₂[P(k)] = 2.14 bits/symbol,

so the efficiency of the Huffman code is 2.14/2.2 = 0.973.

6.4.2 Huffman Decoding

Huffman coding produces an optimal code for a set of symbols. Because the result is a prefix code, error-free decoding is achieved with a simple look-up-table approach. A sequence of Huffman encoded symbols is deciphered by inspecting the bits of the sequence from left to right. Using the binary code of Table 6.1b, a left-to-right inspection of the encoded string 1010100111100 reveals the decoded message a2 a3 a1 a2 a2 a6. In this way the compressed image data can be decompressed.
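The left-to-right decoding walk and the entropy figures above can be verified with a short sketch. The code-word table below is an assumption: Table 6.1b is not reproduced in this transcription, so the assignments are simply one prefix code consistent with the decoded message given in the text.

```python
import math

def huffman_decode(bits, codes):
    """Decode left-to-right: because the code is a prefix code, the first
    accumulated bit pattern that matches a code word is the next symbol."""
    inverse = {w: s for s, w in codes.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inverse:
            out.append(inverse[buf])
            buf = ""
    if buf:
        raise ValueError("trailing bits do not form a code word")
    return out

# Assumed code table, consistent with the decoding example in the text
codes = {"a1": "011", "a2": "1", "a3": "01010",
         "a4": "0100", "a5": "01011", "a6": "00"}
print("".join(huffman_decode("1010100111100", codes)))  # → a2a3a1a2a2a6

# Entropy and efficiency quoted in section 6.4.1
probs = [0.4, 0.3, 0.1, 0.1, 0.06, 0.04]
H = -sum(p * math.log2(p) for p in probs)
print(round(H, 2))                   # → 2.14 (bits/symbol)
print(round(round(H, 2) / 2.2, 3))   # → 0.973
```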

6.5 DT-CWT BASED COMPRESSION ALGORITHM

The investigator proposes a Huffman coding based 2D-DT-CWT image compression algorithm. The basic procedure for implementing the proposed algorithm is as follows.

o The input image to be compressed is read.
o The input image is decomposed into wavelet coefficients w using the DT-CWT.
o The detail wavelet coefficients w are modified by thresholding.
o Huffman encoding is applied to compress the data.
o The ratio of original data size to compressed data size (compression ratio) is calculated.
o Huffman decoding and the inverse DT-CWT are applied to reconstruct the decompressed image.
o MSE and PSNR are computed to assess the quality of the decompressed image.

Fig 6.3 illustrates the compression algorithm using the DT-CWT along with Huffman coding.

[Fig 6.3 Block diagram of the compression algorithm using DT-CWT: original image → dual tree CWT → thresholding → Huffman encoding; Huffman decoding → inverse dual tree CWT → reconstructed image; PSNR is evaluated between the original and reconstructed images.]
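The transform-threshold-reconstruct loop of these steps can be sketched on a 1-D signal. The sketch below substitutes a one-level Haar DWT for the DT-CWT (the DT-CWT requires two filter trees and complex coefficients, which would obscure the idea) and omits the Huffman stage, so it only illustrates where the loss introduced by thresholding enters the pipeline.

```python
def haar_1level(x):
    """One-level Haar DWT: pairwise averages (approximation) and
    pairwise half-differences (detail). A stand-in for the DT-CWT."""
    approx = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return approx, detail

def ihaar_1level(approx, detail):
    """Inverse of haar_1level."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

def threshold_details(detail, t):
    """Hard thresholding of the detail coefficients (section 6.3.2)."""
    return [d if abs(d) > t else 0 for d in detail]

signal = [10, 12, 9, 9, 40, 8, 7, 7]
approx, detail = haar_1level(signal)
detail = threshold_details(detail, t=2)
print(ihaar_1level(approx, detail))
# → [11.0, 11.0, 9.0, 9.0, 40.0, 8.0, 7.0, 7.0]: small fluctuations are
#   smoothed away while the large edge at 40 survives.
```

The zeroed detail coefficients are exactly what makes the subsequent Huffman stage effective: long runs of zeros become a single very short code word.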

6.6 RESULTS AND DISCUSSION

As shown in fig 6.3, the input image is decomposed into wavelet coefficients using the complex wavelet transform, the coefficients are thresholded, and the thresholded coefficients are coded by Huffman encoding. To recover the decompressed image, the decompression algorithm passes the compressed data through Huffman decoding and then converts the wavelet coefficients back to the spatial domain using the inverse dual tree complex wavelet transform.

6.6.1 Performance Metrics

Mean square error (MSE), root mean square error (RMSE), peak signal to noise ratio (PSNR), compression ratio (CR) and bits per pixel (BPP) are the metrics used to evaluate the performance of image processing methods. MSE and RMSE are calculated from the cumulative squared error between the compressed and the original image, and PSNR measures the peak error. The compression ratio is the ratio of the original file size to the compressed file size, and BPP is the size of the compressed file in bits divided by the number of pixels in the original image. The formulae are

    RMSE = sqrt( (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} (X(i, j) − Y(i, j))² )   (6.3)

where M and N are the width and height of the image and X and Y are the original and processed images respectively,

    PSNR = 10 log₁₀ (255² / MSE)                                           (6.4)

    Compression ratio (CR) = original image size / compressed image size   (6.5)

    BPP = compressed image size in bits / number of pixels in the image    (6.6)

(for an image stored at b bits per pixel, this gives BPP = b/CR, which matches the tabulated values). In general a large PSNR is good: it means the signal information dominates the noise (error). A lower MSE translates to a higher PSNR, so a good compression method combines a high PSNR (low MSE) with a

good compression ratio. The same method may not perform equally well for all types of images. The proposed algorithm is tested on different images: a grayscale cameraman image, a colour Lena image and a medical image. For the cameraman image of size 256 × 256, parameters such as original image size, compressed image size, CR, BPP, PSNR and RMS error are tabulated for various threshold values. Tables 6.2a and 6.2b show the parameter values obtained with the proposed and existing methods.

Table 6.2a: Parameter values at different thresholds for the grayscale cameraman image using the proposed method.

Parameter                  TH=6    TH=10   TH=20   TH=30   TH=40   TH=50   TH=60
Original file size         65240   65240   65240   65240   65240   65240   65240
Compressed file size       10783   10114   9320    8125    8505    8310    8165
Compression ratio (CR)     6.05    6.45    7       7.43    7.67    7.85    7.99
Bits per pixel (BPP)       1.32    1.24    1.14    1.07    1.04    1.01    1
PSNR (dB)                  40.32   36.25   31.15   28.99   27.43   26.23   25.28
RMS error                  2.43    3.9     6.92    8.93    10.74   12.34   13.74

Table 6.2b: Parameter values at different thresholds for the grayscale cameraman image using the existing method (EZW and Huffman encoding [62]).

Parameter                  TH=6    TH=10   TH=30   TH=60
Original file size         65240   65240   65240   65240
Compressed file size       11186   10870   9944    8437
Compression ratio (CR)     5.83    6.00    6.56    7.73
Bits per pixel (BPP)       2.52    1.48    0.74    0.33
PSNR (dB)                  33.36   33.37   33.16   32.22
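The PSNR formula and the CR/BPP relationship can be checked with a short sketch. It is illustrative only: the tiny image values are made up, and the CR/BPP check uses the TH=6 column of Table 6.2a together with the 8 bits per pixel of a grayscale image.

```python
import math

def rmse(x, y):
    """RMSE of Eq. (6.3) for two equal-size images given as nested lists."""
    m, n = len(x), len(x[0])
    se = sum((x[i][j] - y[i][j]) ** 2 for i in range(m) for j in range(n))
    return math.sqrt(se / (m * n))

def psnr(mse, max_val=255):
    """PSNR of Eq. (6.4); larger is better, infinite for a perfect copy."""
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

# Tiny 2x2 example: every pixel off by 1 gives MSE = 1 and RMSE = 1.
x = [[10, 20], [30, 40]]
y = [[11, 19], [31, 39]]
r = rmse(x, y)
print(r, round(psnr(r ** 2), 2))  # → 1.0 48.13

# CR and BPP check against the TH=6 column of Table 6.2a:
cr = 65240 / 10783
print(round(cr, 2), round(8 / cr, 2))  # → 6.05 1.32
```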

Chart 6.1: Threshold vs. compression ratio for the cameraman image
Chart 6.2: Threshold vs. PSNR for the cameraman image
Chart 6.3: Threshold vs. BPP for the cameraman image

From the tabulated results it is evident that the proposed Huffman coding based 2D-DT-CWT algorithm gives outstanding results, and a higher compression ratio can be attained by selecting an appropriate threshold value. Threshold vs. compression ratio, threshold vs. PSNR and threshold vs. BPP graphs are depicted in Charts 6.1, 6.2 and 6.3 respectively, each with curves for the existing and proposed methods. The existing method is based on the DWT with EZW and Huffman encoding; the proposed method uses only Huffman coding with the 2D-DT-CWT. It is apparent that the proposed method gives better compression ratio, BPP and PSNR values than the existing method.

Table 6.3: Parameter values at different thresholds for the LenaRGB image.

Parameter                  TH=6     TH=10    TH=20    TH=30    TH=40    TH=50    TH=60
Original file size         786488   786488   786488   786488   786488   786488   786488
Compressed file size       111876   107150   94986    90297    87777    85580    83936
Compression ratio (CR)     7.03     7.34     8.28     8.71     8.96     9.19     9.37
Bits per pixel (BPP)       3.41     3.26     2.89     2.75     2.67     2.611    2.56
PSNR (dB)                  40.32    36.67    32.2     29.81    28.25    27.41    26.26
RMS error                  3.01     3.4      5.8      7.64     9.15     10.39    11.52

Parameters such as original image size, compressed image size, CR, BPP, PSNR and RMS error for various thresholds are calculated and tabulated in Table 6.3 for the LenaRGB image of size 256 × 256.

The original LenaRGB image and the images retrieved at threshold values TH = 10, 20, 30, 40 and 50, with their corresponding PSNR values, are shown in figs 6.5(a)-6.5(f) respectively.

Magnetic Resonance Imaging (MRI) is useful for showing abnormalities of the brain such as haemorrhage, stroke, tumour and multiple sclerosis; signal processing is required to detect and decode these abnormalities in MRI imaging.

Table 6.4: Parameter values at different thresholds for the medical image.

Parameter                  TH=6    TH=10   TH=20   TH=30   TH=40   TH=50   TH=60
Original file size         17912   17912   17912   17912   17912   17912   17912
Compressed file size       3589    3373    2847    2626    2494    2424    2372
Compression ratio (CR)     4.99    5.31    6.29    6.82    7.18    7.39    7.55
Bits per pixel (BPP)       4.8     4.51    3.81    3.51    3.34    3.24    3.17
PSNR (dB)                  40.32   35.89   30.96   28.31   26.4    25.07   24.06
RMS error                  3.3     4.09    7.21    9.79    12.2    14.22   15.97

Parameters such as original image size, compressed image size, CR, BPP, PSNR and RMS error for various thresholds are calculated and tabulated in Table 6.4 for the medical image of size 256 × 256.

Cameraman image (size 256 × 256)

Fig 6.4(a) Input image
Fig 6.4(b) Threshold = 10, PSNR = 36.25
Fig 6.4(c) Threshold = 20, PSNR = 31.15
Fig 6.4(d) Threshold = 30, PSNR = 28.99
Fig 6.4(e) Threshold = 50, PSNR = 26.23
Fig 6.4(f) Threshold = 60, PSNR = 25.28

Fig 6.4 Illustration of 2D-DT-CWT based image compression at various thresholds for the cameraman image

LenaRGB image (size 256 × 256)

Fig 6.5(a) Input image
Fig 6.5(b) Threshold = 10, PSNR = 36.29
Fig 6.5(c) Threshold = 20, PSNR = 32.20
Fig 6.5(d) Threshold = 30, PSNR = 29.81
Fig 6.5(e) Threshold = 40, PSNR = 28.25
Fig 6.5(f) Threshold = 50, PSNR = 27.41

Fig 6.5 Illustration of DT-CWT based image compression at various thresholds for the LenaRGB image

Medical image (size 256 × 256)

Fig 6.6(a) Input image
Fig 6.6(b) Threshold = 10, PSNR = 35.89
Fig 6.6(c) Threshold = 20, PSNR = 30.96
Fig 6.6(d) Threshold = 30, PSNR = 28.31
Fig 6.6(e) Threshold = 40, PSNR = 26.40
Fig 6.6(f) Threshold = 50, PSNR = 25.57

Fig 6.6 Illustration of DT-CWT based image compression at various thresholds for the medical image

The original medical image and the images retrieved at threshold values TH = 10, 20, 30, 40 and 50, with their corresponding PSNR values, are shown in figs 6.6(a)-6.6(f) respectively. The proposed Huffman coding based 2D-DT-CWT image compression technique performs well in terms of compression ratio, BPP and PSNR. The algorithm is examined on different standard images, and the proposed image compression method gives consistent results compared with those obtained from the existing method [62]; the results are as good as those of the compression method using the embedded zero-tree wavelet (EZW) with Huffman coding.

6.7 SUMMARY

In this chapter Huffman coding and decoding are explained, and a Huffman coding based complex wavelet transform image compression algorithm is implemented. The results are compared with the existing embedded zero-tree wavelet (EZW) with Huffman coding method. From the investigation it is evident that a slightly better compression ratio is achieved with the single encoding method than with the two encoding methods used in the existing approach. In the next chapter, conclusions and the scope for further improvement are presented.