CHAPTER 2 LITERATURE REVIEW


2.1 INTRODUCTION

This chapter provides a detailed review of the literature that is relevant to understanding the development, and interpreting the results, of this convergent study. Each section explores the research conducted, the assumptions made, the techniques involved and the major findings of the various schemes of image coding that have been developed over the years. It is intended to familiarize the reader with the basic assumptions about problem solving that went into the design of this research program and the interpretation of its results. The objective of this survey is to introduce the concepts necessary for understanding the geometric wavelet based hybrid compression technique that is applicable to low bit rate image compression. The wavelet transforms and the segmentation based binary space partition scheme are integral to the technique that is studied, and a brief overview of the literature concerning these topics is presented.

2.1.1 Advancements in the Field

Digital image compression techniques have played an important role in the world of telecommunication and multimedia systems, where bandwidth is still a valuable commodity. The proliferation of digital media has motivated innovative methods for compressing digital images. Starting from a ratio of 1 with the first digital pictures in the early 1960s, the compression ratio has recently reached a saturation level of around 300:1. Even then, the quality of the reconstructed image remains an important issue to be investigated. To date, substantial advancements in the field of image compression have been made, ranging from the traditional predictive coding approaches, the classical and popular transform coding techniques and vector quantization to the more recent second generation coding schemes. Many variations have since been introduced to each of these methodologically distinct techniques. Practically efficient compression systems based on hybrid techniques, which combine the advantages of different classical methods of image coding (for example, transform based techniques combined with segmentation based techniques) to enhance the individual methods and improve the compression performance, are also emerging.

Various methods for both lossy (irreversible) and lossless (reversible) image compression have been proposed in the literature. In general, they are categorized in terms of data loss, or in terms of whether they use transform coding or predictive coding. Based on the requirements of reconstruction, image compression schemes are commonly divided into two categories: lossless and lossy schemes. Lossless image compression algorithms allow perfect reconstruction of the original image from the compressed one. With a lossy image compression scheme, on the other hand, merely an approximation of the original image is obtained. The main benefit of a lossy image compression algorithm over a lossless one is a gain in encoding/decoding time, in compression ratio [29], or, in the case of power-constrained applications, in energy.

Image coding algorithms can also be grouped into transform-based algorithms, such as Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) based algorithms, and non-transform-based algorithms, such as segmentation based algorithms and fractals. The typical design of a transform-based algorithm consists of three stages: spatial decorrelation (also called the source encoder), followed by a quantizer and an entropy encoder. Other schemes (non-transform-based algorithms), such as vector quantization or fractals, do not follow this design. Hybrid coding of images, in this context, deals with combining two or more traditional approaches to enhance the individual methods and achieve better quality reconstructed images at higher compression ratios.

A detailed review of various current approaches to image coding is presented in the following sections. Literature on hybrid techniques of image coding over the past years is examined as well. In addition, brief discussions of common still image compression standards like GIF, TIFF, JPEG, JPEG 2000, etc. are provided.

2.2 CURRENT APPROACHES TO IMAGE COMPRESSION

The concept of compressing two dimensional signals, especially images, was introduced in the year 1961 by Wholey, J. [2]. The first data compression approach was the predictive coding technique, in which the statistical information of the input data is considered to reduce redundancy. Although the resulting compression was not great, there were reasons for believing that this procedure would be more successful with realistic pictorial data. A method of data compression by run length encoding was published in 1969 by Bradley, S.D. [30]; in his work, the optimal performance of the code was reported for a particular compression factor.
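As a concrete illustration of the run-length principle mentioned above (a generic sketch only, not the specific code of [30]), the following Python fragment collapses runs of identical pixel values into (value, count) pairs and expands them back:

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(value, count) for value, count in runs]

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

row = [255, 255, 255, 255, 0, 0, 255, 255]
assert rle_decode(rle_encode(row)) == row
```

Run-length coding pays off only when long runs actually occur, which is why it performs best on bi-level or synthetic pictures rather than on noisy natural images.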

An adaptive variable length coding system was presented by Rice et al. in 1971 [31]. Using sample-to-sample prediction, the coding system produces output rates within 0.25 bits per picture element (pixel) of the one dimensional difference entropy, for entropy values ranging from 0 to 8 bits/pixel.

The most successful image compression algorithms in recent times are transform-based, and among them are the Discrete Cosine Transform (DCT) based schemes. Quite a lot of commercially successful compression algorithms, including the JPEG standard [4] for still images and the MPEG standard [32] for moving images, are based on the DCT. Many of today's most competitive coders depend on wavelets to transform and compress images. The EZW [33], SPIHT [34], SPECK [35] and EBCOT [36] algorithms and the current JPEG 2000 standard [20] are based on the Discrete Wavelet Transform (DWT) [21], [37]. The Second Generation, or segmentation based, image coding techniques are also gaining popularity. Examples of such image compression algorithms are the Bandelets [19], the Prune tree [38], the Prune-Join tree [38] and the BSP based methods [39]. A survey of the transform based, wavelet based, segmentation based and hybrid techniques of image coding is presented in the following sections.

2.2.1 Transform Based Coding Techniques

The transform coding approach to image compression was introduced in the year 1971 with the application of the Discrete Fourier Transform (DFT) for achieving image compression [40]. Pratt and Andrews studied bandwidth compression using the Fourier transform of complete pictures [41]. The transform coding schemes most commonly used in the early days employed Fourier-related transforms such as the KL transform (KLT) [42] and the Hadamard transform [43]. Singular Value Decomposition (SVD) [44] was applied to image compression in 1976 and was found successful. SVD is the representation of data using a smaller number of variables and has been widely used for face detection and object recognition. More recent transforms, such as the Discrete Cosine Transform (DCT), the Discrete Wavelet Transform (DWT) and the Contourlet transform, are used for effective compression.

In the past decades, the Discrete Cosine Transform (DCT) has been the most popular transform for image coding because of its near-optimal performance and its ability to be implemented at a reasonable cost. Compared to the DFT, application of the DCT results in fewer blocking artifacts due to the even-symmetric extension properties of the DCT.

Also, the DCT uses real computations, unlike the complex computations used in the DFT. This makes DCT hardware simpler than that of the DFT. These advantages have made DCT-based image compression a standard in still-image and multimedia coding standards. The DCT is a technique for converting a signal into its elementary frequency components. The image is decomposed into several blocks, and for each block the DCT is mathematically expressed as a sum of cosine functions oscillating at different frequencies. Since the focus here is on images, only the two dimensional representation of the DCT (2D DCT) is considered, which can be obtained from the cascade of two 1D DCTs. The discrete cosine transform is an orthogonal transform and was first applied to lossy image compression in 1974. It exhibits features such as a high energy compaction capability, suitability for parallel implementation, concentration of the signal information in the low frequency DCT components [45], and a low memory requirement.

The 2D DCT is computationally intensive, and as such there is a great demand for high speed, high throughput and short latency computing architectures. Due to the high computation requirements, 2D DCT processor design has concentrated on small non-overlapping blocks (typically 8x8 or 16x16). Many 2D DCT algorithms have been proposed to reduce the computational complexity and thus increase the operational speed and throughput.

Transform-based compression techniques usually consist of three steps: transforming the data, quantizing the coefficients, and lossless compression of the result. Compression using the DCT [45] divides the image into 8 x 8 pixel blocks and then calculates the discrete cosine transform of each block. A quantizer rounds off the DCT coefficients according to the quantization matrix. This step introduces the "lossy" nature of the scheme, but allows for large compression ratios. The compression technique uses a variable length code on these coefficients, and then writes the compressed data stream to an output file. For decompression, it recovers the quantized DCT coefficients from the compressed data stream, takes the inverse transform and displays the image. Quite a lot of commercially successful compression algorithms, including the Joint Photographic Experts Group (JPEG) standard [4] for still images and the Moving Picture Experts Group (MPEG) standard [32] for moving images, are based on the DCT. The basic components of the JPEG standard [4] are the DCT, scalar quantization, the zig-zag scan, and Huffman coding [46].
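A minimal sketch of the blockwise transform-and-quantize step described above is given below, using SciPy's 1D DCT routine applied twice to realize the 2D DCT as a cascade of two 1D DCTs. The uniform quantization matrix Q here is a placeholder rather than the standard JPEG luminance table, and the zig-zag scan and Huffman coding stages are omitted:

```python
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    # 2D DCT-II obtained as a cascade of two 1D DCTs (columns, then rows).
    return dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)

def idct2(block):
    return idct(idct(block, norm='ortho', axis=0), norm='ortho', axis=1)

def encode_block(block, q_matrix):
    """Transform one 8x8 block and quantize its DCT coefficients."""
    coeffs = dct2(block.astype(float) - 128.0)   # level shift, as in JPEG
    return np.round(coeffs / q_matrix)

def decode_block(q_coeffs, q_matrix):
    return idct2(q_coeffs * q_matrix) + 128.0

Q = np.full((8, 8), 16.0)                        # made-up uniform table
block = np.random.randint(0, 256, (8, 8))
recon = decode_block(encode_block(block, Q), Q)
print(np.abs(recon - block).max())               # error introduced by quantization
```

Coarser entries in the quantization matrix discard more of the high-frequency coefficients, which is where the rate saving (and the loss) comes from.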

It has long been realized that the current JPEG standard does not provide state-of-the-art coding performance. Several methods have been proposed to improve upon JPEG, including optimal Q-matrix design [47], optimal thresholding [48], and joint optimization [49]. Other variants of the compression scheme based on the DCT have been proposed in the literature to enhance JPEG features, such as minimizing the blocking artifacts, minimizing the complexity at the encoder and/or the decoder, and increasing the compression ratio.

Even with excellent energy compaction capabilities, a mean-square reconstruction error performance closely matching that of the KLT, and the availability of fast computational approaches, the DCT has a few limitations which restrict its use in very low bit rate applications. The DCT, as well as the DFT, transforms an image from the discrete space domain to the discrete spatial frequency domain. If, for example, a DCT is computed across an entire image, the resulting transform coefficients will be less correlated with each other than the pixels in the space domain. However, because the transform is done across the entire image, and different segments of the image may not be correlated with each other, there is more room for decorrelation in the space domain than the DCT by itself offers. JPEG addresses this problem by segmenting the image into 8x8 blocks, relying on the notion that pixels in a small area are usually similar and that the information in such a block can therefore be represented in fewer bits. Similarity between blocks at all scales is not taken advantage of, and this is a fundamental limitation of Fourier transform-based techniques.

Two main limitations of DCT based compression are blocking artifacts and false contouring. A blocking artifact is a distortion that appears after heavy compression as abnormally large pixel blocks. At higher compression ratios, the perceptible blocking artifacts across the block boundaries cannot be neglected [45]. False contouring occurs when a smoothly graded area of an image is distorted by a deviation that looks like a contour map, for images having gradually shaded areas [45]. In addition, truncation of the higher spectral coefficients, or high frequency components, results in blurring of the images, especially wherever fine details such as edges and contours are present. Coarse quantization of some of the low spectral coefficients introduces graininess in the smooth portions of the images. Serious blocking artifacts, as mentioned above, are introduced at the block boundaries, since each block is independently encoded, often with a different encoding strategy and extent of quantization.

Of all the problems mentioned above, the blocking artifact is the most serious and objectionable one at low bit rates. Blocking artifacts may be reduced by applying an overlapped transform, like the Lapped Orthogonal Transform (LOT), or by applying post-processing. Later, the Discrete Wavelet Transform (DWT) (discussed in subsequent sections) was introduced, which avoids the blocking artifacts of the DCT and offers better coding performance at lower bit rates.

Despite providing outstanding results in terms of rate-distortion compression, the transform-based coding methods do not take advantage of the geometry of the edge singularities in an image. More weight is often given to low-frequency data than to high-frequency data, because in typical optical images more of the information detectable by the human visual system is stored there. This makes the transform based techniques unsuitable for geometric approximation and compression of images where the focus is on the curve and edge singularities, which contain high frequency information.

2.2.2 Wavelet Based Coding Algorithms

Among the variety of new and powerful algorithms that have been developed for image compression over the years, wavelet-based image compression has gained much popularity due to its overlapping nature, which reduces blocking artifacts, and its multiresolution character, which leads to superior energy compaction and high quality reconstructed images. Wavelet-based coding [51] provides substantial improvements in picture quality at higher compression ratios. Furthermore, at higher compression ratios, wavelet coding methods degrade much more gracefully than the block-DCT methods. The Discrete Wavelet Transform (DWT) has the ability to solve the blocking artifact problem introduced by the DCT. It also reduces the correlation between neighboring pixels and gives a multi scale sparse representation of the image. Since the wavelet basis consists of functions with short support for high frequencies and long support for low frequencies, smooth areas of the image may be represented with very few bits, and detail can be added wherever required [52]. Their superior energy compaction properties and correspondence with the human visual system have enabled wavelet compression methods to produce subjectively pleasing results. Due to these many advantages, wavelet based compression algorithms have paved the way for the new JPEG 2000 standard [20].
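The multiresolution decomposition described above can be sketched with the PyWavelets library as follows; the random array stands in for a grayscale image, and the hard thresholding of the coefficients is only a crude illustration of how wavelet-domain sparsity translates into compression:

```python
import numpy as np
import pywt  # PyWavelets

image = np.random.rand(256, 256)  # stand-in for a grayscale image

# Multi-level 2D DWT: each level splits into LL, LH, HL, HH subbands,
# and the LL band is decomposed again, giving the multiresolution pyramid.
coeffs = pywt.wavedec2(image, wavelet='bior4.4', level=4)

# Crude "compression": keep only the largest 5% of coefficients by magnitude.
arr, slices = pywt.coeffs_to_array(coeffs)
threshold = np.quantile(np.abs(arr), 0.95)
arr_sparse = np.where(np.abs(arr) >= threshold, arr, 0.0)

recon = pywt.waverec2(
    pywt.array_to_coeffs(arr_sparse, slices, output_format='wavedec2'),
    wavelet='bior4.4')
print(np.sqrt(np.mean((recon - image) ** 2)))  # RMSE of the sparse reconstruction
```

On a real photograph most of the energy concentrates in the LL band and along edges in the detail bands, so the retained 5% of coefficients reconstruct the image far better than the same experiment on random data suggests.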

Wavelet compression schemes allow the integration of various compression techniques into one. With wavelets, a compression ratio of up to 300:1 is achievable [36]. A number of novel and sophisticated wavelet-based schemes for image compression have been developed and implemented over the past few years. Some of the most popular schemes are discussed here. These include the Embedded Zero Tree Wavelet (EZW) [33], Set-Partitioning in Hierarchical Trees (SPIHT) [34], the Set Partitioned Embedded Block Coder (SPECK) [35], Embedded Block Coding with Optimized Truncation (EBCOT) [36], Wavelet Difference Reduction (WDR) [53], Adaptively Scanned Wavelet Difference Reduction (ASWDR) [54], Space-Frequency Quantization (SFQ) [55], the Embedded Predictive Wavelet Image Coder (EPWIC) [56], Compression with Reversible Embedded Wavelet (CREW) [57], the Stack-Run (SR) coder [58], the recent Geometric Wavelet (GW) [59] and the improved GW [60].

Wavelets were applied to image coding in the year 1989 [24]. Shapiro was one of the first authors to apply wavelet techniques to image compression [15]. Different wavelets and their variants were used in later years to achieve better compression. EZW coding for image compression, presented by Shapiro in his 1993 paper "Smart compression using the EZW algorithm" [33], uses the wavelet coefficients for coding. In this work, the 2-D wavelet transform is applied to the image by first subdividing the image into four equal subbands, determined using separable application of vertical and horizontal filters. The result is critically subsampled, such that each coefficient corresponds to a 2x2 area of the image. The LH1, HL1 and HH1 subbands are the finest scale coefficients, while the LL1 subband is subdivided further in the very same fashion. The subdivision continues until LLN is a single coefficient, where N represents the number of subdivisions required. The result of the transform is the concatenation of the subband coefficients at each scale. This approach imposes a tree structure on the subband decompositions.

Given that an image has been transformed, the remaining problem is to decide the significance of each coefficient and, if a particular coefficient is insignificant, to represent that information compactly. In this scheme, the initial significance is decided by a simple thresholding operation, and the insignificant coefficients are set to a particular value, zero. The concept of a zerotree is based on the hypothesis that if a particular wavelet coefficient is insignificant, then all of the corresponding coefficients in the finer resolution subbands will also be insignificant.

This has some parallels to the JPEG method [4] of using an end-of-block (EOB) code when the remainder of the DCT coefficients in a particular block is zero. However, this coder simply inserts a zerotree (ZTR) symbol when it detects an insignificant coefficient; the corresponding coefficients in the finer subbands are then simply ignored. The method uses multiple passes, with each pass using a decreasing value of the threshold. It also quantizes the significant coefficients in order to represent them in fewer bits, and successive approximation is used to allow the significant coefficients to be quantized more precisely on successive passes. In this way the encoding can continue until a desired number of bits has been used to represent the image; the more bits used, the better the quality of the image.

In addition to the traditional wavelet transform (referred to as the scalar wavelet transform), alternative wavelet-based compression schemes have shown great promise. These include multiwavelets, wavelet packets, and multiwavelet packets. The performance of each of these methods depends on the image content; performance varies between natural and synthetic images and between high and low frequency content. For instance, multiwavelets have been shown to capture high-frequency content better than scalar wavelets, especially when used with coefficient shuffling and a SPIHT-like quantization scheme [34]. Scalar wavelets still perform best on natural images with low frequency content, such as the commonly used Lena image. However, research to date has been limited to grayscale images, and the results published in the literature often show only one or two images.

While the good results obtained by wavelet coders are partly attributable to the wavelet transform, much of the performance gain is obtained by carefully designing quantizers (e.g., the zerotree quantizer) that are tailored to the transform structure. EZW coding exploits the multiresolution properties of the wavelet transform to give a computationally simple algorithm with better performance than other existing wavelet-based schemes. The embedded zerotree wavelet (EZW) coder was found to have significantly better PSNR performance than JPEG at low bit rates. The algorithm gives excellent results without any training, prestored tables or codebooks, or prior knowledge of the image source. The EZW encoder does not actually compress anything; it only reorders the wavelet coefficients in such a way that they can be compressed very efficiently.
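The successive-approximation idea behind EZW's multiple passes can be sketched as follows; this shows only the threshold-halving significance test, while the zerotree symbol coding and the subordinate refinement passes of the actual EZW coder are left out:

```python
import numpy as np

def significance_passes(coeffs, num_passes=4):
    """Classify coefficients against a halving threshold, as in EZW's
    dominant passes; return the threshold and significance map per pass."""
    c = np.asarray(coeffs, dtype=float)
    T = 2.0 ** np.floor(np.log2(np.max(np.abs(c))))   # initial threshold
    already_significant = np.zeros(c.shape, dtype=bool)
    passes = []
    for _ in range(num_passes):
        newly_significant = (np.abs(c) >= T) & ~already_significant
        passes.append((T, newly_significant))
        already_significant |= newly_significant
        T /= 2.0   # each pass halves the threshold (successive approximation)
    return passes

coeffs = np.array([63.0, -34.0, 49.0, 10.0, 7.0, 13.0, -12.0, 7.0])
for T, mask in significance_passes(coeffs):
    print(T, mask.astype(int))
```

Stopping the passes early yields a coarser but still decodable image, which is exactly the embedded property that makes the bit stream truncatable at any point.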

The artifacts produced at low bit rates by this method are unavoidable and are common characteristics of all wavelet coding schemes coded to the same PSNRs. However, these artifacts are subjectively less objectionable than the blocking effects produced by the DCT [45] or other block transform based coding schemes. For the above reason, the EZW encoder is always followed by a symbol encoder, for example an arithmetic encoder [65]. The performance of EZW is used as a reference for comparison with new techniques of image compression; hence, it has become one of the state-of-the-art algorithms for image compression.

SPIHT coding is an improved version of the EZW algorithm that achieves higher compression and better performance than EZW. It was introduced by Said and Pearlman [34] in 1996. SPIHT stands for Set Partitioning in Hierarchical Trees. The term "Hierarchical Trees" refers to the quadtrees defined in the discussion of EZW, while "Set Partitioning" refers to the way these quadtrees divide up and partition the wavelet transform values at a given threshold. By a careful analysis of this partitioning of transform values, Said and Pearlman were able to improve upon the EZW algorithm, considerably increasing its compressive power. The SPIHT algorithm produces an embedded bit stream from which reconstructed images with minimal mean square error can be extracted at various bit rates. Some of the best results, such as the highest PSNR values for given compression ratios over a wide range of images, have been obtained with the SPIHT algorithm. The SPIHT method is not a simple extension of traditional methods of image compression, and it represents an important advance in the field. The main features of SPIHT coding are good quality reconstructed images with high PSNR (especially for colour images), optimization for progressive image transmission, a fully embedded coded file, a simple quantization algorithm, fast and nearly symmetric coding/decoding, wide applicability, full adaptivity, support for lossless compression, the ability to code to an exact bit rate or distortion, and efficient combination with error protection. SPIHT coding yields all these qualities simultaneously, which makes it truly outstanding. The effectiveness of the algorithm can be further enhanced by entropy coding its output, but at the cost of a larger encoding/decoding time. The need to reduce the number of bits used in this scheme led to the formulation of the subsequent algorithm, called SPECK [61].

The EBCOT algorithm [36] shows advanced compression performance while producing a bitstream with a rich set of features, such as resolution and SNR scalability together with a random access property. All these features coexist within a single bit-stream without considerable reduction in compression efficiency. The EBCOT algorithm makes use of a wavelet transform to create the subband coefficients, which are then quantized and coded. Although the usual dyadic wavelet decomposition is often used, other "packet" decompositions are also supported and occasionally preferred. The original image is characterized in terms of a collection of subbands, which may be organized into increasing resolution levels. The lowest resolution level consists of the single LL subband. Each successive resolution level contains the additional subbands that are required to reconstruct the image with twice the horizontal and vertical resolution. EBCOT is a scalable image compression technique, with the advantage that the target bit-rate or reconstruction resolution need not be known at the time of compression. Another advantage of practical significance is that the image need not be compressed multiple times in order to achieve a target bit-rate, as is common with the existing JPEG compression technique.

The EBCOT algorithm divides each subband into comparatively small blocks of samples and creates a separate, highly scalable bit-stream to represent each so-called code-block. The algorithm has modest complexity and is well suited to applications involving remote browsing of large compressed images. The algorithm uses code-blocks of size 64 x 64 with sub-blocks of size 16 x 16. The EBCOT bit-stream is composed of a collection of quality layers, and SNR scalability is obtained by discarding unwanted layers. EBCOT-coded images exhibit significantly less ringing around edges and superior rendering of texture. Simulations have shown that some details preserved in the EBCOT images are totally lost by the SPIHT algorithm. The performance of the EBCOT algorithm is competitive with the state-of-the-art compression algorithms and, in particular, considerably outperforms the SPIHT algorithm.

The techniques of EZW, SPIHT and EBCOT have become the state-of-the-art algorithms for image compression and are used as references for comparison with new techniques of image compression. Although the basic wavelet theory and the application of wavelets to image compression have been well developed, modeling the joint behavior of wavelet coefficients along an edge still offers a distinct challenge.
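As a small illustration of the code-block structure described above (only the tiling step; the EBCOT bit-plane coder and its quality-layer formation are not reproduced here), each subband can be cut into independent 64 x 64 blocks, each of which would then be coded into its own embedded bit-stream:

```python
import numpy as np

def tile_into_code_blocks(subband, block_size=64):
    """Split a subband into independent code-blocks, as EBCOT does before
    bit-plane coding each block separately."""
    h, w = subband.shape
    return {(y, x): subband[y:y + block_size, x:x + block_size]
            for y in range(0, h, block_size)
            for x in range(0, w, block_size)}

subband = np.random.randn(256, 256) * 32        # stand-in for one subband
blocks = tile_into_code_blocks(subband)
# Stand-in for per-block coding: report how many magnitude bit-planes each
# code-block would need, which drives the length of its embedded bit-stream.
for (y, x), blk in sorted(blocks.items()):
    planes = int(np.ceil(np.log2(np.abs(blk).max() + 1)))
    print(f"code-block at ({y},{x}): shape {blk.shape}, {planes} bit-planes")
```

Because each code-block is coded independently, any subset of blocks (and any truncation point within a block's bit-stream) can be decoded, which is what gives EBCOT its random access and scalability properties.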

A detailed and recent survey of wavelet based image coding schemes is published in the International Journal on Soft Computing, August 2012, vol. 3, no. 3.

2.2.3 Segmentation Based Image Coding Schemes

In spite of providing exceptional results in terms of rate-distortion compression, the transform-based coding methods do not exploit the geometry of the edge singularities in an image. This led to the design of the Second Generation, or segmentation based, image coding techniques [62] that make use of the underlying geometry of the edge singularities of an image. Recently, many such image compression algorithms have been introduced, for example the Bandelets [19], the Prune tree [38], the Prune-Join tree [38] and the BSP based methods.

The segmentation based methods of image coding [62] were introduced in the year 1985, and many variations have been introduced since then. Two groups can be formed within this class: methods that use local operators and combine their outputs in a suitable way, and methods that use contour-texture descriptions. For low bit-rate compression applications, segmentation-based coding methods provide, in general, high compression ratios when compared with traditional (e.g. transform and subband) coding approaches. There are two major steps in segmentation based compression, namely segmentation and compression of the segmented regions. In the segmentation step, the image is segmented based on one of the two basic properties of intensity values, namely discontinuity and similarity. This step is used to improve the reconstructed image quality by preserving edge information. In the compression step, the segmentation output is encoded in an effective manner to obtain efficient compression performance. Different approaches to segmentation based compression have been developed in recent years.

A lossless image compression algorithm using variable block size segmentation was proposed in [63]. In this work, a lossless image compression scheme that exploits redundancy at both local and global levels, in order to obtain maximum compression efficiency, is presented. The algorithm segments the image into variable size blocks and encodes them depending on the characteristics exhibited by the pixels within each block. The performance of this algorithm is superior to other lossless compression schemes such as Huffman coding [46], arithmetic coding [64], Lempel-Ziv coding [66], [67], and JPEG [4]. However, estimating the distribution of image characteristics and the resulting compression efficiency is a very difficult task due to the huge amount of computation involved.
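A simplified sketch of the variable block size idea follows: a block is split into quadrants until the pixels inside it are homogeneous enough, here judged by a variance threshold; the actual block-wise encoding rules of [63] are not reproduced:

```python
import numpy as np

def quadtree_segment(img, y, x, size, var_threshold, min_size, blocks):
    """Recursively split a square block into quadrants until it is homogeneous."""
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_threshold:
        blocks.append((y, x, size))          # leaf: encode this block as one unit
        return
    half = size // 2
    for dy in (0, half):
        for dx in (0, half):
            quadtree_segment(img, y + dy, x + dx, half,
                             var_threshold, min_size, blocks)

img = np.zeros((128, 128))
img[32:96, 32:96] = 200.0                    # bright square on a dark background
leaves = []
quadtree_segment(img, 0, 0, 128, var_threshold=25.0, min_size=4, blocks=leaves)
print(len(leaves), "variable-size blocks")   # few large blocks in flat areas,
                                             # small blocks along the square's edges
```

Flat regions end up as a few large blocks that are cheap to describe, while the small blocks concentrate along edges, which is precisely where the bit budget is needed.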

Block-based MAP segmentation for image compression was proposed by Chee Sun Won [69]. Here, a segmentation algorithm using the Maximum-A-Posteriori (MAP) criterion is used. The conditional probability in the MAP criterion, which is formulated in the Bayesian framework, is in charge of classifying image blocks into edge, monotone, and textured blocks. On the other hand, the a-priori probability is responsible for edge connectivity and homogeneous region continuity. After a few iterations of a deterministic MAP optimization, a block-based segmented image in terms of edge, monotone, or textured blocks is obtained. A connected block labelling algorithm is then used to assign a region number to all connected homogeneous blocks, defining the interior of each region. Finally, uncertainty blocks, which have not yet been given any region number, are assigned to one of the neighbouring homogeneous regions by a block-based region-growing method. During this process, the balance between the accuracy and the cost of the contour coding needs to be controlled by adjusting the size of the uncertainty blocks. This algorithm yields larger homogeneous regions, which are suitable for object based image compression.

Multiscale segmentation for image compression [70] was presented by Krishna Ratakonda and Narendra Ahuja. The multiscale segmentation is obtained using a transform which provides a tree-structured segmentation of the image into regions characterized by grayscale homogeneity. In this algorithm, the tree is pruned to control the size and number of regions, thus obtaining a rate-optimal balance between the overhead inherent in coding the segmented data and the coding gain derived from it. An image model comprising separate descriptions of pixels lying near the edges of a region and those lying in the interior is used. The results show that the performance of this algorithm is comparable to the lossless JPEG compression standard for a wide range of images.

Hierarchical segmentation-based image coding using hybrid Quad-Binary (QB) trees was presented in 2009 [71]. A hybrid quad-binary (QB) tree structure is utilized to efficiently model and code the geometrical information within images. The QB-tree is a compromise between the rigidity of the discrete space structure of quadtrees, which allows spatial partitioning for local analysis, and the generality of the Binary Space Partitioning (BSP) tree, which facilitates the creation of more adaptive and accurate representations of image discontinuities.

The QB-tree based image approximation is thus a hybrid of the binary and quad-tree structures. The QB-tree image decomposition is able to avoid excessively fine partitioning over complex linear features, e.g., junctions, corners, bars and ridges, thereby obtaining a more efficient single scale representation of these features. The technique also improves visual representations by producing a more meaningful geometric description of images at coarser scales. Simulation results show that this method consistently outperforms other image approximation methods in subjective evaluations, especially at low bit rates and for images that contain significant geometrical structure.

The prune binary tree algorithm [38] is similar in spirit to the algorithm proposed for searching the best wavelet packet bases. In this algorithm, each node of the tree is coded independently and, as anticipated before, each node approximates its signal segment with a polynomial. Finally, the prune tree algorithm utilizes a rate-distortion framework with an MSE distortion metric. One can observe that the prune tree scheme cannot merge neighboring nodes representing the same information. Since this coding scheme fails to exploit the dependency among neighbors in the pruned tree, it is bound to be suboptimal. The major drawback of the pruning tree method is its high computational complexity.

Among the many segmentation techniques that have been developed, the Binary Space Partition (BSP) scheme [72] is a simple and effective method of image compression. BSP trees, originally developed for hidden surface removal, were applied to image coding by Hyder Radha et al. [39] in 1996. The binary space partition scheme is a simple and effective segmentation based image coding method. In this representation, the image is broken down into simple geometric regions in a recursive manner using partitioning lines. The image is divided until the pixels in a region are homogeneous (for lossless mode) or similar enough (for lossy mode). The image signal within the different regions (resulting from the recursive partitioning) can be represented using low-order polynomials. The BSP scheme is used in the proposed work to achieve a balance between a small number of geometrically simple regions and the smoothness of the image signal within these regions, by means of a simple yet flexible description of the images. The algorithm is discussed in detail in Chapter 3 of this thesis.
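A deliberately simplified illustration of the recursive partitioning idea is sketched below, assuming axis-aligned cuts and piecewise-constant approximation; the actual BSP scheme, which uses arbitrarily oriented partitioning lines and low-order polynomial fits, is described in Chapter 3:

```python
import numpy as np

def best_axis_cut(region):
    """Pick the horizontal or vertical cut minimising the summed squared error
    of constant fits on the two halves (a crude stand-in for the arbitrarily
    oriented lines used by the real BSP scheme)."""
    best = None
    h, w = region.shape
    for axis, length in ((0, h), (1, w)):
        for pos in range(1, length):
            a, b = np.split(region, [pos], axis=axis)
            err = ((a - a.mean()) ** 2).sum() + ((b - b.mean()) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, axis, pos)
    return best

def bsp_partition(region, max_error, leaves):
    err_const = ((region - region.mean()) ** 2).sum()
    if err_const <= max_error or min(region.shape) < 2:
        leaves.append(region.mean())            # leaf: one constant per region
        return
    _, axis, pos = best_axis_cut(region)
    a, b = np.split(region, [pos], axis=axis)
    bsp_partition(a, max_error, leaves)
    bsp_partition(b, max_error, leaves)

img = np.zeros((32, 32)); img[:, 20:] = 100.0   # a vertical edge
leaves = []
bsp_partition(img, max_error=10.0, leaves=leaves)
print(len(leaves), "regions")                   # the edge is isolated with few cuts
```

The point of the sketch is the trade-off named in the text: a strong edge is captured with very few cuts, so the cost of describing the partition stays small while each region becomes easy to approximate.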

Most of the signals in nature are piecewise smooth. Prandoni et al. [73] derived the rate-distortion behaviour of an oracle method and proved that traditional bases such as wavelets and Fourier are not optimal for piecewise polynomial functions. They also proposed a dynamic programming algorithm to implement their oracle method, but this algorithm suffers from a few problems. First, although Prandoni's algorithm uses dynamic programming, the computational complexity is still very high. Second, it cannot be extended to higher dimensional signals. Third, it uses prior knowledge about the number of polynomial pieces, which is usually not available. In order to address these problems in two dimensions, a quad-tree partitioning algorithm [74] with a Lagrangian cost function has been proposed [38]. This algorithm has several advantages: it is simple and efficient; automatic methods have been proposed to set its parameter values for a given achievable rate-distortion [75]; by choosing the dictionaries properly, it can be extended to higher dimensional coding schemes [76]; and it is flexible in the sense that it can easily be modified to yield new compression schemes.

During the past decade, normal approximation has been investigated in several contexts [9], [10], [14], [77], motivated by its success in surface representation and compression. Normal approximations allow sparse representations of surfaces, approximating the enormous amounts of data coming from scanning smooth objects [14], [77]. Daubechies et al. [78] investigated normal polyline approximation in the context of smooth curve approximation in the plane. They analyze the convergence, regularity and stability properties of normal multiresolution approximations of planar curves. Their mathematical analysis is based on the fact that normal approximations of smooth curves can be seen as perturbations of sequences produced by linear subdivision schemes.

An edge-preserving image compression model based on subband coding was published in 2000 [79]. The edge information extracted from the source image is used as a priori knowledge for the subsequent reconstruction, and this edge information can be conveyed lossily. Subband coding is used to compress the source image. Vector quantization, a block-based lossy compression technique, is employed to balance the bit rate incurred by the additional edge information against the target bit rate. Simulation results have shown that the approach can significantly improve both the objective and subjective quality of the reconstructed image by preserving more edge details. Specifically, the model incorporated with SPIHT outperformed the original SPIHT on continuous-tone test images.

In general, the model may be applied to any lossy image compression system. This technique shows the possibility and influence of hybrid image coding algorithms that combine different classical methods to improve performance compared to the individual techniques.

2.2.4 Hybrid Approaches to Image Coding

Though segmentation based image coding algorithms provide promising results, almost all of the proposed Second Generation algorithms are not competitive with state-of-the-art (dyadic) wavelet coding. This led to the idea of hybrid approaches that combine the strengths of classical methods to improve the coding efficiency. The most common hybrid techniques are classified as Vector Quantization (VQ) based hybrid techniques, wavelet based hybrid techniques, segmentation based hybrid techniques and Artificial Neural Network (ANN) based hybrid techniques. In this section, literature on these hybrid techniques of image coding developed over the past years is reviewed.

Hybrid approaches to image compression deal with combining two or more traditional approaches to enhance the individual methods and achieve better quality reconstructed images at higher compression ratios. The concept of hybrid coding was introduced as early as the 1980s by Clarke R. J. [80], who combined transform coding with predictive coding. Since then, various hybrid techniques have evolved, such as vector quantization combined with the DCT, Block Truncation Coding (BTC) with Hopfield neural networks, predictive coding with neural networks, wavelet coding with neural networks, segmentation based techniques with predictive coding, fractal coding with neural networks, subband coding with arithmetic coding, segmentation coding with wavelet coding, DCT with DWT, SPIHT with fractal coding, cellular neural networks with wavelets, etc. Some of the successful hybrid coding techniques are discussed in the next few sections.

Vector quantization (VQ) has been a successful, effective, efficient, secure, and widely used compression technique for over two decades. Its strength lies in its high compression ratio and its simplicity of implementation, especially of the decoder. The major drawback of VQ is that the decompressed image contains blockiness, because of the loss of edges, which degrades the image quality.
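Before turning to the hybrid variants, a minimal sketch of plain vector quantization of image blocks is given below. The codebook is trained here with SciPy's k-means routine (the LBG algorithm used in much of the VQ literature proceeds similarly), and the random array merely stands in for real image data:

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

def image_to_vectors(img, bs=4):
    """Cut the image into non-overlapping bs x bs blocks, one vector per block."""
    h, w = img.shape
    return np.array([img[y:y + bs, x:x + bs].ravel()
                     for y in range(0, h, bs)
                     for x in range(0, w, bs)], dtype=float)

img = np.random.randint(0, 256, (128, 128))
vectors = image_to_vectors(img)

# Train a 64-entry codebook with k-means.
codebook, _ = kmeans(vectors, 64)

# Encoding = nearest-codeword index per block; decoding = table lookup.
indices, _ = vq(vectors, codebook)
reconstructed_vectors = codebook[indices]

# Rate: a 6-bit index per 16 pixels instead of 16 bytes, roughly 21:1 before
# entropy coding and ignoring the cost of transmitting the codebook itself.
print(indices.shape, codebook.shape)
```

The blockiness complained about in the text is visible in exactly this step: every block is replaced by its nearest codeword, so edges that fall inside a block are smeared by the codebook average.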

VQ has been combined with traditional techniques of image coding to achieve better performance than the conventional VQ scheme. Some works related to VQ based hybrid approaches to image coding over the past two decades are discussed here. Vector quantization of images based on a neural network clustering algorithm, namely Kohonen's Self-Organising Maps, proposed by Feng et al. [81], was one of the early works in hybrid VQ techniques. Daubechies et al. introduced the coding of images using vector quantization in the wavelet transform domain [82]. Here, a wavelet transform is first used in order to obtain a set of orthonormal subclasses of images; the original image is decomposed at different scales using a pyramidal algorithm architecture. The decomposition is along the vertical and horizontal directions and keeps constant the number of pixels required to describe the image. Then, according to Shannon's rate-distortion theory, the wavelet coefficients are vector quantized using a multiresolution codebook. To encode the wavelet coefficients, a noise-shaping bit-allocation procedure is proposed, which assumes that details at high resolution are less visible to the human eye.

A hybrid BTC-VQ-DCT (Block Truncation Coding - Vector Quantization - Discrete Cosine Transform) image coding algorithm [83] was proposed by Wu et al. The algorithm combines the simple computation and edge preservation properties of BTC and the high fidelity and high compression ratio of adaptive DCT with the high compression ratio and good subjective performance of VQ. This algorithm can be implemented with significantly lower coding delays than either VQ or DCT alone. The bit-map generated by BTC is decomposed into a set of vectors which are vector quantized. Since the space of the BTC bit-map is much smaller than that of the original 8-bit image, a lookup-table-based VQ encoder has been designed to 'fast encode' the bit-map. Adaptive DCT coding using residual error feedback is implemented to encode the high-mean and low-mean subimages. The overall computational complexity of BTC-VQ-DCT coding is much less than that of either DCT or VQ alone, while the fidelity performance is competitive. The algorithm has a strong edge-preserving ability because BTC is implemented as a pre-compression decimation step. The total compression ratio achieved is about 10:1.

A hybrid coding system that uses a combination of set partitioning in hierarchical trees (SPIHT) and vector quantisation (VQ) for image compression [84] was presented by Hsin et al. Here, the wavelet coefficients of the input image are rearranged to form wavelet trees that are composed of the corresponding wavelet coefficients from all the subbands of the same orientation.

A simple tree classifier has been proposed to group the wavelet trees into two classes based on their amplitude distribution. Each class of wavelet trees is encoded using an appropriate procedure, specifically either SPIHT or VQ. Experimental results show that the advantages obtained by combining the superior coding performance of VQ with the efficient cross-subband prediction of SPIHT are appreciable for the compression task, especially for natural images with large portions of texture.

Arup Kumar Pal et al. have recently proposed a hybrid DCT-VQ based approach for efficient compression of colour images [85]. Initially, DCT is applied to generate a common codebook with larger code word sizes. This reduces the computation cost and minimizes the blocking artifact effect. Then VQ is applied for the final compression of the images, which increases the PSNR. Simulation results for two test images, comparing the conventional VQ method and the proposed hybrid method, show better PSNR values for the hybrid technique than for the conventional VQ process. The proposed hybrid scheme therefore improves the visual quality of the reconstructed image compared to the conventional VQ process. The simulation results also show that the proposed scheme reduces the computation cost (including codebook construction, VQ encoding and VQ decoding time) compared to the conventional VQ process.

Wavelet transforms have been combined with classical methods of image coding to obtain high quality compressed images at higher compression ratios. Some of the wavelet based hybrid techniques are discussed in this section. Durrani et al. [86] combined run length encoding with the wavelet transform to achieve better compression. The main attraction of this coding scheme is its simplicity: no training and no storage of codebooks are required. Its high visual quality at high compression ratios also outperforms the standard JPEG codec for low bit-rate applications. Jin Li and Kuo [87] proposed a hybrid Wavelet-Fractal Coder (WFC) for image compression. The WFC uses the fractal contractive mapping to predict the wavelet coefficients of the higher resolution from those of the lower resolution and then encodes the prediction residue with a bitplane wavelet coder. The fractal prediction is adaptively applied only to regions where the rate saving offered by the fractal prediction justifies its overhead. A rate-distortion criterion is derived to evaluate the fractal rate saving and is used to select the optimal fractal parameter set for the WFC. The superior performance of the WFC is demonstrated with extensive experimental results.

According to F. Madeiro et al. [88], wavelet based VQ is the best way of quantizing and compressing images. This methodology takes a multiple stage discrete wavelet transform of the code words and uses it in both the search and design processes for image compression. Accordingly, the codebook consists of a table which includes only the wavelet coefficients. The key idea in this algorithm is finding representative code vectors for each stage. They are found by first combining n code words into k groups, where k·n gives the codebook size. This technique has a major drawback in the amount of computation required during the search for the optimum code vector in encoding. This complexity can be reduced by using an efficient codebook design and a wavelet based tree structure.

Iano et al. [89] present a new fast and efficient image coder that applies the speed of the wavelet transform to the image quality of fractal compression. Fast fractal encoding using Fisher's domain classification is applied to the lowpass subband of the wavelet transformed image, and a modified set partitioning in hierarchical trees (SPIHT) coding is applied to the remaining coefficients. Furthermore, image details and the progressive transmission characteristics of wavelets are maintained, no blocking effects from the fractal techniques are introduced, and the encoding fidelity problem common in fractal-wavelet hybrid coders is solved. The proposed scheme achieves an average 94% reduction in encoding-decoding time compared to pure accelerated fractal coding. The simulation results show that the new scheme improves the subjective quality of pictures at high, medium and low bit rates.

Alani et al. proposed an algorithm well suited for low bit rate image coding, based on geometric wavelets [59]. The geometric wavelet is a recent development in the field of multivariate piecewise polynomial approximation, and it is the base algorithm for this thesis. Here the binary space partition scheme, which is a segmentation based technique of image coding, is combined with the wavelet technique [37]. The discrete wavelet transforms have the ability to solve the blocking effect introduced by the DCT. They also reduce the correlation between neighbouring pixels and give a multi scale sparse representation of the image. Wavelet based techniques provide excellent results in terms of rate-distortion compression, but they do not take advantage of the underlying geometry of the edge singularities in an image, whereas the second generation coding techniques exploit this geometry. Among them, the binary space partition scheme is a simple and efficient method of image coding, which is combined with the geometric wavelet tree approximation so as to efficiently capture edge singularities and provide a sparse representation of the image.
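Stated only schematically (the precise definitions used in this thesis appear in the later chapters), the construction of Alani et al. [59] attaches to each region Ω of the BSP tree, with parent region Ω', a geometric wavelet of roughly the form

\psi_{\Omega} := \mathbf{1}_{\Omega}\,\bigl(Q_{\Omega} - Q_{\Omega'}\bigr), \qquad f \approx \sum_{\Omega \in \mathcal{T}} \psi_{\Omega},

where \mathbf{1}_{\Omega} is the indicator function of Ω, Q_{\Omega} is the low-order polynomial fitted to the image f over Ω, and the sum runs over the regions retained in the pruned BSP tree \mathcal{T} (with the root region contributing its own polynomial approximation). Sparsity is then obtained by keeping only the geometric wavelets with the largest norms.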

The geometric wavelet method successfully competes with state-of-the-art wavelet methods such as the EZW, SPIHT and EBCOT algorithms. A gain of 0.4 dB over the SPIHT and EBCOT algorithms is reported. This method also outperforms other recent methods that are based on sparse geometric representation; for example, the algorithm reports a gain of 0.27 dB over the Bandelets algorithm at 0.1 bits per pixel. More about this technique, and the improvements made to this hybrid technique, is discussed in the following chapters of the thesis.

A hybrid compression method for integral images using the Discrete Wavelet Transform and the Discrete Cosine Transform [90] was proposed by Elharar et al. The compression method is developed for the particular characteristics of the digitally recorded integral image. The compression algorithm is based on a hybrid technique implementing a four-dimensional transform combining the discrete wavelet transform and the discrete cosine transform. The proposed algorithm outperforms the baseline JPEG compression scheme.

Existing conventional image compression technology can be combined with various learning algorithms to build neural networks for image compression. This is a significant development and a wide research area, in the sense that various existing image compression algorithms can actually be implemented by one neural network architecture empowered with different learning algorithms. Hence, the powerful parallel computing and learning capability of neural networks can be fully exploited to build a universal test bed where various compression algorithms can be evaluated and assessed. Three conventional techniques are covered in this section: predictive coding, fractal coding, and wavelet transforms.

Predictive coding [91] has proved to be a powerful technique for decorrelating input data for speech and image compression, where a high degree of correlation is embedded among neighbouring data samples. The autoregressive (AR) model, a class of predictive coding, has been successfully applied to image compression. Predictive coding, in terms of its applications to image compression, can be further classified into linear and non-linear AR models. Conventional technology provides a mature environment and well developed theory for predictive coding, which is represented by LPC (Linear Predictive Coding) [92], PCM (Pulse Code Modulation) [93], DPCM (Differential PCM) and their modified variations [94].
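A minimal sketch of first-order DPCM on a single scan line is shown below; the predictor is simply the previously reconstructed sample and the residual is uniformly quantized, whereas practical codecs use richer predictors and entropy-code the residuals:

```python
import numpy as np

def dpcm_encode(samples, step=8):
    """First-order DPCM: quantize the difference between each sample and the
    prediction formed from the previously reconstructed sample."""
    residuals, prediction = [], 0.0
    for s in samples:
        q = int(round((s - prediction) / step))   # quantized prediction error
        residuals.append(q)
        prediction = prediction + q * step        # track the decoder's state
    return residuals

def dpcm_decode(residuals, step=8):
    out, prediction = [], 0.0
    for q in residuals:
        prediction = prediction + q * step
        out.append(prediction)
    return np.array(out)

line = np.linspace(50, 200, 32)                   # a smoothly varying scan line
rec = dpcm_decode(dpcm_encode(line))
print(np.abs(rec - line).max())                   # bounded by half the step size
```

Because neighbouring samples are highly correlated, the residuals are small and cluster around zero, which is what makes them cheap to entropy-code.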

Non-linear predictive coding, however, is very limited due to the difficulties involved in optimizing the coefficient extraction to obtain the best possible predicted values. Under this circumstance, a neural network provides a very promising approach to optimizing non-linear predictive coding [96], [97]. Predictive performance with neural networks is claimed to outperform the conventional optimum linear predictors by about 4.17 dB and 3.74 dB for two test images [97]. Further research, especially on non-linear networks, is encouraged by the reported results, in order to optimize their learning rules for the prediction of images whose contents are subject to abrupt statistical changes.

Fractal configured neural networks [98], [99], based on Iterated Function System (IFS) codes [100], represent another example along the direction of combining existing image compression technology with neural networks. The conventional counterpart involves representing images by fractals, and each fractal is then represented by a so-called IFS, which consists of a group of affine transformations. To generate images from an IFS, a random iteration algorithm is used, which is the most typical technique associated with fractal based image decompression [100]. Hence, fractal based image compression features lower speed in compression and higher speed in decompression. By establishing one neuron per pixel, two traditional algorithms for generating images using IFSs are formulated as neural networks in which all the neurons are organized in a two-dimensional topology [99].

Based on wavelet transforms, a number of neural networks have been designed for image processing and representation [24], [101]. Wavelet networks are a combination of radial basis function (RBF) networks and wavelet decomposition, where the radial basis functions are replaced by wavelets. Experiments reported on a number of image samples [102] support the wavelet neural network, finding that the Daubechies wavelet produces a satisfactory compression with the smallest errors, while the Haar wavelet produces the best results on sharp edges and low-noise smooth areas.

A neuro-wavelet model [103] for the compression of digital images, which combines the advantages of wavelet transforms and neural networks, was proposed by Vipula Singh et al. Here, images are decomposed using wavelet filters into a set of subbands with different resolutions corresponding to different frequency bands.

Different quantization and coding schemes are used for the different subbands based on their statistical properties. The coefficients in the low frequency band are compressed by differential pulse code modulation (DPCM), and the coefficients in the higher frequency bands are compressed using neural networks. Satisfactory reconstructed images at large compression ratios can be achieved by this method. Here, the image is first decomposed into different subbands using wavelet transforms. Since the human visual system has different sensitivities to different frequency components, the coding is adapted to each band. The low frequency band (Band 1) is encoded with DPCM, after which these coefficients are scalar quantized. The remaining frequency bands are coded using neural networks. Band 2 and Band 3 contain the same frequency content for different orientations, so the same neural network is used to compress the data in these bands; a different neural network is used for both Band 5 and Band 6. The Band 4 coefficients are coded using a separate neural network, as the frequency characteristics of this band do not match those of the other bands. Band 7 is discarded, as it contains little information to contribute to the image and has little effect on the quality of the reconstructed image. The output of the hidden layer of the neural network is then scalar quantized, and finally these quantized values are entropy coded; Huffman coding [46] is used here. The paper concludes that, compared to traditional neural network compression or classical wavelet based compression applied directly to the original image, the neural network based approach improves the quality of the reconstructed image. It also improves the overall processing time.

Hannes investigated highly image-adaptive partitions in order to improve the rate-distortion performance of fractal coding [104]. This fractal coder can be seen as a combination of segmentation-based image coding and fractal compression. The partitions are derived in a bottom-up approach using region merging. The image is first uniformly partitioned, and then neighbouring range pairs are successively merged, reducing the total number of partitioned blocks (ranges) one by one. Because of the large number of choices during the merging process, a heuristic strategy that performs well has to be applied. Moreover, an efficient coding scheme for the resulting partitions is proposed. The region merging strategy and the efficient partition coding have led to a much improved rate-distortion performance compared to the results reported in [105]; for example, a gain of about 5 dB PSNR is obtained for the Lena image at a compression ratio of 40. Compared to hierarchical tree-structured partitions, however, a higher rate is required for encoding the irregular partitions. This investment pays off in terms of an improved rate-distortion performance.

An efficient image compression algorithm based on Energy Clustering and Zero Quadtree Representation (ECZQR) in the wavelet transform domain has been proposed [106]. In embedded coding, the zeros within each subband are encoded in the framework of a quadtree representation instead of a zerotree representation. To use large rectangular blocks to represent zeros, the algorithm first uses morphological dilation to extract the arbitrarily shaped clusters of significant coefficients within each subband. This encoding method results in less distortion in the decoded image than line-by-line encoding. Experimental results show that the algorithm achieves high coding efficiency and fast encoding/decoding.

A hybrid coding system that uses a combination of SPIHT and vector quantization, essentially along the lines of the scheme of Hsin et al. described earlier, is also presented in [107]; this hybrid coding outperforms the SPIHT algorithm.

An efficient image segmentation algorithm has been developed using the Discrete Wavelet Frame Transform (DWFT) and a Multiresolution Markov Random Field (MMRF) [108]. This algorithm avoids the over-segmentation that is common in other segmentation algorithms. The experiments show that the proposed algorithm is very robust and can be used successfully under noisy conditions. However, the over-segmentation problem can be avoided only by deliberately choosing the level of the DWFT used for the MMRF.

A new approach of edge-preserving and edge-based segmentation for the compression of images, using a Modified Fast Haar Wavelet transform (MFHW) and a bit plane encoder to raise the compression ratio while maintaining high picture quality, has been presented [109]. The edges of the image are preserved to increase the PSNR, and the detected edges are then used to segment the foreground and background of the image.

The foreground of the image is given more importance than the background. A wavelet transform is used to extract the redundant information at low frequencies, and a matching bit plane encoder is used to code the segments of the image at different quality levels. This method strongly preserves the quality of the foreground image. Normal compression algorithms do not preserve high frequency details such as edges and corners; in this method, edges are preserved and used for segmenting the layers of the original image. The two-level Fast Haar Wavelet transform, which has good multi-resolution characteristics, is used to decompose the image into different frequency levels. This method increases both the compression ratio and the PSNR. However, it considers only the edge information in the image and loses the remaining geometrical features.

Every approach is found to have its own merits and demerits. VQ based hybrid approaches to image compression help in improving the PSNR while reducing the computational complexity. Good quality reconstructed images are obtained, even at low bit-rates, when wavelet based hybrid methods are applied to image coding. The powerful parallel processing and learning capability of neural networks can be fully exploited in the ANN based hybrid approaches to image compression. Predictive coding neural networks are found to be very suitable for the compression of text files; high encoding efficiency and good reproduction quality are realized with this type of compression. Fractal neural networks reduce the computation time of encoding/decoding, since the compression/decompression process can be executed in parallel. Simulated results show that the neural network approach can obtain a high compression ratio and a clear decompressed image. Wavelet networks applied to image compression provide improved efficiency compared to classical neural networks. By combining wavelet theory with the capability of neural networks, significant improvements in the performance of the compression algorithm can be realised. Results have shown that the wavelet network approach succeeds in improving performance and efficiency, especially for compression rates lower than 75%. The neuro-wavelet scheme can achieve good compression at low bit rates with good quality reconstructed images. It can be concluded that the integration of classical image compression methods with soft computing based methods enables a new way of achieving higher compression ratios.

Every approach is found to have its own merits and demerits. VQ based hybrid approaches to image compression help in improving the PSNR while reducing the computational complexity. Good quality reconstructed images are obtained, even at low bit rates, when wavelet based hybrid methods are applied to image coding. The powerful parallel processing and learning capability of neural networks can be fully exploited in ANN based hybrid approaches to image compression. Predictive coding neural networks are found to be very suitable for the compression of text files, giving high encoding efficiency and good reproduction quality. Fractal neural networks reduce the encoding/decoding time, since the compression and decompression processes can be executed in parallel; simulated results show that the neural network approach can obtain a high compression ratio and a clear decompressed image. Wavelet networks applied to image compression provide improved efficiency compared to classical neural networks. By combining wavelet theory with the learning capability of neural networks, significant improvements in the performance of the compression algorithm can be realised, and results have shown that the wavelet network approach succeeds in improving performance and efficiency, especially for compression rates lower than 75%. The neuro-wavelet scheme can achieve good compression at low bit rates with good quality reconstructed images. The integration of classical image compression methods with soft computing based methods thus enables a new way of achieving higher compression ratios.

The existing conventional image compression technology can be developed further by combining high performance coding algorithms in appropriate ways, such that the advantages of both techniques are fully exploited. This is a significant and wide research area, in the sense that various traditional image compression algorithms can be empowered with different successful algorithms to achieve better performance.

The merits and demerits of various transform based, wavelet based, segmentation based and hybrid image compression techniques have been analyzed in detail in this section. Transform based image compression is preferred in real-time applications because of its easy implementation. When edge information must be preserved to reduce visual artifacts, either segmentation based image compression or a multidirectional transform is preferred, which improves the compression ratio as well as the PSNR; the penalty for this improved performance is computational complexity. Even then, to this day, almost all of the proposed second generation algorithms are not as successful as state of the art (dyadic) wavelet coding. Hence hybrid transforms or techniques may be used to address the computational complexity. A detailed review of hybrid approaches to image coding has been published in the International Journal of Advanced Computer Science and Applications (IJACSA), 2011, vol. 2, no. 7. Inspired by recent progress in multivariate piecewise polynomial approximation [110], we put together the advantages of the classical method of coding using wavelets and of the segmentation based coding schemes into what can be described as a geometric wavelet approach. This thesis focuses on a hybrid technique of image coding that captures the geometry of the edge singularities in the image and achieves better coding performance.

2.3 COMMON IMAGE COMPRESSION STANDARDS

It is necessary to evolve coding standards so that there is compatibility and interoperability between the image communication and storage products manufactured by different vendors. Without standards, encoders and decoders cannot communicate with each other; service providers would have to support a variety of formats to meet the needs of their customers, and the customers would have to install a number of decoders to handle a large number of data formats.

Towards this objective, international standardization agencies such as the International Standards Organization (ISO), the International Telecommunications Union (ITU) and the International Electro-technical Commission (IEC) have formed expert groups and solicited proposals from industries, universities and research laboratories. This has resulted in standards for bi-level (facsimile) images and for continuous tone (gray scale) images. These standards use both lossless and lossy coding and compression techniques. The international compression standards include Fax [110], JBIG [111], JPEG [4], JPEG LS [112] and JPEG 2000 [20], while the industry standards [113] include BMP, PDF, PNG, GIF, TIFF, PICT, etc. The lossless image representation formats include BMP, PNG, GIF, TIFF, JPEG LS, etc.

BMP (Bitmap) [113] is a bitmapped graphics format used internally by the Microsoft Windows graphics subsystem (GDI) and commonly used as a simple graphics file format on that platform. It is an uncompressed format. PNG (Portable Network Graphics) [115] is a bitmap image format that employs lossless data compression. PNG was created to both improve upon and replace the GIF [116] format with an image file format that does not require a patent license to use. It uses the DEFLATE compression algorithm [117], which combines the LZ77 algorithm with Huffman coding [46]. The Graphics Interchange Format (GIF) [116] was created by CompuServe in 1987. It is widely used in web applications for lossless image compression and animations. Color images are limited to 256 colors, and the format uses lossless LZW coding [66]. TIFF (Tagged Image File Format) [113] is a file format mainly for storing images, including photographs and line art. It is one of the most popular and flexible of the current public domain raster file formats. Originally created by the company Aldus, jointly with Microsoft, for use with PostScript printing, TIFF is a popular format for high color depth images, along with JPEG and PNG. The TIFF format is widely supported by image manipulation applications and by scanning, faxing, word processing, optical character recognition and other applications.

The Joint Bi-level Image Experts Group (JBIG, of ISO/IEC and CCITT) defined the JBIG standard [111] for the compression of binary images; it uses context sensitive arithmetic coding [64], supports progressive image transmission and supports images with multiple bit planes.
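The DEFLATE scheme that PNG relies on is available directly in Python's standard zlib module, so its lossless behaviour can be demonstrated in a few lines; the synthetic 8-bit test image is an assumption, and a real PNG encoder additionally applies per-scanline prediction filters before the DEFLATE stage.

```python
# Lossless round trip through DEFLATE (LZ77 + Huffman coding) via zlib.
import zlib
import numpy as np

image = np.tile(np.arange(256, dtype=np.uint8), (256, 1))    # smooth synthetic ramp image

raw = image.tobytes()
compressed = zlib.compress(raw, 9)                            # DEFLATE stream, maximum effort
restored = np.frombuffer(zlib.decompress(compressed),
                         dtype=np.uint8).reshape(image.shape)

assert np.array_equal(restored, image)                        # perfect reconstruction
print("raw bytes:", len(raw), "compressed bytes:", len(compressed),
      "ratio: %.1f:1" % (len(raw) / len(compressed)))
```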

JPEG LS [112] is a lossless image compression standard created by the JPEG group. It uses adaptive predictive coding, context modeling (to update the probability models) and Golomb coding, and it is not as popular as the lossy JPEG standard.

The lossy image compression standards include the common JPEG (Joint Photographic Experts Group) format. JPEG [4] is an algorithm designed to compress 24 bit depth or greyscale images. It is a lossy compression algorithm that employs a transform coding method, namely the DCT (Discrete Cosine Transform). One of the characteristics that make the algorithm very flexible is that the compression rate can be adjusted: if more compression is applied, more information is lost, but the resulting image file is smaller; with a lower compression rate, better quality is achieved, but the resulting file is bigger. This control consists of making the coefficients in the quantization matrix bigger when more compression is needed and smaller when less compression is needed. The algorithm exploits two properties of the human visual system: first, humans are more sensitive to luminance than to chrominance; second, humans are more sensitive to changes in homogeneous areas than in areas with more variation (higher frequencies). JPEG is the most widely used format for storing and transmitting images on the Internet.

JPEG 2000 [20] is an updated version of the JPEG standard designed to increase compression rates and to support a number of advanced features such as region of interest coding and progressive coding. JPEG 2000 is a compression standard enabling both lossless and lossy storage. The design goal of JPEG 2000 was to provide a better rate-distortion tradeoff and improved subjective image quality. JPEG 2000 is based on the wavelet coding technique and provides increased flexibility both in the compression of continuous-tone still images and in access to the compressed data. Portions of the compressed image can be extracted for retransmission, storage, display or editing. Coefficient quantization is adapted to the individual scales and subbands, and the quantized coefficients are arithmetically coded [64], [65] on a bit-plane basis. The JPEG 2000 standard allows image resolutions greater than 64 K by 64 K without tiling, whereas the current JPEG standard, which has 44 modes, many of them application specific and not used by the majority of JPEG decoders, can handle image sizes only up to 64 K by 64 K. JPEG 2000 also provides improved error resilience for transmission in noisy environments such as wireless networks and the Internet, and it offers meta-data mechanisms for incorporating additional non-image data as part of the file. This might be useful, for example, for including text along with the imagery.
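The transform-and-quantize mechanism described above for JPEG can be sketched on a single 8 x 8 block as follows; the ramp-shaped quantization matrix and the quality scaling rule are illustrative assumptions rather than the tables of the standard, and the entropy coding of the quantized coefficients is omitted.

```python
# One 8x8 block through a JPEG-like DCT / quantise / dequantise / inverse-DCT cycle.
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_like_block(block, quality_scale=1.0):
    """Larger quality_scale means bigger quantisation steps and hence more compression."""
    i, j = np.indices((8, 8))
    Q = (8.0 + 4.0 * (i + j)) * quality_scale      # coarser steps for higher frequencies

    coeffs = dctn(block - 128.0, norm='ortho')     # level shift followed by 2-D DCT
    quantised = np.round(coeffs / Q)               # most high-frequency entries become zero
    reconstructed = idctn(quantised * Q, norm='ortho') + 128.0
    return reconstructed, quantised

block = np.add.outer(np.arange(8), np.arange(8)) * 16.0   # smooth synthetic 8x8 block
reconstructed, quantised = jpeg_like_block(block, quality_scale=2.0)
print("non-zero quantised coefficients:", int(np.count_nonzero(quantised)))
print("maximum reconstruction error:", float(np.abs(reconstructed - block).max()))
```

Increasing quality_scale enlarges the quantization steps, so fewer coefficients survive and the reconstruction error grows, which is the size versus quality trade-off described above.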

In addition, JPEG 2000 is able to handle up to 256 channels of information, whereas the current JPEG standard can handle only three color channels.

2.4 RESEARCH GAP

It is seen that wavelet based coding [21] provides considerable improvements in picture quality at higher compression ratios. The superior energy compaction properties of wavelets and their correspondence with the human visual system allow wavelet compression methods to produce good subjective results. Although the basic wavelet theory and the application of wavelets to image compression are well developed and widely accepted, modeling the joint behavior of wavelet coefficients along an edge remains a real challenge. In spite of providing exceptional rate-distortion performance, the transform based coding methods do not exploit the geometry of the edge singularities in an image.

In most images the majority of the information is contained in structured elements. Examples of such structures are the lines making up edges and the contours separating colored regions. Natural images also contain these components, though mixed with unstructured texture components. Such images are termed piecewise smooth objects or geometric objects. The structured elements in images are shown in Fig. 2.1. A smooth region of an image where there is grayscale regularity is called texture, and a smooth edge contour where there is geometric regularity is referred to as geometry. Both grayscale regularity and geometric regularity must be exploited to maximize the performance of compression algorithms.

Fig. 2.1: Structured elements in images

It was seen that the segmentation based image coding techniques [62] make efficient use of the underlying geometry of the edge singularities of an image.
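The cost of ignoring this geometric regularity can be illustrated with a few lines of Python: even for a perfectly straight edge in an otherwise constant image, a dyadic wavelet decomposition leaves significant detail coefficients all along the edge at every scale. The synthetic image, the db2 wavelet and the threshold below are assumptions chosen only for illustration.

```python
# Count the significant detail coefficients that a simple geometric edge
# produces at each scale of a dyadic wavelet decomposition.
import numpy as np
import pywt

N = 256
x, y = np.meshgrid(np.arange(N), np.arange(N))
image = (y > 0.7 * x + 30).astype(float) * 255.0   # piecewise-constant image, one straight edge

coeffs = pywt.wavedec2(image, 'db2', level=4)      # [cA4, details_4, details_3, details_2, details_1]
threshold = 1.0                                    # example significance threshold
for depth, details in zip(range(4, 0, -1), coeffs[1:]):
    significant = sum(int(np.count_nonzero(np.abs(band) > threshold)) for band in details)
    print("level %d details: %d significant coefficients" % (depth, significant))
```

The count grows roughly in proportion to the length of the edge at each scale, so a purely dyadic coder keeps paying for the same contour at every resolution; this is the redundancy that segmentation based and geometric wavelet approaches aim to exploit.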
