Histogram Based Block Classification Scheme of Compound Images: A Hybrid Extension

Professor S Kumar
Department of Computer Science and Engineering, JIS College of Engineering, Kolkata, India

Manuscript received September 2014. Dr. S Kumar, Department of Computer Science and Engineering, JIS College of Engineering, Kolkata, India. Mobile: +91 7686993174.

Abstract - This paper develops an efficient histogram-based block classification scheme for compressing compound images that contain graphics, text and pictures. The given compound image is segmented into 8x8 blocks, and these blocks are then classified into four types: background blocks, text blocks, mixed blocks and picture blocks. The classification efficiency is observed to be 97% for compound and computer-generated images. The proposed block classification method is simple and effective for compressing compound images, and this paper discusses its implementation in MATLAB.

Index Terms - Compound image, image segmentation, block classification, image compression

I. INTRODUCTION

A picture can say more than a thousand words. Unfortunately, storing an image can cost more than a million words. This is not always a problem, because many of today's computers are sophisticated enough to handle large amounts of data. Sometimes, however, limited resources must be used more efficiently: digital cameras, for instance, often have an unsatisfactory amount of memory, and internet connections can be slow. Images sent over the internet from digital cameras and personal computers increasingly include compound images, which occupy considerable space and take a long time to attach and transmit. Under such conditions compound image compression is needed, and it requires rethinking the usual approach to compression.

This paper adopts the block-based segmentation approach, which gives better results than the alternatives. In an object-based approach, complexity is the main drawback, since the image may require very sophisticated segmentation algorithms. In layer-based segmentation, the main drawbacks are the mismatch between the compression method and the data types, and an intrinsic redundancy due to the fact that the same parts of the original image appear in several layers. Block-based segmentation, by contrast, gives a better match between region boundaries and compression algorithms, and avoids this redundancy. The proposed block classification algorithm has low computational complexity, which makes it very suitable for real-time applications.

From a practical point of view, it is important to differentiate between computer-generated images and scanned or otherwise acquired images [2]. The main difference is that acquired images have a higher level of inherent noise, which affects both the segmentation strategy and the selection of the compression method.

Blocks of different types are distinct in nature and have different statistical properties. Background blocks are very flat and dominated by one kind of color. Text blocks are more compact in the spatial domain than in the DCT domain. Picture blocks concentrate their energy in the low-frequency coefficients when DCT transformed. Mixed blocks, containing both text and picture content, cannot be compactly represented in either the spatial or the frequency domain.
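
These energy-distribution properties are easy to check numerically. The following MATLAB fragment is a minimal illustration rather than part of the proposed scheme: the two test blocks are invented for the demonstration, and dct2 comes from the Image Processing Toolbox.

% Where does the DCT energy of an 8x8 block live? Smooth (picture-like)
% content concentrates in the low-frequency corner; text-like content
% does not. Both test blocks below are synthetic.
[x, y]  = meshgrid(0:7, 0:7);
smoothB = 128 + 10*x + 5*y;                  % gentle gradient: picture-like
textB   = 255 * double(mod(x + y, 2) == 0);  % black/white pattern: text-like
for blk = {smoothB, textB}
    C = dct2(blk{1});                        % 8x8 two-dimensional DCT
    C(1,1) = 0;                              % drop the DC term; compare AC energy
    E = C.^2;                                % coefficient energy
    lowShare = sum(sum(E(1:4, 1:4))) / sum(E(:));   % share in the low band
    fprintf('low-frequency energy share: %.2f\n', lowShare);
end

With the DC term removed, the gradient block's AC energy falls almost entirely in the low band, while the checkerboard's sits at the high-frequency corner, matching the cues described above.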

II. BLOCK BASED COMPRESSION

Compressing a compound image is a hard problem because the image combines text, picture, background and mixed block types. The problem is well addressed in the JPEG2000 standard. In the past, compression research focused on developing better algorithms; the future focus is likely to be on combining various algorithms to achieve the best compression performance for the given types of images. Many algorithms have been designed to compress the different components of compound images. Run-length coding is well suited to compressing background blocks. The Lempel-Ziv algorithm is designed for pure text images, which contain only text on a plain-color background. Because the text blocks here are images rather than character data, applying the Lempel-Ziv algorithm is not feasible and is more complex; wavelet compression is suitable for text blocks instead. The JPEG family of algorithms suits pure picture images, and the popular video coding standard H.264 [1] gives better performance on mixed blocks.

Fig.1 Block based compression

The framework of the block-based compression scheme is shown in Fig.1. The compound image is first divided into 8x8 blocks. The blocks are then classified into four types, background, text, mixed and picture, according to their different statistical characteristics, and blocks of different types are compressed with different algorithms. The proposed scheme can effectively compress mixed blocks, which are not well handled by some block-based algorithms. It achieves good coding performance on text images, picture images and compound images, and it also outperforms DjVu on compound images at high bit rates [3]. The block type map is compressed using an arithmetic coder.
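
This framework can be condensed into a short MATLAB skeleton, shown below as a sketch rather than the paper's implementation: classify_block is the histogram classifier sketched after Section IV, while rle_encode, wavelet_encode, jp2_encode and h264_intra are hypothetical stand-ins for the four per-type coders named above.

% Skeleton of the block-based pipeline of Fig.1 (helper names are illustrative).
function [streams, typeMap] = compress_compound(img, T)
    rows = size(img, 1) / 8;              % image assumed a multiple of 8
    cols = size(img, 2) / 8;
    typeMap = strings(rows, cols);        % per-block labels
    streams = cell(rows, cols);           % per-block bitstreams
    for i = 1:rows
        for j = 1:cols
            blk = img(8*i-7:8*i, 8*j-7:8*j);          % next 8x8 block
            typeMap(i, j) = classify_block(blk, T);   % Section IV classifier
            switch char(typeMap(i, j))                % dispatch by block type
                case 'background', streams{i, j} = rle_encode(blk);
                case 'text',       streams{i, j} = wavelet_encode(blk);
                case 'picture',    streams{i, j} = jp2_encode(blk);
                case 'mixed',      streams{i, j} = h264_intra(blk);
            end
        end
    end
    % typeMap itself is then entropy-coded with an arithmetic coder.
end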
III. BLOCK SEGMENTATION

Segmentation subdivides an image into its constituent regions or objects. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics. The result of image segmentation is a set of segments that collectively cover the entire image, or a set of contours extracted from the image. Each pixel in a region is similar with respect to some characteristic or computed property, such as color, intensity or texture [4], while adjacent regions differ significantly with respect to the same characteristics. The general approach to compressing a compound image includes segmenting the image into regions of similar data types. Bandwidth is an important limiting factor in applications of image segmentation [5]. In this paper the given image is segmented into 8x8 blocks, and these blocks are then used in the classification process.

IV. BLOCK CLASSIFICATION SCHEME

Block classification assigns each segmented block to one of the four block types. Classification is performed using histogram values. A histogram is a graphical representation giving a visual impression of the distribution of the data. Threshold values are set based upon the histogram, and these thresholds classify each block as a text, picture, background or mixed block. The result is a fast and effective classification algorithm based on three features: the histogram, the gradient of the block and the anisotropy values. The entire block classification flow is shown in Fig.2.

Fig.2 Block classification scheme

Blocks of different types are distinct in nature and have different statistical properties. Background blocks contain only low histogram values and show a single peak there. Text blocks show several peaks among the low histogram values (LHV) and the high histogram values (HHV), as shown in Fig.3, with only a few mid-histogram values (MHV). If a block contains large numbers of high and mid histogram values, it is identified as a mixed block. A block consisting mainly of mid histogram values is declared a picture block. Thresholds T1-T4 are chosen to determine the block type.

Fig.3 Three histograms: picture block (top), text block (middle) and mixed block (bottom)
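
A compact version of this decision rule can be written directly from the prose above. The bin boundaries for LHV/MHV/HHV and the threshold values T1-T4 below are illustrative assumptions, since the paper does not publish its exact settings; only base MATLAB functions are used.

% Histogram-based block classifier of Section IV (illustrative thresholds).
function blockType = classify_block(blk, T)
    % blk: 8x8 uint8 grayscale block; T: struct with fields T1..T4.
    h   = histcounts(double(blk(:)), 0:256);  % 256-bin intensity histogram
    lhv = sum(h(1:86));                       % low histogram values  (assumed split)
    mhv = sum(h(87:170));                     % mid histogram values
    hhv = sum(h(171:256));                    % high histogram values
    if max(h) >= T.T1                         % one dominant color: flat block
        blockType = 'background';
    elseif mhv <= T.T2 && lhv > 0 && hhv > 0  % peaks at both ends, few mid values
        blockType = 'text';
    elseif mhv >= T.T3 && hhv >= T.T3         % many mid and high values together
        blockType = 'mixed';
    elseif mhv >= T.T4                        % mainly mid values
        blockType = 'picture';
    else
        blockType = 'mixed';                  % fall-back for ambiguous blocks
    end
end

For instance, classify_block(blk, struct('T1',56,'T2',4,'T3',16,'T4',24)) labels a block as background when at least 56 of its 64 pixels share one intensity; suitable values in practice would be tuned on the image set.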

V. BLOCK CODING

Block coding applies four algorithms to compress the individual blocks: wavelet coding, run-length encoding, the JPEG2000 algorithm and the H.264 algorithm, for the text, background, picture and mixed blocks respectively. The individually coded blocks are then combined, with the block type map handled by an arithmetic coder [6], to obtain the compressed image, and the decompression algorithm recovers the original image without any defects. As noted above, blocks of different types have different statistical properties: background blocks are very flat and dominated by one color (white), text blocks are more compact in the spatial domain than in the DCT domain, picture blocks concentrate their energy in low-frequency DCT coefficients, and mixed blocks containing both text and picture content cannot be compactly represented in either the spatial or the frequency domain.

A. Background Block Coding Algorithm

The coding of background blocks is straightforward. Background blocks are dominated by the white region, peaking at a single point, and their grey-scale levels are limited by the given threshold values. All values in a background block are quantized to the most frequent color, which is then coded with a run-length encoder [7]; a short sketch of this coder follows subsection C below. Run-length encoding (RLE) is a very simple form of data compression in which runs of data, i.e. sequences in which the same data value occurs in many consecutive data elements, are stored as a single data value and count rather than as the original run. This is most useful on data that contains many such runs, for example simple graphic images such as icons, line drawings, animations and white space. It is not useful on files without many runs, where it can greatly increase the file size. Run-length encoding is lossless and is well suited to palette-based iconic images. It does not work well on continuous-tone images such as photographs, although JPEG uses it quite effectively on the coefficients that remain after transforming and quantizing image blocks.

B. Text Block Coding Algorithm

Wavelet coding is used to compress the text blocks. Wavelet theory provides a way to analyze and transform data. It can make explicit the correlation between neighboring pixels of an image, and this explicit correlation can be exploited by compression algorithms to store the same image more efficiently [8]. Wavelets can also separate an image into more and less important data items; by storing only the important ones, the image can be stored far more compactly, at the cost of introducing hardly noticeable distortions. Because the text blocks are images rather than character data, the Lempel-Ziv algorithm is not feasible and is more complex [9]; wavelet-based compression overcomes this problem and provides efficient compression of text blocks.

C. Mixed Block Coding Algorithm

The video compression standard H.264 (also known as MPEG-4 Part 10/AVC, for Advanced Video Coding) is expected to become the video standard of choice in the coming years. H.264 is an open, licensed standard that supports the most efficient video compression techniques available today. Without compromising image quality, an H.264 encoder can reduce the size of a digital video file by more than 80% compared with the Motion JPEG format, and by as much as 50% more than the MPEG-4 Part 2 standard. Context-adaptive binary arithmetic coding (CABAC) is a form of entropy coding used in H.264/MPEG-4 AVC video encoding. It is a lossless compression technique, notable for providing much better compression than most other encoding algorithms used in video coding, and it is one of the primary advantages of the H.264/AVC encoding scheme [10]. H.264 CABAC coding therefore provides efficient compression of the mixed blocks.
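
Returning to subsection A, the background coder can be sketched in a few lines: quantize the block to its dominant gray value, then store (value, run-length) pairs. This is a minimal sketch; the function name and the quantization tolerance are illustrative, not taken from the paper.

% Background-block coder: quantize to the dominant color, then run-length encode.
function runs = rle_encode(blk)
    v = double(blk(:));                    % column-scan the 8x8 block
    dominant = mode(v);                    % most frequent gray value
    v(abs(v - dominant) <= 8) = dominant;  % snap near-background pixels (assumed tolerance)
    starts = [true; diff(v) ~= 0];         % logical marks where a new run begins
    vals = v(starts);                      % value of each run
    lens = diff([find(starts); numel(v) + 1]);  % length of each run
    runs = [vals, lens];                   % (value, count) pairs
end

The matching decoder is one line, repelem(runs(:,1), runs(:,2)), reshaped back to 8x8.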

D. Picture Block Coding Algorithm

The aim of JPEG 2000 is not only to improve compression performance over JPEG but also to add or improve features such as scalability and editability [9]. In fact, JPEG 2000's improvement in compression performance relative to the original JPEG standard is rather modest and should not ordinarily be the primary consideration in evaluating the design. Very low and very high compression rates are supported in JPEG 2000; indeed, the graceful ability of the design to handle a very large range of effective bit rates is one of its strengths. For example, to reduce the number of bits for a picture below a certain amount with the first JPEG standard, the advisable approach is to reduce the resolution of the input image before encoding it [11]. That is unnecessary with JPEG 2000, which does this automatically through its multi-resolution decomposition structure. Compared to the previous JPEG standard, JPEG 2000 delivers a typical compression gain of around 20%, depending on the image characteristics. Higher-resolution images tend to benefit more, where JPEG 2000's spatial-redundancy prediction can contribute more to the compression process [12-14]. In very low-bitrate applications, studies have shown JPEG 2000 to be outperformed by the intra-frame coding mode of H.264. Using JPEG 2000 therefore makes the compression of picture blocks easy and effective.
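
If the picture-block stage is realized in MATLAB, one simple route is imwrite's built-in JPEG 2000 support. The following is a minimal sketch with an arbitrary file name and compression ratio; the paper does not state how its JPEG 2000 coder is invoked.

% Write a picture region as lossy JPEG 2000 (a 20:1 ratio is arbitrary).
pictureRegion = imread('picture_region.png');   % hypothetical picture data
imwrite(pictureRegion, 'picture_region.jp2', ...
        'Mode', 'lossy', 'CompressionRatio', 20);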

VI. EXPERIMENTAL RESULT

The well-known toy store compound image shown in Fig.4 is taken as the input to the proposed block classification scheme. The segmented image is shown in Fig.5, and the classified text, mixed, background and picture blocks are shown in Figs. 6, 7, 8 and 9 respectively. The proposed system has been simulated in the MATLAB SIMULINK environment.

Fig.4 Input image
Fig.5 Segmented image
Fig.6 Classified text blocks
Fig.7 Classified mixed blocks
Fig.8 Classified background blocks
Fig.9 Classified picture blocks

VII. CONCLUSION

The block classification scheme was tested on many compound and computer-generated images. As Table 1 shows, the proposed block classification scheme is 97% efficient for compound images. It does not achieve the same consistency for other types of images; in practice, however, there is no need to classify other types of images, since they can be compressed effectively as a whole. Considering that the sensitivity of the human eye can tolerate this 3% block classification mismatch, the proposed scheme can be argued to be an efficient block classification scheme for compressing compound images. It is very simple and effective, and it reduces computational complexity.

ACKNOWLEDGEMENT

I sincerely thank the Management, Director, Deputy Director and General Manager of the JIS Group for their extended support in completing this project successfully.

REFERENCES

[1] C. Lan, G. Shi and F. Wu, "Compress compound images in H.264/MPEG-4 AVC by exploiting spatial correlation," IEEE Transactions on Image Processing, vol. 19, no. 4, pp. 946-957, April 2010.
[2] R. Aparna, D. Maheshwari and V. Radha, "Performance evaluation of H.264/AVC compound image compression system," International Journal of Computer Applications, vol. 1, no. 10, pp. 48-54, Feb. 2010.
[3] W. Ding, Y. Lu and F. Wu, "Enable efficient compound image compression in H.264/AVC intra coding," IEEE Transactions on Image Processing, vol. 10, no. 3, pp. 337-340, Sep. 2009.

[4] D. J. Jagannath and S. Pandiaraj, "Lossless compression of a desktop image for transmission," International Journal of Recent Trends in Engineering, vol. 2, no. 3, pp. 27-29, Nov. 2009.
[5] A. Said and A. Drukarev, "Simplified segmentation for compound image compression," in Proc. ICIP, 2009, pp. 229-233.
[6] P. Haffner, L. Bottou, P. G. Howard, P. Simard, Y. Bengio and Y. LeCun, "High quality document image compression with DjVu," Journal of Electronic Imaging, pp. 410-425, July 2008.
[7] J. Ziv and A. Lempel, "A universal algorithm for data compression," IEEE Trans. on Information Theory, vol. IT-23, no. 3, pp. 337-343, May 2006.
[8] B.-F. Wu, C.-C. Chiu and Y.-L. Chen, "Algorithms for compressing compound document images with large text/background overlap," IEE Proc. Vis. Image Signal Process., vol. 151, no. 6, pp. 453-459, Dec. 2008.
[9] D. S. Taubman and M. W. Marcellin, JPEG2000: Image Compression Fundamentals, Standards and Practice. Dordrecht, Netherlands: Kluwer Academic Publishers, 2001.
[10] H. Cheng and C. A. Bouman, "Multiscale Bayesian segmentation using a trainable context model," IEEE Trans. Image Processing, vol. 10, pp. 511-525, April 2001.
[11] D. Mukherjee, N. Memon and A. Said, "JPEG-matched MRC compression of compound documents," in Proc. IEEE Int. Conf. Image Processing, vol. 3, Oct. 2001, pp. 434-437.
[12] S. Kumar, "Neural network based efficient block classification of computer screen images for desktop sharing," IJARCSSE, vol. 4, issue 8, pp. 703-711, Aug. 2014.
[13] S. Kumar, "Wavelet sub-band block coding based lossless high speed compression of compound image," IJARCST, vol. 2, issue 3, pp. 259-264, Sept. 2014.
[14] D. Mukherjee, C. Chrysafis and A. Said, "Low complexity guaranteed fit compound document compression," in Proc. IEEE Int. Conf. Image Processing, vol. 1, Sept. 2002, pp. 225-228.

AUTHOR PROFILE

Professor S Kumar, Department of Computer Science and Engineering, JIS College of Engineering, Kolkata, is one of the renowned academicians in the field of engineering education. He has two decades of teaching and six years of research experience. He received his B.E., M.E. and Ph.D. from premier and reputed engineering universities in India. He is a member of many standard engineering societies and a review-committee member of various international journals. He has completed many research projects in digital image processing, network security and digital signal processing.

S.No  Compound   Background       Text             Picture          Mixed            Efficiency
      Image      Actual  Ident.   Actual  Ident.   Actual  Ident.   Actual  Ident.
1.    Comp1        12      12       7       7        5       4       40      41       96.9%
2.    Comp2        20      20      13      12        8      10       23      24       93.8%
3.    Comp3        14      14      16      16       17      16       17      18       96.9%
4.    Comp4        15      15       9       9       10       9       30      31       96.9%
5.    Slide1       10      10      17      17       15      15       22      22       100%
6.    Slide2       11      11      18      18       12      14       23      21       93.8%
7.    Poster1      15      15      10       9       11      11       28      29       96.9%
8.    Poster2      18      18       8       8        8       9       30      29       96.9%
9.    Desktop1     29      29      10      10       10       9       15      16       96.9%
10.   Desktop2     23      23      11      11        9       9       21      21       100%
Overall efficiency: 96.9%

Table.1 Comparison between the actual and identified background, text, picture and mixed blocks of the proposed block classification scheme for different compound images
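
The efficiency column of Table 1 is consistent with each test image containing 64 blocks (the four actual counts in every row sum to 64) and with every unit of mismatch between an actual and an identified count being one misclassified block. The following MATLAB check of that assumed reading uses the first two rows of the table.

% Reproduce Table 1's efficiency for Comp1 and Comp2 (64 blocks per image).
actual     = [12  7  5 40; 20 13  8 23];  % background, text, picture, mixed
identified = [12  7  4 41; 20 12 10 24];
miss = sum(abs(actual - identified), 2);  % misclassified blocks per image
eff  = 100 * (64 - miss) / 64;            % 96.9 and 93.8, matching the table
fprintf('%.1f%%\n', eff);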