Compression of line-drawing images using vectorizing and feature-based filtering


Pasi Fränti, Eugene I. Ageenko
Department of Computer Science, University of Joensuu
P.O. Box 111, FIN-80101 Joensuu, FINLAND

Alexander Kolesnikov, Igor O. Chalenko
Institute of Automation and Electrometry, Russian Academy of Sciences
Pr-t Ak. Koptyuga, 1, Novosibirsk-90, 630090, RUSSIA

Abstract: A three-stage method for compressing bi-level line-drawing images is proposed. In the first stage, the raster image is vectorized using a combination of skeletonizing and line-tracing algorithms. A feature image is then reconstructed from the extracted vector elements. In the second stage, the original image is processed by a feature-based filter that removes noise near the borders of the extracted line elements. This improves the image quality and yields a more compressible raster image. In the final stage, the filtered raster image is compressed using the baseline JBIG algorithm. For a set of test images, the method achieves a compression ratio of 40:1, compared to 32:1 for the baseline JBIG.

Keywords: image compression, JBIG, raster-to-vector conversion, context modeling, feature-based filtering.

1. INTRODUCTION

Lossless compression of bi-level images has been well studied in the literature and several standards already exist [1]. In the baseline JBIG the image is coded pixel by pixel in raster-scan order using a context-based probability model and arithmetic coding [2]. The combination of already coded neighboring pixels defines the context. In each context, the probability distribution of the black and white pixels is adaptively determined. The current pixel is then coded by the QM-coder [3], the binary arithmetic coder adopted in JBIG. The baseline JBIG achieves compression ratios from 10 to 50 for typical A4-size images. The pixelwise dependencies are well utilized, and there is little room for improvement.
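The context-modeling principle can be illustrated with a small, runnable sketch. This is not the actual JBIG coder: JBIG uses a ten-pixel template and the QM arithmetic coder, whereas the sketch below uses a hypothetical three-pixel causal context and Laplace-smoothed counts to estimate the code length an ideal arithmetic coder would achieve.

```python
# Minimal sketch of context-based modeling for a bi-level image: each pixel's
# context is formed from already-coded neighbors, per-context black/white
# counts give an adaptive probability, and summing -log2(p) over the image
# estimates the achievable code length. (Illustration only; not JBIG's actual
# ten-pixel template or QM-coder.)
import math
from collections import defaultdict

def estimate_code_length(image):
    """image: list of rows of 0/1 ints. Returns estimated bits."""
    h, w = len(image), len(image[0])
    counts = defaultdict(lambda: [1, 1])  # Laplace-smoothed [white, black] counts
    bits = 0.0
    for y in range(h):
        for x in range(w):
            # Three-pixel causal context: west, north-west, north (0 outside).
            ctx = 0
            for dy, dx in ((0, -1), (-1, -1), (-1, 0)):
                ny, nx = y + dy, x + dx
                bit = image[ny][nx] if 0 <= ny < h and 0 <= nx < w else 0
                ctx = (ctx << 1) | bit
            c = counts[ctx]
            p = c[image[y][x]] / (c[0] + c[1])  # adaptive probability of the pixel
            bits += -math.log2(p)
            c[image[y][x]] += 1                 # update the model after coding
    return bits

# A structured image models far better than its raw size suggests:
stripes = [[x % 2 for x in range(32)] for _ in range(32)]
print(estimate_code_length(stripes))  # far fewer than the 1024 raw bits
```

The adaptive update after each pixel is what lets the model converge to the image statistics without any side information, mirroring the behavior described above.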
Remarkable improvement has been achieved only by specializing to known image types and exploiting global dependencies, or by extending the methods to lossy compression. For example, the methods in [4, 5] include a pattern-matching technique that extracts symbols from text images. The compressed file consists of the bitmaps of the library symbols coded by a JBIG-style compressor, the locations of the extracted marks as offsets, and a pixelwise coding of the matched symbols using a two-layer context template.

We study a similar approach for line-drawing images by utilizing global dependencies in images such as engineering drawings, cartographic maps, architectural and urban plans, schemes, and circuits (radio-electrical and topological). These kinds of images consist mainly of straight-line elements. Global information can be gathered by extracting line features from the image and utilizing them in the compression of the raster image.

We propose a three-stage compression method as outlined in Figure 1. In the first stage (vectorizing), vector elements are extracted from the image using raster-to-vector conversion. A feature image of equal size is created from the extracted line segments to approximate the input image. In the second stage (filtering), the original raster image is preprocessed by feature-based filtering to improve image quality. The feature image serves as a semantic model of the image, carrying non-local information that cannot be exploited by local spatial filters. In the third stage (compression), the filtered image is compressed by the baseline JBIG. The feature file is used only in the compression phase and therefore does not need to be stored in the compressed file. The feature extraction and filtering are considered preprocessing and are invisible in the decompression phase.

The method uses a standard image compression component, the baseline JBIG. The resulting output files are therefore standard JBIG files, and the decompression is exactly the same as in the baseline JBIG. The method can thus be easily integrated into existing compression systems. The vectorizing and filtering parts can be implemented as optional components and used on demand only.
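The overall flow can be sketched as below. All function names are hypothetical stand-ins, and zlib substitutes for a real JBIG codec purely to keep the sketch runnable; the point is that the decoder is a plain standard decoder that knows nothing about stages 1 and 2.

```python
# Sketch of the three-stage pipeline. The stage-1 and stage-2 bodies are
# placeholders (a real implementation extracts line segments and filters
# mismatch pixels); zlib stands in for the baseline JBIG codec.
import zlib

def vectorize(image):
    # Stage 1 stand-in: raster-to-vector conversion (Section 2).
    return []

def feature_filter(image, features):
    # Stage 2 stand-in: feature-based noise filtering (Section 3).
    # Here it is the identity.
    return image

def compress(image):
    features = vectorize(image)                 # stage 1: raster-to-vector
    filtered = feature_filter(image, features)  # stage 2: feature-based filtering
    return zlib.compress(filtered)              # stage 3: standard coding

def decompress(data):
    return zlib.decompress(data)                # no feature data needed

raw = bytes(64) + b"\xff" * 64
assert decompress(compress(raw)) == raw  # round-trip through a standard decoder
```

Because the features never enter the compressed stream, the round-trip assertion holds with any standard decoder, which is exactly the integration property claimed above.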

Figure 1: Block diagram of the three-stage compression method (compression: 1. vectorizing, 2. filtering, 3. JBIG compression; decompression: standard JBIG).

The method is near-lossless because only isolated groups of noise pixels can be inverted. Moreover, undetected objects (such as text characters) are left untouched, allowing their lossless reconstruction. Uncontrolled loss of image quality therefore cannot occur. The vectorizing is an essential part of the new method, but the method itself is independent of the chosen vectorizing component. The quality of the vectorizing affects only the amount of compression improvement that can be achieved. The quality of the output image is controlled by the filtering method. The details of the three stages are discussed in the following sections.

2. FEATURE EXTRACTION USING VECTORIZING

The vectorizing process is outlined in Figure 2. We apply the method described in [6]. The motivation is to find rigid fixed-length straight lines in the image. Each line segment is represented by its two end-points and the width of the line. A feature image is then reconstructed from the line segments and utilized in the filtering. The vector features are not stored in the compressed files; the vectorizing is considered an intelligent preprocessing stage. The details of the vectorizing process are described in the following subsections.

2.1 Skeletonization

The black-and-white raster image is processed by a distance transform (defined by 4-connectivity). The resulting width-labeled image is then skeletonized using the algorithm in [7]. It processes the pixels layer by layer, starting from the contour pixels (pixels with distance value 1). The 3×3 neighborhood of each pixel is checked, and the pixels satisfying one of the so-called multiplicity conditions are marked as skeletal pixels. The process is then iterated for the pixels of the next layer (pixels with distance value 2) and so on, until all layers are processed. The result of the algorithm is a width-labeled skeletal image. A fast and memory-efficient implementation of the algorithm was constructed using the ideas presented in [8].

2.2 Extraction of elementary vectors

Vector primitives are extracted from the skeletal image using a fast and simple line-tracing algorithm. We trace all branches of the skeleton, one pixel at a time, from one delimiter (line end or crossroad) to another. At this stage, no digression from the current direction is allowed during the traversal. The length of the vector primitives is limited to at most five pixels. The extracted lines are stored as single vector elements using the average width of the skeletal pixels as the line width. The actual implementation uses a LUT and cellular logic. Each pixel is processed by examining its neighborhood in a 3×3 window. An index is constructed from the neighboring pixel values, and a precalculated look-up table (of size 2^9) is accessed.

Figure 2: Block diagram of the vectorizing (skeletonization → skeletal pixels → vector element extraction → elementary vectors → pruning & analysis → final vector elements → feature reconstruction).
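The 2^9-entry LUT mechanism can be sketched as follows. The acceptance predicate below (a black pixel with exactly two black neighbors, i.e. a simple line-interior pixel) is a hypothetical stand-in for the paper's actual rules, which also take the tracing direction into account; only the indexing scheme itself is the point.

```python
# Sketch of the 2^9-entry LUT mechanism: the 3x3 neighborhood of a pixel is
# packed into a 9-bit index, and a precomputed 512-entry table answers a
# binary query with a single lookup. The predicate is a hypothetical
# stand-in for the actual acceptance rules.

def line_interior(index):
    bits = [(index >> b) & 1 for b in range(9)]
    center = bits[4]                  # bit 4 holds the window's center pixel
    return center == 1 and sum(bits) - center == 2

LUT = [line_interior(i) for i in range(512)]  # all 2^9 neighborhoods, precomputed

def neighborhood_index(image, y, x):
    """Pack the 3x3 window around (y, x) into a 9-bit index (0 outside)."""
    h, w = len(image), len(image[0])
    idx = 0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            ny, nx = y + dy, x + dx
            bit = image[ny][nx] if 0 <= ny < h and 0 <= nx < w else 0
            idx = (idx << 1) | bit
    return idx

img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
print(LUT[neighborhood_index(img, 1, 2)])  # True: interior pixel of the line
print(LUT[neighborhood_index(img, 1, 1)])  # False: an end-point, not interior
```

Precomputing all 512 answers turns an arbitrary neighborhood test into a single table access per pixel, which is why the traversal is so fast in practice.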

The current direction serves as the second index for the LUT. The actions for all situations are precalculated in advance and stored. The LUT gives a binary answer: whether the current pixel is accepted or rejected. The algorithm works extremely fast in practice.

2.3 Pruning and analysis

The vector primitives are further analyzed to construct larger elements. There are four classes of combined vector elements, each described by the two end-points and the width of the line:

Single point: (x1, y1, w1).
Single vector: (x1, y1, w1), (x2, y2, w2).
Chain of n vectors: {(xk, yk, wk) | k = 1, ..., n+1}.
Ring of n vectors: {(xk, yk, wk) | k = 1, ..., n+1} where x1 = xn+1 and y1 = yn+1.

Vector elements are combined (pruned) from primitives having a common end-point and the same orientation (within a certain error margin). Small gaps between the lines are filled and false branches are removed. The remaining vector chains are then classified as either good (linear) or bad (noise and non-linear). The good chains are stored by their coordinate differentials using a variable-length code. The bad chains are symbols and spots, and they are stored as raster objects.

3. FEATURE-BASED FILTERING

In the second stage, the original image is processed by a feature-based filter that removes noise near the borders of the extracted line elements. This improves the image quality and results in a more compressible raster image. The filtering is based on a simple noise removal procedure, as shown in Figure 3. A mismatch image is constructed from the differences between the original image and the feature image.

Figure 3: Block diagram of the noise removal procedure (mismatch detection by XOR → isolated pixel detection → noise removal).

Figure 4: Block diagram of the three-stage filtering procedure (noise removal with the feature image as such, dilated, and eroded).
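The noise-removal procedure can be sketched as below. The sketch makes explicit assumptions that may differ from the paper's exact rules: 4-connected grouping, inversion of mismatch groups of at most two pixels, and plain 4-connected dilation/erosion for the second and third passes.

```python
# Simplified sketch of the feature-based noise removal: XOR the original
# against the feature image, then invert only small isolated mismatch groups.
# Three passes use the feature image as such, dilated, and eroded.

def dilate(img):
    """4-connected binary dilation."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w:
                        out[ny][nx] = 1
    return out

def erode(img):
    """4-connected binary erosion (image border erodes away)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(img[ny][nx] for ny, nx in
                                ((y, x), (y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))))
    return out

def remove_noise(original, feature, max_group=2):
    """XOR the images, then invert mismatch groups of at most max_group pixels."""
    h, w = len(original), len(original[0])
    mismatch = {(y, x) for y in range(h) for x in range(w)
                if original[y][x] != feature[y][x]}
    out = [row[:] for row in original]
    seen = set()
    for start in mismatch:
        if start in seen:
            continue
        seen.add(start)
        group, stack = [], [start]          # flood-fill one mismatch group
        while stack:
            y, x = stack.pop()
            group.append((y, x))
            for n in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if n in mismatch and n not in seen:
                    seen.add(n)
                    stack.append(n)
        if len(group) <= max_group:         # isolated noise: snap to the feature
            for y, x in group:
                out[y][x] = feature[y][x]
    return out

def filter_image(original, feature):
    """Three passes: feature as such, dilated, eroded (cf. Figure 4)."""
    out = remove_noise(original, feature)
    out = remove_noise(out, dilate(feature))
    return remove_noise(out, erode(feature))
```

For a horizontal line with one stray pixel, the stray is snapped back to the feature image in the first pass, while larger mismatch regions (such as a wrongly estimated line width) are left untouched.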
Isolated mismatch pixels (and pixel groups of up to two pixels) are detected, and the corresponding pixels in the original image are inverted. This removes random noise and smooths the edges along the detected line segments.

The noise removal procedure is successful if the feature image is accurate. The vectorizing method, however, does not always provide the exact width of the lines. The noise removal procedure is therefore iterated three times, as shown in Figure 4. In the first stage the feature image is applied as such, in the second stage the feature image is dilated, and in the third stage it is eroded before being input to the noise removal procedure. This compensates for most of the inaccuracies in the width detection. See [9] for the details of morphological dilation and erosion.

The stepwise process is demonstrated in Figure 5 for a small image sample. Most of the noise is detected and removed in the first stage. In some cases, however, too many mismatch pixels are grouped together because of a wrong estimate of the line width, and therefore no pixels can be filtered.

Figure 5: Illustration of the three-stage filtering procedure (columns: feature image, mismatch pixels, filtered pixels, filtered image). The first row corresponds to the first stage, the second row to the dilated feature image, and the last row to the eroded feature image.

Even if these inaccuracies give the feature image a visually unpleasant appearance, they do not necessarily prevent effective filtering. For example, the rightmost diagonal line in the feature image is too wide, and its pixels are not filtered in the first two stages. The eroded version, however, gives a more accurate version of the line, and the noise pixels can be detected and filtered in the third stage. The result of the entire filtering process is illustrated in Figure 6.

Figure 6: Overall illustration of the feature-dependent filtering process (input image, feature image, output image, filtered pixels).

4. CONTEXT-BASED COMPRESSION

We apply the baseline JBIG as the compression component, but in principle there is no restriction against using any other compression method. For example, the progressive mode of JBIG [2] could also be utilized. In the baseline JBIG the pixels are coded by arithmetic coding in raster-scan order using a context-based probability model. The combination of already coded neighboring pixels defines the context. In each context, the probability distribution of the black and white pixels is adaptively determined. The current pixel is then coded by the QM-coder [3, 10], the binary arithmetic coder adopted in JBIG. The statistics start from scratch and are updated after the coding of each pixel. The model thus dynamically adapts to the image statistics during the coding process. The probability estimation and update are performed using the state automaton of the QM-coder.

5. TEST RESULTS

We compare the proposed method with the existing compression standards. The following methods are considered:

Uncompressed raster file
Compressed vector file
ITU Group 3
ITU Group 4
Baseline JBIG
Our method

The compressed vector file represents the result of the vectorizing when the chain-coded elements are compressed by ZIP (a commonly used universal data compression method). Baseline JBIG is the basic implementation of the JBIG binary image compression standard. ITU Group 3 and Group 4 are the older facsimile standards based on run-length encoding and the two-dimensional READ code [11, 12]. The methods are evaluated by compressing the test images shown in Figure 7.

Figure 7: The set of test images: Bolt (1765×1437, 317,590 bytes), Module (1480×2053, 379,805 bytes), and Plus (2293×1787, 512,199 bytes).

The compression results are summarized in Table 1.

Table 1: Summary of the compression results (in bytes).

         Original    Compressed    ITU       ITU       Baseline    Our
         raster      vector file   Group 3   Group 4   JBIG        method
Bolt       317,590       28,842     26,409    17,203     12,966     10,210
Module     379,805       12,430     18,979    10,801      7,671      5,798
Plus       512,199       38,837     32,269    21,461     17,609     14,581
TOTAL    1,209,043       80,109     77,657    49,465     38,246     30,589

The vector representation is not space-efficient because it cannot represent small objects efficiently. At the same time, our method and JBIG can compress the raster images to less than half of the size required by the vector file. The corresponding overall compression ratios are 15:1 for the vector file, 32:1 for JBIG, and 40:1 for our method.
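The reported ratios follow directly from the totals in Table 1:

```python
# The totals from Table 1 (bytes over the three test images) reproduce the
# reported compression ratios and relative improvements.
sizes = {
    "original": 1_209_043,
    "vector file (ZIP)": 80_109,
    "ITU Group 3": 77_657,
    "ITU Group 4": 49_465,
    "baseline JBIG": 38_246,
    "our method": 30_589,
}

for name, size in sizes.items():
    if name != "original":
        print(f"{name:18s} {sizes['original'] / size:5.1f}:1")
# vector file ~15.1:1, baseline JBIG ~31.6:1 (reported as 32:1),
# our method ~39.5:1 (reported as 40:1)

gain_vs_g4 = 1 - sizes["our method"] / sizes["ITU Group 4"]      # ~0.38
gain_vs_jbig = 1 - sizes["our method"] / sizes["baseline JBIG"]  # ~0.20
```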
To sum up, the proposed compression method outperforms the Group 4 method by 40% and the baseline JBIG by 20%. At the same time, the quality of the decompressed images is visually the same as the original, since only isolated mismatch pixels were reversed. The quality is sometimes even better, because the reversed pixels are mainly random noise or scanning noise near the line segments.

6. CONCLUSIONS

A three-stage compression method for compressing bi-level line-drawing images was proposed. In the first stage, the raster image is vectorized and a feature image is reconstructed from the extracted vector elements. In the second stage, the original image is processed by feature-based filtering that removes noise from the original image. This improves the image quality and therefore results in better compression performance. The actual compression is performed by the baseline JBIG. For a set of test images, the method achieves a compression ratio of 40:1, compared to 32:1 for the baseline JBIG.

A drawback of the method is that the compression phase is more complex and the method needs several passes over the image. Fortunately, the vectorizing can be performed rather quickly. Moreover, the vector features are not stored in the compressed file, and the process is therefore invisible in the decompression phase. The method can thus be considered a preprocessing step to JBIG, and the standard decompression routines can be applied.

The method could be developed further by improving the quality of the vectorizing. The width of the lines was sometimes predicted quite badly, resulting in a visually disturbing feature image. This does not affect the output image quality, but it may weaken the compression performance. This was partially compensated by the three-stage filtering process, which makes the noise removal less sensitive to inaccuracies in the vectorizing part.

7. ACKNOWLEDGEMENTS

The work of Pasi Fränti was supported by a grant from the Academy of Finland.

8. REFERENCES

[1] R.B. Arps, T.K. Truong, Comparison of international standards for lossless still image compression. Proceedings of the IEEE, 82, 889-899, June 1994.
[2] JBIG, Progressive Bi-level Image Compression, ISO/IEC International Standard 11544, ITU Recommendation T.82, 1993.
[3] W.B. Pennebaker, J.L. Mitchell, JPEG Still Image Data Compression Standard. Van Nostrand Reinhold, 1993.
[4] P.G. Howard, Text image compression using soft pattern matching, The Computer Journal, 40 (2/3), 146-156, 1997.
[5] I.H. Witten, A. Moffat, T.C. Bell, Managing Gigabytes: Compressing and Indexing Documents and Images. Van Nostrand Reinhold, New York, 1994.
[6] A.N. Kolesnikov, V.I. Belekhov, I.O. Chalenko, Vectorization of raster images, Pattern Recognition and Image Analysis, 6 (4), 786-794, 1996.
[7] C. Arcelli, G.S. di Baja, A one-pass two-operation process to detect the skeletal pixels on the 4-distance transform, IEEE Trans. on Pattern Analysis and Machine Intelligence, 11 (4), 411-414, 1989.
[8] A.N. Kolesnikov, E.V. Trichina, The parallel algorithm for the binary images thinning, Optoelectronics, Instrumentation and Data Processing, No. 6, 7-13, 1995.
[9] J. Serra, Image Analysis and Mathematical Morphology. Academic Press, London, 1982.
[10] W.B. Pennebaker, J.L. Mitchell, G.G. Langdon, R.B. Arps, An overview of the basic principles of the Q-coder adaptive binary arithmetic coder. IBM Journal of Research and Development, 32 (6), 717-726, 1988.
[11] CCITT, Standardization of Group 3 Facsimile Apparatus for Document Transmission, ITU Recommendation T.4, 1980.
[12] CCITT, Facsimile Coding Schemes and Coding Control Functions for Group 4 Facsimile Apparatus, ITU Recommendation T.6, 1984.

Authors:
Pasi Fränti, research fellow
Eugene I. Ageenko, PhD student
Department of Computer Science, University of Joensuu
P.O. Box 111, FIN-80101 Joensuu, FINLAND
E-mail: franti@cs.joensuu.fi

Alexander Kolesnikov, senior scientist
Igor O. Chalenko, graduate student
Institute of Automation and Electrometry
Pr-t Ak. Koptyuga, 1, Novosibirsk-90, 630090, RUSSIA
E-mail: Kolesnikov@iae.nsk.su