Compressing 2-D Shapes using Concavity Trees


O. El Badawy (1) and M. S. Kamel (2)
(1) Dept. of Systems Design Engineering, (2) Dept. of Electrical and Computer Engineering, Pattern Analysis and Machine Intelligence Research Group, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada

Abstract. Concavity trees have long been known as structural descriptors of 2-D shape, but only recently have they been explored further. This paper shows how 2-D shapes can be concisely, yet reversibly, represented during concavity tree extraction. The representation can be exact, or approximate to a pre-set degree, which is equivalent to a lossless or lossy compression of the image containing the shape. The paper details the proposed technique and reports near-lossless compression ratios that are 1% better than the JBIG standard on a test set of binary silhouette images.

1 Introduction and Background

A concavity tree is a data structure used for describing non-convex two-dimensional shapes. It was first introduced by Sklansky [1] and has since been further researched by others [2-9]. A concavity tree is a rooted tree in which the root represents the whole object whose shape is to be analysed or represented. The next level of the tree contains nodes that represent concavities along the boundary of that object. Each node on the following levels represents one of the concavities of its parent, i.e., its meta-concavities. If an object or a concavity is itself convex, the node representing it has no children. Figure 1 shows an example of a shape (a); its convex hull, concavities, and meta-concavities (b); and its corresponding concavity tree (c). The shape has five concavities, as reflected in level one of the tree. The four leaf nodes in level one correspond to the highlighted triangular concavities shown in (d), whereas the non-leaf node corresponds to the (non-convex) concavity shown in (e). Similarly, the nodes in levels two and three correspond to the meta-concavities highlighted in (f) and (g), respectively. Typically, each node in a concavity tree stores information pertinent to the part of the object it describes (a feature vector, for example), in addition to tree meta-data such as the level of the node and the height, number of nodes, and number of leaves of the subtree rooted at that node.

We recently proposed an efficient (in terms of space and time) contour-based algorithm for concavity tree extraction [9] and showed that it surpasses other concavity tree extraction methods [6] in terms of speed and of the accuracy of the reconstructed image as a function of the number of nodes used in the reconstruction. In this paper, we explore the space efficiency of the method and compare it to the JBIG standard compression algorithm.
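To make the node contents concrete, the following minimal Python sketch (ours, not the authors' data layout; the field names are illustrative, and the subtree statistics are computed on demand rather than stored) captures what such a node typically holds:

from dataclasses import dataclass, field
from typing import List

@dataclass
class ConcavityNode:
    """One node of a concavity tree: the root describes the whole object,
    every other node one concavity (or meta-concavity) of its parent."""
    features: List[float] = field(default_factory=list)   # e.g. area, depth, ...
    level: int = 0                                         # depth of this node in the tree
    children: List["ConcavityNode"] = field(default_factory=list)

    def is_convex(self) -> bool:
        # A convex object or concavity has no concavities of its own.
        return not self.children

    def subtree_nodes(self) -> int:
        return 1 + sum(c.subtree_nodes() for c in self.children)

    def subtree_leaves(self) -> int:
        return 1 if not self.children else sum(c.subtree_leaves() for c in self.children)

    def subtree_height(self) -> int:
        return 0 if not self.children else 1 + max(c.subtree_height() for c in self.children)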

Fig. 1. An object (a), its convex hull and concavities (b), the corresponding concavity tree (c), and contour sections corresponding to concavities (d-g).

With some modifications to the base algorithm, we are able to achieve near-lossless compression with ratios 1% better than those of JBIG, and a subjectively imperceptible error of around 0.006. The resulting compact representation is not the tree itself, but rather a sequence of vertices generated while the tree is extracted. The accuracy of the representation, and consequently the compression ratio, is controlled by specifying the minimum depth a concavity must have in order to be taken into consideration. One direct advantage of this compressed representation is that the shape at hand can be analysed without the need to fully decompress the image. The representation can also be interpreted as a user-controlled polygonal approximation whose degree of fidelity to the original shape is governed by the same parameter that controls the compression ratio. The next section explains the method, while Section 3 discusses experimental results.

2 The Proposed Algorithm

Consider a 256x256 binary image containing a 128x128 black (filled) square. This image has an uncompressed size of 8 KB (1 bpp), and JBIG is able to losslessly compress it at about 80:1. If, however, we only store the four corners of the square in a vector-graphics fashion (which is enough information to losslessly reconstruct it), we can achieve much higher ratios (around 800:1). The proposed method generalizes this concept to any binary image, but it is particularly suited to silhouette images, single or multiple, with or without holes.
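The arithmetic behind that example is simple enough to check directly; the snippet below assumes one byte per stored coordinate value (sufficient for a 256x256 grid) and two bytes for the image dimensions, which are our assumptions rather than the paper's exact file layout.

# Filled 128x128 square inside a 256x256 binary image (1 bit per pixel).
uncompressed_bytes = 256 * 256 // 8              # 8192 bytes = 8 KB
# Vector-graphics representation: image dimensions plus four corner vertices,
# one byte per coordinate value in the range 0..255 (assumed layout).
vector_bytes = 2 + 4 * 2                         # 10 bytes
print(round(uncompressed_bytes / vector_bytes))  # ~819, i.e. roughly 800:1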

Algorithm 1 Concavity Tree Extraction and Compression

Notation: I is the input image; F is the set of foreground ("1") pixels (representing the shape in I); B is the set of background ("0") pixels; C is the contour of F; T is a rooted tree (the concavity tree of the shape in I); S is the output sequence; N is a node in T.
Require: I is bilevel, F is 8-connected, and B is 4-connected.

1: C ← contour of F
2: T, S = fct(C)

Function T, S = fct(C)
3: S ← [] {initialise sequence S}
4: H ← convex hull of C
5: re-arrange H so that it is a subsequence of C
6: T ← NIL
7: new N {instantiate a new tree node}
8: N.data ← H {in addition to any features as necessary}
9: T ← N {T now points to N}
10: for each pair of consecutive points p1 and p2 in H do
11:   C2 ← subsequence of C bounded between p1 and p2 {C2 is a concave section along contour C}
12:   S2 ← []
13:   if depth(C2) > mindepth then
14:     T2, S2 = fct(C2)
15:     N.newchild ← T2 {T has a new subtree T2}
16:   end if
17:   S ← S, p1, S2, p2 {such that no two consecutive elements are identical}
18: end for

We focus on the case of an image containing a single object with no holes. The extension to multiple objects (with or without holes) builds on this case; an example is presented in Section 3, but, due to space constraints, the details are omitted. The main steps of the compression algorithm are shown in Algorithm 1. The input image I is a binary image. The condition that the set of foreground pixels F is 8-connected and the set of background pixels B is 4-connected ensures that there is only one object with no holes in I (provided that F does not intersect the boundary of I). The output of the algorithm is a sequence S of pixels along the contour of F, generated during the concavity tree extraction process. If, for example, F is a rectangle, S will be the clockwise (or anti-clockwise) sequence of its four corner pixels. The algorithm computes the convex hull of F and makes it the output sequence. It then iterates over each pair of consecutive hull vertices and inserts, between each pair in the sequence, all the subsequences generated by recursively calling the main function on the section of the contour bounded between the two points at hand. The sequence is only updated if the vertex differs from the one immediately before it.
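The following self-contained Python sketch mirrors the structure of Algorithm 1. The convex-hull and depth helpers are simple stand-ins for the contour-based machinery of [9], the names and data layout are ours, and the example contour is a sparse corner list rather than a pixel chain; it illustrates the recursion and the construction of S, not the authors' implementation.

import math

def _cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a left turn.
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_subsequence(C):
    """Convex hull vertices of contour C, re-ordered so that they form a
    subsequence of C (cf. lines 4-5 of Algorithm 1); Andrew's monotone chain."""
    pts = sorted(set(C))
    if len(pts) <= 2:
        hull = set(pts)
    else:
        lower, upper = [], []
        for p in pts:
            while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
        for p in reversed(pts):
            while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
                upper.pop()
            upper.append(p)
        hull = set(lower[:-1] + upper[:-1])
    order = {}
    for i, p in enumerate(C):          # first occurrence of each point along C
        order.setdefault(p, i)
    return sorted(hull, key=lambda p: order[p])

def depth(C2, p1, p2):
    """Depth of a concave section C2: maximum distance of its points from the
    chord p1-p2 (an assumed, straightforward reading of 'depth')."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = math.hypot(dx, dy) or 1.0
    return max(abs(dx * (y - p1[1]) - dy * (x - p1[0])) / norm for x, y in C2)

def fct(C, mindepth=0, closed=True):
    """Return (tree, S): a concavity-tree node and the output vertex sequence."""
    S = []
    H = hull_subsequence(C)
    node = {"hull": H, "children": []}
    pairs = len(H) if closed else len(H) - 1   # only the root contour wraps around
    for i in range(pairs):
        p1, p2 = H[i], H[(i + 1) % len(H)]
        i1, i2 = C.index(p1), C.index(p2)
        C2 = C[i1:i2 + 1] if i1 <= i2 else C[i1:] + C[:i2 + 1]
        S2 = []
        if len(C2) > 2 and depth(C2, p1, p2) > mindepth:
            child, S2 = fct(C2, mindepth, closed=False)
            node["children"].append(child)
        for p in [p1] + S2 + [p2]:
            if not S or S[-1] != p:            # no two consecutive identical vertices
                S.append(p)
    return node, S

# Example: an L-shaped contour given by its corner points only.
L_contour = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
tree, S = fct(L_contour, mindepth=0)
print(S)   # closed vertex sequence that a polygon-fill pass can reconstruct from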

Fig. 2. Test set (originals).

The number of rows and columns in the image, as well as the sequence S, are stored linearly on disk. (When the best bit-per-pixel resolution is used, it was found that RLE compression results in no additional size reduction, an indication that the resulting file is quite compact and has near-maximum entropy.) The complexity of the algorithm is O(nh), where n is the number of contour pixels and h is the height of the resulting tree; more details on the underlying base algorithm can be found in [9]. (We note that the convex hull of a contour can be computed in O(n).)

The reconstruction is done by a polygon-filling algorithm applied to the resulting sequence of vertices S. Even though the pixels in S are usually only a small subset of the pixels in C, they are always enough for an exact (lossless) reconstruction of the original set F. By controlling the parameter mindepth (line 13 of Algorithm 1), shallow concavities along C can be ignored, reducing the length of S and therefore increasing the compression ratio. A mindepth value of zero results in lossless compression. A mindepth value of one, on the other hand, results in near-lossless compression with ratios that are usually much higher than in the lossless case (for an n x n image, where approximately 32 < n < 256). The method also allows shape information to be extracted, possibly for shape retrieval and matching purposes, from the compressed domain, that is, without the need to fully decompress the image. This is possible because the concavity tree of the shape can be extracted directly from the compressed domain, without having to reconstruct the image and then find its contour(s); the tree can then be used for shape representation, matching, and retrieval as per [7,8], for example.
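On the decoder side, reconstruction only needs the stored dimensions and the vertex sequence. The sketch below uses Pillow's polygon fill as a stand-in for the paper's polygon-filling routine; vertices are assumed to be (x, y) tuples as in the encoder sketch above, and the boundary-pixel convention of the fill may differ slightly from the authors' implementation.

from PIL import Image, ImageDraw

def reconstruct(S, rows, cols):
    """Rasterise the decoded vertex sequence S back into a binary image.
    S must contain at least three (x, y) vertices."""
    img = Image.new("1", (cols, rows), 0)              # blank background
    ImageDraw.Draw(img).polygon(S, outline=1, fill=1)  # fill the polygon described by S
    return img

# Lossless versus lossy behaviour is decided entirely on the encoder side:
#   tree, S = fct(contour, mindepth=0)   # exact reconstruction of F
#   tree, S = fct(contour, mindepth=1)   # shallow concavities dropped, shorter S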

Fig. 3. Compression ratio versus error rate for the 37 images shown in Figure 2.

3 Experimental Results

We test the method on a set of 37 binary trademark images (see Figure 2) and compare the resulting compression ratios with those of JBIG. Figure 3 plots the reconstruction error as a function of the compression ratio, averaged over the 37 images. The average compression ratio of JBIG on the 37 images was 11.5:1. For a lossless reconstruction, our method achieved a compression ratio of 5.7:1. With a near-lossless reconstruction, however (examples are shown in Figures 4 and 5), the compression ratio averages 17.4:1, with an average error of 0.006. The method extends readily to multi-silhouette images, with or without holes, as shown in Figure 6. In addition, the sequence of vertices used in the polygon-filling operation can serve as a polygonal approximation of the object, in either the lossless or the lossy case; Figure 7 shows some examples.
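The error measure is not spelled out in the text; assuming it is the fraction of mismatched pixels between the original and the reconstruction (consistent with the magnitudes reported above), it can be computed as in the sketch below. The byte layout used for the ratio is likewise our assumption, not the paper's file format.

import numpy as np

def pixel_error(original, reconstructed):
    """Fraction of pixels that differ between two binary images
    (our assumed reading of the reported 'error')."""
    a = np.asarray(original, dtype=bool)
    b = np.asarray(reconstructed, dtype=bool)
    return float(np.mean(a != b))

def ct_compression_ratio(rows, cols, n_vertices, bytes_per_vertex=2):
    """Uncompressed size (1 bpp) over header plus vertex-sequence size;
    bytes_per_vertex is an assumed storage cost, not taken from the paper."""
    return (rows * cols / 8.0) / (2 + n_vertices * bytes_per_vertex)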

Fig. 4. Four examples of original (top) and compressed/reconstructed (bottom) images; note the almost imperceptible pixel error in the bottom row. JBIG and CCITT group III/group IV fax compression ratios are indicated for each original, and concavity tree (CT) ratios for each reconstruction:
(a) JBIG/CCITT ratios 17/2.5/2.6/8.2, CT ratio 24, error 0.00313;
(b) JBIG/CCITT ratios 18/3.2/3.6/8.8, CT ratio 23, error 0.00462;
(c) JBIG/CCITT ratios 12/2.6/2.6/6.7, CT ratio 16, error 0.00256;
(d) JBIG/CCITT ratios 14/2.9/2.9/7.6, CT ratio 21, error 0.00485.

4 Summary and Conclusions

This paper presents a concise shape representation generated during the extraction of concavity trees. The representation is reversible and is equivalent to a lossless, near-lossless, or lossy compression. When compared to the JBIG standard, compression ratios that are on average 1% better are obtained, with a near-lossless error of 0.6%. The method is thus suitable for shape representation and matching in the compressed domain, for polygonal approximation, and for vector-based image compression.

References

1. Sklansky, J.: Measuring concavity on a rectangular mosaic. IEEE Transactions on Computers C-21 (1972) 1355-1364
2. Batchelor, B.: Hierarchical shape description based upon convex hulls of concavities. Journal of Cybernetics 10 (1980) 205-210
3. Batchelor, B.: Shape descriptors for labeling concavity trees. Journal of Cybernetics 10 (1980) 233-237
4. Borgefors, G., Sanniti di Baja, G.: Methods for hierarchical analysis of concavities. In: Proceedings of the International Conference on Pattern Recognition. Volume 3 (1992) 171-175
5. Borgefors, G., Sanniti di Baja, G.: Analyzing nonconvex 2D and 3D patterns. Computer Vision and Image Understanding 63 (1996) 145-157
6. Xu, J.: Hierarchical representation of 2-D shapes using convex polygons: A morphological approach. Pattern Recognition Letters 18 (1997) 1009-1017
7. El Badawy, O., Kamel, M.: Shape retrieval using concavity trees. In: Proceedings of the International Conference on Pattern Recognition. Volume 3 (2004) 111-114
8. El Badawy, O., Kamel, M.: Matching concavity trees. In: Proceedings of the Joint IAPR International Workshops on Structural, Syntactic, and Statistical Pattern Recognition (2004) 556-564
9. El Badawy, O., Kamel, M.: Hierarchical representation of 2-D shapes using convex polygons: a contour-based approach. Pattern Recognition Letters 26 (2005) 865-877

Fig. 5. The effect of increasing the compression ratio for a given image (JBIG ratio 14:1). Left to right: the original, followed by reconstructions with CT ratios of 13, 17, and 20 and corresponding errors of 0.0062, 0.00645, and 0.0112.

Fig. 6. Extensibility to multi-object shapes with holes (JBIG/CCITT ratios 18/2.6/3.3/8.3, CT ratio 22, error 0.00315).

Fig. 7. The representation as a polygonal approximation: the original (a) and approximations corresponding to reconstruction errors of 0.0058 (b), 0.008 (c), and 0.01 (d).