MASSACHUSETTS INSTITUTE OF TECHNOLOGY
ARTIFICIAL INTELLIGENCE LABORATORY

A.I. Memo No. 1584, November 1996

Edge and Mean Based Image Compression

Ujjaval Y. Desai, Marcelo M. Mizuki, Ichiro Masaki, and Berthold K. P. Horn

This publication can be retrieved by anonymous ftp to publications.ai.mit.edu.

Abstract

In this paper, we present a static image compression algorithm for very low bit rate applications. The algorithm reduces spatial redundancy present in images by extracting and encoding edge and mean information. Since the human visual system is highly sensitive to edges, an edge-based compression scheme can produce intelligible images at high compression ratios. We present good quality results for facial as well as textured, 256 x 256 color images at 0.1 to 0.3 bpp. The algorithm described in this paper was designed for high performance, keeping hardware implementation issues in mind. In the next phase of the project, which is currently underway, this algorithm will be implemented in hardware, and new edge-based color image sequence compression algorithms will be developed to achieve compression ratios of over 100, i.e., less than 0.12 bpp from 12 bpp. Potential applications include low power, portable video telephones.

Copyright © Massachusetts Institute of Technology, 1996

This report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for this research was provided in part by the Sharp Corporation. Marcelo Mizuki was supported by an NSF Graduate Fellowship.

1 Introduction

Image data compression plays an important role in the transmission and storage of image information. The goal of image compression is to obtain a representation that minimizes bit rate with respect to some distortion constraint. Typical compression techniques achieve bit rate reduction by exploiting correlation between pixel intensities. Standard compression methods include predictive or waveform coding, in which pixel correlation is reduced directly in the spatial domain, and transform coding, in which the intensity values are transformed into another domain such that most information is contained in a few parameters. Most of these methods use the mean square error (MSE) distortion measure so as to make mathematical analysis tractable.

For low bit rate applications, standard compression methods fail to provide satisfactory performance because of their inadequate distortion criterion (MSE). As the number of bits to be transmitted is reduced, it becomes increasingly crucial to preserve the perceptually significant parts of the image. Since the mean square error measure is not based on properties of the human visual system, algorithms using this measure tend to produce unacceptable results at low bit rates.

One of the most interesting properties of human visual perception is its sensitivity to edges [1, 2]. This suggests that edges can provide an efficient image representation, making edge-based compression techniques very useful, even at high compression ratios. There has been some previous work on edge-based compression, in particular by Graham, Kocher, and Carlsson. Graham [3] considers an image as being composed of low and high frequency parts, which are encoded separately. The high frequency part corresponds to contours in the image, and the low frequency part corresponds to smooth areas between contours. Kocher et al. [4, 5] use region growing to segment the image into regions. The texture of each of these regions is encoded separately. Carlsson [6] proposes a sketch-based coding scheme for grey level images.

The work described in this paper is based on the ideas presented by Carlsson. The representation, however, has been modified, and the encoding process has been simplified to suit hardware implementation, while providing similar performance. In particular, our algorithm uses: (a) 1-D linear interpolation, as opposed to iterative minimum variation interpolation, for efficient implementation; (b) intensity line-fitting for improved performance; (c) efficient coding of contour intensity data; and (d) mean coding, as opposed to Laplacian pyramid coding, for encoding the residual error. A detailed description of the algorithm can be found in the next section. Another important advantage of our compression approach is that it can be used with edge-based motion estimation to encode moving images. The reader is referred to [7] for preliminary results on edge-based image sequence coding.

2 Compression Algorithm

The edge-based representation of an image consists of its binary edge map and the intensity information on both sides of the contours in the edge map. This representation is sufficient for obtaining an intelligible reconstructed image. The reconstruction is performed by simply interpolating the intensity information between contours of the edge map. As with many compression methods, the quality of reconstruction can be improved by residual coding, which in this case involves encoding the difference between the input image and the interpolated image.
The compression algorithm, therefore, consists of edge detection, contour coding, contour intensity data extraction and coding, and residual coding. This section describes the algorithmic details of all these operations. It is important to point out that our overall approach has two phases. In the first phase, which is presented in this paper, a new algorithm is developed with hardware implementation in mind, and in the second phase, which is currently being studied, the algorithm is modified for efficient hardware implementation. Even if the algorithm is modified in the second phase to suit hardware implementation, the research described in this paper should remain useful.

2.1 Edge Detection

Various edge detection methods can be found in the image processing literature. One of the simplest and most widely used methods for digital implementation is the Sobel operator. Since in this phase of the project we are not interested in optimizing the hardware implementation, we chose the Sobel operator for edge detection because of its simplicity and good performance. Any other efficient edge detection method can be used without significantly affecting the performance of the overall algorithm.

In the Sobel edge detection method, the input image is convolved with two filters, $h_x$ and $h_y$:

$$h_x = \begin{pmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{pmatrix}, \qquad h_y = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}$$

The filter $h_x$ corresponds to horizontal differentiation (and responds to vertical edges), while $h_y$ corresponds to vertical differentiation (and responds to horizontal edges). The results of the two convolutions are combined in the following way to obtain the Sobel edge-intensity map ($S_{mag}$):

$$S_x = I * h_x, \qquad S_y = I * h_y \tag{1}$$

$$S_{mag} = \sqrt{S_x^2 + S_y^2} \tag{2}$$

where $I$ is the input image. $S_{mag}$ is then thresholded to create the edge map for the input image. The edges obtained using this algorithm are usually very sharp. In order to obtain slowly-varying edges, we use Level edge detection in conjunction with the Sobel edge detection algorithm. Level edges are detected in exactly the same way as Sobel edges, but with different filters:

$$L_x = I * \hat{h}_x, \qquad L_y = I * \hat{h}_y \tag{3}$$

$$L_{mag} = \sqrt{L_x^2 + L_y^2} \tag{4}$$

where

$$\hat{h}_x = \begin{pmatrix} -1 & 0 & 0 & 0 & 1 \\ -2 & 0 & 0 & 0 & 2 \\ -1 & 0 & 0 & 0 & 1 \end{pmatrix}, \qquad \hat{h}_y = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{pmatrix}$$

The final edge-intensity map is obtained by weighted averaging of the two individual edge-intensity maps. The weight $w$ is chosen empirically.

$$I_{mag} = w\, S_{mag} + (1 - w)\, L_{mag} \tag{5}$$

To facilitate a simple implementation of our edge detection algorithm, the expressions for $S_{mag}$ and $L_{mag}$ have to be modified because they contain the square root operation, which is difficult to implement. We have found that acceptable edge maps are obtained if the expressions for $S_{mag}$ and $L_{mag}$ are approximated by using the absolute value operation. This modification simplifies the implementation without sacrificing the edge map quality. The resultant expression for $I_{mag}$ is given below.

$$I_{mag} = I_x + I_y \tag{6}$$

where

$$I_x = w\, |S_x| + (1 - w)\, |L_x| \tag{7}$$

$$I_y = w\, |S_y| + (1 - w)\, |L_y| \tag{8}$$

Notice that the above expression for $I_{mag}$ requires 4 convolutions. The number of convolutions can be reduced by realizing that in most cases, $S_x$ has the same sign as $L_x$, and $S_y$ has the same sign as $L_y$. Therefore, $I_x$ and $I_y$ can be written as:

$$I_x = |I * g_x|, \qquad I_y = |I * g_y| \tag{9}$$

where

$$g_x = w\, h_x + (1 - w)\, \hat{h}_x \tag{10}$$

$$g_y = w\, h_y + (1 - w)\, \hat{h}_y \tag{11}$$

2.1.1 Thresholding and Thinning

The edge-intensity map produced by the aforementioned filtering operations has to be tested against a threshold in order to create a binary edge map. Since the number of edge pixels in the edge map is directly dependent on the threshold level, the bit rate can be controlled by varying the threshold. It turns out that the binary edge map obtained by thresholding the edge-intensity map is thick and hence cannot be encoded efficiently. A thinning operation must be performed to produce edges that are a single pixel wide. We use a very simple thinning algorithm in which every pixel in the edge-intensity map that is over the threshold is checked for a local maximum in the horizontal and vertical directions. If the current pixel is a local maximum in the horizontal direction but not the vertical, then it is considered an edge pixel only if $I_x > I_y$. Similarly, if the current pixel is a local maximum in the vertical direction but not the horizontal, then it is considered an edge pixel only if $I_y > I_x$. This processing eliminates false edges [9].
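For concreteness, the filtering of Eqs. (6)-(8) can be sketched in a few lines of NumPy/SciPy. This is only an illustrative sketch, not the authors' implementation: the function name, the use of scipy.signal.convolve2d, and the symmetric boundary handling are our own choices, and we use the four-convolution form of Eqs. (7)-(8) rather than the combined kernels of Eqs. (10)-(11).

```python
import numpy as np
from scipy.signal import convolve2d

# Sobel kernels h_x, h_y and the wider Level kernels from Section 2.1
H_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
H_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
H_X_HAT = np.array([[-1, 0, 0, 0, 1], [-2, 0, 0, 0, 2], [-1, 0, 0, 0, 1]], dtype=float)
H_Y_HAT = np.array([[-1, -2, -1], [0, 0, 0], [0, 0, 0], [0, 0, 0], [1, 2, 1]], dtype=float)

def edge_intensity_map(image, w=0.5):
    """Return (I_mag, I_x, I_y) per Eqs. (6)-(8).

    Note: convolution flips each kernel, which only changes the sign of the
    responses; the sign is irrelevant here because absolute values are taken.
    """
    I = np.asarray(image, dtype=float)
    s_x = convolve2d(I, H_X, mode="same", boundary="symm")
    s_y = convolve2d(I, H_Y, mode="same", boundary="symm")
    l_x = convolve2d(I, H_X_HAT, mode="same", boundary="symm")
    l_y = convolve2d(I, H_Y_HAT, mode="same", boundary="symm")
    i_x = w * np.abs(s_x) + (1.0 - w) * np.abs(l_x)   # Eq. (7)
    i_y = w * np.abs(s_y) + (1.0 - w) * np.abs(l_y)   # Eq. (8)
    return i_x + i_y, i_x, i_y                        # Eq. (6)
```

The returned I_x and I_y are kept alongside I_mag because the thinning rule described above uses them to decide ambiguous cases.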

The complete thinning algorithm is illustrated in Figure 1 below. Figure 6 shows the binary edge maps for the images in Figure 5.

[Figure 1 shows the current pixel c of the edge-intensity map I_mag together with its up, down, left, and right neighbors u, d, l, and r.]

Let c, u, d, l, and r represent the edge-intensities at the current, up, down, left, and right pixels, respectively. The current pixel is an edge pixel if

[c > u AND c >= d AND c > l AND c >= r] OR
[(c > u AND c >= d AND I_y > I_x) OR (c > l AND c >= r AND I_x > I_y)]

Figure 1: Illustration of thinning

2.2 Contour Coding

The edge map produced by edge detection has to be encoded for efficient transmission. The encoding process involves three steps:

- 8-connected map to 4-connected map
- Contour filtering
- Chain coding

2.2.1 8-connected map to 4-connected map

An 8-connected map is a binary map in which each pixel has 8 neighbors, as shown in Figure 2(a). Our analysis has shown that a reduction in bit rate can be achieved if the 8-connected map is transformed into a 4-connected map. Also, the 4-connected map makes the implementation of the subsequent modules much simpler. For these reasons, our contour coding module converts the input edge map into a 4-connected map and then performs the encoding. An illustration of the conversion process is shown in Figure 2(b).

Figure 2: (a) An 8-connected edge map (b) 8-connected to 4-connected transformation (c) Possible directionals for a 4-connected map.
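A direct transcription of the thresholding and thinning rule above can be sketched as follows. This is our own sketch of the rule in Figure 1, not the authors' code; the function and variable names are ours, and border pixels are simply skipped, which is an assumption not specified in the text.

```python
import numpy as np

def thin_edges(i_mag, i_x, i_y, threshold):
    """Threshold the edge-intensity map and keep single-pixel-wide edges.

    A pixel above the threshold is kept if it is a local maximum in both
    directions, or a local maximum in one direction with the matching
    directional response (I_x or I_y) dominating.
    """
    rows, cols = i_mag.shape
    edge_map = np.zeros((rows, cols), dtype=bool)
    for r in range(1, rows - 1):          # border pixels skipped (assumption)
        for s in range(1, cols - 1):
            c = i_mag[r, s]
            if c <= threshold:
                continue
            u, d = i_mag[r - 1, s], i_mag[r + 1, s]
            l, rt = i_mag[r, s - 1], i_mag[r, s + 1]
            vmax = c > u and c >= d       # local maximum in the vertical direction
            hmax = c > l and c >= rt      # local maximum in the horizontal direction
            if (vmax and hmax) or (vmax and i_y[r, s] > i_x[r, s]) \
                    or (hmax and i_x[r, s] > i_y[r, s]):
                edge_map[r, s] = True
    return edge_map
```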

2.2.2 Contour Filtering

The edge map for an image contains many contours, each representing an object present in the image. Typically, long contours correspond to important features, while short contours are usually produced by noise. Therefore, in order to encode the edge map efficiently, short contours must be eliminated. This simple step is referred to as contour filtering.

2.2.3 Chain Coding

Chain coding has been shown to be very efficient for encoding binary contours [8]. Each contour is represented with a starting point and a sequence of directionals that gives the direction of travel as the contour is traced. The starting point is found by simply scanning the edge map, row by row, for the first non-zero pixel. Once the starting point has been found, the following procedure is used:

- Find the next pixel on the contour by searching the neighbors clockwise for a non-zero pixel.
- Output the directional that represents the direction of the next pixel relative to the current direction.
- Delete the current contour pixel, go to the next pixel, and update the current direction.

This process is continued until the whole contour is traced. Figure 2(c) shows the possible directionals for a 4-connected edge map. Each directional can be represented with $\log_2 3$ bits. If we use statistical coding on the sequence of directionals, an even lower bit rate of about 1.3 bits/contour point can be obtained [6]. There is also an overhead of 9 bits/contour to encode the starting point.

2.3 Contour Intensity Data Extraction and Coding

Edge-based image representation requires encoding of the binary edge map and the contour intensity data of the image. The contour intensity data typically consists of intensity values of pixels to the left and right of each contour pixel. The receiver reconstructs the image intensity by interpolating the contour intensity data. Since the number of contour pixels is much less than the total number of pixels in the image, edge-based representation can be used for low bit rate image compression.

The reconstructed image is obtained by averaging the results of horizontal and vertical interpolation. In horizontal interpolation, the binary edge map is scanned row by row, and the intensity values between two consecutive edge points on a row are obtained by linearly interpolating the corresponding contour intensity data. The same process is repeated for vertical interpolation, but this time the image is scanned column by column. We use linear interpolation, as opposed to more sophisticated interpolation techniques like iterative interpolation with a minimum variation criterion [6], in order to simplify the hardware implementation of the algorithm.

As mentioned above, we would like to represent the image intensity between contours with a linear approximation. For good reconstruction, the contour intensity data must be chosen so as to obtain a near-optimal linear approximation. We could use the left and right intensities of contour pixels as contour intensity data; however, in many instances this leads to poor reconstruction, as illustrated in Figures 3(a) and (b).

Figure 3: 1-D illustration of linear interpolation. (a) Original signal, 'o' marks the location of an edge (b) reconstructed signal if the left and right intensities are chosen as contour intensity data (c) reconstructed signal with line-fitting.
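To make the horizontal interpolation pass of Section 2.3 concrete, here is a minimal sketch. It is not the authors' implementation: the arrays left_data and right_data (holding the contour intensity data on either side of each edge pixel), the endpoint convention, and the function name are assumptions made purely for illustration.

```python
import numpy as np

def horizontal_interpolation(edge_map, left_data, right_data):
    """Fill pixels between consecutive edge pixels on each row by linearly
    interpolating the right-side value of the first edge and the left-side
    value of the second edge (data layout is an assumption of this sketch)."""
    rows, cols = edge_map.shape
    out = np.zeros((rows, cols), dtype=float)
    for r in range(rows):
        edges = np.flatnonzero(edge_map[r])
        for c0, c1 in zip(edges[:-1], edges[1:]):
            n = c1 - c0 - 1               # pixels strictly between the two edges
            if n <= 0:
                continue
            v0, v1 = right_data[r, c0], left_data[r, c1]
            out[r, c0 + 1:c1] = np.linspace(v0, v1, n)
    return out

# The vertical pass is identical with rows and columns swapped; the
# reconstruction averages the two interpolated images.
```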

2.3.1 Intensity Line-fitting

A better approach, as shown in Figure 3(c), is to find the best linear approximation for the image intensity between contours and then use these parameters as contour intensity data. The approximation is done horizontally as well as vertically. The horizontal approximation is obtained by fitting a line to the intensity values between two consecutive contour pixels on a row, for each row of the image. The end points of these lines are used as horizontal contour intensity data. Similar processing is done on the columns for vertical approximation. Finally, the results of the two approximations are combined, by averaging if needed, to obtain the required contour intensity data. Our simulation results indicate that the quality of reconstruction is improved significantly by the line-fitting approach. Since there are closed-form solutions available for fitting a line through data, this approach is simple to implement.

2.3.2 Contour Intensity Data Coding

Once the contour intensity data is obtained, it has to be compressed for transmission. Efficient compression can be achieved if we exploit the redundancy in the contour intensity data for a contour. Since the intensity between contours is assumed to be smooth, there should be little variation in the contour intensity data on each side of a contour. This assumption suggests that the data on each side of a contour could be represented with just one parameter, namely, their average. In reality, however, this assumption does not hold because the edge extraction process is not perfect. Experimental results indicate that a more reasonable coding strategy is to partition the data for a contour into fixed-length pieces and then encode each piece with its average value.

2.4 Residual Coding

As described in the previous section, the reconstructed image is obtained by interpolating the contour intensity data. Because of imperfections in edge extraction, which lead to open contours, the interpolation of the contour intensity data diffuses the intensity of one object into its neighboring regions, causing reconstruction error. In order to reduce this error, the residual image, which is obtained by subtracting the interpolated image from the original image, must be encoded. Standard methods employed for residual coding include waveform coding techniques like Laplacian pyramid coding [6]. These coding techniques, however, are not feasible for very low bit rate applications. We perform mean coding on the input image to reduce the reconstruction error and to keep the bit rate down.

In mean coding, the input image is partitioned into 10 x 10 blocks, and the mean of each block is encoded and transmitted. The receiver uses the mean information and the interpolated image to obtain an acceptable reconstruction of the input image. One simple way to use the mean information is to modify the mean of the interpolated image so that it matches the mean of the input image. Unfortunately, this method cannot correct the line artifacts present in the interpolated image. Our experiments indicate that these artifacts can be reduced considerably if the intensity of each non-edge block in the reconstructed image is set to the mean of the corresponding block in the input image (see Figures 7 and 8). As a result of this processing, all non-edge blocks in the reconstructed image become DC (constant intensity) blocks, and thereby lose their texture. This loss of texture in non-edge blocks is, however, not that crucial because texture information is not necessary in low bit rate applications.
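A minimal sketch of the mean-coding step follows. The block size defaults to the 10 x 10 used for the Y component; the function names are ours, and a real encoder would additionally quantize and entropy-code the means, which this sketch omits.

```python
import numpy as np

def block_means(image, block=10):
    """Partition the image into block x block tiles and return each tile's mean."""
    h, w = image.shape
    means = np.empty((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            tile = image[i * block:(i + 1) * block, j * block:(j + 1) * block]
            means[i, j] = tile.mean()
    return means

def apply_means(interpolated, means, edge_map, block=10):
    """Receiver-side correction: every block containing no edge pixel is
    replaced by the transmitted mean, which removes line artifacts in the
    interpolated image (at the cost of texture in non-edge blocks)."""
    out = interpolated.astype(float).copy()
    h, w = interpolated.shape
    for i in range(h // block):
        for j in range(w // block):
            sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            if not edge_map[sl].any():
                out[sl] = means[i, j]
    return out
```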
Another significant advantage of mean coding over other residual coding techniques is that in mean coding, the encoder does not have to compute the interpolated image, and therefore has a simpler structure.

2.5 Color Image Encoding

The algorithm described above can be easily extended for encoding color images. The input color image is assumed to be in the YIQ format. Edge detection and contour coding are performed on the luminance (Y) component alone, while contour intensity data extraction and coding are performed independently on all three components. The luminance component is then mean coded using 10 x 10 blocks, while the chrominance components are mean coded using 20 x 20 blocks. The complete block diagram of the encoder is shown in Figure 4.

3 Bit Rate Estimation

Unlike fixed-bit-rate compression schemes like Vector Quantization (VQ), the bit rate of the edge-based compression technique depends on the content of the input image, of which we have no a priori knowledge. In this section, we first present the rate controlling parameters of our algorithm and then use these parameters for bit rate estimation. Based on the compression algorithm described in the previous section, one can clearly see that the following parameters control the bit rate:

[Figure 4 shows the encoder pipeline: the Y component of the YIQ input goes through edge detection and contour coding; all three YIQ components go through intensity extraction (line fitting) and intensity coding, as well as mean coding.]

Figure 4: Block diagram of the encoder

- Edge detection threshold, T
- Edge weight, w, for combining Sobel and Level edges
- Minimum contour length, L_min, for contour coding
- Maximum contour length, L_max, for intensity coding

The edge weight, w, and the minimum contour length, L_min, vary from image to image, but for most images w = 0.5 and L_min = 6 to 10 produce good results. Similarly, L_max must be chosen experimentally. Our simulation results indicate that a reasonable value for L_max is 15 for the Y component of the input. This implies that the contour intensity data is encoded by partitioning each contour into 15-pixel-long pieces and transmitting the average intensity on each side. Since the I and Q components are perceptually less important and carry much less information than the Y component, they are encoded with L_max = ∞.

As opposed to the other parameters, T cannot be chosen a priori. The only way to achieve a given fixed bit rate is to start with a predetermined T and then fine-tune it until the desired bit rate is obtained. The fine-tuning, however, does not have a serious impact on the speed of the overall algorithm, because our simulations suggest that the correct threshold level can be reached in very few iterations, each involving thresholding and thinning of the edge-intensity map and then estimating the bit rate. It is important to note that the edge-intensity map has to be computed only once.

Having looked at the rate controlling parameters, we now focus on the problem of estimating the bit rate for measuring the performance of the compression algorithm. There are three main modules in the algorithm that contribute to the total bit rate: contour coding, intensity coding, and mean coding. The performance of contour coding has been studied extensively by many researchers [6, 8]. From their results, we can estimate the number of bits required for contour coding to be 1.3 L_c + 9 N_c, where L_c is the total number of contour pixels in the image and N_c is the number of distinct contours. For details on these estimates, the reader is referred to [6].

Intensity coding is performed by breaking each contour in the image into L_max-pixel-long pieces and then average-encoding the intensity on each side. Our results indicate that for L_max = 15, which is the value chosen for the Y component, the total number of contour pieces equals 2 N_c on average. This implies that intensity coding for the Y component requires 2 N_c x 2 parameters. On the other hand, since L_max is chosen to be ∞ for the I and Q components, each of them requires N_c x 2 parameters. If each parameter is quantized to 4 bits, the total number of bits needed for intensity coding becomes 32 N_c.

Recall from the discussion on mean coding that the Y component is encoded with a block size of 10 x 10, while the I and Q components are encoded with 20 x 20 blocks. If the image size is N x M, the number of mean values to be transmitted equals (1.5 N M)/100. In our simulations, statistical coding of quantized mean values resulted in about 2.7 bits per mean. Thus, the number of bits required for mean coding can be estimated to be N M / 25.
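These per-module estimates can be collected into a small estimator, sketched below. The function and parameter names are ours; it simply adds up the contour, intensity, and mean coding estimates given above.

```python
def estimate_bits(l_c, n_c, n=256, m=256):
    """Bit-rate estimate from the per-module counts above.

    contour coding  : 1.3 * L_c + 9 * N_c
    intensity coding: 32 * N_c   (Y: 2*N_c pieces; I, Q: N_c pieces each;
                                  2 parameters per piece, 4 bits per parameter)
    mean coding     : N*M / 25   (1.5*N*M/100 means at about 2.7 bits each)
    Returns (total bits, bits per pixel).
    """
    contour_bits = 1.3 * l_c + 9 * n_c
    intensity_bits = 32 * n_c
    mean_bits = n * m / 25
    total = contour_bits + intensity_bits + mean_bits
    return total, total / (n * m)

# Example with the Lenna parameters from Table 1 (L_c = 5110, N_c = 220):
# estimate_bits(5110, 220) gives roughly 0.28 bpp, close to the reported 0.275 bpp.
```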

Combining all the above estimates, the total bit rate of our compression algorithm can be expressed as:

$$1.3\,L_c + 41\,N_c + \frac{NM}{25}.$$

4 Results

The edge-based static compression algorithm was evaluated using the four images shown in Figure 5. The images are 256 x 256 and have 24 bits/pixel. Figure 6 shows the binary edge maps obtained by edge detection. The rate controlling parameters and the resulting bit rates for each of the images are given in Table 1. In order to illustrate the significance of mean coding, the interpolated images (without mean coding) are shown in Figure 7 and the final reconstructed images are displayed in Figure 8. These figures clearly show the effectiveness of mean coding in reducing the line artifacts of an interpolated image.

As one can see from these results, our edge and mean based algorithm produces good quality images at very low bit rates. Since sharp edges are very well preserved, the resulting images convey almost all information that is perceptually important. There is, however, some loss of texture for images like Sailboat and Cablecar because the algorithm does not perform any texture coding. Tests were also performed to determine the lowest possible bit rates at which the algorithm could produce acceptable quality results. These bit rates were found to be 0.16 bpp and 0.08 bpp for the Lenna and Girl2 images, respectively. Figures 9(a) and (b) show the reconstructed Lenna image at 0.2 bpp and 0.16 bpp. Figures 9(c) and (d) show the reconstructed Girl2 image at 0.1 bpp and 0.08 bpp.

Color image   Threshold (T)   L_min   L_c    N_c   Bit rate
Lenna         180             6       5110   220   0.275 bpp
Girl2         75              7       2970   115   0.175 bpp
Cablecar      225             10      5100   215   0.275 bpp
Sailboat      230             9       4550   230   0.275 bpp

Table 1: Rate controlling parameters and bit rate estimates

Figure 5: Original color images displayed in monochrome: (a) Lenna, (b) Girl2, (c) Cablecar, (d) Sailboat

Figure 6: Binary edge maps

Figure 7: Interpolated color images (without mean coding) displayed in monochrome

Figure 8: Reconstructed color images (with mean coding) displayed in monochrome

Figure 9: Reconstructed color images (with mean coding) displayed in monochrome: (a) Lenna at 0.2 bpp (b) Lenna at 0.16 bpp (c) Girl2 at 0.1 bpp (d) Girl2 at 0.08 bpp

5 Conclusions

Most traditional compression techniques exploit the statistical characteristics of images to reduce bit rate. These techniques fail to provide acceptable quality at very low bit rates because they do not take into account the properties of human visual perception. We propose a compression algorithm that represents an image in terms of its binary edge map, mean information, and the intensity information on both sides of the edges. An edge-based representation is chosen because the human visual system is highly sensitive to edges. The proposed algorithm is capable of delivering acceptable quality images at 0.1 to 0.3 bpp for 256 x 256 color images. Our simulation results suggest that the algorithm works well for images with a limited amount of texture, such as facial images. This, however, is not a drawback in the case of many low bit rate applications, for example videophones, where the input images can be assumed to be facial. The research presented here is part of a project whose goal is to develop a new edge-based image sequence compression technique for use in low power, portable videophones. Edge-based motion estimation and detailed hardware implementation of the static algorithm are currently being studied.

References

[1] T. Cornsweet, Visual Perception, Academic Press, New York, 1970.
[2] F. Ratliff, Contour and contrast, Sci. Am., Vol. 226, No. 6, June 1972, pp. 99-101.
[3] D. Graham, Image transmission by two-dimensional contour coding, Proc. IEEE, Vol. 55, No. 3, March 1967, pp. 336-346.
[4] M. Kocher and M. Kunt, Image data compression by contour texture modelling, SPIE Int. Conf. on the Applications of Digital Image Processing, Geneva, Switzerland, April 1983, pp. 131-139.
[5] M. Kunt, A. Ikonomopoulos, and M. Kocher, Second generation image coding techniques, Proc. IEEE, Vol. 73, No. 4, April 1985, pp. 549-574.
[6] S. Carlsson, Sketch based coding of grey level images, Signal Processing, Vol. 15, 1988, pp. 57-83.
[7] M. Mizuki, U. Desai, I. Masaki, and A. Chandrakasan, A binary block matching architecture with reduced power consumption and silicon area requirement, submitted to ICASSP 1996.
[8] M. Eden and M. Kocher, On the performance of a contour coding algorithm in the context of image coding: Part 1. Contour segment coding, Signal Processing, Vol. 8, No. 4, 1985, pp. 381-386.
[9] J. Lim, Two-Dimensional Signal and Image Processing, Prentice Hall, 1990.