A Novel VLSI Architecture of SOC for Image Compression Model for Multimedia Applications

1 Dr. P. John Paul, 2 S. Koteswari, 3 B. Kezia Rani
1 Dept. of CSE, GATES Engineering College, Gooty, Ananthapur, A.P., India.
2 Dept. of ECE, D.N.R Engineering College, Bhimavaram, W.G. Dist, A.P., India.
3 Dept. of CSE, The Engineering Academy, Hyderabad, A.P., India.

Abstract
Advanced technologies have increased the demand for visual information and higher-quality video frames, as with 3-D movies, games and HDTV. This taxes the available technologies and creates a gap between the huge amount of visual data required for multimedia applications and the still-limited hardware capabilities. Image and video compression for multimedia engineering bridges this gap with concise, up-to-date video and image coding techniques. With the growing popularity of applications that use large amounts of visual data, image and video coding is an active and dynamic field, and image and video compression for multimedia engineering builds a basis for future study, research and development. This paper proposes a novel, high-performance architecture for image compression based on frequency-domain representation. The digitized image is compressed using the Discrete Hartley Transform (DHT), Discrete Walsh Transform (DWT), Discrete Fourier Transform (DFT) and Discrete Radon Transform (DRT), and their combinations with the DHT. The Discrete Hartley Transform is used as the basic transform because of its reversibility; hence other transform kernels can be developed from the DHT. The proposed architecture is developed in the Verilog hardware description language and has been tested on still images. Simulation results show that the hybrid transforms DHT+DCT and DHT+DRT give better results with respect to compression ratio.

Keywords
VLSI, Discrete Cosine Transform, JPEG, Hartley transform.

I. Introduction
The rapid growth of digital imaging applications, including desktop publishing, multimedia, teleconferencing and high-definition television (HDTV), has increased the need for effective and standardized image compression techniques. This need has become easier to meet with the help of System-on-Chip (SOC) technology, which offers low power, area, delay and cost. Visual systems are a rapidly evolving field for the telecommunications, computer and media industries. Progress in this field is supported by the availability of digital transmission channels and digital storage media. In addition to the available narrowband ISDN, new digital transmission channels such as broadband ISDN, digital satellite channels and digital terrestrial TV broadcasting channels are also playing a major role in video communications. Moreover, personal computers and workstations have become important platforms for multimedia interactive applications. Recent developments in the technology of digital storage media, image displays, desktop computing and communication networks have made digital video and audio economically viable for the envisaged applications. In order to reduce transmission and storage cost, bit-rate compression schemes are employed. Bit-rate reduction can be achieved by source coding schemes such as predictive coding, transform coding, sub-band coding and interpolative coding [1-6]. To meet the requirements on bit-rate reduction while retaining picture quality, advanced coding schemes built as combinations of the basic schemes are required.
To facilitate worldwide interchange of digitally encoded audio-visual data, there is a demand for international standards for coding methods and transmission formats. International standardization committees have been working on the specification of several compression algorithms. The Joint Photographic Experts Group (JPEG) of the International Standards Organization (ISO) has specified an algorithm for compression of still images [7]. The ITU (formerly known as CCITT) proposed the H.261 standard for video telephony and video conferencing [1]. The Moving Picture Experts Group (MPEG) of ISO has completed its first standard, MPEG-1, which is used for interactive video and provides a picture quality comparable to VCR quality [3]. MPEG made substantial progress on the second phase of standards, MPEG-2, which provides audio-visual quality for both broadcast TV and HDTV. Because of the wide field of applications, MPEG-2 is a family of standards with different profiles and levels [4].

The envisaged mass application of the discussed video communication services calls for coding equipment of low manufacturing cost and small size. The source coding schemes recommended by the international standardization groups are very sophisticated in order to achieve high bit-rate reduction under the constraint of the highest possible picture quality. These coding schemes result in hardware systems of high complexity. Thus, cost-effective implementation of these high-complexity systems relies on VLSI. Recent advances in VLSI technology relieve the hardware problems, but the high demands of video coding require special architectural approaches adapted to the video signal processing schemes.

This paper is organized as follows. Section II gives an overview of image compression, covering both lossy and lossless techniques. Section III explains the proposed scheme, which improves the compression ratio. Simulation and synthesis results are presented in Sections IV and V, and conclusions are drawn in Section VI.

II. Overview of Image Compression
Image compression addresses the problem of reducing the amount of data required to represent a digital image. It is a process intended to yield a compact representation of an image, thereby reducing the image storage/transmission requirements. Compression is achieved by the removal of one or more of the three basic data redundancies: coding redundancy, interpixel redundancy and psychovisual redundancy. Coding redundancy is present when less than optimal code words are used. Interpixel redundancy results from correlations between the pixels of an image. Psychovisual redundancy is due to data that is ignored by the human visual system (i.e. visually non-essential information). Image compression techniques reduce the number of bits required to represent an image by taking advantage of these redundancies. An inverse process called decompression (decoding) is applied to the compressed data to obtain the reconstructed image. The objective of compression is to reduce the number of bits as much as possible, while keeping the resolution and the visual quality of the reconstructed image as close to the original image as possible.
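The coding redundancy mentioned above can be quantified by comparing the fixed-length representation of an image (e.g. 8 bits per pixel) with the first-order entropy of its gray-level histogram. The short Python sketch below illustrates this idea; it is an illustrative aid only, not part of the proposed architecture, and the file name used is a placeholder.

import numpy as np
from PIL import Image

# Load an 8-bit grayscale image (file name is a placeholder for illustration).
img = np.array(Image.open("input.png").convert("L"))

# First-order entropy of the gray-level histogram, in bits per pixel.
hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
p = hist / hist.sum()
p = p[p > 0]
entropy = -np.sum(p * np.log2(p))

# An optimal variable-length code (e.g. Huffman) needs roughly `entropy`
# bits per pixel; the gap from the fixed 8 bits is the coding redundancy.
print(f"entropy = {entropy:.2f} bits/pixel, coding redundancy ~ {8 - entropy:.2f} bits/pixel")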

Image compression systems are composed of two distinct structural blocks: an encoder and a decoder. The image f(x,y) is fed into the encoder, which creates a set of symbols from the input data and uses them to represent the image. If we let n1 and n2 denote the number of information-carrying units (usually bits) in the original and encoded images respectively, the compression that is achieved can be quantified numerically via the compression ratio, CR = n1/n2 (a short sketch making this ratio concrete is given at the end of this overview). As shown in Fig. 1, the encoder is responsible for reducing the coding, interpixel and psychovisual redundancies of the input image. In the first stage, the mapper transforms the input image into a format designed to reduce interpixel redundancies. In the second stage, the quantizer block reduces the accuracy of the mapper's output in accordance with a predefined criterion. In the third and final stage, a symbol coder creates a code for the quantizer output and maps the output in accordance with the code. The decoder contains a symbol decoder and an inverse mapper, which perform, in reverse order, the inverse operations of the encoder's symbol coder and mapper blocks. As quantization is irreversible, an inverse quantization block is not included.

Fig. 1: Image compression framework

A. Benefits of image compression
1. It provides potential cost savings when sending less data over a switched telephone network, where the cost of a call is usually based on its duration.
2. It not only reduces storage requirements but also overall execution time.
3. It reduces the probability of transmission errors, since fewer bits are transferred.
4. It provides a level of security against illicit monitoring.

B. Image compression techniques
Image compression techniques are broadly classified into two categories, depending on whether or not an exact replica of the original image can be reconstructed from the compressed image: lossless techniques and lossy techniques [8].

1. Lossless compression technique
In lossless compression techniques, the original image can be perfectly recovered from the compressed (encoded) image. These are also called noiseless, since they do not add noise to the signal (image). Lossless compression is also known as entropy coding, since it uses statistical/decomposition techniques to eliminate or minimize redundancy. Lossless compression is used only for a few applications with stringent requirements, such as medical imaging. The following techniques are included in lossless compression: run-length encoding, Huffman encoding, LZW coding and area coding.

2. Lossy compression technique
Lossy schemes provide much higher compression ratios than lossless schemes and are widely used, since the quality of the reconstructed images is adequate for most applications. In this scheme, the decompressed image is not identical to the original image, but reasonably close to it. The prediction/transformation/decomposition process is completely reversible; it is the quantization process that results in loss of information. The entropy coding after the quantization step, however, is lossless. Decoding is the reverse process: first, entropy decoding is applied to the compressed data to obtain the quantized data; second, dequantization is applied; and finally the inverse transformation yields the reconstructed image. Major performance considerations of a lossy compression scheme include compression ratio, signal-to-noise ratio, and speed of encoding and decoding. Lossy compression techniques include transform coding, vector quantization, fractal coding, block truncation coding and sub-band coding.
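Before turning to the individual techniques, the compression ratio CR = n1/n2 defined above can be made concrete with a minimal Python sketch. JPEG is used here only as a stand-in codec so that the ratio can be computed from actual bit counts; it is not one of the hybrid transforms proposed in this paper, and the file names and quality setting are placeholders.

from PIL import Image
import os

img = Image.open("input.png").convert("L")   # placeholder 8-bit grayscale image
w, h = img.size
n1 = w * h * 8                               # bits in the raw 8-bit representation

img.save("encoded.jpg", quality=75)          # stand-in lossy codec (JPEG)
n2 = os.path.getsize("encoded.jpg") * 8      # bits in the encoded image

print(f"CR = n1/n2 = {n1 / n2:.1f}")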
Lossless Compression Techniques

(i). Run Length Encoding
This is a very simple compression method used for sequential data, and it is very useful for repetitive data. The technique replaces sequences of identical symbols (pixels), called runs, by shorter codes. The run-length code for a gray-scale image is represented by a sequence {Vi, Ri}, where Vi is the intensity of a pixel and Ri is the number of consecutive pixels with intensity Vi. If both Vi and Ri are represented by one byte, a span of 12 pixels consisting of four runs is coded using eight bytes, yielding a compression ratio of 1.5. A minimal software sketch of this scheme is given after the area-coding description below.

(ii). Huffman Encoding
This is a general technique for coding symbols based on their statistical occurrence frequencies (probabilities). The pixels in the image are treated as symbols. Symbols that occur more frequently are assigned a smaller number of bits, while symbols that occur less frequently are assigned a relatively larger number of bits. A Huffman code is a prefix code: the (binary) code of any symbol is not the prefix of the code of any other symbol. Most image coding standards use lossy techniques in the earlier stages of compression and use Huffman coding as the final step.

(iii). LZW Coding
LZW (Lempel-Ziv-Welch) is a dictionary-based coding scheme. Dictionary-based coding can be static or dynamic. In static dictionary coding, the dictionary is fixed during the encoding and decoding processes. In dynamic dictionary coding, the dictionary is updated on the fly. LZW is widely used in the computer industry and is implemented as the compress command on UNIX.

(iv). Area Coding
Area coding is an enhanced form of run-length coding that reflects the two-dimensional character of images. This is a significant advance over the other lossless methods. For coding an image it does not make much sense to interpret it as a sequential stream, since it is in fact an array of sequences building up a two-dimensional object. Area-coding algorithms try to find rectangular regions with the same characteristics. These regions are coded in a descriptive form as an element with two points and a certain structure. This type of coding can be highly effective, but it has the drawback of being a nonlinear method, which cannot be implemented in hardware. Therefore, its performance in terms of compression time is not competitive, although its compression ratio is.
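As an illustration of the {Vi, Ri} run-length scheme described above, the following Python sketch encodes a row of gray-scale pixels into (value, run-length) pairs and decodes it back. It is purely illustrative and is not the hardware implementation proposed in this paper.

def rle_encode(pixels):
    """Encode a pixel sequence as (value Vi, run length Ri) pairs."""
    runs = []
    for v in pixels:
        if runs and runs[-1][0] == v and runs[-1][1] < 255:  # one byte per Ri
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

def rle_decode(runs):
    """Expand (value, run length) pairs back into the pixel sequence."""
    return [v for v, r in runs for _ in range(r)]

row = [12, 12, 12, 12, 12, 12, 50, 50, 50, 7, 7, 9]   # 12 pixels in 4 runs
runs = rle_encode(row)
assert rle_decode(runs) == row
# 12 one-byte pixels coded as 4 (Vi, Ri) pairs of 2 bytes each = 8 bytes,
# giving CR = 12/8 = 1.5, matching the example in the text.
print(runs)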

Lossy Compression Techniques

(i). Transformation Coding
In this coding scheme, transforms such as the DFT (Discrete Fourier Transform) and DCT (Discrete Cosine Transform) are used to change the pixels in the original image into frequency-domain coefficients (called transform coefficients). These coefficients have several desirable properties. One is the energy-compaction property, which results in most of the energy of the original data being concentrated in only a few significant transform coefficients; this is the basis for achieving compression. Only those few significant coefficients are selected and the remaining ones are discarded. The selected coefficients are considered for further quantization and entropy encoding. DCT coding has been the most common approach to transform coding, and it is also adopted in the JPEG image compression standard. A minimal sketch of block-based DCT transform coding is given at the end of this section.

(ii). Vector Quantization
The basic idea in this technique is to develop a dictionary of fixed-size vectors, called code vectors. A vector is usually a block of pixel values. A given image is then partitioned into non-overlapping blocks (vectors) called image vectors. For each image vector, the closest matching code vector in the dictionary is determined, and its index in the dictionary is used as the encoding of the original image vector. Thus, each image is represented by a sequence of indices that can be further entropy coded.

(iii). Fractal Coding
The essential idea here is to decompose the image into segments by using standard image processing techniques such as color separation, edge detection, and spectrum and texture analysis. Each segment is then looked up in a library of fractals. The library actually contains codes called iterated function system (IFS) codes, which are compact sets of numbers. Using a systematic procedure, a set of codes for a given image is determined, such that when the IFS codes are applied to a suitable set of image blocks they yield an image that is a very close approximation of the original. This scheme is highly effective for compressing images that have good regularity and self-similarity.

(iv). Block Truncation Coding
In this scheme, the image is divided into non-overlapping blocks of pixels. For each block, threshold and reconstruction values are determined. The threshold is usually the mean of the pixel values in the block. A bitmap of the block is then derived by replacing all pixels whose values are greater than or equal to the threshold by a 1 and all other pixels by a 0. Then, for each segment (group of 1s and 0s) in the bitmap, the reconstruction value is determined as the average of the values of the corresponding pixels in the original block.

(v). Sub-band Coding
In this scheme, the image is analyzed to produce components containing frequencies in well-defined bands, the sub-bands. Subsequently, quantization and coding are applied to each of the bands. The advantage of this scheme is that the quantization and coding best suited to each sub-band can be designed separately.
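The following Python sketch illustrates the energy-compaction idea behind transform coding: an 8x8 block is transformed with a 2-D DCT, only the largest-magnitude coefficients are kept, and the block is reconstructed. It is purely illustrative and independent of the hybrid hardware transforms proposed later in this paper; the test block and the number of retained coefficients are arbitrary choices.

import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
# A smooth 8x8 test block (a gradient plus mild noise) standing in for image data.
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = (10 * x + 5 * y + rng.normal(0, 1, (8, 8))).astype(np.float64)

coeffs = dctn(block, norm="ortho")            # forward 2-D DCT
keep = 8                                      # retain only the 8 largest coefficients
thresh = np.sort(np.abs(coeffs).ravel())[-keep]
coeffs[np.abs(coeffs) < thresh] = 0.0         # discard the insignificant coefficients

recon = idctn(coeffs, norm="ortho")           # inverse 2-D DCT
rmse = np.sqrt(np.mean((block - recon) ** 2))
print(f"kept {keep}/64 coefficients, RMSE = {rmse:.2f}")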
III. VLSI Architecture
In the following sections the hybrid transforms are discussed. The coefficients of the Hartley transform are expressed in terms of the other transforms, and the combined coefficients are used for image compression. The different transforms FFT, FCT, FST and FHT are studied for similarities, and the conversions between them have been proposed [9-14]. A single-chip converged VLSI architecture for hybrid transforms for image compression has been developed, as shown in Fig. 2.

Fig. 2: Single-chip converged VLSI architecture for the hybrid transforms

A. DFT/FFT Definition
The Discrete Fourier Transform (DFT) of a sampled signal x(n), n = 0, 1, ..., N-1, is given by

X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j 2\pi n k / N}, \quad k = 0, 1, \ldots, N-1.

The inverse Fourier transform is defined as

x(n) = \frac{1}{N} \sum_{k=0}^{N-1} X(k)\, e^{j 2\pi n k / N}.

The FFT is a fast algorithm used to evaluate the DFT.

B. Hartley Transform
The Discrete Hartley Transform (DHT) of a real signal x(n) is defined as

H(k) = \sum_{n=0}^{N-1} x(n)\, \mathrm{cas}\!\left(\frac{2\pi n k}{N}\right), \quad \mathrm{cas}(\theta) = \cos\theta + \sin\theta,

for k = 0, 1, ..., N-1. The complex multiplications in the DFT formula are replaced with real multiplications in the DHT, which reduces the computation time. As shown in [7], the inverse DHT is obtained by

x(n) = \frac{1}{N} \sum_{k=0}^{N-1} H(k)\, \mathrm{cas}\!\left(\frac{2\pi n k}{N}\right).

C. DFT and FHT Hybrid Transform
After studying the similarities of the FHT and the DFT, the architecture for the DFT can be derived from the FHT as

X(k) = \frac{H(k) + H(N-k)}{2} - j\,\frac{H(k) - H(N-k)}{2}.
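As a numerical check of the DFT/DHT relationship used by this hybrid scheme, the short Python sketch below computes the DHT of a real sequence directly from its FFT and compares it with a direct evaluation of the cas-kernel sum. This is a software illustration of the standard relation H(k) = Re{X(k)} - Im{X(k)}, not the Verilog implementation described in the paper.

import numpy as np

def dht_direct(x):
    """Direct O(N^2) DHT using the cas kernel, cas(t) = cos(t) + sin(t)."""
    x = np.asarray(x, dtype=np.float64)
    N = len(x)
    n = np.arange(N)
    t = 2.0 * np.pi * np.outer(n, n) / N
    return (np.cos(t) + np.sin(t)) @ x

def dht_from_fft(x):
    """DHT obtained from the FFT via H(k) = Re{X(k)} - Im{X(k)}."""
    X = np.fft.fft(x)
    return X.real - X.imag

x = np.random.default_rng(1).normal(size=16)
assert np.allclose(dht_direct(x), dht_from_fft(x))
print("DHT from FFT matches the direct cas-kernel evaluation")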

D. FHT and FFT Hybrid Transform
The relation between the FHT coefficients and the DFT coefficients is obtained as

H(k) = \mathrm{Re}\{X(k)\} - \mathrm{Im}\{X(k)\}.

E. DCT and DST Definition
The Discrete Cosine Transform and the Discrete Sine Transform are defined as follows.

F. Hartley and DCT/DST Hybrid Transform
The similarities of the FHT and the FCT/FST can be related as follows.

G. DCT and FHT Hybrid Transform
To implement the FHT from the FCT, the following equation is used (the relations underlying these hybrid kernels are summarized below).
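For reference, the relations below summarize how the Hartley, Fourier, cosine and sine kernels interlock. They assume that FCT and FST here denote the cosine-kernel and sine-kernel sums C(k) and S(k) with argument 2\pi nk/N; this is an assumption about the paper's notation, since the original equations are not reproduced above.

\begin{aligned}
C(k) &= \sum_{n=0}^{N-1} x(n)\cos\frac{2\pi n k}{N}, &
S(k) &= \sum_{n=0}^{N-1} x(n)\sin\frac{2\pi n k}{N},\\
H(k) &= C(k) + S(k), &
X(k) &= C(k) - j\,S(k),\\
C(k) &= \tfrac{1}{2}\bigl(H(k) + H(N-k)\bigr), &
S(k) &= \tfrac{1}{2}\bigl(H(k) - H(N-k)\bigr).
\end{aligned}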

IV. Simulation Results
The similarities among the transforms were studied and the hybrid transforms were developed. Each transform was simulated using Active-HDL 8. The compression ratio of each individual transform and of each hybrid transform was obtained through MATLAB simulation, since the Verilog simulation does not yield the numerical values needed to compute the compression ratio. All these results are tabulated in the following sections for comparison and to demonstrate the novelty of the proposed architecture. The delays were measured using Synopsys tools and are tabulated in Section V. In Tables 1 and 2, CR represents the compression ratio.

Table 1: Compression ratio for individual transforms

Transform                           CR (%)
Discrete Cosine Transform (DCT)     72
Discrete Sine Transform (DST)       61
Discrete Fourier Transform (DFT)    45
Discrete Hartley Transform (DHT)    75
Discrete Walsh Transform (DWT)      69

Table 2 shows the compression ratio of each hybrid transform, and it can be observed that DCT+DHT gives better efficiency than the other hybrid transforms. For this analysis only a still image is considered. This table also shows that DRT+DHT can also be used for compression of still images.

Table 2: Compression ratio of the hybrid transforms for various inputs

Transform    Type of Image    CR (%)
DCT+DHT      Still image      81
             Text             56
             Video            34
DST+DHT      Still image      45
             Text             22
             Video            31
DFT+DHT      Still image      30
             Text             13
             Video            23
DWT+DHT      Still image      56
             Text             42
             Video            63

V. Synthesis Results
The designs were synthesized using Synopsys tools. The delay and area of each individual transform, and of each hybrid transform built from the FHT (which is considered the base transform for the rest of the transforms), were obtained. Table 3 lists the delay and area of the individual and hybrid transforms.

Table 3: Delay and area of the individual and hybrid transforms

Transform    Area    Delay
DHT          761     14.81 ns
DFT          656     34.12 ns
DCT          688     23.15 ns
DST          890     31.65 ns
DHT+DFT      1341    45.35 ns
DHT+DCT      1123    32.76 ns
DHT+DST      1457    49.19 ns
DHT+DWT      1670    41.90 ns

The DCT has been synthesized; the floor planning of the DCT is shown in Fig. 3 and the layout of the DHT is shown in Fig. 4. These cells were taken into the Magma tools for synthesis. The floor planning of the combined transform is shown in Fig. 5. It is observed from the layout that more gates are required to implement DHT+DCT than the single-kernel DCT, as expected. Noise levels were observed using Magma Blast Noise; a filter is required to reduce the noise levels.

Fig. 3: The floor planning of DCT
Fig. 4: The floor planning of DHT
Fig. 5: The floor planning of DCT+DHT

VI. Conclusions
New hybrid transforms for image compression have been developed. For all these combined transforms the Hartley transform is used because of its low complexity: it is taken as the basis for all the hybrid transforms, and the transformation coefficients are generated from the Hartley transform kernel. Each hybrid transform is calculated from the other transforms by deriving their transformation coefficients in terms of the Hartley transform. Simulations have been conducted for all hybrid transforms, and the results show that DHT+DCT gives a better compression ratio than all the other transform techniques. Because the DCT, like the DHT, performs only real operations, the hybrid transform DCT+DHT is used for still images and DRT+DHT for video processing, and the proposed hybrid transforms can be used for multimedia applications. This work may be extended to derive a new single-kernel transform with lower complexity than the proposed hybrid transforms, which is expected to give better compression results, run faster, occupy less area and dissipate less power.

References
[1] CCITT Study Group XV, Recommendation H.261, "Video codec for audiovisual services at p x 64 kbit/s", Rep. R37, Geneva, July 1990.
[2] H. G. Musmann, P. Pirsch, H.-J. Grallert, "Advances in picture coding", Proc. IEEE, vol. 73, no. 4, pp. 523-548, 1985.
[3] ISO-IEC JTC1/SC2/WG11 MPEG-90/176 Rev. 2, 1990.
[4] ISO-IEC JTC1/SC29/WG11 MPEG-93/225, 1993.
[5] A. N. Netravali, B. G. Haskell, "Digital Pictures: Representation and Compression", New York: Plenum, 1988.
[6] K.-H. Tzou, "Video coding techniques: An overview", in VLSI Implementations for Image Communications, Amsterdam: Elsevier, pp. 1-47, 1993.
[7] G. K. Wallace, "The JPEG still picture compression standard", Comm. ACM, vol. 34, no. 4, pp. 3-14, April 1991.
[8] Sonal, Dinesh Kumar, "A study of various image compression techniques".
[9] C. Cafforio, F. Rocca, "The differential method for image scene analysis", in Image Processing and Dynamic Scene Analysis, T. S. Huang (Ed.), New York: Springer-Verlag, pp. 104-124, March 1983.
[10] T. Koga, K. Iinuma, A. Hirano, Y. Iijima, T. Ishiguro, "Motion compensated interframe coding for video conferencing", Proc. Nat. Telecom. Conf., New Orleans, pp. G5.3.1-G5.3.5, Nov./Dec. 1981.
[11] J. R. Jain, A. K. Jain, "Displacement measurement and its application in interframe image coding", IEEE Trans. Commun., vol. COM-29, pp. 1799-1808, December 1981.
[12] R. Srinivasan, K. R. Rao, "Predictive coding based on efficient motion estimation", IEEE Trans. Commun., vol. COM-33, pp. 888-896, August 1985.
[13] [Online] Available: http://www.mentor.com
[14] [Online] Available: http://www.vlsitechnology.org/
[15] P. John Paul, P. N. Girija, "Review of Multimedia", Journal of The Institution of Electronics and Telecommunication Engineers, August 2007.
[16] P. John Paul, P. N. Girija, "A High Performance Novel Image Compression Technique using Hybrid Transform for Multimedia Applications", IJCSNS International Journal of Computer Science and Network Security, vol. 11, no. 4, April 2011, pp. 119-125.

Dr. John Paul graduated (B.E., Electronics and Communication Engineering) from Nagarjuna University in 1986 with first class. He completed his post graduation (M.E., Digital Systems) with first class from Osmania University in 1994.
He also did his research in Computer Science (VLSI architectures for image compression) and received his doctorate from Hyderabad Central University in 2008. He has published several papers in national and international conferences and journals, and has authored electronics books published by New Age International, New Delhi, and Galgotia Publications, New Delhi. He is guiding Ph.D. students in the areas of parallel computing, 3G mobile technologies and SS7 signaling. He has worked as professor, dean and principal of several engineering colleges over 25 years.

S. Koteswari graduated (B.Tech, Grad. IETE) in 2004, completed her post graduation (M.Tech) from JNTU Kakinada in 2007, and is presently pursuing a Ph.D. from Rayalaseema University. She has been actively guiding students in the areas of VLSI and image processing. She is currently working as an Assistant Professor in the Electronics and Communication Engineering Dept., DNR Engineering College, Bhimavaram, Andhra Pradesh, India.

B. Kezia Rani graduated B.Tech (ECE) with first class from Osmania University, and holds a PDGCE (JNTU) and an M.Tech (CSE) with distinction from Osmania University. She worked as Assistant Professor and Associate Professor in engineering colleges for 10 years and has been working as faculty at The Engineering Academy for 7 years.