A Dictionary based Efficient Text Compression Technique using Replacement Strategy


Debashis Chakraborty, Assistant Professor, Department of CSE, St. Thomas College of Engineering and Technology, Kolkata, 700023, India
Debajyoti Ghosh, UG Student, Department of CSE, St. Thomas College of Engineering and Technology, Kolkata, 700023, India
Piyali Ganguly, UG Student, Department of CSE, St. Thomas College of Engineering and Technology, Kolkata, 700023, India

ABSTRACT
The concept of compression comes from the need to store data using as little space as possible and to ease the transfer of data through a channel. The proposed algorithm deals with compression of text files using a character replacement technique. Every string of length six is compressed by assigning a single character to it, while a dictionary is maintained. The dictionary is used to decompress the encoded file. This gives a good compression ratio irrespective of the content of the text file.

Keywords
Lossless Compression, Character Replacement, Compression Ratio

1. INTRODUCTION
Data compression is a technique by which the same amount of data is transmitted using a smaller number of bits [1, 2, 3, 4, 6]. Data compression offers an attractive approach to reducing communication costs by using the available bandwidth effectively. Data compression techniques can be divided into two major classes: lossy and lossless. In lossless data compression, bits are reduced by identifying and eliminating statistical redundancy. The main feature of lossless compression techniques is that they can generate an exact duplicate of the input data stream after a compress/expand cycle. Lossless compression is possible because most real-world data has statistical redundancy. Lossy data compression concedes a certain loss of accuracy in exchange for greatly increased compression. Lossy compression proves effective when applied to graphics images and digitized voice. In general, data compression consists of taking a stream of symbols and transforming them into codes.
If the compression is effective, the resulting stream of codes will be smaller than the original symbols. The decision to output a certain code for a certain symbol or set of symbols is based on a model. The model is simply a collection of data and rules used to process input symbols and determine which code to output. This paper presents a new data compression algorithm based on dictionary-based text compression. The efficiency of dictionary-based text compression lies in providing a good compression ratio as well as a fast decompression mechanism. The main focus here is on compression of text data with a lossless compression technique: the development of a new algorithm to compress and decompress text data with a high compression ratio [3]. The proposed algorithm aims at producing a minimum of 70% compression irrespective of the content and the size of the text data.

2. ILLUSTRATIONS
2.1 Algorithm Strategy
The proposed algorithm replaces a string of characters [4] with a single character, thus reducing the effective length of the string. This is done by using printable and non-printable ASCII characters. As per the ASCII standard, the first 256 characters are each given 1 byte of memory space. The next group of characters, ranging from 256 to 4095, is each given 2 bytes of memory space, and the group ranging from 4096 to 65535 is likewise each given 2 bytes of memory space. The algorithm is aided by a dynamic dictionary [5] which the algorithm creates. Taking a string of length six from the input file, we divide it into two parts, each of length three, and put the first part in the row and the second part in the column of the dictionary respectively. At their intersection, we place one of those printable or non-printable ASCII characters.
For the next string of length six from the input file, we again divide it into two parts and check for the substrings of length three among the rows and columns of the dictionary to find a match [4]. If neither the row nor the column entry matches, we insert both into the dictionary and add a new symbol. If only the first substring matches a row entry, we insert the second substring as a column entry and add a new symbol. If the first substring doesn't match any row entry but the second substring matches a column entry, then the first substring is entered in the row and a symbol is assigned at their intersection. If both substrings match a row entry and a column entry respectively, no symbol is inserted, and the next string of length six is taken from the input file. The process continues until the end of the file is reached. Since the replacement is one character against six, and instead of 6 bytes each string now takes either 1 or 2 bytes, on average a minimum of 65% compression is achieved. Since it is unlikely that the same 6 consecutive letters will occur with high frequency, each string is broken into two halves of length three and matching is done on the halves for better results; this also keeps the size of the dictionary manageable. For example, let us consider the string: This$is$a$Compression$Algorithm$1234

For our convenience, we assume that space is denoted by $. We take the input string, read the first 6 characters This$i and break it into two substrings of length three each: Thi and s$i. We then form the dictionary as per the algorithm and assign a unique character at their intersection.

Table 1: Formation (1st step)
Row: Thi    Column: s$i

Proceeding in this manner we form the dictionary until the end of the string is reached.

Table 2: Formation (2nd step)
Adds row: s$a    Column: $Co    (symbols Ф and Ξ mark the intersections formed so far)

Table 3: Formation (3rd step)
Adds row: mpr    Column: ess

Table 4: Formation (4th step)
Adds row: ion    Column: $Al

Table 5: Formation (5th step)
Adds row: gor    Column: ith

Table 6: Formation (6th step)
Adds row: m$1    Column: 234

After forming the dictionary, the file is compressed by the character replacement technique with the help of the dictionary. After compression the compressed file will be: ФΞ followed by the symbols assigned in the later steps. Using the dictionary we can again generate the original file without any loss of information. Thus the proposed algorithm is lossless.

2.2 Algorithm
A text file is taken as input and the following compression algorithm is used on it to compress the original file.

Steps for Compression:
1. Open the text file.
2. Read the first six characters of the file.
3. Generate a dictionary with two fields, row and column, and a third field, symbol, at their intersection.
4. Divide the string into two parts and place them in row and column respectively.
5. Allocate a symbol/character value to the field symbol.
6. Read the next sequence of six characters from the file.
7. Divide the string into two parts and put the parts in two separate string variables, first part and second part respectively.
8. Compare first part with the rows and second part with the columns in the dictionary.
9. Update the dictionary according to the comparison:
9.1. If only the row value matches, make a new column entry in the dictionary and put the string in it. Assign a character to the intersection of the existing row value and the new column value.
9.2. If only the column value matches, make a new row entry in the dictionary and put the string in it.
Assign a character to the intersection of the new row value and the existing column value.
9.3. If neither matches, insert the first part as a new row entry and the second part as a new column entry, and assign a character to their intersection (as described in Section 2.1).
10. Repeat steps 6 to 9 until the end of the file is reached.
11. Store the dictionary in the compressed file.
12. Rewind to the start of the source file.
13. Read six characters from the file and place them in two string variables, first part and second part, 3 characters each.
14. Compare first part with the rows and second part with the columns in the dictionary.
15. When a match is found, place the symbol value in the compressed file.
16. Repeat steps 13 to 15 till the end of the file is reached.

After the compression of the original file is done, the compressed file is decompressed using the decompression algorithm. The technique is lossless because after decompression the decompressed file is identical to the original file.

Steps for Decompression:
1. Open the compressed file.
2. Reconstruct the dictionary from the stored values in the compressed file.
3. Read one character at a time from the compressed file.
4. Search the symbol contents of the dictionary for a match.
5. Once a match is obtained,
5.1. Read the row and column values at the intersection where the symbol match has been obtained.
5.2. Place the row and the column strings respectively in the decompressed file.
5.3. Repeat steps 3 to 5 till the end of the file is reached.
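The compression and decompression passes above can be sketched as a round trip. This is our own simplified illustration, not the paper's implementation: it keeps the dictionary in memory rather than storing it in the output file, and it assumes the input length is a multiple of six (a real implementation would pad or flag a trailing remainder):

```python
# Round-trip sketch: each 6-character chunk maps to one dictionary symbol,
# and decompression looks the symbol up to recover the row+column strings.

def compress(text):
    table = {}                       # (first3, second3) -> symbol code
    for i in range(0, len(text), 6):
        pair = (text[i:i + 3], text[i + 3:i + 6])
        if pair not in table:        # new pair: assign the next symbol
            table[pair] = len(table)
    codes = [table[(text[i:i + 3], text[i + 3:i + 6])]
             for i in range(0, len(text), 6)]
    return table, codes

def decompress(table, codes):
    inverse = {code: pair for pair, code in table.items()}
    return "".join(first + second
                   for first, second in (inverse[c] for c in codes))

text = "This$is$a$Compression$Algorithm$1234"
table, codes = compress(text)
assert decompress(table, codes) == text   # lossless round trip
```

The final assertion demonstrates the losslessness claim: the decompressed output is byte-for-byte identical to the input.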

2.3 Measuring Compression Performance
Performance measures [1, 4, 9] determine whether a compression technique is efficient or not according to certain criteria. Depending on the nature of the application, there are various criteria to measure the performance of compression algorithms. The two most important factors are time complexity and space complexity, and there is always a trade-off between the two. The compression behaviour depends on the category of the compression algorithm: lossy or lossless. The following measurements are used to evaluate the performance of lossless algorithms.

Compression Ratio: the ratio between the size of the compressed file and the size of the source file.

    Compression Ratio = (size after compression) / (size before compression)    (1)

Compression Factor: the inverse of the Compression Ratio, i.e. the ratio between the size of the source file and the size of the compressed file.

    Compression Factor = (size before compression) / (size after compression)    (2)

Saving Percentage: the shrinkage of the source file expressed as a percentage.

    Saving Percentage = ((size before compression - size after compression) / (size before compression)) x 100%    (3)

Based on the above three metrics we evaluate compression performance and efficiency. The smaller the compression ratio, the better the performance of the compression algorithm; conversely, for better compression algorithms the compression factor must be high. The saving percentage [3, 5, 7, 8] gives the percentage of compression actually achieved. Using these parameters we measure the performance and efficiency of the proposed algorithm and compare it with other standard algorithms. We can show that even in the worst-case analysis the proposed algorithm achieves a minimum saving percentage of 70%. In other cases, the saving percentage may reach as high as 80%. That is where the proposed algorithm scores, even though there is a trade-off with the space complexity that the algorithm faces.

2.4 Calculation
The worst-case compression performance of the proposed algorithm is analyzed here.
We take the worst case, where no match is ever found and a new entry is inserted in the dictionary every time. The analysis is independent of the content of the file.

1st part: first 256 symbols
Size taken up by the first 256 symbols in the compressed file: 256 * 1 = 256 bytes.
Original size of the file for those 256 symbols: 256 * 6 = 1536 bytes.
Hence, compression ratio: 256 / 1536 = 0.167; saving percentage: (1536 - 256) / 1536 = 83.33%.

2nd part: next 3840 symbols
Size taken up by symbols 256 to 4095 in the compressed file: 3840 * 1 = 3840 bytes.
Size of symbols 256 to 4095 in the original file: 3840 * 6 = 23040 bytes.
Thus, compression ratio: 3840 / 23040 = 0.167; saving percentage: (23040 - 3840) / 23040 = 83.33%.

3rd part: next 61440 symbols
Size taken up by symbols 4096 to 65535 in the compressed file: 61440 * 2 = 122880 bytes.
Size of symbols 4096 to 65535 in the original file: 61440 * 6 = 368640 bytes.
Thus, compression ratio: 122880 / 368640 = 0.333; saving percentage: (368640 - 122880) / 368640 = 66.67%.

For the worst-case scenario we consider that all the entries of the dictionary have been filled, i.e. the dictionary contains all possible entries and every string combination is present in the file once, and we compare the total size of the compressed file with that of the original file:

Original file size: (256 * 6) + (3840 * 6) + (61440 * 6) = 393216 bytes.
Compressed file size: (256 * 1) + (3840 * 1) + (61440 * 2) = 126976 bytes.
Final compression ratio: 126976 / 393216 = 0.323; saving percentage: (393216 - 126976) / 393216 = 67.7%.

Thus, in the worst-case analysis, the compression will be more than 65% irrespective of the content of the file. Storing the dictionary in the compressed file will increase its size, affecting the compression ratio for smaller files; for larger files, however, the dictionary size becomes relatively insignificant.
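The worst-case totals above can be re-derived in a few lines using the metric definitions from equations (1) and (3). This is our own check script; the per-range symbol widths are the ones used in the paper's calculation:

```python
# Re-deriving the worst-case totals of Section 2.4. Each tuple is
# (number of symbols, bytes per symbol in the compressed file); every
# source string is 6 bytes.
ranges = [(256, 1), (3840, 1), (61440, 2)]

original = sum(count * 6 for count, _ in ranges)            # 393216 bytes
compressed = sum(count * width for count, width in ranges)  # 126976 bytes

ratio = compressed / original                       # eq. (1)
saving = (original - compressed) / original * 100   # eq. (3)
print(round(ratio, 3), round(saving, 1))            # 0.323 67.7
```

The script reproduces the final compression ratio of 0.323 and the 67.7% saving percentage reported in the text.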
International Journal of Computer Applications (0975 - 8887)

2.5 Experimental Results
When implemented on files of arbitrary size, the proposed algorithm shows compression ratios and saving factors matching those calculated theoretically. The algorithm works on different file sizes irrespective of the content of the file and yields the same results. Since the decompressed files are rebuilt using the dictionary, there is no loss of information, and hence the algorithm proves to be lossless. Fig. 1 plots the original file size against the compressed file size; the ratio between the two, the compression ratio, can be read directly from the chart. The experimental results show the saving percentage to be above 70% in every case. The chart also displays the stability of the compression ratio [3, 5] achieved irrespective of the file size. We observe that as the file size increases the saving percentage decreases, but theoretically it remains above 70% even in the worst case, which is better than or comparable with existing standard algorithms. A comparison with two existing standard algorithms for text compression is also made in order to compare the compression ratio and saving percentage.

Fig. 1. Graph of Original Size vs. Compressed Size

A brief analysis of the results shows that a stable compression ratio is achieved irrespective of the file size and the content of the file. Table 7 shows the original size and the size after compression of the text files taken under consideration, together with a comparative study of the compression ratios of the existing standard algorithms and the proposed algorithm. The algorithms taken under consideration are the Huffman encoding and LZW encoding algorithms [11, 12, 13, 14].

Table 7: Comparative study of algorithms

Name          Original  Compressed Size       Compressed Size        Compressed Size
              Size      (Proposed Algorithm)  (Winrar Compression)   (Arithmetic Compression)
sample1.java   828      132                   223                    315
sample2.java  1019      163                   265                    387
sample4.doc    978      156                   264                    371
sample5.doc    867      139                   234                    321
sample6.doc    714      116                   179                    264
sample8.htm    578       93                   156                    219
sample9.htm    922      147                   224                    350

Table 8: Comparison of compression performance

Name          Saving Percentage     Saving Percentage      Saving Percentage
              (Proposed Algorithm)  (Winrar Compression)   (Arithmetic Compression)
sample1.txt   84.05                 73.06                  61.95
sample2.txt   84.00                 73.99                  62.02
sample4.doc   84.04                 73.00                  62.06
sample5.doc   83.96                 73.01                  62.97
sample6.doc   83.75                 74.93                  63.02
sample8.htm   83.91                 73.01                  62.11
sample9.htm   84.05                 75.70                  62.03

The compression performance is shown in Table 8 in terms of saving percentage and compression ratio. The comparative study shows that for the input text files the saving factor for the proposed algorithm is approximately in the range of 84%, which is higher than that of the two existing standard algorithms. The compression ratios can also be clearly read from the tables above. The algorithm also gives better results than Winrar and arithmetic compression. The stable compression ratio is the area where the proposed algorithm scores over the existing standard algorithms in comparison.

3. CONCLUSION
A new algorithm for text compression has been recommended in this paper, whose key element is the use of printable and non-printable ASCII characters to replace strings of characters. Character replacement is the main highlighted feature. The creation of a dynamic dictionary, which comes in handy while decompressing, is another striking feature which in turn makes the algorithm lossless. The stable compression ratio and a saving factor of more than 70% even in the worst-case analysis make this algorithm more efficient than most of the existing standard algorithms. The comparative study in tabular form supports the argument.
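As a quick sanity check (our own script, not part of the paper), the proposed-algorithm percentages in Table 8 follow from the sizes in Table 7 via equation (3), up to the rounding used in the tables:

```python
# Cross-check: saving percentage (eq. 3) computed from Table 7's original
# and proposed-algorithm compressed sizes; the results agree with Table 8's
# first column to within rounding.
table7 = {
    "sample1": (828, 132),
    "sample2": (1019, 163),
    "sample4": (978, 156),
    "sample5": (867, 139),
    "sample6": (714, 116),
    "sample8": (578, 93),
    "sample9": (922, 147),
}

for name, (original, compressed) in table7.items():
    saving = (original - compressed) / original * 100
    print(f"{name}: {saving:.2f}%")
```

All seven files land in the 83.7% to 84.1% band, consistent with the roughly 84% saving factor claimed in the discussion of Table 8.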
The high saving factor achieved regardless of the file size and the content of the file makes this algorithm useful. The basic purposes of data compression are to reduce the file size for storage and for transmission over a channel; with the compression ratio achieved by the proposed algorithm, both purposes are served. With proper implementation the space-time trade-off can also be handled effectively. The proposed algorithm has an overhead, the dictionary, but in the case of larger files the overhead becomes negligible. The space complexity may be reduced by the selection of a proper data structure. With better implementation techniques these issues can be handled easily. The algorithm is recommended for text compression because of its high saving percentage, stable compression ratio and lossless nature.

4. ACKNOWLEDGEMENT
First, we would like to thank Professor Subarna Bhattacharjee [15] for her valuable advice, provision and constant encouragement. Her constant assessments and

reviews gave us the much needed theoretical clarity. We owe a substantial lot to all the faculty members of the Department of Computer Science and Engineering and the Department of Information Technology. We would also like to thank our friends for patiently listening to our explanations. Their reviews and comments were exceptionally helpful. And of course, we owe our ability to complete this project to our families, whose love and encouragement have remained our cornerstone.

5. REFERENCES
[1] Debra A. Lelewer and Daniel S. Hirschberg, "Data Compression", ACM Computing Surveys (CSUR), Vol. 19, Issue 3, Sept. 1987, pp. 261-296.
[2] Khalid Sayood, "Introduction to Data Compression", Academic Press, 1996.
[3] David Salomon, "Data Compression: The Complete Reference", Springer, 2000.
[4] Mark Nelson, "The Data Compression Book".
[5] Debashis Chakraborty, Sandipan Bera, Anil Kumar Gupta and Soujit Mondal, "Efficient Data Compression using Character Replacement through Generated Code", IEEE NCETACS 2011, Shillong, India, March 4-5, 2011, p. 334.
[6] Mark Nelson and Jean-Loup Gailly, "The Data Compression Book", Second Edition, M&T Books.
[7] Gonzalo Navarro and Mathieu Raffinot, "A General Practical Approach to Pattern Matching over Ziv-Lempel Compressed Text", Proc. CPM '99, LNCS 1645, pp. 14-36.
[8] M. Atallah and Y. Genin, "Pattern matching text compression: Algorithmic and empirical results", International Conference on Data Compression, Vol. II, pp. 349-352, Lausanne, 1996.
[9] Shrusti Porwal, Yashi Chaudhary, Jitendra Joshi and Manish Jain, "Data Compression Methodologies for Lossless Data and Comparison between Algorithms", International Journal of Engineering Science and Innovative Technology (IJESIT), Vol. 2, Issue 2, March 2013.
[10] Timothy C. Bell, "Text Compression", Prentice Hall, 1990.
[11] J. G. Cleary and I. H. Witten, "Data Compression using adaptive coding and partial string matching", IEEE Trans. Commun., Vol. COM-32, No. 4, pp. 396-402, April 1984.
[12] Pankaj Kumar and Ankur Kumar Varshney, "Double Huffman Coding", International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 2, Issue 8, August 2012.
[13] Nishad PM and R. ManickaChezian, "Enhanced LZW (Lempel-Ziv-Welch) Algorithm by Binary Search with Multiple Dictionary to Reduce Time Complexity for Dictionary Creation in Encoding and Decoding", International Journal of Advanced Research in Computer Science and Software Engineering.
[14] Dheemanth H N, "LZW Data Compression", American Journal of Engineering Research (AJER), Vol. 3, Issue 2, pp. 22-26.
[15] S. Bhattacharjee, J. Bhattacharya, U. Raghavendra, D. Saha and P. Pal Chaudhuri, "A VLSI architecture for cellular automata based parallel data compression", IEEE, Bangalore, India, Jan. 3-6, 2006.

IJCATM : www.ijcaonline.org