# A Review on LBG Algorithm for Image Compression


Ms. Asmita A. Bardekar, Mr. P. A. Tijare
CSE Department, SGBA University, Amravati; Sipna's College of Engineering and Technology, In front of Nemani Godown, Badnera Road, Amravati, Maharashtra, India

Abstract: One of the important factors for image storage or transmission over any communication medium is image compression. Compression makes it possible to create files of manageable, storable, and transmittable size, and it is achieved by exploiting redundancy. Image compression techniques fall into two categories: lossless and lossy. The algorithm reviewed here is the Linde, Buzo, and Gray (LBG) algorithm, an iterative algorithm that alternately applies the two optimality criteria, the nearest-neighbor condition and the centroid condition. The algorithm requires an initial codebook to start with. The codebook is generated from a training set of images representative of the type of images to be compressed. The initial codebook can be obtained by different methods, such as random codes and splitting; in the LBG algorithm it is obtained by the splitting method. In this method an initial code vector is set to the average of the entire training sequence. This code vector is then split into two, and the iterative algorithm is run with these two vectors as the initial codebook. The final two code vectors are split into four, and the process is repeated until the desired number of code vectors is obtained. The performance of the compression algorithm can be measured by metrics such as compression ratio (CR) and peak signal-to-noise ratio (PSNR).

Keywords: Vector quantization, Codebook generation, LBG algorithm, Image compression.

## I. INTRODUCTION

In this paper, the LBG algorithm for image compression is reviewed. One of the important factors for image storage or transmission over any communication medium is image compression.
Compression makes it possible to create files of manageable, storable, and transmittable size, and it is achieved by exploiting redundancy. Image compression techniques fall into two categories: lossless and lossy. In lossless techniques the image can be reconstructed after compression without any loss of data in the entire process. Lossy techniques, on the other hand, are irreversible, because they involve quantization, which results in loss of data. More compression is achieved with lossy compression than with lossless compression [2]: in return for accepting this distortion in the reconstruction, we can generally obtain much higher compression ratios than are possible with lossless compression. Any compression algorithm is acceptable provided a corresponding decompression algorithm exists. Vector quantization (VQ) achieves high compression, making it useful for band-limited channels. The algorithm for the design of an optimal VQ is commonly referred to as the Linde-Buzo-Gray (LBG) algorithm, and it is based on minimization of the squared-error distortion measure. The basic idea of VQ is that if a set of representative image vectors, also called codewords, can be designed, they can then be used to represent all the image blocks. The set of representative codewords forms the codebook for VQ. Generally, VQ can be divided into three procedures: codebook design, image encoding, and image decoding. The codebook design procedure is executed before the other two. Image compression is an indispensable coding operation that minimizes the data volume for many applications, such as storage of images in a database, TV and facsimile transmission, and video conferencing. Compression of images involves taking advantage of the redundancy in the data present within an image. Vector quantization is often used when high compression ratios are required.
In VQ, the transformed image has to be coded into vectors. Each of these vectors is then approximated by, and replaced with, the index of one of the elements of the codebook. The only information transmitted is the index for each of the blocks of the original image, thus reducing the transmission bit rate. The goal of the codebook design procedure is to design the codebook that is to be used in the image encoding/decoding procedures. Several codebook design algorithms have been used to design VQ codebooks; among them, the LBG algorithm is the most commonly used. In the image encoding procedure of VQ, the image is partitioned into a set of non-overlapping image blocks of n x n pixels. Each image block of n x n pixels can be ordered to form a k-dimensional image vector, where k = n x n. The closest codeword in the codebook is then determined for each image vector. The compressed codes of VQ are the indices of the closest codewords in the codebook for all the image blocks [6].
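The encoding procedure just described can be sketched as follows (a minimal NumPy sketch; the block size and codebook used here are illustrative, not from the paper):

```python
import numpy as np

def image_to_vectors(image, n):
    """Partition an image into non-overlapping n x n blocks and
    flatten each block into a k-dimensional vector, k = n * n."""
    h, w = image.shape
    vectors = []
    for r in range(0, h - h % n, n):
        for c in range(0, w - w % n, n):
            vectors.append(image[r:r + n, c:c + n].reshape(-1))
    return np.array(vectors)

def encode(vectors, codebook):
    """Return, for each image vector, the index of the closest
    codeword under squared Euclidean distance."""
    # d[i, j] = ||vectors[i] - codebook[j]||^2
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```

The index array produced by `encode` is the compressed representation that is transmitted or stored.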

Vector quantization (VQ) methods allow arbitrary compression of digital images and are widely employed in encoding video. These methods share a common structure: the image is divided into pixel blocks, and these blocks are approximated by a smaller set of images drawn from a fixed codebook. The number of bits needed to specify the original image block is replaced by the typically smaller number of bits needed to specify its corresponding code. Image compression using vector quantization techniques has received wide interest. In VQ approaches, adjacent pixels are taken as a single block, which is mapped into a finite set of codewords. In the decoding stage the codewords are replaced by the corresponding vectors. The set of codewords and the associated vectors together is called a codebook. In VQ, the correlation that exists between adjacent pixels is naturally taken into account, and with a comparatively small codebook one achieves a small quantization error in reconstructed images. The main idea in VQ is to find a codebook that minimizes the mean quantization error in reconstructed images. One of the best-known VQ methods is the Linde-Buzo-Gray (LBG) algorithm, which iteratively searches for clusters in the training data. The cluster centroids are used as the codebook vectors, while the codewords for them can be selected arbitrarily [10]. The compression rate is the ratio of the file size of the uncompressed image to the size of the compressed image. The compression rate is varied by decreasing the size of the codebook, but typically at the cost of increasing the discrepancy between a block and the code that replaces it.

## II. RELATED WORK

By grouping source outputs together and encoding them as a single block, we can obtain efficient lossy as well as lossless compression algorithms. We can view these blocks as vectors, hence the name vector quantization.
Vector quantization (VQ) is a lossy data compression method based on the principle of block coding; it is a fixed-to-fixed-length algorithm. The design of the vector quantizer is considered a challenging problem due to the need for multi-dimensional integration [3]. Linde, Buzo, and Gray proposed a VQ design algorithm based on a training sequence; the use of a training sequence bypasses the need for multi-dimensional integration [1]. In scalar quantization, each quantizer codeword represents a single sample of the source output. By taking longer and longer sequences of input samples, it is possible to extract the structure in the source output. Even when the input is random, encoding sequences of samples instead of encoding individual samples separately provides a more efficient code. Encoding sequences of samples is advantageous in the lossy compression framework as well. By advantageous we mean a lower distortion for a given rate, or a lower rate for a given distortion; by rate we mean the average number of bits per input sample, and the measures of distortion will generally be the mean squared error and the signal-to-noise ratio. The idea that encoding sequences of outputs can provide an advantage over encoding individual samples is well established: the basic results of information theory were proved by taking longer and longer sequences of inputs. This indicates that a quantization strategy that works with sequences or blocks of output should provide some improvement in performance if we generate a representative set of sequences: given a source output sequence, we represent it with one of the elements of the representative set. In vector quantization we group the source output into blocks or vectors. For example, we can take a block of L pixels from an image and treat each pixel value as a component of a vector of size, or dimension, L. This vector of source outputs forms the input to the vector quantizer.
In both the encoder and the decoder of the vector quantizer, we have a set of L-dimensional vectors called the codebook of the vector quantizer. The vectors in this codebook, known as code-vectors, are selected to be representative of the vectors we generate from the source output. Each code-vector is assigned a binary index. At the encoder, the input vector is compared to each code-vector in order to find the code-vector closest to the input vector. In order to inform the decoder which code-vector was found to be closest to the input vector, we transmit or store the binary index of the code-vector. Because the decoder has exactly the same codebook, it can retrieve the code-vector given its binary index. A pictorial representation of this process is shown in Fig. 1.

[Fig. 1: The vector quantization procedure.]

Although the encoder may have to perform a considerable amount of computation in order to find the closest reproduction vector to the vector of source outputs, the decoding consists of a table lookup. This makes vector quantization a very attractive encoding scheme for applications in which the resources available for decoding are considerably less than the resources available for encoding [7]. The codebook consists of a set of vectors, called code-vectors. A vector quantization algorithm uses this codebook to map an input vector to the code-vector closest to it.
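The asymmetry described above, a search at the encoder versus a table lookup at the decoder, can be sketched as follows (illustrative only; `codebook` is any array of L-dimensional code-vectors):

```python
import numpy as np

def nearest_index(x, codebook):
    """Encoder side: exhaustive search for the code-vector
    closest to the input vector x (squared Euclidean distance)."""
    distances = ((codebook - np.asarray(x)) ** 2).sum(axis=1)
    return int(distances.argmin())

def decode(indices, codebook):
    """Decoder side: a plain table lookup, which is why VQ
    decoding is so much cheaper than encoding."""
    return codebook[np.asarray(indices)]
```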

Image compression, the goal of vector quantization, can then be achieved by transmitting or storing only the index of the vector. The most commonly known and used algorithm is the Linde-Buzo-Gray (LBG) algorithm. There are two common methods of codebook initialization. In the random-codes method, a random initial codebook is used in which the first Nc vectors (or any widely spaced Nc vectors) of the training set are chosen as the initial codebook. In the splitting method, the centroid of the entire training set is found, and this single codeword is then split (perturbed by a small amount) to form two codewords; design continues, with the optimum code of one stage split to form an initial code for the next stage, until Nc code vectors are generated. For a particular training set (e.g. the training set corresponding to the approximation component), a step-by-step selection process is applied to generate a prescribed number of temporary clusters. Similar vectors of the training space gather under the same cluster. The clustering process is initiated by finding the smoothest vector in the training space: the variance of a vector is used as the determining factor of smoothness, and the vector having minimum variance is selected as the smoothest vector and termed the reference vector. From these clusters, C vectors are chosen carefully, one from each cluster, by natural selection. The quality of the reconstructed image depends on the level of decomposition, the size of the codebook, and the code-vector dimension. The larger the codebook, the better the quality, but the rate correspondingly increases. The amount of compression is described in terms of the rate, which is measured in bits per sample. Suppose we have a codebook of size K and the input vector is of dimension L. In order to inform the decoder which code-vector was selected, we need to use log2 K bits.
For example, if the codebook contained 256 code-vectors, we would need 8 bits to specify which of the 256 code-vectors had been selected at the encoder. Thus, the number of bits per vector is log2 K bits. As each code-vector contains the reconstruction values for L source output samples, the number of bits per sample is log2 K / L. Thus, the rate for an L-dimensional vector quantizer with a codebook of size K is log2 K / L. When we say that in a codebook C containing the K code-vectors {Y_i} the input vector X is closest to Y_j, we mean that

‖X − Y_j‖² ≤ ‖X − Y_i‖² for all Y_i ∈ C,

where X = (x₁, x₂, …, x_L) and ‖X‖² = Σ_{i=1}^{L} x_i².

When we are discussing compression of images, a sample refers to a single pixel. Finally, the output points of the quantizer are often referred to as levels; thus, when we wish to refer to a quantizer with K output points or code-vectors, we may refer to it as a K-level quantizer. As we block the input into larger and larger blocks or vectors, these higher dimensions provide even greater flexibility and the promise of further gains. Together with the distortion measure, knowing the output points gives us sufficient information to implement the quantization process. Quantization itself is a very simple process. In practice, the quantizer consists of two mappings: an encoder mapping and a decoder mapping. The encoder divides the range of values that the source generates into a number of intervals. Each interval is represented by a distinct codeword, and the encoder represents all the source outputs that fall into a particular interval by the codeword representing that interval. As there could be many, possibly infinitely many, distinct sample values that fall in any given interval, the encoder mapping is irreversible: knowing the code only tells us the interval to which the sample value belongs. For a given set of quantization values, the optimal thresholds are equidistant from the values.
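The rate formula can be checked with a few lines (a minimal sketch):

```python
import math

def vq_rate(K, L):
    """Rate in bits per sample of an L-dimensional vector
    quantizer with a codebook of size K: log2(K) / L."""
    return math.log2(K) / L

# A 256-entry codebook needs log2(256) = 8 bits per vector; with
# 16-dimensional vectors (4x4 blocks) that is 0.5 bits per pixel.
```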
Instead of output levels, vector quantization employs a set of representation vectors (for the one-dimensional case) or matrices (for the two-dimensional case). The set is referred to as the codebook and the entries as codewords. The thresholds are replaced by decision surfaces defined by a distance metric; typically, Euclidean distance from the codeword is used. In the coding phase, the image is subdivided into blocks, typically of a fixed size of n x n pixels. For each block, the nearest codebook entry under the distance metric is found, and the ordinal number of the entry is transmitted. On reconstruction, the same codebook is used and a simple look-up operation is performed to produce the reconstructed image. The standard approach to calculating the codebook is the Linde, Buzo, and Gray (LBG) algorithm. Initially, K codebook entries are set to random values. On each iteration, each block in the input space is classified based on its nearest codeword, and each codeword is then replaced by the mean of its resulting class. The iterations continue until a minimum acceptable error is achieved. This algorithm minimizes the mean squared error over the training set; however, it is very sensitive to the initial codebook [8]. When the image blocks are vector quantized, there is likely to be high correlation among the neighboring blocks, and hence among the corresponding codeword indices. Therefore, if indices are coded by comparison with previous indices, a further reduction in the bit rate can be achieved. In this scheme, the index of the codeword of a block is encoded by exploiting the degree of similarity of the block with the previously encoded upper or left blocks. If the degree of similarity of the block with the neighboring blocks is high, we assume that the closest-match codeword of the current block may be near the codewords of the neighboring blocks.
For example, if one of the two neighboring blocks' codeword index is N, the closest-match codeword of the block to be encoded may lie between the (N−J)th codeword and the (N+J)th codeword in the codebook, where J is an arbitrary number. The index can then be coded in log2(2J) bits. This idea is based on a property of codebooks designed using the LBG algorithm with the splitting technique. In the splitting technique, a bigger codebook is generated by splitting each codeword of the smaller codebook into two.
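The neighboring-index idea can be sketched as follows (the choice of J and the fallback when no neighbor is close are illustrative assumptions, not the exact scheme of [9]):

```python
import math

def relative_index_bits(J):
    """Bits needed to code an offset in the 2J-wide window
    (N-J, N+J] around a neighbor's index N: log2(2J) bits."""
    return math.ceil(math.log2(2 * J))

def encode_relative(index, neighbor_index, J):
    """Return the offset to transmit if the current block's
    codeword index lies within the window around the neighbor's
    index; None means the full index must be sent instead."""
    offset = index - neighbor_index
    if -J < offset <= J:
        return offset
    return None
```

With J = 8, for example, an offset costs log2(16) = 4 bits instead of the full log2(K) bits of an absolute index.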

The size of the codebook is always a power of two (2^M, then 2^(M+1)). Hence, two relatively similar image blocks may have the same closest-match codeword, in the Jth position, in a codebook of size 2^M, while in a codebook of size 2^(M+1) one of the two image blocks may have its closest-match codeword in the Jth place and the other block's codeword may be in the (J+1)th place [9]. Thus, the quantizer is completely defined by the output points and a distortion measure. The main problem in VQ is choosing the vectors for the codebook so that the mean quantization error is minimal; once the codebook is known, mapping input vectors to it is a trivial matter of finding the best match. Plain VQ compression with a 2x2 block size requires one index per block, i.e. a quarter as many indices as pixels, and the index-finding time grows as the codebook grows, so it is not suitable for fast data transactions. Codebook formulation also requires more time, whereas LBG is efficient and a very good choice for data transmission, as it uses a minimum of codebook entries; still, those entries have to be formulated quickly with a minimum block size. The major problem is the length of the codebook and the time required to formulate it: we have to balance the number of codebook entries, the block size, and the codebook formulation time. A codebook with the maximum number of entries will be time-consuming, so we should have an initial codebook with some decided entries; if an input does not match the initial codebook well, there will be more distortion. For this, we have the LBG algorithm, which is optimized.

## III. LBG ALGORITHM

The Linde, Buzo, and Gray (LBG) algorithm is an iterative algorithm which requires an initial codebook to start with. The codebook is generated using a training set of images [5]; the set is representative of the type of images that are to be compressed.
There are different methods, such as random codes and splitting [4], by which the initial codebook can be obtained. In the LBG algorithm this initial codebook is obtained by the splitting method. In this method an initial code vector is set to the average of the entire training sequence. This code vector is then split into two, and the iterative algorithm is run with these two vectors as the initial codebook. The final two code vectors are split into four, and the process is repeated until the desired number of code vectors is obtained [1]. The performance of the compression algorithm can be measured by metrics such as compression ratio (CR) and peak signal-to-noise ratio (PSNR) [2]. One way of exploiting the structure in the source output is to place the quantizer output points where the source outputs (blocked into vectors) are most likely to congregate. The set of quantizer output points is called the codebook of the quantizer, and the process of placing these output points is often referred to as codebook design. When we group the source output into two-dimensional vectors, we might be able to obtain a good codebook design by plotting a representative set of source output points and then visually locating where the quantizer output points should be. However, this approach to codebook design breaks down when we design higher-dimensional vector quantizers, so we need an automatic procedure for locating where the source outputs are clustered [7]. The Linde-Buzo-Gray algorithm is as follows:

1. Start with an initial set of reconstruction values {Y_i⁽⁰⁾}, i = 1, …, M, and a set of training vectors {X_n}, n = 1, …, N. Set k = 0 and D⁽⁰⁾ = 0, and select a threshold ε.
2. Find the quantization regions V_i⁽ᵏ⁾ = {X : d(X, Y_i) < d(X, Y_j) for all j ≠ i}, j = 1, 2, …, M.
3. Compute the distortion D⁽ᵏ⁾ = Σ_{i=1}^{M} Σ_{X ∈ V_i⁽ᵏ⁾} ‖X − Y_i⁽ᵏ⁾‖².
4. If (D⁽ᵏ⁾ − D⁽ᵏ⁻¹⁾) / D⁽ᵏ⁾ < ε, stop; otherwise, continue.
5. Set k = k + 1. Find the new reconstruction values {Y_i⁽ᵏ⁾}, i = 1, …, M, which are the centroids of the regions V_i⁽ᵏ⁻¹⁾. Go to Step 2.

(For continuous-valued sources the distortion in Step 3 is written as an integral weighted by the source density f(X); for practical implementation it is computed over the training set, as above.)

This algorithm forms the basis of most vector quantizer designs and is popularly known as the Linde-Buzo-Gray or LBG algorithm. Linde, Buzo, and Gray described a technique called the splitting technique for initializing the design algorithm. We begin by designing a vector quantizer with a single output point, in other words a codebook of size one, or a one-level vector quantizer. With a one-element codebook, the quantization region is the entire input space, and the output is the average value of the entire training set. From this output point, the initial codebook for a two-level vector quantizer can be obtained by including the output point for the one-level quantizer and a second output point obtained by adding a fixed perturbation vector. We then use the LBG algorithm to obtain the two-level vector quantizer. Once the algorithm has converged, the two codebook vectors are used to obtain the initial codebook for a four-level vector quantizer. This initial four-level codebook consists of the two codebook vectors from the final codebook of the two-level vector quantizer and another two vectors obtained by adding the perturbation vector to them. The LBG algorithm can then be run until this four-level quantizer converges. In this manner, we keep doubling the number of levels until we reach the desired number. The algorithm is continued until the distortion falls below some small threshold, which can be selected arbitrarily as any small value greater than zero. The necessary condition for a quantizer to be optimal is that it be a fixed-point quantizer. For finite-alphabet distributions the algorithm always converges to a fixed-point quantizer in a finite number of steps, and the distortion will not increase from one iteration to the next.
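The iteration in Steps 1-5, together with the splitting initialization, can be sketched in compact form (a minimal NumPy sketch; the perturbation size and the threshold ε are illustrative choices):

```python
import numpy as np

def lbg(training, M, eps=1e-3, perturb=1e-2):
    """Design an M-level codebook (M a power of two) from a set of
    training vectors using the LBG algorithm with splitting."""
    training = np.asarray(training, dtype=float)
    # One-level quantizer: the centroid of the whole training set.
    codebook = training.mean(axis=0, keepdims=True)
    while len(codebook) < M:
        # Splitting: each codeword becomes two, nudged apart by a
        # fixed perturbation vector.
        codebook = np.vstack([codebook - perturb, codebook + perturb])
        prev_D = None
        while True:
            # Step 2: assign each vector to its nearest codeword.
            d = ((training[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            assign = d.argmin(axis=1)
            # Step 3: total squared-error distortion.
            D = d[np.arange(len(training)), assign].sum()
            # Step 4: stop when the relative drop falls below eps.
            if prev_D is not None and (prev_D - D) / max(D, 1e-12) < eps:
                break
            prev_D = D
            # Step 5: move each codeword to the centroid of its region.
            for i in range(len(codebook)):
                members = training[assign == i]
                if len(members) > 0:
                    codebook[i] = members.mean(axis=0)
    return codebook
```

Empty regions simply keep their previous codeword here; practical implementations often re-seed them instead.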

The convergence of the algorithm depends on the initial conditions, so a splitting technique has been used for initializing the design algorithm. A one-level vector quantizer is designed with a codebook of size one, which is the average of all the training vectors in the set. The initial codebook for a two-level vector quantizer is obtained by retaining the original output point and adding a second one obtained by adding a fixed perturbation vector to it. After convergence, the two-level VQ is used to yield a four-level VQ in a similar manner, and the procedure is repeated until a codebook of the required size is obtained. The codebook size increases exponentially with the rate of the quantizer; increasing the size of the codebook improves the quality of the reconstruction, but the encoding time also increases because of the additional computation required to find the closest match. By including the final codebook of the previous stage at each splitting, we guarantee that the codebook after splitting will be at least as good as the codebook prior to splitting. To see how this algorithm functions, consider the following example of a two-dimensional vector quantizer codebook design [7].

Example: Suppose our training set consists of the height and weight values shown in Table 1, and the initial set of output points is shown in Table 2. (For ease of presentation, the coordinates of the output points are always rounded to the nearest integer.) The inputs, outputs, and quantization regions are shown in Fig. 2.

[Table 1: Training set for designing the vector quantizer codebook (height/weight pairs; values not reproduced here).]

The input (44, 41) has been assigned to the first output point; the inputs (56, 91), (57, 88), (59, 119), and (60, 110) have been assigned to the second output point; the inputs (62, 114) and (65, 120) have been assigned to the third output point; and the five remaining vectors from the training set have been assigned to the fourth output point. The distortion for this assignment is computed, and we then find the new output points.
There is only one vector in the first quantization region, so the first output point remains (44, 41). The average of the four vectors in the second quantization region (rounded up) is the vector (58, 102), which is the new second output point. In a similar manner, we can compute the third and fourth output points as (64, 117) and (69, 168). The new output points and the corresponding quantization regions are shown in Fig. 3. From Fig. 3 we can see that, while the training vectors that were initially part of the first and fourth quantization regions are still in the same quantization regions, the training vectors (59, 119) and (60, 110), which were in quantization region 2, are now in quantization region 3. The distortion corresponding to this assignment of training vectors to quantization regions is 89, considerably less than the original distortion. Given the new assignments, we can obtain a new set of output points. The first and fourth output points do not change, because the training vectors in the corresponding regions have not changed. However, the training vectors in regions 2 and 3 have changed; recomputing the output points for these regions, we get (57, 90) and (62, 116). The final form of the quantizer is shown in Fig. 4. The distortion corresponding to the final assignments is 60.17 [7].

[Fig. 2: Initial state of the vector quantizer (height vs. weight).]
[Table 2: Initial set of output points for codebook design (height/weight pairs; values not reproduced here).]
[Fig. 3: The vector quantizer after one iteration.]
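The centroid updates quoted in the example can be verified directly from the training vectors listed above (with halves rounded upward, matching the rounding in the text):

```python
import math

def rounded_centroid(vectors):
    """Component-wise mean of 2-D training vectors, each
    coordinate rounded to the nearest integer (halves up)."""
    n = len(vectors)
    sx = sum(v[0] for v in vectors)
    sy = sum(v[1] for v in vectors)
    return (math.floor(sx / n + 0.5), math.floor(sy / n + 0.5))

# New second output point after the first assignment:
print(rounded_centroid([(56, 91), (57, 88), (59, 119), (60, 110)]))  # (58, 102)
# Recomputed output points for regions 2 and 3:
print(rounded_centroid([(56, 91), (57, 88)]))                          # (57, 90)
print(rounded_centroid([(62, 114), (65, 120), (59, 119), (60, 110)]))  # (62, 116)
```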

[Fig. 4: Final state of the vector quantizer.]

The LBG algorithm guarantees that the distortion from one iteration to the next will not increase [7]. The compression is measured by performance metrics such as compression ratio (CR) and peak signal-to-noise ratio (PSNR) [2]. Compression ratio (CR) is defined as the ratio of the number of bits required to represent the data before compression to the number of bits required after compression. Mean squared error (MSE) refers to the average value of the square of the error between the original signal and the reconstruction, and is a common measure of distortion. Peak signal-to-noise ratio (PSNR) is defined as the ratio of the square of the peak value of the signal to the mean squared error; the distortion in the decoded images is measured using PSNR [10].

## IV. APPLICATION

Data transfer of uncompressed images over digital networks requires very high bandwidth. State-of-the-art image compression techniques may exploit the dependencies between the sub-bands in a transformed image. The LBG algorithm reduces the complexity of a transferred image without sacrificing performance. Image compression has diverse applications, including tele-video-conferencing, remote sensing, document imaging, and medical imaging.

## V. CONCLUSION

With this review, we come to the conclusion that the Linde, Buzo, and Gray (LBG) algorithm reduces the complexity of a transferred image without sacrificing performance, using the splitting technique for codebook generation.

## REFERENCES

[1] Y. Linde, A. Buzo, and R. M. Gray, "An Algorithm for Vector Quantizer Design," IEEE Transactions on Communications, January.
[2] A. Vasuki and P. T. Vanathi, "Image Compression using Lifting and Vector Quantization," GVIP Journal, Volume 7, Issue 1, April.
[3] P. Franti, "Genetic algorithm with deterministic crossover for vector quantization," Pattern Recognition Letters, 21 (1), 61-68.
[4] P. Franti, T. Kaukoranta, D.-F. Shen, and K.-S. Chang, "Fast and memory efficient implementation of the exact PNN," IEEE Transactions on Image Processing, 9 (5), May.
[5] Qin Zhang, John M. Danskin, and Neal E. Young, "A codebook generation algorithm for document image compression," 6211 Sudikoff Laboratory, Department of Computer Science, Dartmouth College, Hanover, USA.
[6] Yu-Chen Hu, Bing-Hwang Su, and Chih-Chiang Tsou, "Fast VQ codebook search algorithm for grayscale image coding," Image and Vision Computing, 26 (2008).
[7] Khalid Sayood, Introduction to Data Compression, Harcourt India Private Limited, New Delhi.
[8] Robert D. Dony and Simon Haykin, "Neural Network Approaches to Image Compression," Proceedings of the IEEE, vol. 83, no. 2, February.
[9] K. Somasundaram and S. Domnic, "Modified Vector Quantization Method for Image Compression," World Academy of Science, Engineering and Technology.
[10] Jari Kangas, "Increasing the Error Tolerance in Transmission of Vector Quantized Images by Self-Organizing Map," Helsinki University of Technology, Finland.
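The performance measures used throughout this review (CR, MSE, PSNR) can be sketched as follows (assuming 8-bit images, so the peak signal value is 255):

```python
import numpy as np

def compression_ratio(bits_before, bits_after):
    """CR: bits needed before compression divided by bits
    needed after compression."""
    return bits_before / bits_after

def mse(original, reconstructed):
    """Mean squared error between the original signal and the
    reconstruction."""
    o = np.asarray(original, dtype=float)
    r = np.asarray(reconstructed, dtype=float)
    return float(((o - r) ** 2).mean())

def psnr(original, reconstructed, peak=255.0):
    """PSNR in dB: ratio of the squared peak signal value to the
    mean squared error, on a logarithmic scale."""
    return float(10.0 * np.log10(peak ** 2 / mse(original, reconstructed)))
```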


Cloud Computing & Big Data 35 Lossless Image Compression having Compression Ratio Higher than JPEG Madan Singh madan.phdce@gmail.com, Vishal Chaudhary Computer Science and Engineering, Jaipur National

### Case-Based Reasoning. CS 188: Artificial Intelligence Fall Nearest-Neighbor Classification. Parametric / Non-parametric.

CS 188: Artificial Intelligence Fall 2008 Lecture 25: Kernels and Clustering 12/2/2008 Dan Klein UC Berkeley Case-Based Reasoning Similarity for classification Case-based reasoning Predict an instance

### ECE 499/599 Data Compression & Information Theory. Thinh Nguyen Oregon State University

ECE 499/599 Data Compression & Information Theory Thinh Nguyen Oregon State University Adminstrivia Office Hours TTh: 2-3 PM Kelley Engineering Center 3115 Class homepage http://www.eecs.orst.edu/~thinhq/teaching/ece499/spring06/spring06.html

DIGITAL IMAGE PROCESSING WRITTEN REPORT ADAPTIVE IMAGE COMPRESSION TECHNIQUES FOR WIRELESS MULTIMEDIA APPLICATIONS SUBMITTED BY: NAVEEN MATHEW FRANCIS #105249595 INTRODUCTION The advent of new technologies

### Dynamic local search algorithm for the clustering problem

UNIVERSITY OF JOENSUU DEPARTMENT OF COMPUTER SCIENCE Report Series A Dynamic local search algorithm for the clustering problem Ismo Kärkkäinen and Pasi Fränti Report A-2002-6 ACM I.4.2, I.5.3 UDK 519.712

### WEINER FILTER AND SUB-BLOCK DECOMPOSITION BASED IMAGE RESTORATION FOR MEDICAL APPLICATIONS

WEINER FILTER AND SUB-BLOCK DECOMPOSITION BASED IMAGE RESTORATION FOR MEDICAL APPLICATIONS ARIFA SULTANA 1 & KANDARPA KUMAR SARMA 2 1,2 Department of Electronics and Communication Engineering, Gauhati

### 7.5 Dictionary-based Coding

7.5 Dictionary-based Coding LZW uses fixed-length code words to represent variable-length strings of symbols/characters that commonly occur together, e.g., words in English text LZW encoder and decoder

### A NEW ENTROPY ENCODING ALGORITHM FOR IMAGE COMPRESSION USING DCT

A NEW ENTROPY ENCODING ALGORITHM FOR IMAGE COMPRESSION USING DCT D.Malarvizhi 1 Research Scholar Dept of Computer Science & Eng Alagappa University Karaikudi 630 003. Dr.K.Kuppusamy 2 Associate Professor

### Feature Extractors. CS 188: Artificial Intelligence Fall Nearest-Neighbor Classification. The Perceptron Update Rule.

CS 188: Artificial Intelligence Fall 2007 Lecture 26: Kernels 11/29/2007 Dan Klein UC Berkeley Feature Extractors A feature extractor maps inputs to feature vectors Dear Sir. First, I must solicit your

### CoE4TN4 Image Processing. Chapter 8 Image Compression

CoE4TN4 Image Processing Chapter 8 Image Compression Image Compression Digital images: take huge amount of data Storage, processing and communications requirements might be impractical More efficient representation

### Context based optimal shape coding

IEEE Signal Processing Society 1999 Workshop on Multimedia Signal Processing September 13-15, 1999, Copenhagen, Denmark Electronic Proceedings 1999 IEEE Context based optimal shape coding Gerry Melnikov,

### Introduction to Machine Learning. Xiaojin Zhu

Introduction to Machine Learning Xiaojin Zhu jerryzhu@cs.wisc.edu Read Chapter 1 of this book: Xiaojin Zhu and Andrew B. Goldberg. Introduction to Semi- Supervised Learning. http://www.morganclaypool.com/doi/abs/10.2200/s00196ed1v01y200906aim006

### Successive Approximation Wavelet Coding of. AVIRIS Hyperspectral Images

Successive Approximation Wavelet Coding of 1 AVIRIS Hyperspectral Images Alessandro J. S. Dutra, Member, IEEE, William A. Pearlman, Fellow, IEEE, and Eduardo A. B. da Silva, Senior Member, IEEE Abstract

### Reconstruction PSNR [db]

Proc. Vision, Modeling, and Visualization VMV-2000 Saarbrücken, Germany, pp. 199-203, November 2000 Progressive Compression and Rendering of Light Fields Marcus Magnor, Andreas Endmann Telecommunications

### Image Coding and Data Compression

Image Coding and Data Compression Biomedical Images are of high spatial resolution and fine gray-scale quantisiation Digital mammograms: 4,096x4,096 pixels with 12bit/pixel 32MB per image Volume data (CT

### Partial Video Encryption Using Random Permutation Based on Modification on Dct Based Transformation

International Refereed Journal of Engineering and Science (IRJES) ISSN (Online) 2319-183X, (Print) 2319-1821 Volume 2, Issue 6 (June 2013), PP. 54-58 Partial Video Encryption Using Random Permutation Based

### Entropy Driven Bit Coding For Image Compression In Medical Application

IOSR Journal of Computer Engineering (IOSR-JCE) e-issn: 2278-0661,p-ISSN: 2278-8727, Volume 19, Issue 3, Ver. III (May - June 2017), PP 53-60 www.iosrjournals.org Entropy Driven Bit Coding For Image Compression

### Novel Lossy Compression Algorithms with Stacked Autoencoders

Novel Lossy Compression Algorithms with Stacked Autoencoders Anand Atreya and Daniel O Shea {aatreya, djoshea}@stanford.edu 11 December 2009 1. Introduction 1.1. Lossy compression Lossy compression is

### An Optimum Novel Technique Based on Golomb-Rice Coding for Lossless Image Compression of Digital Images

, pp.13-26 http://dx.doi.org/10.14257/ijsip.2013.6.6.02 An Optimum Novel Technique Based on Golomb-Rice Coding for Lossless Image Compression of Digital Images Shaik Mahaboob Basha 1 and B. C. Jinaga 2

### ISSN: ISO 9001:2008 Certified International Journal of Engineering and Innovative Technology (IJEIT) Volume 5, Issue 6, December 2015

ISO 91:28 Certified Volume 5, Issue 6, December 215 Performance analysis of color images with lossy compression by using inverse Haar matrix Reem Talib Miri, Hussain Ali Hussain Department of Mathematics,

### UC San Diego UC San Diego Previously Published Works

UC San Diego UC San Diego Previously Published Works Title Combining vector quantization and histogram equalization Permalink https://escholarship.org/uc/item/5888p359 Journal Information Processing and

### Analyzing Outlier Detection Techniques with Hybrid Method

Analyzing Outlier Detection Techniques with Hybrid Method Shruti Aggarwal Assistant Professor Department of Computer Science and Engineering Sri Guru Granth Sahib World University. (SGGSWU) Fatehgarh Sahib,

### Comparative Analysis of Image Compression Using Wavelet and Ridgelet Transform

Comparative Analysis of Image Compression Using Wavelet and Ridgelet Transform Thaarini.P 1, Thiyagarajan.J 2 PG Student, Department of EEE, K.S.R College of Engineering, Thiruchengode, Tamil Nadu, India

### Texture Image Segmentation using FCM

Proceedings of 2012 4th International Conference on Machine Learning and Computing IPCSIT vol. 25 (2012) (2012) IACSIT Press, Singapore Texture Image Segmentation using FCM Kanchan S. Deshmukh + M.G.M

### Hybrid image coding based on partial fractal mapping

Signal Processing: Image Communication 15 (2000) 767}779 Hybrid image coding based on partial fractal mapping Zhou Wang, David Zhang*, Yinglin Yu Department of Electrical and Computer Engineering, University

### HYBRID IMAGE COMPRESSION TECHNIQUE

HYBRID IMAGE COMPRESSION TECHNIQUE Eranna B A, Vivek Joshi, Sundaresh K Professor K V Nagalakshmi, Dept. of E & C, NIE College, Mysore.. ABSTRACT With the continuing growth of modern communication technologies,

### Priyanka Dixit CSE Department, TRUBA Institute of Engineering & Information Technology, Bhopal, India

An Efficient DCT Compression Technique using Strassen s Matrix Multiplication Algorithm Manish Manoria Professor & Director in CSE Department, TRUBA Institute of Engineering &Information Technology, Bhopal,

### Information Theory and Communication

Information Theory and Communication Shannon-Fano-Elias Code and Arithmetic Codes Ritwik Banerjee rbanerjee@cs.stonybrook.edu c Ritwik Banerjee Information Theory and Communication 1/12 Roadmap Examples

### Improving K-Means by Outlier Removal

Improving K-Means by Outlier Removal Ville Hautamäki, Svetlana Cherednichenko, Ismo Kärkkäinen, Tomi Kinnunen, and Pasi Fränti Speech and Image Processing Unit, Department of Computer Science, University

### Interresolution Look-up Table for Improved Spatial Magnification of Image

Journal of Visual Communication and Image Representation 11, 360 373 (2000) doi:10.1006/jvci.1999.0451, available online at http://www.idealibrary.com on Interresolution Look-up Table for Improved Spatial

### Clustering. Mihaela van der Schaar. January 27, Department of Engineering Science University of Oxford

Department of Engineering Science University of Oxford January 27, 2017 Many datasets consist of multiple heterogeneous subsets. Cluster analysis: Given an unlabelled data, want algorithms that automatically

### Scalable Compression and Transmission of Large, Three- Dimensional Materials Microstructures

Scalable Compression and Transmission of Large, Three- Dimensional Materials Microstructures William A. Pearlman Center for Image Processing Research Rensselaer Polytechnic Institute pearlw@ecse.rpi.edu

### Large-scale visual recognition Efficient matching

Large-scale visual recognition Efficient matching Florent Perronnin, XRCE Hervé Jégou, INRIA CVPR tutorial June 16, 2012 Outline!! Preliminary!! Locality Sensitive Hashing: the two modes!! Hashing!! Embedding!!

### Optimum Global Thresholding Based Variable Block Size DCT Coding For Efficient Image Compression

Biomedical & Pharmacology Journal Vol. 8(1), 453-461 (2015) Optimum Global Thresholding Based Variable Block Size DCT Coding For Efficient Image Compression VIKRANT SINGH THAKUR 1, SHUBHRATA GUPTA 2 and

### ADVANCED IMAGE PROCESSING METHODS FOR ULTRASONIC NDE RESEARCH C. H. Chen, University of Massachusetts Dartmouth, N.

ADVANCED IMAGE PROCESSING METHODS FOR ULTRASONIC NDE RESEARCH C. H. Chen, University of Massachusetts Dartmouth, N. Dartmouth, MA USA Abstract: The significant progress in ultrasonic NDE systems has now

### Data Compression Techniques

Data Compression Techniques Part 1: Entropy Coding Lecture 1: Introduction and Huffman Coding Juha Kärkkäinen 31.10.2017 1 / 21 Introduction Data compression deals with encoding information in as few bits

### Secret Image Sharing Scheme Based on a Boolean Operation

BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 14, No 2 Sofia 2014 Print ISSN: 1311-9702; Online ISSN: 1314-4081 DOI: 10.2478/cait-2014-0023 Secret Image Sharing Scheme Based

### IMAGE COMPRESSION. Image Compression. Why? Reducing transportation times Reducing file size. A two way event - compression and decompression

IMAGE COMPRESSION Image Compression Why? Reducing transportation times Reducing file size A two way event - compression and decompression 1 Compression categories Compression = Image coding Still-image

### Module 7 VIDEO CODING AND MOTION ESTIMATION

Module 7 VIDEO CODING AND MOTION ESTIMATION Lesson 20 Basic Building Blocks & Temporal Redundancy Instructional Objectives At the end of this lesson, the students should be able to: 1. Name at least five

### A Research Paper on Lossless Data Compression Techniques

IJIRST International Journal for Innovative Research in Science & Technology Volume 4 Issue 1 June 2017 ISSN (online): 2349-6010 A Research Paper on Lossless Data Compression Techniques Prof. Dipti Mathpal

### Hybrid ART/Kohonen Neural Model for Document Image Compression. Computer Science Department NMT Socorro, New Mexico {hss,

Hybrid ART/Kohonen Neural Model for Document Image Compression Hamdy S. Soliman Mohammed Omari Computer Science Department NMT Socorro, New Mexico 87801 {hss, omari01}@nmt.edu Keyword -1: Learning Algorithms

### Distributed Grayscale Stereo Image Coding with Improved Disparity and Noise Estimation

Distributed Grayscale Stereo Image Coding with Improved Disparity and Noise Estimation David Chen Dept. Electrical Engineering Stanford University Email: dmchen@stanford.edu Abstract The problem of distributed

### 4. Cluster Analysis. Francesc J. Ferri. Dept. d Informàtica. Universitat de València. Febrer F.J. Ferri (Univ. València) AIRF 2/ / 1

Anàlisi d Imatges i Reconeixement de Formes Image Analysis and Pattern Recognition:. Cluster Analysis Francesc J. Ferri Dept. d Informàtica. Universitat de València Febrer 8 F.J. Ferri (Univ. València)

### Journal of Computer Engineering and Technology (IJCET), ISSN (Print), International Journal of Computer Engineering

Journal of Computer Engineering and Technology (IJCET), ISSN 0976 6367(Print), International Journal of Computer Engineering and Technology (IJCET), ISSN 0976 6367(Print) ISSN 0976 6375(Online) Volume

### Image Resolution Improvement By Using DWT & SWT Transform

Image Resolution Improvement By Using DWT & SWT Transform Miss. Thorat Ashwini Anil 1, Prof. Katariya S. S. 2 1 Miss. Thorat Ashwini A., Electronics Department, AVCOE, Sangamner,Maharastra,India, 2 Prof.

### Enhancing K-means Clustering Algorithm with Improved Initial Center

Enhancing K-means Clustering Algorithm with Improved Initial Center Madhu Yedla #1, Srinivasa Rao Pathakota #2, T M Srinivasa #3 # Department of Computer Science and Engineering, National Institute of

### Topic 6 Representation and Description

Topic 6 Representation and Description Background Segmentation divides the image into regions Each region should be represented and described in a form suitable for further processing/decision-making Representation

### Clustering Part 4 DBSCAN

Clustering Part 4 Dr. Sanjay Ranka Professor Computer and Information Science and Engineering University of Florida, Gainesville DBSCAN DBSCAN is a density based clustering algorithm Density = number of

### An SVD-based Fragile Watermarking Scheme With Grouped Blocks

An SVD-based Fragile Watermarking Scheme With Grouped Qingbo Kang Chengdu Yufei Information Engineering Co.,Ltd. 610000 Chengdu, China Email: qdsclove@gmail.com Ke Li, Hu Chen National Key Laboratory of

### Texture Compression. Jacob Ström, Ericsson Research

Texture Compression Jacob Ström, Ericsson Research Overview Benefits of texture compression Differences from ordinary image compression Texture compression algorithms BTC The mother of all texture compression

### A STUDY OF VARIOUS IMAGE COMPRESSION TECHNIQUES

A STUDY OF VARIOUS IMAGE COMPRESSION TECHNIQUES Sonal, Dinesh Kumar Department of Computer Science & Engineering Guru Jhambheswar University of Science and Technology, Hisar sonalkharb@gmail.com Abstract

### Video Coding in H.26L

Royal Institute of Technology MASTER OF SCIENCE THESIS Video Coding in H.26L by Kristofer Dovstam April 2000 Work done at Ericsson Radio Systems AB, Kista, Sweden, Ericsson Research, Department of Audio

### An Information Hiding Scheme Based on Pixel- Value-Ordering and Prediction-Error Expansion with Reversibility

An Information Hiding Scheme Based on Pixel- Value-Ordering Prediction-Error Expansion with Reversibility Ching-Chiuan Lin Department of Information Management Overseas Chinese University Taichung, Taiwan

### FPGA IMPLEMENTATION OF BIT PLANE ENTROPY ENCODER FOR 3 D DWT BASED VIDEO COMPRESSION

FPGA IMPLEMENTATION OF BIT PLANE ENTROPY ENCODER FOR 3 D DWT BASED VIDEO COMPRESSION 1 GOPIKA G NAIR, 2 SABI S. 1 M. Tech. Scholar (Embedded Systems), ECE department, SBCE, Pattoor, Kerala, India, Email:

### Hierarchical Ordering for Approximate Similarity Ranking

Hierarchical Ordering for Approximate Similarity Ranking Joselíto J. Chua and Peter E. Tischer School of Computer Science and Software Engineering Monash University, Victoria 3800, Australia jjchua@mail.csse.monash.edu.au

### Jarek Szlichta

Jarek Szlichta http://data.science.uoit.ca/ Approximate terminology, though there is some overlap: Data(base) operations Executing specific operations or queries over data Data mining Looking for patterns

### Author s copy. On a predictive scheme for colour image quantization. 1. Introduction. Y. C. HU *1, W. L. CHEN 1, C. C. LO 2, and C. M.

OPTO ELECTRONICS REVIEW 20(2), 159 167 DOI: 10.2478/s11772 012 0023 0 On a predictive scheme for colour image quantization Y. C. HU *1, W. L. CHEN 1, C. C. LO 2, and C. M. WU 3 1 Department of Computer

### SELECTION OF THE OPTIMAL PARAMETER VALUE FOR THE LOCALLY LINEAR EMBEDDING ALGORITHM. Olga Kouropteva, Oleg Okun and Matti Pietikäinen

SELECTION OF THE OPTIMAL PARAMETER VALUE FOR THE LOCALLY LINEAR EMBEDDING ALGORITHM Olga Kouropteva, Oleg Okun and Matti Pietikäinen Machine Vision Group, Infotech Oulu and Department of Electrical and

### IMAGE PROCESSING USING DISCRETE WAVELET TRANSFORM

IMAGE PROCESSING USING DISCRETE WAVELET TRANSFORM Prabhjot kour Pursuing M.Tech in vlsi design from Audisankara College of Engineering ABSTRACT The quality and the size of image data is constantly increasing.

### SOMSN: An Effective Self Organizing Map for Clustering of Social Networks

SOMSN: An Effective Self Organizing Map for Clustering of Social Networks Fatemeh Ghaemmaghami Research Scholar, CSE and IT Dept. Shiraz University, Shiraz, Iran Reza Manouchehri Sarhadi Research Scholar,

### Comparative Analysis of 2-Level and 4-Level DWT for Watermarking and Tampering Detection

International Journal of Latest Engineering and Management Research (IJLEMR) ISSN: 2455-4847 Volume 1 Issue 4 ǁ May 2016 ǁ PP.01-07 Comparative Analysis of 2-Level and 4-Level for Watermarking and Tampering

### Progressive Geometry Compression. Andrei Khodakovsky Peter Schröder Wim Sweldens

Progressive Geometry Compression Andrei Khodakovsky Peter Schröder Wim Sweldens Motivation Large (up to billions of vertices), finely detailed, arbitrary topology surfaces Difficult manageability of such

### Evolved Multi-resolution Transforms for Optimized Image Compression and Reconstruction under Quantization

Evolved Multi-resolution Transforms for Optimized Image Compression and Reconstruction under Quantization FRANK W. MOORE Mathematical Sciences Department University of Alaska Anchorage CAS 154, 3211 Providence

### ADAPTIVE PICTURE SLICING FOR DISTORTION-BASED CLASSIFICATION OF VIDEO PACKETS

ADAPTIVE PICTURE SLICING FOR DISTORTION-BASED CLASSIFICATION OF VIDEO PACKETS E. Masala, D. Quaglia, J.C. De Martin Λ Dipartimento di Automatica e Informatica/ Λ IRITI-CNR Politecnico di Torino, Italy

### Comparison of Wavelet Based Watermarking Techniques for Various Attacks

International Journal of Engineering and Technical Research (IJETR) ISSN: 2321-0869, Volume-3, Issue-4, April 2015 Comparison of Wavelet Based Watermarking Techniques for Various Attacks Sachin B. Patel,

### Indexing by Shape of Image Databases Based on Extended Grid Files

Indexing by Shape of Image Databases Based on Extended Grid Files Carlo Combi, Gian Luca Foresti, Massimo Franceschet, Angelo Montanari Department of Mathematics and ComputerScience, University of Udine

### A QUAD-TREE DECOMPOSITION APPROACH TO CARTOON IMAGE COMPRESSION. Yi-Chen Tsai, Ming-Sui Lee, Meiyin Shen and C.-C. Jay Kuo

A QUAD-TREE DECOMPOSITION APPROACH TO CARTOON IMAGE COMPRESSION Yi-Chen Tsai, Ming-Sui Lee, Meiyin Shen and C.-C. Jay Kuo Integrated Media Systems Center and Department of Electrical Engineering University

### Lecture: k-means & mean-shift clustering

Lecture: k-means & mean-shift clustering Juan Carlos Niebles and Ranjay Krishna Stanford Vision and Learning Lab Lecture 11-1 Recap: Image Segmentation Goal: identify groups of pixels that go together

### Bit-Plane Decomposition Steganography Using Wavelet Compressed Video

Bit-Plane Decomposition Steganography Using Wavelet Compressed Video Tomonori Furuta, Hideki Noda, Michiharu Niimi, Eiji Kawaguchi Kyushu Institute of Technology, Dept. of Electrical, Electronic and Computer

### IMAGE DENOISING USING NL-MEANS VIA SMOOTH PATCH ORDERING

IMAGE DENOISING USING NL-MEANS VIA SMOOTH PATCH ORDERING Idan Ram, Michael Elad and Israel Cohen Department of Electrical Engineering Department of Computer Science Technion - Israel Institute of Technology

### CS 260: Seminar in Computer Science: Multimedia Networking

CS 260: Seminar in Computer Science: Multimedia Networking Jiasi Chen Lectures: MWF 4:10-5pm in CHASS http://www.cs.ucr.edu/~jiasi/teaching/cs260_spring17/ Multimedia is User perception Content creation

### IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 5, MAY

IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 5, MAY 2011 2837 Approximate Capacity of a Class of Gaussian Interference-Relay Networks Soheil Mohajer, Member, IEEE, Suhas N. Diggavi, Member, IEEE,

### A Miniature-Based Image Retrieval System

A Miniature-Based Image Retrieval System Md. Saiful Islam 1 and Md. Haider Ali 2 Institute of Information Technology 1, Dept. of Computer Science and Engineering 2, University of Dhaka 1, 2, Dhaka-1000,

### 1640 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 9, SEPTEMBER 2008

1640 IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 17, NO. 9, SEPTEMBER 2008 On Dictionary Adaptation for Recurrent Pattern Image Coding Nuno M. M. Rodrigues, Student Member, IEEE, Eduardo A. B. da Silva,

### Searching similar images - Vector Quantization with S-tree

Searching similar images - Vector Quantization with S-tree Jan Platos, Pavel Kromer, Vaclav Snasel, Ajith Abraham Department of Computer Science, FEECS IT4 Innovations, Centre of Excellence VSB-Technical

### Robust Steganography Using Texture Synthesis

Robust Steganography Using Texture Synthesis Zhenxing Qian 1, Hang Zhou 2, Weiming Zhang 2, Xinpeng Zhang 1 1. School of Communication and Information Engineering, Shanghai University, Shanghai, 200444,

### PERFORMANCE EVALUATION OF ADAPTIVE SPECKLE FILTERS FOR ULTRASOUND IMAGES

PERFORMANCE EVALUATION OF ADAPTIVE SPECKLE FILTERS FOR ULTRASOUND IMAGES Abstract: L.M.Merlin Livingston #, L.G.X.Agnel Livingston *, Dr. L.M.Jenila Livingston ** #Associate Professor, ECE Dept., Jeppiaar

### A vector quantization method for nearest neighbor classifier design

Pattern Recognition Letters 25 (2004) 725 731 www.elsevier.com/locate/patrec A vector quantization method for nearest neighbor classifier design Chen-Wen Yen a, *, Chieh-Neng Young a, Mark L. Nagurka b

### COMPARISONS OF DCT-BASED AND DWT-BASED WATERMARKING TECHNIQUES

COMPARISONS OF DCT-BASED AND DWT-BASED WATERMARKING TECHNIQUES H. I. Saleh 1, M. E. Elhadedy 2, M. A. Ashour 1, M. A. Aboelsaud 3 1 Radiation Engineering Dept., NCRRT, AEA, Egypt. 2 Reactor Dept., NRC,