
ISSN: 2454-132X Impact factor: 4.295 (Volume 2, Issue 6) Available online at: www.ijariit.com

An Insight into the Energy Efficiency based Image Compression on Handheld Devices
M. Jyothirmai, Dept. of ECE, Research Scholar, JNTUK, Kakinada, jyothimunjamp@gmail.com

Abstract - Transmission of data such as voice, text, image and video faces several constraints: the limited data rate of networks, noise in the channel, the battery limits of the devices, and so on. This paper addresses the energy and data-rate constraints of image transmission in digital communication. The aim is to compare current techniques with respect to the amount of data in the images, the energy required for communication, and the available transmission bandwidth. The models compared are compression algorithms for mobile devices that reduce the transmission energy of images over wireless networks, and an adaptive image codec for still images that significantly lowers the energy needed for wireless image communication. The algorithm is applied before image transmission and is transparent to users. We therefore use publicly available databases for our evaluation of energy-efficient compression methods.

Keywords - DWT, JPEG, JPEG 2000, SPIHT, EEWITA.

I. INTRODUCTION
Mobile data communication, including voice, text, image and video, is in growing demand because mobile multimedia, e-mail, mobile internet access, mobile commerce and mobile data sensing are widely used. The most critical challenge is the transmission of large amounts of data, which places a strain on the limited battery of mobile devices and on the capacity of the network. One approach to this challenge is to reduce the amount of data transmitted by using compression techniques.
These compression techniques aim at good compression ratios without compromising image quality, but they do not consider energy consumption requirements or processing overheads. The transmission energy is inversely related to the bandwidth: images can be transmitted with lower energy at higher bandwidths, and require higher energy at lower bandwidths. The paper is organized as follows. Section II reviews the image compression block diagram. Section III analyzes related work. Section IV reviews the DWT-based compression scheme. Section V discusses the effect of set partitioning in hierarchical trees (SPIHT). Section VI reviews JPEG-based image compression, Section VII gives an overview of the JPEG 2000 algorithm, and Section VIII reviews EEWITA, an energy-efficient image compression algorithm.

II. IMAGE COMPRESSION
The block diagram of image compression is shown in Figure 1. A forward transform is applied to the source image, producing a set of frequency coefficients. These transformed coefficients are then quantized (rounded to the nearest integer values), which causes a loss of information. The quantized data are then encoded into a sequence of binary symbols. Available encoding schemes include the Shannon-Fano code, Huffman code, block transform code, run-length code and LZW code. 2016, IJARIIT All Rights Reserved Page 1
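The three-stage pipeline just described (transform, quantize, encode) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: a one-level Haar transform stands in for the forward transform and run-length coding stands in for the entropy coder, and all function names are mine.

```python
import numpy as np

def haar_2d(block):
    """One-level 2-D Haar transform (stand-in for the forward transform stage)."""
    h = (block[:, 0::2] + block[:, 1::2]) / 2   # row averages (low-pass)
    g = (block[:, 0::2] - block[:, 1::2]) / 2   # row differences (high-pass)
    rows = np.hstack([h, g])
    h2 = (rows[0::2, :] + rows[1::2, :]) / 2    # column averages
    g2 = (rows[0::2, :] - rows[1::2, :]) / 2    # column differences
    return np.vstack([h2, g2])

def quantize(coeffs, step):
    """Uniform quantization: rounding to the nearest multiple of step loses information."""
    return np.round(coeffs / step).astype(int)

def run_length_encode(symbols):
    """Simple run-length code over the flattened, quantized coefficients."""
    out, prev, count = [], symbols[0], 1
    for s in symbols[1:]:
        if s == prev:
            count += 1
        else:
            out.append((prev, count))
            prev, count = s, 1
    out.append((prev, count))
    return out

img = np.full((4, 4), 100.0)
img[2:, :] = 104.0                        # tiny toy "image" with two flat regions
q = quantize(haar_2d(img), step=2)
code = run_length_encode(q.flatten().tolist())
```

Because the toy image is nearly flat, almost all high-pass coefficients quantize to zero and the run-length code collapses them into a few (symbol, count) pairs, which is exactly the effect the pipeline relies on.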

Figure 1. Block diagram of image compression

III. RELATED WORK
Several energy-efficient techniques have been proposed to address these issues. Dong-U Lee et al. [1] compared the energy costs of transmitting uncompressed images directly against performing JPEG compression on the sensor platform and then transmitting the compressed image data. Huaming Wu et al. [4] proposed an energy-balancing distributed JPEG 2000 image compression scheme that combines image tiling with load balancing by node rotation; it prolongs the system lifetime by up to four times, with normalized total energy consumption comparable to centralized image compression. Dong-Gi Lee et al. [6] proposed an energy-efficient scheme that reduces both the computation and the communication energy needed, with minimal degradation in image quality. Zhiyuan Li et al. [7] constructed a cost graph and applied a scheme that statically divides a program into server tasks and client tasks so that the energy consumed by the program is minimized.

IV. DWT BASED IMAGE COMPRESSION
The wavelet transform uses sub-band decomposition, in which a collection of samples is split into sub-bands. The wavelet decomposes an image into a set of different-resolution sub-images corresponding to the various frequency bands, yielding a multi-resolution representation of the image. Sub-band decomposition divides the input signal into a number of frequency bands, which is implemented by means of a filter bank. The sub-band decomposition is shown in Figure 2.

Efficiency of the Wavelet Transform
The Daubechies 5/3-tap filter is used in the wavelet transform. This filter embeds local information in the result, thereby eliminating the blocking effects of the discrete cosine transform. Its features are good localization, symmetry and good quality of the compressed image.
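As an illustration, one lifting stage of the integer 5/3 filter might look like the following sketch. The boundary handling here is simplified to sample repetition, which differs from the full JPEG 2000 symmetric-extension rule, and the function name is mine.

```python
def dwt_53(x):
    """One lifting stage of the integer 5/3 wavelet (sketch).
    Predict: d[i] = x[2i+1] - floor((x[2i] + x[2i+2]) / 2)
    Update:  s[i] = x[2i]  + floor((d[i-1] + d[i] + 2) / 4)
    Boundaries are handled by clamping (sample repetition), a simplification."""
    n = len(x)
    px = lambda i: x[max(0, min(i, n - 1))]         # clamped pixel access
    d = [px(2 * i + 1) - (px(2 * i) + px(2 * i + 2)) // 2
         for i in range(n // 2)]                    # high-pass (detail)
    pd = lambda i: d[max(0, min(i, len(d) - 1))]    # clamped detail access
    s = [px(2 * i) + (pd(i - 1) + pd(i) + 2) // 4
         for i in range(n // 2)]                    # low-pass (approximation)
    return s, d

lo, hi = dwt_53([2, 4, 6, 8, 10, 12, 14, 16])       # a linear ramp
```

For a linear ramp the interior detail coefficients come out exactly zero, which is why this filter compresses smooth image regions well.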
The hardware implementation of this filter can be expensive, since it requires binary shifter and adder units. Energy efficiency is analyzed by determining the number of times each operation is performed for a given input; this determines the amount of switching activity and hence the power consumption. For a given input image of size M x N and a wavelet decomposition applied over L transform stages, the total computational load is estimated as follows.

Figure 2. Progressive sub-band decomposition

The decomposition is applied in the horizontal and vertical directions: even-positioned image pixels are decomposed into low-pass coefficients and odd-positioned pixels into high-pass coefficients, and the total computational load is calculated. Apart from the arithmetic operations, the transform step involves a large number of memory accesses; the data-access load is estimated by counting the total number of memory accesses during the wavelet transform. The total computation energy is then a weighted sum of the computational load and the data-access load. It was found that the add operation requires about twice the energy of the shift operation, and that the energy cost of the data-access load is about 2.7 times that of the computational load.

The transmission energy is obtained as C*R, where C is the size of the compressed image and R is the per-bit transmission energy consumed by the transmitter. In the forward wavelet decomposition using the above filter, eight shift and eight add operations are required to transform a sample image pixel into a low-pass coefficient; similarly, the high-pass decomposition requires two shifts and four adds. We model the energy consumption of the low/high-pass decomposition by counting the number of operations, and denote this the computational load. Thus 8S + 8A units of computational load are required per pixel of the low-pass decomposition and 2S + 4A units for the high-pass. For a given input image of size M x N and a wavelet decomposition applied over L transform stages, we can estimate the total computational load as follows. Suppose we first apply the decomposition in the horizontal direction. Since all even-positioned image pixels are decomposed into low-pass coefficients and odd-positioned pixels into high-pass coefficients, the total computational load of the horizontal decomposition is 1/2 MN(10S + 12A). The computational load of the vertical decomposition is the same. Apart from the arithmetic operations, the transform step involves a large number of memory accesses. Since the energy consumed in external and internal data transfers can be significant, we estimate the data-access load by counting the total number of memory accesses during the wavelet transform. At the transform stage, each pixel is read twice and written twice; consequently, the total data-access load is given by the number of read and write operations.
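The load model above can be turned into a few lines of arithmetic as a sanity check. This is a sketch under the stated weights (an add costs about two shifts; the data-access load costs about 2.7 times the computational load, which I read here as a multiplicative factor); the function name and the unit of energy are my choices, not the paper's.

```python
def transform_energy(M, N, shift_cost=1.0):
    """One-stage energy estimate using the weights quoted in the text:
    per-pixel loads 8S+8A (low-pass) and 2S+4A (high-pass) give
    (1/2)*M*N*(10S+12A) per direction; an add costs ~2 shifts; the
    data-access load is weighted at ~2.7x the computational load."""
    A = 2.0 * shift_cost                      # add ~ twice the shift energy
    per_direction = 0.5 * M * N * (10 * shift_cost + 12 * A)
    comp = 2 * per_direction                  # horizontal + vertical passes
    data = 2.7 * comp                         # data-access weighting (assumption)
    return comp + data

energy_units = transform_energy(512, 512)     # relative units, not joules
```

For a 512 x 512 image this gives 3.7 * MN * 34 relative units per decomposition stage, making it easy to compare against the reduced-operation schemes discussed later.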
The overall computation energy is computed as a weighted sum of the computational load and the data-access load. From our implementation experiments, the add operation requires about twice the energy of the shift operation, and the energy cost of the data-access load is about 2.7 times that of the computational load. We additionally estimate the transmission energy as C*R, where C is the size of the compressed image (in bits) and R is the per-bit transmission energy consumed by the RF transmitter.

V. SPIHT
The set partitioning in hierarchical trees (SPIHT) algorithm is widely used for image compression today, and it is one of the most powerful wavelet-based image compression methods. Its principal benefits are excellent image quality with high PSNR and suitability for progressive transmission. First, the image is decomposed into four sub-bands, and the decomposition is repeated up to the final scale. Each decomposition consists of one low-frequency sub-band and three high-frequency sub-bands. The SPIHT algorithm performs compression in three steps: sorting, refinement and quantization. SPIHT encodes the image data using three lists: the list of insignificant pixels (LIP), the list of insignificant sets (LIS) and the list of significant pixels (LSP). The LIP contains individual coefficients whose magnitudes are smaller than the threshold. The LIS contains the sets of wavelet coefficients, defined by the tree structure, whose magnitudes are smaller than the threshold. The LSP is the set of pixels whose magnitudes are larger than the threshold, i.e. the significant pixels. In the sorting pass, all pixels in the LIP are tested for significance, and the pixels and coordinates with their coefficients in the three lists are updated accordingly. When a coefficient is found to be significant, it is removed from its subset and inserted into the LSP; otherwise it is inserted into the LIP.
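A simplified sorting pass over the LIP can be sketched as follows. The tree-structured LIS handling is omitted for brevity, and the names are mine, not SPIHT's reference implementation.

```python
def sort_pass(coeffs, n, lip, lsp):
    """One simplified SPIHT sorting pass over individual coefficients:
    entries of the LIP whose magnitude reaches the threshold 2**n are
    moved to the LSP; the rest remain insignificant at this bit-plane."""
    still_insignificant = []
    for idx in lip:
        if abs(coeffs[idx]) >= 2 ** n:   # significance test S_n
            lsp.append(idx)
        else:
            still_insignificant.append(idx)
    return still_insignificant

coeffs = [63, -34, 10, 2]
n0 = 5                                   # floor(log2(max |c|)) = floor(log2 63)
lip, lsp = list(range(len(coeffs))), []
lip = sort_pass(coeffs, n0, lip, lsp)    # threshold 32: 63 and -34 are significant
```

Repeating the pass with decreasing n (threshold halved each time) reproduces the bit-plane ordering that lets SPIHT stop at any exact bit budget.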
In the refinement pass, the nth MSB of each coefficient in the LSP is output. The value of n is then decreased, and the sorting and refinement passes are repeated until n = 0. The SPIHT algorithm therefore controls the bit rate exactly, and execution can be terminated at any time. Once the encoding process is over, the decoding process is applied.

Select partitions of pixels Um; for each n = n0, n0-1, n0-2, ...:
  if Sn(Um) = 0 (the set is insignificant), disregard the pixels in Um;
  if Sn(Um) = 1 (the set is significant), use the recursive algorithm to partition Um.
The sets are tested until all significant coefficients are found.

VI. JPEG
A power-aware approach ensures that the JPEG computations use the minimum precision needed, so that quantization, not inadequate precision, remains the dominant source of error. To accomplish this, a design framework analytically determines the most suitable integer and fractional bit-widths for the signal paths in the compression process and can guarantee a specified precision. The framework is used to automatically generate platform-specific JPEG C code, and experiments on the Atmel ATmega128, TI MSP430, TI TMS320C64x and Analog Devices Blackfin ADSP-BF533 processors measure the energy savings due to the reduced precision. Having obtained optimized JPEG implementations, we then measure the energy consumed by the compression process and the subsequent transmission of the compressed data, and compare this with the energy required to transmit the

images in their uncompressed state.

JPEG is the most widely used lossy image compression standard. Since the human eye is less sensitive to spatial detail in the chrominance components, those components are usually subsampled. The three component images are then tiled into 8x8 blocks of pixels and processed separately. Each block is level-shifted to zero mean; for a pixel depth of 8 bits, for example, 128 is subtracted from each pixel. The level-shifted block is then converted into frequency space by a 2-D DCT, usually computed as 1-D DCTs on the rows followed by 1-D DCTs on the columns. After the DCT, the block is quantized by dividing each sample by a step size defined by a quantization table and rounding the result of the division to the nearest integer. Quantization tables are designed so that many of the higher-frequency components are rounded to zero. The step sizes for low-frequency components are smaller than those for high-frequency components, with the effect that low-frequency information is retained more accurately (since it is quantized with a smaller step size) than high-frequency information. To adjust image quality, all values in the table are rescaled in inverse proportion to a quality setting that ranges from 1 (very poor image quality) to 100 (very good image quality), with the constant of proportionality usually chosen so that a setting of 50 corresponds to no rescaling of the table. The final step in JPEG compression is entropy coding, a lossless compression procedure combining run-length encoding and Huffman coding.

JPEG algorithm steps
1. Take the image and divide it into 8-pixel by 8-pixel blocks. If the image cannot be divided evenly into 8-by-8 blocks, add empty pixels around the edges, essentially zero-padding the image.
2.
For each 8-by-8 block, obtain the image data so that you have a value representing the color at each pixel.
3. Take the discrete cosine transform (DCT) of each 8-by-8 block.
4. After taking the DCT of a block, multiply the block element-wise by a mask that zeroes out certain values of the DCT matrix.
5. Finally, to obtain the data for the compressed image, take the inverse DCT of each block. All the blocks are then combined back into an image of the same size as the original.

VII. JPEG 2000
The essential idea of distributed JPEG 2000 image compression is to distribute the workload of the wavelet transform over several groups of nodes along the path from the source to the destination. Among the issues in the design of distributed wavelet-based image compression, the exchange of data (e.g. the raw image or intermediate results) is of key importance because of the wireless communication energy it incurs. In research on parallel wavelet transforms, data is broadcast to all processors to speed up the execution time, which can increase the energy consumption. Tiling, which is used as part of JPEG, can also be used in wavelet-based image compression. Typically, the wavelet decomposition is computed on the whole image, which prevents the annoying compression blocking artifacts that occur at low-bit-rate coding. However, JPEG 2000 also supports image tiling for operation in low-memory environments. In this case the image is divided into tiles (blocks), and each tile is sent to a node that performs the 2-D wavelet transform independently. Once a node completes its 2-D wavelet transform, it sends the result back to a central node. If the tiles are processed independently, the distortion loss and blocking artifacts grow with the number of processors. The quality loss and blocking artifacts are small when the number of tiles is small or when the bit rate of the compressed image is not low.
It was observed that the quality loss with four tiles, compared with the result without tiling, is smaller than 0.2 dB in terms of PSNR. The blocking artifacts are likewise negligible at a moderate bit rate. Tiling therefore remains applicable in distributed wavelet-based image compression when the number of tiles is small or the bit rate is not too low. Post-processing methods can also be applied at the destination to cope with the blocking artifacts.
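The tiling step and the PSNR metric used to bound its quality loss can be sketched as follows. This is an illustrative fragment, not the distributed system itself: it assumes square images whose dimensions divide evenly, and the names are mine.

```python
import numpy as np

def tile_image(img, tiles_per_side=2):
    """Split an image into equal square tiles so each can be wavelet-transformed
    independently by a different node (the distributed JPEG 2000 idea above)."""
    h, w = img.shape
    th, tw = h // tiles_per_side, w // tiles_per_side
    return [img[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for r in range(tiles_per_side) for c in range(tiles_per_side)]

def psnr(ref, test, peak=255.0):
    """PSNR in dB, the metric used above to bound the tiling quality loss."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10 * np.log10(peak ** 2 / mse)

img = np.arange(64, dtype=float).reshape(8, 8)
tiles = tile_image(img)                  # four independent 4x4 tiles
```

In the distributed scheme each tile would be compressed at a different node; comparing psnr(original, reconstruction) for tiled and un-tiled pipelines is how the ~0.2 dB figure above would be measured.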

Figure 3. JPEG 2000

VIII. ENERGY EFFICIENT WAVELET IMAGE TRANSFORM ALGORITHM (EEWITA)
EEWITA is a wavelet-based transform algorithm that aims at minimizing computation energy (by reducing the number of arithmetic operations and memory accesses) and communication energy (by reducing the number of transmitted bits), while minimally impacting the quality of the image. EEWITA exploits the numerical distribution of the high-pass coefficients to judiciously eliminate a large number of samples from consideration in the image compression process. It was observed that the high-pass coefficients are generally small integers; for example, 80 % of the level-1 high-pass coefficients are less than 5. Because of this distribution and the effect of the quantization step on small-valued coefficients, the high-pass coefficients can be estimated as zeros (and hence need not be computed) with minimal loss of image quality. This approach has two main advantages. First, because the high-pass coefficients do not have to be computed, EEWITA reduces the computation energy consumed during wavelet image compression by reducing the number of executed operations. Second, because the encoder and decoder are both aware of the estimation technique, no side information needs to be transmitted across the wireless channel, thereby reducing the communication energy required. EEWITA consists of two techniques that conserve energy by avoiding the computation and communication of high-pass coefficients. The first eliminates the least significant sub-band: among the four sub-bands, the diagonal sub-band (HHi) is the least significant (Fig. 2), making it the best candidate for elimination during the wavelet transform step. We call this technique HH elimination.
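HH elimination can be illustrated with a Haar-style decomposition stage that simply never computes the diagonal sub-band. This is a sketch (the paper's filter is the 5/3, and the function names are mine); the decoder's side of the bargain is to assume HH = 0.

```python
import numpy as np

def decompose_no_hh(img):
    """One Haar-style decomposition stage with the diagonal sub-band skipped,
    illustrating HH elimination: the HH coefficients are neither computed
    nor transmitted, and the decoder assumes they are zero."""
    lo_r = (img[:, 0::2] + img[:, 1::2]) / 2   # row low-pass
    hi_r = (img[:, 0::2] - img[:, 1::2]) / 2   # row high-pass
    ll = (lo_r[0::2] + lo_r[1::2]) / 2         # low-low (kept)
    lh = (lo_r[0::2] - lo_r[1::2]) / 2         # kept
    hl = (hi_r[0::2] + hi_r[1::2]) / 2         # kept
    # hh = (hi_r[0::2] - hi_r[1::2]) / 2       # eliminated: 25 % of the data
    return ll, lh, hl

ll, lh, hl = decompose_no_hh(np.ones((4, 4)))
```

Each sub-band is a quarter of the image, so dropping HH removes exactly 25 % of the coefficients per stage, matching the communication saving quoted below.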
In the second scheme, only the most significant sub-band (the low-resolution information, LLi) is kept and all high-pass sub-bands (LHi, HLi and HHi) are discarded. We call this H* elimination, because all high-pass sub-bands are eliminated in the transform step. The HH elimination method also yields large communication energy savings: for each transform stage to which HH elimination is applied, 25 % of the image data is eliminated, leaving less data to be transmitted over the wireless channel. While HH elimination removes some of the computation load during the transform step by discarding one of the four sub-bands, the H* elimination technique targets larger computation energy savings. In H* elimination, only the LL sub-band is generated and all high-pass sub-bands are eliminated. Accordingly, only the even-positioned pixels are processed in the row transform and fed to the subsequent column transform; the odd-positioned pixels are skipped, since they characterize the high-pass coefficients (HL, HH). Similarly, in the column transform step, all odd-columned pixels are skipped and only the even-columned low-pass pixels are processed. This saves MN(6A + 4S) operation units of computational load (over 47 % compared with the AWIC algorithm). H* elimination also greatly reduces the data-access load. Since the wavelet transform uses neighbouring pixels to generate coefficients, all image pixels must be read once to generate the low-pass coefficients in the row transform; in the column transform, however, only the even-columned pixels are required. We can therefore reduce the number of read accesses by 25 %.
Similarly, since only the low-pass coefficients (L, LL) are written to memory and accessed by the subsequent transform stages, write operations are reduced by 63 %.
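The "over 47 %" figure above can be reproduced from the stated operation counts: with an add weighted as two shifts, MN(6A + 4S) saved out of a full two-direction load of MN(10S + 12A) gives 16/34, roughly 47 %. A small sketch (treating the AWIC stage load as MN(10S + 12A) and picking unit weights are my assumptions):

```python
def hstar_savings(M, N, S=1, A=2):
    """Back-of-envelope check of the H* figures above. S and A are the
    relative energy units of a shift and an add (add ~ two shifts).
    Full two-direction load: MN(10S + 12A); quoted saving: MN(6A + 4S)."""
    full_load = M * N * (10 * S + 12 * A)
    saved = M * N * (6 * A + 4 * S)
    return saved, saved / full_load

saved, frac = hstar_savings(512, 512)    # frac ~ 0.47, i.e. "over 47 %"
```

With S = 1 and A = 2 the fraction is exactly 16/34 ≈ 0.4706, consistent with the text's claim.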

CONCLUSIONS
The discrete wavelet transform provides adequate compression without significant degradation of image quality. Comparisons with standard JPEG libraries show speed and energy improvements ranging from factors of 2 to 5, depending on which part of the computation is considered. JPEG 2000 is straightforward and simple to implement; performance evaluation shows that the distributed scheme can achieve a significantly longer system lifetime, while satisfying performance constraints on a target image quality, compared with a centralized approach. EEWITA achieves energy savings while minimally affecting the quality of the image. The SPIHT algorithm compresses and reconstructs the image without degrading its quality.

REFERENCES
1. Dong-U Lee, Hyungjin Kim and Mohammad Rahimi, "Energy-Efficient Image Compression for Resource-Constrained Platforms," IEEE Transactions on Image Processing, Vol. 18, No. 9, September 2009.
2. Saeid Abolfazli and Zohreh Sanaei, "Cloud-Based Augmentation for Mobile Devices: Motivation, Taxonomies, and Open Challenges," IEEE Communications Surveys & Tutorials.
3. Heikkinen, M. V. J., Nurminen, J. K., Smura, T., Hämmäinen, H., "Energy Efficiency of Mobile Handsets: Measuring User Attitudes and Behavior," Telematics and Informatics (2012), doi: 10.1016/j.tele.2012.01.005.
4. Wu, Huaming, and Alhussein A. Abouzeid, "Energy efficient distributed image compression in resource-constrained multihop wireless networks," Computer Communications 28.14 (2005): 1658-1668.
5. Kumar, Karthik, and Yung-Hsiang Lu, "Cloud computing for mobile users: Can offloading computation save energy?," Computer 43.4 (2010): 51-56.
6. Lee, Dong-Gi, and Sujit Dey, "Adaptive and energy efficient wavelet image compression for mobile multimedia data services," ICC 2002, IEEE International Conference on Communications, Vol. 4, IEEE, 2002.
7. Li, Zhiyuan, Cheng Wang, and Rong Xu, "Computation offloading to save energy on handheld devices: a partition scheme," Proceedings of the 2001 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems, ACM, 2001.
8. Miettinen, Antti P., and Jukka K. Nurminen, "Energy Efficiency of Mobile Clients in Cloud Computing," HotCloud (2010).
9. http://www.slideshare.net/ibbt/jpeg2000.