Comparative Analysis of Image Compression Algorithms for Deep Space Communications Systems


Igor Bisio, Fabio Lavagetto, and Mario Marchese
Department of Communications, Computer and System Science, University of Genoa
{igor.bisio,fabio.lavagetto,mario.marchese}@unige.it

Abstract. In deep space communications systems, bandwidth availability, storage and computational capacity play a crucial role and represent precious, as well as limited, communications resources. Starting from this consideration, highly efficient image compression algorithms may represent a key solution to optimize the employment of these resources. In this paper two possible approaches have been considered: JPEG 2000 and CCSDS Image Compression, the latter specifically designed for satellite and deep space communications. In more detail, two coders have been compared in terms of performance: JasPer, which is an implementation of the JPEG 2000 standard, and the BPE, which is based on the CCSDS recommendations. The proposed comparison takes into account both the quality of the compressed images, by evaluating the Peak Signal to Noise Ratio, and the time needed to compress the images: the Compression Time. The latter parameter, which concerns the computational complexity of the compression algorithm, is very interesting for deep space systems because of their limited computational and energy resources.

Keywords: Deep Space Communications Systems, Image Compression, JPEG 2000, CCSDS Compression, PSNR, Compression Time.

1 Introduction

In deep space communications systems, bandwidth availability, storage and computational capacity play a crucial role and represent precious, as well as limited, communications resources. Starting from this consideration, highly efficient image compression algorithms may represent a key solution to optimize the employment of these resources. JPEG 2000 is a wavelet-based image compression standard and coding system.
It was created by the Joint Photographic Experts Group committee in the year 2000 with the intention of superseding their original discrete cosine transform-based JPEG standard. On the other hand, the Consultative Committee for Space Data Systems (CCSDS) data compression working group has adopted a recommendation for image data compression that proposes an algorithm based on a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The CCSDS approach represents a low computational load compression

K. Sithamparanathan et al. (Eds.): PSATS 2010, LNICST 43, pp. 63-73, 2010. © Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering 2010

and, as a consequence, is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. In this work, both the JPEG 2000 and CCSDS algorithms have been taken into account and compared in terms of performance. In more detail, two main performance parameters have been considered: the Peak Signal to Noise Ratio (PSNR) and the Compression Time. The former (PSNR) is a measure used to evaluate the quality of a compressed image with respect to the original. This quality index is defined as the ratio between the maximum power of a signal and the power of the noise that can invalidate the fidelity of its compressed version. PSNR is usually expressed on the logarithmic scale of decibels. It is worth noting that in this context the notion of noise refers to the amount of information that is lost during compression with respect to the original image. PSNR is often used as an indicator of perceptual quality: a higher PSNR often yields a compressed image more pleasing to the human eye. The latter parameter, the Compression Time, is the time needed by an algorithm to compress an image starting from the original one. In general, the simplicity, speed, and small storage requirements of an algorithm make it easy to realize in hardware and suitable for space-borne applications. With respect to JPEG 2000, the CCSDS approach aims at reducing the computational load of the compression algorithm while maintaining the overall quality of the compressed images. Nevertheless, the proposed comparison shows that performance is strictly dependent on the practical implementation of the algorithm; in particular, it is shown that, among the employed coders, the JPEG 2000 one has a lower Compression Time than the CCSDS-based coder. The remainder of this paper is structured as follows. Section 2 briefly describes the considered image compression approaches (JPEG 2000 and CCSDS) and their considered implementations (the JasPer and BPE coders).
Section 3 illustrates the terms of the proposed performance comparison (PSNR and Compression Time). The performance comparison of the described approaches is presented in Section 4, whereas final remarks and conclusions are drawn in Section 5.

2 Image Compression Approaches: JPEG 2000 and CCSDS

2.1 JPEG 2000

JPEG 2000 [1] is the well-known new image compression standard for the web and for distribution on PDAs, phones, PCs, televisions, etc. It represents the evolution of the famous JPEG format and, even if equipped with highly innovative features, is not intended, at least in the short term, to replace JPEG; rather, a transition is expected during which the new standard will integrate and expand the features offered by JPEG. In summary, the salient features of JPEG 2000 are the following. It is a unified coding system that can deal effectively with images from different sources, with different compression needs.

It allows both lossy and lossless compression. It produces images with better visual quality than JPEG, especially at low bit-rates, thanks to the properties of the wavelet transform. It allows editing and, possibly, decoding of arbitrary image regions, working directly on the data in compressed form. It can create compressed images that are scalable both in resolution and in level of detail, leaving the implementer free to choose how much information, and which parts of the image, to use for decompression. It introduces the concept of Region of Interest (ROI) of an image: indeed, one of the novelties introduced in JPEG 2000 is the opportunity to emphasize the importance of certain regions of the image, favoring the encoding of the coefficients belonging to these areas. The use of ROIs is particularly suitable for encoding images in which some parts are more important than their surroundings. Moreover, JPEG 2000 can achieve high compression rates while maintaining acceptable image quality, and a JPEG 2000 file can include information on intellectual property and copyrights alongside the coded image.

2.1.1 Considered Implementation: JasPer

There are several distributions that implement JPEG 2000 encoding; certainly the most important are the Kakadu implementation (downloadable from http://www.kakadusoftware.com/) and the JasPer Project (http://www.ece.uvic.ca/~mdadams/jasper/), which is interesting for its peculiarity of being open source, so that all the functionality of JPEG 2000 can be exploited; Kakadu, instead, needs to be purchased. JasPer [2] is a software-based implementation of the codec specified in the JPEG 2000 standard. The development of this software had two motivations: firstly, the implementers wanted to develop a JPEG 2000 implementation using the standard as the only reference.
Secondly, by conducting interoperability testing with other implementations, the implementers might find ambiguities in the text of the standard, allowing them to be corrected. In more detail, the design of the JasPer software was driven by several key concerns: fast execution speed, efficient memory usage, robustness, portability, modularity, maintainability, and extensibility. Since fixed-point operations are typically faster than their floating-point counterparts on most platforms, and some platforms lack hardware support for floating-point operations altogether, the JasPer implementers elected to use only fixed-point operations in their software, to match the objectives of high portability and fast execution speed. The JasPer software is written in the C programming language. This language was chosen mainly due to the availability of C development environments for most computing platforms. The JasPer software consists of about 20,000 lines of code in total. This code is spread across several libraries. There are two executable programs: the first is the encoder, and the second is the decoder. The JasPer software can, moreover, handle

image data in a number of popular formats (e.g., PGM/PPM, Windows BMP, and Sun Raster files).

2.2 CCSDS

The Consultative Committee for Space Data Systems (CCSDS) data compression working group has adopted a recommendation for image data compression. The algorithm adopted in the recommendation consists of a two-dimensional discrete wavelet transform of the image, followed by progressive bit-plane coding of the transformed data [4]. The algorithm can provide both lossless and lossy compression, and allows a user to directly control the compressed data volume or the fidelity with which the wavelet-transformed data can be reconstructed. The algorithm is suitable for both frame-based image data and scan-based sensor data, and has applications for near-Earth and deep-space missions. The standard is moreover accompanied by free software sources, as briefly described in the sub-section below.

2.2.1 Considered Implementation: BPE

Concerning the implementations of the CCSDS standard, there are only two: one developed by the University of Nebraska, the BPE (http://hyperspectral.unl.edu), and one by the Universitat Autonoma de Barcelona, the TER software (http://www.gici.uab.cat/ter/). In this paper the first distribution, built in C++, has been employed. Actually, TER, written in Java, has also been used, but only as a converter of raw images to PGM.

3 Terms of Comparison: PSNR and Compression Time

3.1 Peak Signal to Noise Ratio

The Peak Signal-to-Noise Ratio (PSNR) is a measure used to evaluate the quality of a compressed image with respect to the original [5]. This quality index is defined as the ratio between the maximum power of a signal and the power of the noise that can invalidate the fidelity of its compressed representation. Because many signals have a very wide dynamic range, PSNR is usually expressed on the logarithmic scale of decibels, as will be done in the presented comparison.
It is worth noting that this noise is not related to the typical channel noise (e.g., thermal noise), to the attenuation due to the distance between source and destination, or to other disturbances (e.g., electrical storm bursts). The noise considered here concerns, in the framework of this paper, the amount of information that is lost during the lossy compression operation with respect to the original image. PSNR is often used as an indicator of perceptual quality, in the sense that a higher PSNR often results in a compressed image more pleasing to the human eye. Nevertheless, this measure must be analyzed with caution, because it can happen that compressed images with lower PSNR values than others are actually more similar to the original one. Independently of that consideration, PSNR can be considered a fully reliable indicator in all cases in which it is used to compare results obtained from the same coder (or similar coders) [6].
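As a concrete illustration of the metric just described, the PSNR computation can be sketched in a few lines of Python. This is a hypothetical utility working on 8-bit grayscale images stored as lists of pixel rows, not the tool used by the authors:

```python
import math

def mse(I, K):
    """Mean squared error between two equally-sized grayscale images,
    each given as a list of rows of pixel values."""
    m, n = len(I), len(I[0])
    return sum((I[i][j] - K[i][j]) ** 2
               for i in range(m) for j in range(n)) / (m * n)

def psnr(I, K, max_val=255):
    """Peak signal-to-noise ratio in dB; max_val = 255 for 8 bits/pixel."""
    e = mse(I, K)
    if e == 0:
        return float("inf")  # identical images: no compression loss at all
    return 20 * math.log10(max_val / math.sqrt(e))

# Toy example: a 2x2 original and a slightly distorted copy
orig = [[52, 55], [61, 59]]
comp = [[52, 54], [61, 60]]
print(round(psnr(orig, comp), 2))  # prints 51.14
```

The `if e == 0` branch mirrors the observation below that a zero MSE means the two images are identical, in which case the PSNR is conventionally taken as infinite.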

From the analytical viewpoint, it is possible to define the PSNR from another well-known estimator, the Mean Square Error (MSE), as reported in [6, 7], which expresses the average squared difference between observed and estimated values. Denoting by I the original image and by K the compressed image, both of dimension m x n, the MSE between the two images is defined as:

MSE = (1 / (m n)) * sum_{i=0}^{m-1} sum_{j=0}^{n-1} [ I(i,j) - K(i,j) ]^2    (1)

It represents the standard norm between corresponding pixels of the original image and the compressed one. An MSE equal to 0 would mean that there is no difference between the two images. Starting from equation (1), it is possible to compute the PSNR as follows:

PSNR = 20 log_10 ( MAX_I / sqrt(MSE) )    (2)

where MAX_I is the maximum possible pixel value of the image I (255 for 8 bits/pixel). Typically, 0.25 dB is considered a significant improvement of an image compression method, valuable from the viewpoint of human perception.

3.2 Compression Time

Another important aspect taken into consideration in the evaluation of the compression algorithms is the compression time because, notwithstanding the careful studies in the literature, some expected results may fail when the implementation aspects are taken into account. In more detail, methods such as the CCSDS one, designed to have low computational loads, may be unready to be employed because of raw implementations that still need further development. In this paper, the JPEG 2000 standard is assumed as the reference from the performance viewpoint. It is however worth noting that JPEG 2000 is designed for several environments and is very complex and, for this reason, expensive from the computational viewpoint.
Therefore, as also described in the previous section, the CCSDS developed an image compression algorithm with the aim of creating a standard focused exclusively on the transmission of images through satellite and deep space transmission devices, where the problem of energy consumption and, as a consequence, of computational load (and therefore of compression time) becomes critical. As will be shown later, however, the expected higher compression speed of the CCSDS approach does not, at the moment, materialize; indeed, the JPEG 2000 algorithm is in many cases faster.

4 Performance Comparison

4.1 Reference Scenario

Most of the tests were carried out using a test image taken directly from the CCSDS web site (http://cwe.ccsds.org/sls/docs/sls-dc/). That image (b3.raw) has a resolution of 1024 x 1024 pixels and is encoded with 8 bits/pixel.
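As an aside, such a headerless raw format is trivial to parse; a minimal sketch in Python (the function name and the list-of-rows representation are illustrative choices, not part of the authors' test setup):

```python
def read_raw_gray(path, width, height):
    """Load a headerless 8-bit grayscale raw image (such as the
    1024 x 1024 b3.raw test image) into a list of rows of pixel values."""
    with open(path, "rb") as f:
        data = f.read()
    # One byte per pixel at 8 bits/pixel, so the file size is fully determined.
    assert len(data) == width * height, "unexpected file size"
    return [list(data[r * width:(r + 1) * width]) for r in range(height)]
```

The same rows-of-pixels representation can then feed a PSNR computation or a conversion to PGM for coders that require it.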

Below, several figures are reported that analyze the performance of the two approaches: JPEG 2000 (JasPer coder) and CCSDS (BPE coder). The two considered approaches are identical in the processing of the original data in the Discrete Wavelet Transform (DWT) domain, but they differ in the coding of the transformed data. Firstly, images coded with the JPEG 2000 algorithm will be examined. They have been processed with the floating-point 9/7 DWT and with four different rates (bits per pixel): 0.25, 0.5, 1 and 2. Different sizes of the code blocks (8, 16, 32, 64) entering the coder have also been tested. Analogously, the CCSDS algorithm has been evaluated with the same processing, the same rates and the same code block sizes. As regards the calculation of the PSNR, it should be noted that the JasPer open source package provides an executable that takes as input two images, in PGM format, and computes various statistics, such as MSE and PSNR. Differently, it is more complicated to determine the PSNR in the case of the CCSDS algorithm, since no library is available that helps to calculate the statistics of the image. In this case, an ad-hoc program to calculate the PSNR has been developed. This program, written in Matlab, takes two raw images as input and computes the MSE, thus giving the value of the PSNR. Obviously, the employed images are those before and after encoding. Concerning the compression time, the comparison has been performed in the following way: the time command has been exploited. In practice, the time spent in the execution of the compression, from the first to the last line of code, has been computed. Obviously, that computation is influenced by the hardware configuration and by the processes running on the computer when the executable is launched. For this reason, the tests were all performed on the same machine, on the same day, and with the same hardware configuration. Moreover, for each test, seven measurements have been taken.
The value reported in the graphs is the arithmetic average of these measurements.

Fig. 1. PSNR vs. rate (Code Blocks = 8)
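The timing procedure described above (repeated executions of each coder, arithmetically averaged) can be sketched as follows; the coder command line shown is a placeholder, not an actual JasPer or BPE invocation:

```python
import statistics
import subprocess
import time

def avg_compression_time(cmd, runs=7):
    """Execute an external coder command `runs` times and return the
    arithmetic mean of the measured wall-clock times, mimicking the
    time-command methodology used for the tests."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True, stdout=subprocess.DEVNULL)
        samples.append(time.perf_counter() - start)
    return statistics.mean(samples)

# Hypothetical usage (placeholder command line):
# print(avg_compression_time(["bpe_encoder", "b3.raw", "out.cmp"]))
```

As noted in the text, wall-clock measurements of this kind are sensitive to the machine load, which is why all runs belong to the same machine and session.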

4.2 PSNR Comparison

In the following figures, the PSNR obtained by employing the described image compression algorithms is reported for the several situations, in terms of rate and code block size, mentioned above.

Fig. 2. PSNR vs. rate (Code Blocks = 16)

Fig. 3. PSNR vs. rate (Code Blocks = 32)

Fig. 4. PSNR vs. rate (Code Blocks = 64)

The analysis shows that varying the size of the code blocks does not significantly affect the value of the PSNR. It remains fairly constant, except for the cases where the code blocks have size equal to 8 (note that size 8 means 8 x 8, i.e., blocks of 64 samples at the coder input), for which it is lower by about 2 dB. Concerning the rate, as expected, increasing the compression, and thus reducing the rate, decreases the quality more significantly. This also indicates that the choice of code blocks of size 8 is always the worst from the performance point of view. The CCSDS has a gain of at least 1 dB compared to the JPEG 2000 in all scenarios, except for the case in which the image is compressed at 2 bpp. In such cases, in fact, the JPEG 2000 performs better, and this improvement increases with the size of the blocks. In the high-quality case, where the test image has been encoded at 2 bpp with Code Blocks = 64, the difference is estimated at 2.34 dB. In general, from the tests performed in this paper, the superior quality of the CCSDS approach has been observed.

4.3 Compression Time Comparison

In the following figures, the Compression Time obtained by employing the described image compression algorithms is reported for the several situations also considered previously, in terms of rate and code block size. These graphs are very interesting. Figures 5 to 8 show that the compression time of JPEG 2000 decreases significantly as the size of the code blocks increases. As a consequence, in addition to increasing the quality, larger blocks also increase the efficiency. Differently, for the CCSDS algorithm, the block size does not have any impact on the compression time.

Fig. 5. Time vs. rate (Code Blocks = 8)

Fig. 6. Time vs. rate (Code Blocks = 16)

Except for the case of Figure 5, in which the CCSDS behaves much better than the JPEG 2000 (with a compression time equal to half that of the reference standard), it is possible to observe that the compression times are similar. Moreover, a trend different from what was expected can be observed when the block size is equal to 32 or 64: in such cases, the JPEG 2000 algorithm is more efficient than the CCSDS one. This result was unexpected at the time of the analysis, since the CCSDS algorithm was designed, by its express declaration, with the intent of being more efficient than JPEG 2000.

Fig. 7. Time vs. rate (Code Blocks = 32)

Fig. 8. Time vs. rate (Code Blocks = 64)

The key point is the considered implementation: the problem affecting the compression time does not concern the CCSDS algorithm, but its implementation. In fact, the JasPer software has been optimized over time, while the CCSDS coder is still under development.

5 Conclusions

This work focused on the comparison between two possible image compression approaches. The first algorithm, briefly described and evaluated, is based on the

JPEG 2000 standard. The second algorithm is based on the CCSDS recommendation for image compression for satellite and deep space communication systems. Notwithstanding the nature of the latter algorithm, thought to be simple and to have a limited computational complexity, its current implementation, the BPE coder considered in this paper, has good PSNR performance but does not guarantee a really reduced image compression time. On the other hand, the JPEG 2000 coders, in particular JasPer, being optimized, guarantee good PSNR performance and also a better compression time with respect to the CCSDS coder. In practice, from the results reported in this work, the CCSDS image compression coders need to be further enhanced before their practical employment in satellite and deep space transmission systems.

References

1. Taubman, D.S., Marcellin, M.W.: JPEG 2000: Image Compression Fundamentals, Standards and Practice. Kluwer International Series in Engineering and Computer Science (2000)
2. Adams, M.D., Kossentini, F.: JasPer: A Software-Based JPEG-2000 Codec Implementation. In: Proc. IEEE International Conference on Image Processing, Vancouver, BC, Canada, October 2000, vol. 2, pp. 53-56 (2000)
3. International Standard ISO/IEC 15444-1: JPEG 2000 Image Coding System: Core Coding System (December 2000)
4. Yeh, P., Armbruster, P., Kiely, A., Masschelein, B., Moury, G., Schaefer, C., Thiebaut, C.: The New CCSDS Image Compression Recommendation. In: Proc. IEEE Aerospace Conference, Big Sky, Montana, March 5-12 (2005)
5. Wikipedia: Peak Signal-to-Noise Ratio, http://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
6. Huynh-Thu, Q., Ghanbari, M.: Scope of validity of PSNR in image/video quality assessment. Electronics Letters 44(13), 800-801 (2008)
7. Thomos, N., Boulgouris, N.V., Strintzis, M.G.: Optimized Transmission of JPEG2000 Streams Over Wireless Channels. IEEE Transactions on Image Processing 15(1), 54-67 (2006)