IMAGE PROCESSING (RRY025) LECTURE 13 IMAGE COMPRESSION - I


Need For Compression

2D data sets are much larger than 1D ones, and TV and movie data sets are effectively 3D (two spatial dimensions plus time). Compression is needed for both transmission and storage. Applications:
- JPEG on the internet
- Picture telephones / video conferencing
- Large image databases (in the US, the FBI stores 20 million fingerprints)

Types of Compression

Applications to storage and transmission are essentially the same, although in the latter case we may also have to worry about transmission channel errors.

Lossless compression is information preserving, so it is perfectly reversible. But the achievable compression is often only modest (about 5:1 for grayscale images). Useful for permanent storage.

Lossy compression does not preserve information and cannot be exactly reversed, but it can achieve high compression, e.g. 20:1. Most useful for throw-away applications such as browsing a web page.

Sources of Compression

Compression works by removing redundancy from data.

- Coding Redundancy. Lossless. Give common values short codes and uncommon values long codes.
- Interpixel Redundancy. Lossless/Lossy. Adjacent pixel values are highly correlated, so they do not carry independent information.
- Psychovisual Redundancy. Lossy. Not all (high spatial frequency) information is perceived by the eye/brain, so it need not be transmitted.

A highly redundant image (uniform gray) is highly compressible: it is characterised by a single number. Random, pixel-to-pixel uncorrelated noise with a uniform distribution is incompressible.

General Compressor/Decompressor

[Block diagram: COMPRESSOR = MAPPER -> QUANTISER -> SYMBOL ENCODER; DECOMPRESSOR = SYMBOL DECODER -> INVERSE MAPPER]

- Mapper. Converts the image to a new (more compressible) representation, e.g. take differences between pixels, convert to run lengths, or apply a cosine transform.
- Quantiser. Really a re-quantiser: take pixel values represented by 8 bits, lose accuracy and represent them with 6 bits, compressing by a factor of 8/6 = 1.33. Always lossy.
- Symbol Encoder. Encode common symbols with short codes and uncommon ones with long codes. Lossless.

The decompressor consists of a symbol decoder and an inverse mapper. There is no inverse quantiser, since that step is lossy and irreversible.

Symbol Coder

Encode each pixel value separately. Give common pixel values short codes and rare values long codes. Coder and decoder use the same codebook.

[Figure: histogram P(x) of gray level value x (0-255); rare gray levels in the tails get long codes, e.g. 10010011010101, while common levels near the peak get short codes, e.g. 1010.]

If the codeword for gray level value x has N(x) bits, then the mean number of bits per pixel of compressed data is

$N_{mean} = \sum_{x=0}^{255} P(x) N(x)$

For the right codebook we can get $N_{mean} < 8$ and hence compression.

Coding Theorem

Shannon's noiseless coding theorem: it is possible to encode without distortion a sequence of single symbols of entropy $H_1$ per symbol using a code with $H_1 + \eta$ bits/symbol, where $\eta$ is arbitrarily small and

$H_1 = -\sum_{x=0}^{L-1} P(x) \log_2 P(x)$

where L is the number of gray levels and P(x) the probability of each gray level. The theorem, however, does not tell us how to construct the codebook. A simple way to approximate it is Huffman coding, where input pixel values of fixed length (say 8 bits/pixel) are mapped to variable-length codewords. How do we know when the code for one pixel ends? Sending special combinations of bits to indicate pixel boundaries would reduce compression. Instead, we make the codewords uniquely decodable.
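As an illustration, here is a minimal Python sketch (assuming an 8-bit image held in a numpy array; the function name is our own) that estimates $H_1$ from the gray-level histogram:

import numpy as np

def first_order_entropy(image: np.ndarray) -> float:
    """First-order entropy H1 in bits/pixel, from the gray-level histogram."""
    counts = np.bincount(image.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]  # zero-probability levels contribute nothing (0 * log 0 -> 0)
    return float(-np.sum(p * np.log2(p)))

# A uniform gray image is maximally redundant, uniform random noise is not:
flat = np.full((64, 64), 128, dtype=np.uint8)
noise = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
print(first_order_entropy(flat))   # 0.0 bits/pixel
print(first_order_entropy(noise))  # close to 8.0 bits/pixel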

Huffman Coding

Construct the code according to the following rules:

- Arrange the symbol probabilities P(x) in decreasing order and consider them as leaf nodes of a tree.
- While there is more than one node: merge the two nodes with the smallest probabilities to form a new node whose probability is the sum of the merged nodes. Arbitrarily assign 1 or 0 to each pair of branches into a node (often 1 is assigned to the branch with the larger probability, but it does not matter).
- Create the code for each symbol by reading sequentially from the root node to the leaf node.

An example Huffman code for an image with 8 gray levels:

Gray level   Gray level   Probability   Huffman
(decimal)    (binary)     P(x)          code
0            000          0.25          00
2            010          0.21          10
1            001          0.15          010
3            011          0.14          011
4            100          0.0625        1100
5            101          0.0625        1101
7            111          0.0625        1110
6            110          0.0625        1111

[Tree diagram: merging the smallest probabilities pairwise gives intermediate nodes 0.125, 0.125, 0.250, 0.29, 0.46 and 0.54, and finally the root 1.00; each pair of branches into a node is labelled 0/1.]

This code is uniquely decodable because no codeword is a prefix of another: the first n bits of any length-m Huffman codeword (n < m) never equal another codeword. Coded data can therefore be transmitted as a bit stream with no gaps, so the transmitted sequence 110000101101101111011 can be uniquely decoded as 1100/00/10/1101/10/1111/011, i.e. 4 0 2 5 2 6 3.

For the above example we have:

x            Input     Probability   Huffman   Code length
(decimal)    (binary)  P(x)          code      N(x)
0            000       0.25          00        2
2            010       0.21          10        2
1            001       0.15          010       3
3            011       0.14          011       3
4            100       0.0625        1100      4
5            101       0.0625        1101      4
7            111       0.0625        1110      4
6            110       0.0625        1111      4

After encoding, the mean length per pixel is

$N_{mean} = \sum_{x=0}^{7} P(x) N(x) = 2.79$ bits/pixel

compared to the original 3 bits/pixel. The theoretical optimum for single-symbol encoding is the entropy $H_1 = 2.781$ bits/pixel.
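The construction is easy to sketch in Python with the standard-library heap. The 0/1 branch assignments are arbitrary, so the codewords printed may differ from the table above, but the code lengths, and hence the mean length of 2.79 bits/pixel, come out the same:

import heapq
from itertools import count

def huffman_code(probs: dict) -> dict:
    """Build a Huffman codebook {symbol: bitstring} from symbol probabilities."""
    tick = count()  # unique tie-breaker so heapq never compares the dicts
    heap = [(p, next(tick), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)  # the two least-probable nodes
        p1, _, c1 = heapq.heappop(heap)
        # Prepend one bit to every codeword in each merged subtree
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tick), merged))
    return heap[0][2]

probs = {0: 0.25, 2: 0.21, 1: 0.15, 3: 0.14,
         4: 0.0625, 5: 0.0625, 7: 0.0625, 6: 0.0625}
code = huffman_code(probs)
n_mean = sum(probs[s] * len(w) for s, w in code.items())
print(code)    # codeword lengths 2, 2, 3, 3, 4, 4, 4, 4 as in the table
print(n_mean)  # 2.79 bits/pixel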

Huffman Coding - Notes

Obviously both coder and decoder must use the same codebook. Codebooks can either be hardwired into codecs as part of the standard (e.g. Group 3 and 4 FAX, JPEG, etc.), which is most common, OR the codebook can be transmitted first, which reduces compression, OR transmitter and receiver can both start with a default codebook and adapt it into an improved codebook by the same algorithm as the symbols are decoded.

The last approach is the basis of Lempel-Ziv-Welch (LZW) compression, used in the GIF image format and Unix compress; the related LZ77/DEFLATE dictionary methods underlie PNG and gzip. For a signal with stationary statistics one can show that such universal compression approaches the optimum lossless compression of H bits/pixel.
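A toy sketch of the LZW idea in Python (the real GIF/compress variants add variable-width codes and dictionary resets, omitted here): encoder and decoder start from the same 256-entry default codebook and grow it identically as data flows, so no codebook is ever transmitted:

def lzw_encode(data: bytes) -> list[int]:
    """Toy LZW encoder: emits one code per longest known phrase."""
    table = {bytes([i]): i for i in range(256)}  # shared default codebook
    out, phrase = [], b""
    for byte in data:
        candidate = phrase + bytes([byte])
        if candidate in table:
            phrase = candidate             # keep extending the current phrase
        else:
            out.append(table[phrase])      # emit code for the known phrase
            table[candidate] = len(table)  # learn the new, longer phrase
            phrase = bytes([byte])
    if phrase:
        out.append(table[phrase])
    return out

print(lzw_encode(b"ABABABABABAB"))
# [65, 66, 256, 258, 257, 260] -- 12 input bytes collapse to 6 codes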

Multi-pixel Coding

In an 8-bit image, each of the 256 possible gray levels has its own variable-length Huffman code, and the data rate approaches the single-pixel entropy $H_1$. If adjacent pixels are correlated (as they always are in interesting images) we can instead encode groups of 2 or more pixels, giving each group a unique code. This exploits interpixel as well as coding redundancy.

Consider first encoding pairs of pixel values. There are now $256^2$ possible symbols. From the probabilities of these symbols we can calculate a two-pixel entropy $H_2$ in bits/symbol. If there is no correlation between pixels there is no benefit: $H_2$ in bits/symbol is exactly twice $H_1$, so in bits/pixel the two have the same value and the maximum lossless compression ratio is the same. However, if pixels are correlated, $H_2$ in bits/pixel can be less than $H_1$ in bits/pixel, and compression can be increased.

As the number of jointly coded pixels, n, goes to infinity we asymptotically obtain the signal entropy $H_\infty = \lim_{n \to \infty} H_n$, which depends on the statistics of the interpixel correlation. $H_\infty$ bits/pixel is the theoretical limit for any type of lossless compression of the signal (Shannon).

[Figure: entropy per pixel $H_n$ plotted against the number of pixels per coded symbol, n = 1 to 6; $H_1 > H_2 > H_3 > \dots$, decreasing towards the asymptote $H_\infty$.]
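A rough numpy sketch of the n = 2 case, estimating $H_1$ and $H_2$ in bits/pixel from an image (pairing horizontally adjacent pixels; this choice and the function names are ours):

import numpy as np

def entropy_bits(counts: np.ndarray) -> float:
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def h1_h2_per_pixel(image: np.ndarray) -> tuple[float, float]:
    """Estimate H1 and H2, both expressed in bits/pixel."""
    x = image.ravel().astype(np.int64)
    h1 = entropy_bits(np.bincount(x, minlength=256))
    # Joint histogram over the 256^2 two-pixel symbols: 256*left + right
    left = image[:, :-1].ravel().astype(np.int64)
    right = image[:, 1:].ravel().astype(np.int64)
    h2_symbol = entropy_bits(np.bincount(256 * left + right, minlength=256 * 256))
    return h1, h2_symbol / 2.0  # bits/symbol -> bits/pixel

# For a smooth image H2 < H1 in bits/pixel; for white noise the two coincide.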

Quantiser

Consider this as a lossy alternative to entropy coding. Because it is lossy it can achieve higher compression, but it is not reversible(!). Go from a fixed-length binary number for each pixel value (say 8 bits) to another fixed-length value (say 4 bits), and so compress by a factor of 2. The simplest approach is just to remove the 4 least significant bits of each input value. Less distortion is introduced if we convert pixels to 4-bit code words with unevenly distributed decision levels: make the decision levels closer together in regions where the image histogram peaks (p. 368 of G & W).

[Figure: histogram P(x) of gray level value x (0-255), with the sixteen 4-bit code values 0000-1111 spaced closely where P(x) is large and widely where it is small.]

Given the pixel probabilities we have a requantisation table such as:

Input range   Input range         Code value      Reconstruction
(decimal)     (8-bit binary)      (4-bit binary)  value
0-55          00000000-00110111   0000            27
56-70         00111000-01000110   0001            63
71-80         01000111-01010000   0010            76
...           ...                 ...             ...

We therefore have a compression by a factor of 2, although with a loss of image information. Choosing the requantisation ranges from the pixel histogram (narrow ranges at histogram peaks) minimises the errors. Theories exist on how, for a given compression ratio and image histogram, to choose the decision levels and reconstruction values that minimise the error.
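A minimal sketch of such a requantiser, built from the first three rows of the table above plus an assumed catch-all range (a real table would define all 16 ranges); numpy's searchsorted turns the decision levels into a lookup:

import numpy as np

# Upper edge of each input range, and the value the decoder reconstructs.
# The 81-255 row and its midpoint reconstruction value are assumptions here.
upper_edges = np.array([55, 70, 80, 255])
reconstruction = np.array([27, 63, 76, 168], dtype=np.uint8)

def requantise(image: np.ndarray) -> np.ndarray:
    """Map each 8-bit pixel to the 4-bit code of the range it falls in."""
    return np.searchsorted(upper_edges, image).astype(np.uint8)

def reconstruct(codes: np.ndarray) -> np.ndarray:
    """Decoder side: replace each code by its reconstruction value."""
    return reconstruction[codes]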

Mapper

We will discuss this more fully in the next lecture, where we will consider both waveform and transform coders.

Waveform encoders work in the image domain. In the simplest type we form the difference between the present pixel and the last one, then quantise and entropy code this quantity. We get more compression because the difference image has lower entropy. We will also discuss more complex predictive coding methods and run-length coding.

Transform coders apply a general transform to another domain, e.g. the cosine or wavelet transforms used in JPEG and JPEG2000. We will discuss JPEG in detail, since it uses psychovisual redundancy, the cosine transform, run-length coding and Huffman coding. In the last lecture (if we have time) we will briefly discuss video compression (MPEG).

Simple Transformer - Pixel Difference Coding

On each line of the image, form a predictor $\hat{f}_n$ of the next pixel value $f_n$ from the previous pixels on the line by some rule. The simplest predictor is $\hat{f}_n = f_{n-1}$, the last pixel value. Both transmitter and receiver use the same rule. Transmit not $f_n$ but the prediction error $e_n = f_n - \hat{f}_n$. If the predictor is simply the previous pixel value, we encode pixel differences. Since neighbouring pixels are usually highly correlated, small differences are common: the histogram of differences is highly peaked and has low entropy. We can therefore often create an efficient entropy code for $e_n$ and get good lossless compression.

[Block diagram: the transmitter subtracts the predictor output from u(n) to form e(n) and codes it; the receiver decodes e(n) and adds the output of an identical predictor to recover u(n).]
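A sketch of this loop in Python for the simplest, lossless configuration (previous-pixel predictor, no quantiser in the loop):

import numpy as np

def dpcm_encode(row: np.ndarray) -> np.ndarray:
    """Previous-pixel DPCM along one image line: send e(n) = u(n) - u(n-1).
    The first pixel is sent unchanged (the predictor starts from 0)."""
    e = np.empty(row.shape, dtype=np.int16)  # differences can be negative
    e[0] = row[0]
    e[1:] = row[1:].astype(np.int16) - row[:-1].astype(np.int16)
    return e

def dpcm_decode(e: np.ndarray) -> np.ndarray:
    """Receiver runs the same predictor: u(n) = e(n) + u(n-1)."""
    return np.cumsum(e).astype(np.uint8)

row = np.array([100, 101, 101, 103, 102], dtype=np.uint8)
e = dpcm_encode(row)
print(e)               # [100   1   0   2  -1] -- peaked, low-entropy differences
print(dpcm_decode(e))  # [100 101 101 103 102] -- exactly recovered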