Megapixel Video for D.U.M.I.E.S. Part 2 of 4. Brought to You by Pelco. Presented by Video Security Consultants.


Copyright 2009 Video Security Consultants.

How to Avert a Compression Depression

[Illustration by Jerry King]

While bandwidth is widening, larger video systems and more advanced megapixel cameras are continuing to push the throughput limits of network piping. Fortunately, new compression methods such as H.264 are available to help keep surveillance data flowing.

By Bob Wimmer

Welcome to Part II of the latest in SECURITY SALES & INTEGRATION's acclaimed D.U.M.I.E.S. series: Megapixel Video for D.U.M.I.E.S. Brought to you by Pelco, this four-part series has been designed to educate readers about megapixel cameras and video, the next phase of surveillance technology following the leap from digital to IP-based, or networked, CCTV systems. D.U.M.I.E.S. stands for dealers, users, managers, installers, engineers and salespeople.

Recently, the megapixel revolution has begun to affect all of us in the industry. First came the megapixel camera, then megapixel lenses and, of course, megapixel video recorders. Perhaps the megapixel coffee cup is next! It is certain that this changing technology is on a very fast track, and a great deal of hype has surrounded the megapixel revolution. But what exactly is this so-called revolution all about? What is required to support megapixel systems, and what are the main advantages of megapixel and IP cameras over analog cameras? The answers lie in this series of articles, which cover the theoretical and practical technology and design principles required to intelligently sell, install or service megapixel solutions. This edition tackles compression methods.

Why Compression Is Needed

In its basic form, compression is the art of removing information viewed as irrelevant to the viewer. In this case, the viewer is a dealer, systems integrator or anyone else who relies on high-quality recorded images. The amount and type of information removed varies from system to system and can be controlled by system setup parameters.

But why do we need compression? To help answer this question, let's evaluate the requirements for transmitting or storing a single minute of composite video to a remote location. Without compression, storing this information would require a minimum of 1.66GB of storage space. In the case of a network-viewed, 3.1-megapixel camera, it would require a system bandwidth of 168MHz.
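To make the storage figure concrete, here is a minimal Python sketch of the arithmetic. The frame size, 24-bit color depth and 30 fps rate are assumptions chosen to illustrate the scale, not specifications from the article:

    def raw_storage_bytes(width, height, bytes_per_pixel, fps, seconds):
        # Bytes needed to hold uncompressed video of the given format.
        return width * height * bytes_per_pixel * fps * seconds

    # One minute of 640 x 480 video, 24-bit color, 30 frames per second:
    print(raw_storage_bytes(640, 480, 3, 30, 60) / 1e9)    # ~1.66 GB

    # The same minute from a 3.1-megapixel (2048 x 1536) camera:
    print(raw_storage_bytes(2048, 1536, 3, 30, 60) / 1e9)  # ~17 GB

Even at standard resolution a single uncompressed minute is enormous; at megapixel sizes it becomes roughly ten times worse.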

Having recently attended the ISC West show in Las Vegas, I can report the main attraction was not The Strip but rather advancing megapixel technology for the surveillance market. Everywhere one looked, this technology was being displayed. After surveying the types of compression incorporated by manufacturers exhibiting at ISC West, it was obvious which have become the most popular. The winners are:

H.264
Motion JPEG
JPEG
MPEG-4
Wavelet

Lossless vs. Lossy: When Less May Be More

Signal analysis separates the video signal into many parts, or subparts, which are classified by their importance to the image's visual quality. Following this signal analysis, the next stage is the quantizer. Quantization is simply the process of decreasing the number of bits needed to store a set of values, or transformed coefficients as they are called in data compression language. Since quantization is a many-to-one mapping that reduces the precision of those values, it is known as a lossy process (as opposed to lossless) and is the main source of compression in most image coding schemes. There is a trade-off between image quality and degree of quantization: a large quantization step size can produce unacceptably large image distortion.

Lossy compression actually eliminates some of the data in the image and, therefore, provides greater compression ratios than lossless compression. Lossless compression, on the other hand, consists of those techniques guaranteed to generate an exact duplicate of the input data stream after a compress/expand cycle. No information is lost, hence the name lossless. However, this method can achieve only a modest amount of compression. The lossless compression of images is important in fields such as medical imaging and remote sensing, where data integrity is essential. Typically, compression ratios for lossless codes, including variable-length encoding, average about 4:1.

In variable-length encoding, prior to the writing of the image, the information is ordered according to frequency, which plays an important role in the image compression process. For the most part, lower frequencies, which occur more often, are placed at the front while higher frequencies are placed at the end. In any file, certain characters are used more than others, so we can attain significant savings by using variable-length prefix codes that take advantage of the relative frequencies of the symbols in the messages to be encoded.

The advantage of lossy methods over lossless methods is that, in some cases, a lossy method can produce a much smaller compressed file than any known lossless method, while still meeting the requirements of the application.
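As a rough illustration of the quantizer stage, the Python sketch below (NumPy assumed available) rounds transformed coefficients to multiples of a step size. The coefficients and step size are invented for illustration and come from no particular standard:

    import numpy as np

    coeffs = np.array([312.6, -41.2, 18.9, -3.4, 1.1, -0.6])

    step = 16.0                          # larger step: more compression, more distortion
    quantized = np.round(coeffs / step)  # many-to-one mapping, hence lossy
    restored = quantized * step          # the decoder's best reconstruction

    print(quantized)   # small integers, cheap to store with variable-length codes
    print(restored)    # close to, but not equal to, the original values

The small integers produced by quantization are exactly the kind of frequently repeating symbols that variable-length prefix codes then shrink further.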
2 Paths: Full Image or Conditional

There are several methods of analyzing a video image. The first is full-image compression. This approach usually relates to Joint Photographic Experts Group (JPEG) and wavelet compression schemes, in which the entire image is analyzed, compressed and transmitted. In most cases, this form of analyzing an image can provide only a limited amount of compression, meaning larger image file sizes and increased bandwidth issues. For the most part, full-image compression incorporates irrelevancy reduction methods.

Irrelevancy reduction omits parts of the video signal that are not noticed or perceived by the signal receiver, which in this case is the human eye. Research into the human visual system (HVS) has shown that small color changes are perceived less accurately than small changes in brightness, so why bother saving this information? It is also known that low-frequency changes are more noticeable to the human eye than high-frequency changes. (Low frequencies control the coarser, more noticeable features of a video image, whereas higher frequencies usually relate to the finer details.)
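A simple way to see the HVS idea in code is chroma subsampling: keep brightness at full resolution but average each 2 x 2 block of color samples into one. This Python sketch uses random arrays as stand-ins for a real frame and assumes NumPy:

    import numpy as np

    h, w = 480, 640
    luma   = np.random.randint(0, 256, (h, w))   # brightness (Y)
    chroma = np.random.randint(0, 256, (h, w))   # one color plane (Cb or Cr)

    # Average each 2 x 2 chroma block into a single sample (4:2:0 style).
    chroma_sub = chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    full = luma.size + 2 * chroma.size       # Y plus two full color planes
    sub  = luma.size + 2 * chroma_sub.size   # Y full, color quartered
    print(sub / full)                        # 0.5: half the samples, little visible loss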

With conditional compression, only changes from one image to the next, between adjacent images, are analyzed and compressed. This method is usually associated with Moving Picture Experts Group (MPEG) and H.264 compression methods.

Redundancy reduction is accomplished by removing duplication from the signal source, which is found either within a single image or between multiple images of a video stream. The first of three redundancy reduction methods is spatial reduction: reducing the correlation between neighboring pixel values within an image.

[Spatial reduction diagram: spatial reduction is based on the correlation between pixel values within an image.]

As seen in the spatial reduction diagram, the data stream can be reduced to single values for each of the four quadrants. Although this is a very simple example, it shows one of the basic approaches to redundancy reduction.

The next reduction method is spectral reduction, which exploits the correlation between color planes or bands within an image.

[Spectral reduction diagram: spectral reduction is based on the correlation between color planes or bands within an image.]

As an example, consider the blue sky in the spectral reduction diagram. Many areas of that sky have the same numeric value; therefore, the amount of information that must be stored to reproduce the same image during decompression can be reduced.

The last area is known as temporal reduction: the correlation between adjacent frames in a sequence. This information is the basis for MPEG as well as the H.263/H.264 series of compression methods. In temporal reduction, two types of image arrangements are used. The first is a full representation of the viewed image. This is known as the I-frame and is encoded as a single image, with no reference to any past or future images; in some circles it is also referred to as the key-frame. The temporal process is based on a simple question: if there is no movement, why bother saving the information? Any movement will be detected, and the compression process will begin.

[Temporal reduction diagram: temporal reduction is the correlation between adjacent images in a sequence. Only changes in the scene are compressed.]
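The "why bother saving it" question can be sketched directly: compare two frames and count the pixels that actually changed. This illustrative Python example (NumPy assumed) simulates one small moving object against a static scene:

    import numpy as np

    prev_frame = np.random.randint(0, 256, (480, 640)).astype(np.int16)
    curr_frame = prev_frame.copy()
    curr_frame[100:140, 200:260] += 40    # simulate one moving object

    diff = curr_frame - prev_frame
    changed = np.abs(diff) > 8            # small threshold to ignore sensor noise

    # If there is no movement, why bother saving the information?
    print(changed.mean())                 # ~0.008: under 1% of pixels changed

Only the changed region and its location need to be encoded; the rest of the frame can be reconstructed from the previous one.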

Compression Fools the Human Eye

There are four main transform methods used for compression: discrete cosine transform (DCT), vector quantization (VQ), fractal compression (FC) and discrete wavelet transform (DWT).

DCT is a lossy compression algorithm that samples the image at regular intervals. It analyzes the frequency components and discards those that do not affect the image as perceived by the human eye. JPEG, MPEG and H.264 are a few compression standards that incorporate DCT.

VQ is also a lossy method; it looks at groups of values rather than individual values. It then generalizes what it sees, compresses redundant information and tries to retain the desired information as close to the original as possible.

FC is a form of VQ; however, this type of compression locates and compresses self-similar sections of an image, then applies fractal algorithms. (A fractal here is a generalization of an information-free, object-based compression scheme rather than a quantization matrix. It uses a set that is repetitive in shape, but not in size.)

DWT compresses an image by frequency ranges. It filters the entire image, both high and low frequencies, and repeats this procedure several times. Wavelet compression operates on the entire image, which differs from most DCT methods.
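The following sketch shows the DCT idea on a single 8 x 8 block, the building block of JPEG- and MPEG-style coders. It assumes SciPy is available, and the keep-only-low-frequencies rule is a simplification for illustration, not any standard's actual quantization table:

    import numpy as np
    from scipy.fft import dctn, idctn

    # A smooth 8 x 8 block (a gradient), typical of natural image areas.
    x = np.arange(8, dtype=float)
    block = 128 + 8 * x[None, :] + 4 * x[:, None]

    coeffs = dctn(block, norm='ortho')      # 2-D DCT: energy concentrates
                                            # in the low frequencies
    mask = np.zeros((8, 8))
    mask[:4, :4] = 1                        # discard the 48 highest frequencies
    approx = idctn(coeffs * mask, norm='ortho')

    print(np.abs(block - approx).mean())    # small error despite dropping 75% of values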
Megapixel Compression Standards

The most popular compression method displayed at the recent ISC expo was H.264, so we might as well begin there.

H.264: This is an ITU standard for compressing video, based on MPEG-4. H.264 delivers MPEG-4 quality with a frame size up to four times greater. It can also provide MPEG-2 quality at a reduced data rate, requiring as little as one-third the original bandwidth. H.264 is based on block transforms and motion-compensated predictive coding. Motion estimation is used to identify and eliminate the temporal redundancies that exist between individual pictures. H.264 leverages today's processing power to provide improved coding techniques, including multiple reference frames and variable block sizes for motion compensation, intraframe prediction, an integer transform, an in-loop deblocking filter, and improved entropy coding. H.264 is also referred to as MPEG-4 AVC (Advanced Video Coding) or MPEG-4 Part 10. This compression standard introduces smaller block sizes, greater flexibility and greater precision in motion vectors.

[H.264 block diagram: Video In feeds a transform, quantizer, coder and buffer to produce the coded image; rate control, an inverse quantizer, an inverse transform, a motion-compensated predictor and a motion estimator producing motion vectors form the feedback loop. H.264 uses motion compensation to improve compression quality.]

MPEG-4: This method incorporates the same compression techniques as JPEG (DCT). However, MPEG is based on the group-of-images concept, defined by I-frames, P-frames and B-frames. The I-frame (intra) provides the starting point, or access point, and offers only a small amount of compression. P-frames (predicted) are coded with reference to a previous picture, which can be either an I-frame or another P-frame. B-frames (bidirectional) are intended to be compressed at a low bit rate, using both previous and future references; B-frames are never used as references themselves. The relationship between the three frame types is described in the MPEG standard; however, the standard does not limit the number of B-frames between two references, or the number of images between two I-frames. Many of the megapixel IP cameras that incorporate MPEG-4 compression seem to be limited to 1.3 megapixels.

Motion JPEG (M-JPEG): This is an informal name for multimedia formats in which each video frame or interlaced field of a digital video sequence is separately compressed as a JPEG image. It is often used in mobile appliances such as digital cameras. M-JPEG uses intraframe coding technology that is very similar to the I-frame part of video coding standards such as MPEG-1 and MPEG-2, but it does not use interframe prediction. The lack of interframe prediction costs some compression capability, but it eases video editing, since simple edits can be performed at any frame when all frames are I-frames. M-JPEG is well suited to monitoring applications where it is not always essential to provide a TV-quality frame rate. With its relatively low processor demands, M-JPEG has made the current generation of network cameras possible.

On the negative side, the M-JPEG format dates back to the early 1990s, and compression technology has advanced considerably since then. Using only intraframe coding also makes the degree of compression independent of the amount of motion in the scene, since temporal prediction is not being used. And although the bit rate of M-JPEG is substantially better than completely uncompressed video, it is considerably worse than that of video codecs that use interframe motion compensation, such as MPEG-1.

While on this subject, let's address the terms inter- and intraframe coding.

[MPEG compression diagram: a group of images in the sequence I P P P I. I-frames (key-frames) are coded using only information from that frame. P-frames code only the difference between that frame and the previous I-frame or P-frame. B-frames are coded using the best match from the previous I-frame or P-frame. In MPEG-type compression, the I-frame, or key-frame, interval can be adjusted to comply with greater or lesser bandwidth requirements.]

Intraframe coding refers to the fact that the various lossless and lossy compression techniques are performed relative to information contained only within the current frame, and not relative to any other frame in the video sequence. In other words, no temporal processing is performed outside of the current picture or frame.

An interframe, by contrast, is a frame in a video compression stream that is expressed in terms of one or more neighboring frames. Interframe prediction tries to take advantage of temporal redundancy between neighboring frames, which allows it to achieve higher compression rates.

An intercoded frame is first divided into blocks known as macroblocks. Then, instead of directly encoding the raw pixel values for each block, as would be done for an intraframe, the encoder tries to find a similar block on a previously encoded frame, referred to as a reference frame. This search is performed by a block-matching algorithm. If the encoder succeeds in its search, the block can be encoded by a vector, known as a motion vector, which points to the position of the matching block in the reference frame. The process of motion vector determination is called motion estimation.

The result of this motion estimation may not be exact, because the block found by the encoder may be similar to, but not exactly the same as, the block being encoded. This is why the encoder compares the two (the block found on the reference frame and the block being encoded) and obtains the differences between them. Those differences are known as the prediction error and must be transformed and sent to the decoder.

To sum up: if the encoder succeeds in finding a matching block on a reference frame, it obtains a motion vector pointing to the matched block plus a prediction error. Using both elements, the decoder is able to recover the raw pixels of the block.
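Here is a compact sketch of that block-matching search, in Python with NumPy. The exhaustive search, block size and frame contents are illustrative choices; real encoders use much faster search strategies:

    import numpy as np

    def best_match(block, reference, top, left, search=8):
        # Find the motion vector (dy, dx) minimizing the sum of
        # absolute differences (SAD) within the search window.
        n = block.shape[0]
        best_err, vector = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + n > reference.shape[0] or x + n > reference.shape[1]:
                    continue
                sad = np.abs(block - reference[y:y + n, x:x + n]).sum()
                if best_err is None or sad < best_err:
                    best_err, vector = sad, (dy, dx)
        return vector, best_err

    reference = np.random.randint(0, 256, (64, 64)).astype(float)
    current = np.roll(reference, (2, 3), axis=(0, 1))   # scene shifted down 2, right 3

    block = current[16:32, 16:32]                       # one 16 x 16 macroblock
    vector, err = best_match(block, reference, 16, 16)
    print(vector, err)   # (-2, -3), 0.0: the match lies up and left in the reference

The motion vector plus the prediction error (zero here, since the match is exact) is all the decoder needs to rebuild the block.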
JPEG: This is a lossy compression method, meaning that the decompressed image is not quite the same as the one you started with. JPEG is designed to exploit known limitations of the human eye, notably the fact that small color changes are perceived less accurately than small changes in brightness. Thus, it is intended for compressing images that will be viewed by humans. Data compression is achieved by concentrating on the lower spatial frequencies. According to the standard, modest compression of 20:1 can be achieved with only a small amount of image degradation.

Wavelet: Wavelet compression standards do not use DCT but instead filter the image by frequency. The advantage of wavelet compression is that, in contrast to JPEG and MPEG, its algorithm does not divide images into blocks but analyzes the entire image. This characteristic allows wavelet compression to obtain good compression ratios while maintaining good image quality. The filtering schemes rely on discarding the image parts that are not noticed by the human eye. The more filtering that occurs, the smaller the overall file size of the images, but also the lower the image quality when decompressed. As you can see with the addition of JPEG 2000 (previously discussed), the Joint Photographic Experts Group is changing the way compression standards are being considered.
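The sketch below shows the wavelet approach in Python, using the PyWavelets package (an assumption; the article names no particular implementation). The whole image is transformed at once, then the smallest detail coefficients, the ones the eye misses, are zeroed:

    import numpy as np
    import pywt

    image = np.random.rand(256, 256)

    # One level of the 2-D discrete wavelet transform over the full image.
    approx, (horiz, vert, diag) = pywt.dwt2(image, 'haar')

    # Filtering: zero out small detail coefficients (threshold is arbitrary).
    threshold = 0.1
    details = tuple(np.where(np.abs(d) > threshold, d, 0)
                    for d in (horiz, vert, diag))

    restored = pywt.idwt2((approx, details), 'haar')
    print(np.abs(image - restored).mean())   # raise the threshold: smaller file, lower quality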

Proprietary Techniques

Armed with this background in compression theory and the different ways video information is reduced, we can now apply this knowledge to the different compression standards available throughout the industry. This article explains only the major compression standards presently approved, which means the list is by no means comprehensive. Many video equipment manufacturers have developed their own compression standards, which they list as proprietary. They may have started with a common standard but modified it to meet special requirements of their equipment.

Each image is assigned a numeric code in which common events or information are assigned only a few bits, while rare or uncommon events are assigned more bits. The steps to create this data output stream are divided into signal analysis, quantization and variable-length encoding. By no means is the process behind compression easy; there is a tremendous amount of mathematical complexity behind the different compression methods used in the digital video world.

Image compression plays a very important part in the digital storage and transmission of video images. Most of the equipment offered today gives operators the capability to set compression ratios (although the setup screens may use the term image quality) in order to meet their imaging needs. A high image-quality setting represents low compression, while a low-quality setting indicates high compression of the signal.

Megapixel cameras require compression in order for networks to handle the bandwidth requirements of today's systems. However, compression is not the only method used to manage system bandwidth. The size of the image also plays a large part in this managing scheme. An area of significance that will improve network bandwidth is the actual pixel size of the video image necessary to produce the required results in a surveillance IP megapixel system (see the table below). When utilizing MPEG-4 compression, the pixel size of an image is normally referred to as the CIF (Common Intermediate Format) size, a standard video format used in videoconferencing. CIF formats are defined by their resolution (4CIF, 16CIF).

Megapixels vs. Image Sizes

Format             | Image Size    | Megapixels
4CIF (MPEG-4)      | 704 x 480     | 0.3
16CIF (MPEG-4; HD) | 1,408 x 1,152 | 1.6
UXGA               | 1,600 x 1,200 | 2.0
QXGA               | 2,048 x 1,536 | 3.0

Room for Multiple Methods

In closing, with all the different types of reduction methods available for video images and the many different compression standards, it is no wonder many people get confused about megapixel camera and bandwidth requirements. With every information reduction method or compression standard, there is one thing to keep in mind: the quality of the reproduced image will depend on the application of that system. Not every compression method and image size is designed to match all requirements. When selecting your megapixel camera, consider whether the image quality, transmission speed and bandwidth capacity are what you expected. If so, then you have made the right choice.

[Sidebar: Next Up for D.U.M.I.E.S.: Megapixel Camera Applications. Megapixels are the talk of the town. However, which surveillance application warrants the incorporation of this advancing technology? Find out in September's D.U.M.I.E.S. installment, found only in SSI. Visit www.securitysales.com/dumies to access more than five years of D.U.M.I.E.S. archives.]

Robert (Bob) Wimmer is president of Video Security Consultants and has more than 35 years of experience in CCTV. His consulting firm is noted for technical training, system design, technical support and overall system troubleshooting.