Compression for High-Quality, High Bandwidth Video. By Stewart Taylor


Introduction

This article provides an introduction to video compression and decompression algorithms, including two popular video compression specifications, and to the handling of video compression in the Intel Integrated Performance Primitives (Intel IPP).

Overview of Coding

Image and video encoders and decoders, called codecs when implemented in software, are intended to compress their media for storage or transmission. Raw images are quite large, and with present technology raw digital video is almost unworkable. Moreover, working with these media uncompressed, except for capture and display, is unnecessary and inefficient on current processors: it is faster to read compressed video from disk and decompress it than it is to read uncompressed video. Most compression is based on taking advantage of redundancy and predictability in data to reduce the amount of information necessary to represent it. Two common techniques are run-length coding, which converts runs of data into run lengths and values, and variable-length coding, which converts data of fixed bit length into codes of variable bit length according to frequency of occurrence. Huffman coding and arithmetic coding are examples of variable-length coding.
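To make the run-length idea concrete, here is a minimal run-length encoder sketch in C. It is illustrative only and not taken from the article or from Intel IPP; the function name and the (count, value) byte-pair format are assumptions.

    #include <stddef.h>

    /* Minimal run-length encoder sketch (hypothetical interface).
       Writes (count, value) byte pairs and returns the number of bytes produced.
       Assumes dst has room for 2 * len bytes in the worst case. */
    size_t rle_encode(const unsigned char *src, size_t len, unsigned char *dst)
    {
        size_t out = 0;
        for (size_t i = 0; i < len; ) {
            unsigned char value = src[i];
            size_t run = 1;
            /* Count consecutive identical bytes, capping the run at 255. */
            while (i + run < len && src[i + run] == value && run < 255)
                run++;
            dst[out++] = (unsigned char)run;   /* run length */
            dst[out++] = value;                /* run value  */
            i += run;
        }
        return out;
    }

A decoder simply expands each (count, value) pair back into count copies of value; a variable-length coder would then assign the shortest codes to the most frequent pairs.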

Another source of compression is exploiting the limits of perceptibility. Obviously, for some kinds of data, such as text and binary executables, compression must be lossless; a compression method that sometimes changed an a to an A would not be acceptable. Stand-alone Huffman coding is exactly reversible. However, it is possible to compress media information in a way that is not exactly reversible but is virtually undetectable. Such methods are called lossy, meaning that the output is not guaranteed to be exactly the same as the input. In many cases, though, the loss can be imperceptible or have a manageable visual effect. Just as with audio coding, the compression algorithm transforms the data into spaces in which information can be removed while minimizing the perceptible impact on the media. Most media compression is done using transform-based coding methods. Such methods convert position-based information into frequency-based or position/frequency-based information. The compression benefit is that the important information becomes concentrated in fewer values, so the coder can represent the more-important information with more bits and the less-important information with fewer bits. The perception model dictates the importance of information, but generally higher-frequency information is considered less important. Figure 1 shows the framework of a transform-based encoding and decoding scheme.

Figure 1 Simple Diagram of Transform-Based Image Coding

Compression schemes for video usually try to take advantage of a second source of redundancy: repetition between frames of video. The coder either encodes raw frames of video or encodes the difference, often compensated for motion, between successive frames.

Coding in Intel Integrated Performance Primitives (Intel IPP)

The Intel IPP support for video compression is very similar to that for JPEG, and it takes several forms. Intel IPP provides portions of codecs and includes samples that are partial codecs for several compression algorithms. In particular, it includes:

- General functions, such as transforms and arithmetic operations, that are applicable across one or more compression algorithms.

- Specific functions, such as Huffman coding for JPEG, that you can think of as codec slices. At present, Intel IPP provides such functions for MPEG-1, MPEG-2, MPEG-4, DV, H.263, and H.264.
- Sample encoders and decoders for several major video standards, including MPEG-2, MPEG-4, and H.264.
- Universal Media Classes (UMC) that wrap these codecs into a platform-neutral video decode and display pipeline.

The subsequent sections explain each of these elements for two algorithms, MPEG-2 and H.264, and describe and give examples of UMC. The explanation covers the above categories of support, leaning heavily on examples from the codec samples.

MPEG-2

This section describes the video portion of the MPEG-2 standard. MPEG-2 is intended for high-quality, high-bandwidth video. It is most prominent because it is used for DVD and HDTV video compression. Computationally, good encoding is expensive but can be done in real time by current processors. Decoding an MPEG-2 stream is relatively easy and can be done by almost any current processor or, obviously, by commercial DVD players. MPEG-2 players must also be able to play MPEG-1. MPEG-1 is very similar, though the bit stream differs and the motion compensation has less resolution; it is used as the video compression on VCDs. MPEG-2 is a complicated format with many options. It includes seven profiles dictating aspect ratios and feature sets, four levels specifying resolution, bit rate, and frame rate, and three frame types. The bit stream code is complex and requires several tables. However, at its core are computationally complex but conceptually clear compression and decompression elements. These elements are the focus of this section.

MPEG-2 Components

MPEG-2 components are very similar to those in JPEG. MPEG-2 is DCT-based and uses Huffman coding on the quantized DCT coefficients. However, the bit stream format is completely different, as are all the tables. Unlike JPEG, MPEG-2 also has a restricted, though very large, set of frame rates and sizes. But the biggest difference is the exploitation of redundancy between frames. There are three types of frames in MPEG: I (intra) frames, P (predicted) frames, and B (bidirectional) frames. There are several consequences of frame type, but the defining characteristic is how prediction is done. Intra frames do not refer to other frames, making them suitable as key frames; they are, essentially, self-contained compressed images. By contrast, P frames are predicted using the previous P or I frame, and B frames are predicted using the previous and next P or I frames. Individual blocks in these frames may be intra or non-intra, however. MPEG is organized around a hierarchy of blocks, macroblocks, slices, and frames. Blocks are 8 pixels high by 8 pixels wide in a single channel. Macroblocks are a collection of blocks covering an area 16 pixels high by 16 pixels wide and containing all three channels. Depending on the subsampling, a macroblock contains 6, 8, or 12 blocks; for example, a YCbCr 4:2:0 macroblock has four Y blocks, one Cb block, and one Cr block.
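The block count just described can be restated in a tiny sketch (illustrative only; the enum and function names are not from the MPEG-2 headers or Intel IPP):

    /* Number of 8x8 blocks carried by one 16x16 macroblock, by chroma format. */
    typedef enum { CHROMA_420, CHROMA_422, CHROMA_444 } ChromaFormat;

    int blocks_per_macroblock(ChromaFormat fmt)
    {
        switch (fmt) {
        case CHROMA_420: return 6;    /* 4 Y + 1 Cb + 1 Cr */
        case CHROMA_422: return 8;    /* 4 Y + 2 Cb + 2 Cr */
        case CHROMA_444: return 12;   /* 4 Y + 4 Cb + 4 Cr */
        default:         return 0;
        }
    }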

Following are the main blocks of an MPEG-2 codec, in encoding order. Figure 2 shows how these blocks relate to one another.

Motion Estimation and Compensation

The key to the effectiveness of video coding is using earlier and sometimes later frames to predict a value for each pixel. Image compression can only use a block elsewhere in the same image as a base value for each pixel, but video compression can aspire to use an image of the same object. Instead of compressing pixels, which have high entropy, the video compression can compress the differences between similar pixels, which have much lower entropy. Objects and even backgrounds in video are not reliably stationary, however. In order to make these references to other video frames truly effective, the codec needs to account for motion between the frames. This is accomplished with motion estimation and compensation. Along with the video data, each block also has motion vectors that indicate how much that block has moved relative to the reference image. Before taking the difference between the current and reference frames, the codec shifts the reference frame by that amount. Calculating the motion vectors is called motion estimation, and accommodating this motion is called motion compensation. Motion compensation is an essential and computationally expensive component of video compression. In fact, the biggest difference between MPEG-1 and MPEG-2 is the change from full-pel to half-pel accuracy. This modification makes a significant difference in quality at a given data rate, but it also makes MPEG-2 encoding very time-consuming.
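In its simplest form, the motion estimation search can be sketched as an exhaustive full-pel block-matching loop using a sum-of-absolute-differences cost. The sketch below is illustrative only; the names, interface, and brute-force search strategy are assumptions, not the MPEG-2 or Intel IPP implementation.

    #include <stdlib.h>
    #include <limits.h>

    /* Sum of absolute differences between a 16x16 block of the current frame
       and a candidate block of the reference frame. */
    static int sad_16x16(const unsigned char *cur, const unsigned char *ref,
                         int stride)
    {
        int sad = 0;
        for (int y = 0; y < 16; y++)
            for (int x = 0; x < 16; x++)
                sad += abs(cur[y * stride + x] - ref[y * stride + x]);
        return sad;
    }

    /* Exhaustive full-pel search in a +/-range window around (0,0).
       The caller must guarantee the window stays inside the reference frame. */
    void full_search(const unsigned char *cur, const unsigned char *ref,
                     int stride, int range, int *best_dx, int *best_dy)
    {
        int best = INT_MAX;
        *best_dx = *best_dy = 0;
        for (int dy = -range; dy <= range; dy++) {
            for (int dx = -range; dx <= range; dx++) {
                int cost = sad_16x16(cur, ref + dy * stride + dx, stride);
                if (cost < best) {
                    best = cost;
                    *best_dx = dx;
                    *best_dy = dy;
                }
            }
        }
    }

Real encoders replace the exhaustive loop with smarter search orders, but the cost function is exactly the kind of primitive (sum of absolute or squared differences) that the Intel IPP motion estimation support discussed later provides.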

DCT

Like JPEG, MPEG is DCT-based. The codec calculates a DCT on each 8 x 8 block of pixel or difference information in each image. The frequency information is easier to sort by visual importance and quantize, and it takes advantage of regions of each frame that are unchanging.

Figure 2 High-Level MPEG-2 Encoder and Decoder Blocks

Quantization

Quantization in MPEG is different for different block types. There are different matrices of coefficient-specific values for intra and non-intra macroblocks, as well as for color and intensity data. There is also a scale applied across all matrices. Both the scale and the quantization matrix can change for each macroblock. For intra blocks, the DC, or zero-frequency, coefficient is quantized by dropping the low 0 to 3 bits; that is, by shifting it right by zero to three bits. The AC coefficients are assigned to quantization steps according to the global scale and the matrix; the quantization is linear. For non-intra blocks, the DC component contains less important information and is more likely to tend toward zero. Therefore, the DC and AC components are quantized in the same way, using the non-intra quantization matrix and scale.
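A heavily simplified sketch of the intra-block step follows. It ignores the exact rounding and mismatch-control rules of the MPEG-2 specification and uses made-up names; it is meant only to show the roles of the DC shift, the quantization matrix, and the global scale.

    /* Quantize one 8x8 block of intra DCT coefficients (simplified sketch).
       coeffs: DCT output in row-major order; qmatrix: intra quantization matrix;
       scale: global quantizer scale; dc_shift: 0..3 bits dropped from the DC term. */
    void quantize_intra_block(short coeffs[64], const unsigned char qmatrix[64],
                              int scale, int dc_shift)
    {
        /* DC coefficient: drop the low dc_shift bits. */
        coeffs[0] = (short)(coeffs[0] >> dc_shift);

        /* AC coefficients: divide by (matrix value * scale); the factor of 16
           reflects the matrix entries being expressed relative to 16. */
        for (int i = 1; i < 64; i++) {
            int divisor = qmatrix[i] * scale;
            coeffs[i] = (short)((coeffs[i] * 16) / (divisor ? divisor : 1));
        }
    }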

Huffman Coding

In order for reduced entropy in the video data to become a reduced data rate in the bit stream, the data must be coded using fewer bits. In MPEG, as with JPEG, that means a Huffman variable-length encoding scheme, in which each piece of data is encoded with a code whose length is inversely related to its frequency. Because of the complexity of MPEG-2, there are dozens of tables of codes for coefficients, block types, and other information. For intra blocks, the DC coefficient is not coded directly. Instead, the difference between it and a predictor is used. This predictor is either the DC value of the last block, if that block is present and intra, or a constant average value otherwise. Two scan matrices are used to order the DCT coefficients: one follows a zig-zag pattern that is close to diagonally symmetric, for blocks that are not interlaced, and the other follows a modified zig-zag for interlaced blocks. These matrices put the coefficients in order of increasing frequency in an attempt to maximize the lengths of runs of data. The encoder codes run-level data for this ordering; each run-level pair represents a run of consecutive zero coefficients followed by a coefficient of a certain level. The more common pairs have codes in a Huffman table. Less common combinations, such as runs of more than 31, are encoded as an escape code followed by a 6-bit run and a 12-bit level.

MPEG-2 in Intel IPP

Intel IPP provides a very efficient sample encoder and decoder for MPEG-2. Due to the number of variants, it is only a sample and not a compliant codec. Each side of the codec includes hundreds of Intel IPP function calls. The bulk of the code in the sample is for bit stream parsing and data manipulation, but the bulk of the time is spent decoding the pixels. For this reason, almost all of the Intel IPP calls are concentrated in the pixel decoding blocks. In particular, the key high-level functions are the member functions of the class MPEG2VideoDecoderBase:

    DecodeSlice_FrameI_420
    DecodeSlice_FramePB_420
    DecodeSlice_FieldPB_420
    DecodeSlice_FrameI_422
    DecodeSlice_FramePB_422
    DecodeSlice_FieldPB_422

These functions decode the structure of the image, then pass the responsibility for decoding individual blocks to a function such as ippiDecodeIntra8x8IDCT_MPEG2_1u8u. Figure 2 shows the key portions of two of these functions.

Status MPEG2VideoDecoderBase::DecodeSlice_FrameI_420( IppVideoContext *video)... DECODE_VLC(macroblock_type, video->bs, vlcmbtype[0]); if (load_dct_type) GET_1BIT(video->bs, dct_type); if (macroblock_type & IPPVC_MB_QUANT) DECODE_QUANTIZER_SCALE(video->bs, video->cur_q_scale); if (PictureHeader.concealment_motion_vectors) if (PictureHeader.picture_structure!= IPPVC_FRAME_PICTURE) SKIP_BITS(video->bs, 1); mv_decode(0, 0, video); SKIP_BITS(video->bs, 1); RECONSTRUCT_INTRA_MB_420(video->bs, dct_type); //DecodeSlice_FrameI_420 #define RECONSTRUCT_INTRA_MB_420(BITSTREAM, DCT_TYPE) \ RECONSTRUCT_INTRA_MB(BITSTREAM, 6, DCT_TYPE) #define RECONSTRUCT_INTRA_MB(BITSTREAM, NUM_BLK, DCT_TYPE) \ \... for (blk = 0; blk < NUM_BLK; blk++) \ sts = ippidecodeintra8x8idct_mpeg2_1u8u(... ); \ \ Status MPEG2VideoDecoderBase::DecodeSlice_FramePB_420( IppVideoContext *video)... if (video->prediction_type == IPPVC_MC_DP) 10

mc_dualprime_frame_420(video); else mc_frame_forward_420(video); if (video->macroblock_motion_backward) mc_frame_backward_add_420(video); else if (video->macroblock_motion_backward) mc_frame_backward_420(video); else RESET_PMV(video->PMV) mc_frame_forward0_420(video); if (macroblock_type & IPPVC_MB_PATTERN) RECONSTRUCT_INTER_MB_420(video->bs, dct_type); return UMC_OK; //DecodeSlice_FramePB_420 void MPEG2VideoDecoderBase::mc_frame_forward0_422( IppVideoContext *video) MC_FORWARD0(16, frame_buffer.y_comp_pitch, frame_buffer.u_comp_pitch); #define MC_FORWARD0(H, PITCH_L, PITCH_C) \... ippicopy16x16_8u_c1r(ref_y_data + offset_l, PITCH_L, \ cur_y_data + offset_l, PITCH_L); \ ippicopy8x##h##_8u_c1r(ref_u_data + offset_c, PITCH_C, \ cur_u_data + offset_c, PITCH_C); \ ippicopy8x##h##_8u_c1r(ref_v_data + offset_c, PITCH_C, \ cur_v_data + offset_c, PITCH_C); #define RECONSTRUCT_INTER_MB_420(BITSTREAM, DCT_TYPE) \ RECONSTRUCT_INTER_MB(BITSTREAM, 6, DCT_TYPE) #define RECONSTRUCT_INTER_MB(BITSTREAM, NUM_BLK, DCT_TYPE) \... for (blk = 0; blk < NUM_BLK; blk++) \... 11

    sts = ippidecodeinter8x8idctadd_mpeg2_1u8u(...);

Figure 2 Structure of MPEG-2 Intra Macroblock Decoding

For decoding, two Intel IPP function groups execute most of the decoding pipeline. Between them they implement a large portion of an MPEG-2 decoder, at least for intra blocks. The first group is ippiReconstructDCTBlock_MPEG2 for non-intra blocks and ippiReconstructDCTBlockIntra_MPEG2 for intra blocks. These functions decode the Huffman data, rearrange it, and dequantize it. The source is the Huffman-encoded bit stream, pointing to the top of a block, and the destination is an 8 x 8 block of consecutive DCT coefficients. The Huffman decoding uses separate tables for AC and DC codes, formatted in the appropriate Intel IPP Spec structure. The scan matrix argument specifies the zig-zag pattern to be used. The functions also take two arguments for the quantization: a matrix and a scale factor. Each element is multiplied by the corresponding element in the quantization matrix, then by the global scale factor. The function ippiReconstructDCTBlockIntra also takes two arguments for processing the DC coefficient: the reference value and the shift. The function adds the reference value, which is often taken from the last block, to the DC coefficient. The DC coefficient is shifted by the shift argument, which should be zero to three bits as indicated above. The second main function group is the inverse DCT. The two most useful DCT functions are ippiDCT8x8InvLSClip_16s8u_C1R for intra blocks and ippiDCT8x8Inv_16s_C1R for non-intra blocks. The versions without level shift and clipping can also be used. The former function inverts the DCT on an 8 x 8 block, then converts the data to Ipp8u with a level shift; the output values are pixels. The latter function inverts the DCT and leaves the result in Ipp16s; the output values are difference values.

The decoder must then add these difference values to the motion-compensated reference block. Figure 3 shows these function groups decoding a 4:2:0 intra macroblock. The input is a bit stream and several pre-calculated tables. The DCT outputs the pixel data directly into an image plane. The four blocks of Y data are arrayed in a 2 x 2 square in that image, and the U and V blocks are placed in analogous locations in the U and V planes. This output can be displayed directly on a display that accepts this format, or the U and V planes can be upsampled to make a YCbCr 4:4:4 image, or the three planes can be converted by other Intel IPP functions to RGB for display.

    ippireconstructdctblockintra_mpeg2_32s(
        &video->bitstream_current_data,
        &video->bitstream_bit_ptr,
        pcontext->vlctables.ipptableb5a,
        pcontext->table_rl,
        scan_1[pcontext->pictureheader.alternate_scan],
        q_scale[pcontext->pictureheader.q_scale_type][pcontext->quantizer_scale],
        video->curr_intra_quantizer_matrix,
        &pcontext->slice.dct_dc_y_past,
        pcontext->curr_intra_dc_multi,
        pcontext->block.idct,
        &dummy);

    ippireconstructdctblockintra_mpeg2_32s( pcontext->block.idct+64, &dummy);

    // Repeat two more times for other Y blocks
    ippireconstructdctblockintra_mpeg2_32s( )

    VIDEO_FRAME_BUFFER* frame = &video->frame_buffer.frame_p_c_n
        [video->frame_buffer.curr_index];

    // Inverse DCT and place in 16x16 block of image
    ippidct8x8invlsclip_16s8u_c1r(
        pcontext->block.idct,
        frame->y_comp_data + pcontext->offset_l,
        pitch_y, 0, 0, 255);
    ippidct8x8invlsclip_16s8u_c1r(
        pcontext->block.idct,
        frame->y_comp_data + pcontext->offset_l + 8,
        pitch_y, 0, 0, 255);
    ippidct8x8invlsclip_16s8u_c1r(
        pcontext->block.idct,
        frame->y_comp_data + pcontext->offset_l + 8*pitch_Y,
        pitch_y, 0, 0, 255);
    ippidct8x8invlsclip_16s8u_c1r(
        pcontext->block.idct,
        frame->y_comp_data + pcontext->offset_l + 8*pitch_Y + 8,
        pitch_y, 0, 0, 255);

    ippireconstructdctblockintra_mpeg2_32s(
        &video->bitstream_current_data,
        &video->bitstream_bit_ptr,
        pcontext->vlctables.ipptableb5b,
        pcontext->table_rl,
        scan_1[pcontext->pictureheader.alternate_scan],
        q_scale[pcontext->pictureheader.q_scale_type][pcontext->quantizer_scale],
        video->curr_chroma_intra_quantizer_matrix,
        &pcontext->slice.dct_dc_cb_past,
        pcontext->curr_intra_dc_multi,
        pcontext->block.idct,
        &i1);
    ippireconstructdctblockintra_mpeg2_32s(
        &pcontext->slice.dct_dc_cr_past,
        pcontext->curr_intra_dc_multi,
        pcontext->block.idct + 64, &i2);

    ippidct8x8invlsclip_16s8u_c1r(
        pcontext->block.idct,
        frame->u_comp_data + pcontext->offset_c,
        pitch_uv, 0, 0, 255);
    ippidct8x8invlsclip_16s8u_c1r(
        pcontext->block.idct + 64,
        frame->v_comp_data + pcontext->offset_c,
        pitch_uv, 0, 0, 255);

Figure 3 Decoding an MPEG-2 Intra Macroblock

The dummy parameter to the first ippiReconstructDCTBlockIntra call is not used here but can be used for optimization. If the value returned is 1, then only the DC coefficient is nonzero and the inverse DCT can be skipped. If it is less than 10, then all the nonzero coefficients are in the first 4 x 4 block, and a 4 x 4 inverse DCT can be used. The ippiDCT8x8Inv_16s8u_C1R functions could be called instead of ippiDCT8x8InvLSClip_16s8u_C1R, because the data is clipped to the 0-255 range by default. In the non-intra case, the pointer to the quantization matrix can be 0; in that case, the default matrices are used. Figure 4 shows another approach to decoding, from the MPEG-2 sample for Intel IPP 5.2. Instead of using the ippiReconstructDCTBlock function for decoding, it implements a pseudo-IPP function called ippiDecodeIntra8x8IDCT_MPEG2_1u8u. This function does almost the entire decoding pipeline, from variable-length decoding through motion compensation. Within this function, much of the decoding is done in C++, largely using macros and state logic. The Huffman decoding in this sample is done in C++ using macros. The quantization is done in C++, on each sample as it is decoded. The motion compensation is done along with the DCT in one of the DCT macros. This function uses several DCT functions. Most of the DCTs are done by two useful functions, ippiDCT8x8Inv_16s8u_C1R and ippiDCT8x8Inv_16s_C1R, for intra blocks and inter blocks, respectively. The former function converts the output to Ipp8u, because for intra blocks those values are pixels. The latter function leaves the result in Ipp16s, because the output values are difference values to be added to the motion-compensated reference block. The sample also uses other DCT functions, such as the specialized function ippiDCT8x8Inv_AANTransposed, which assumes that the samples are transposed and in zig-zag order and accommodates implicit zero coeffi-

cients at the end. For blocks that are mostly zeros, the decoder also uses the function ippidct8x8inv_4x4_16s_c1. MP2_FUNC(IppStatus, ippidecodeinter8x8idctadd_mpeg2_1u8u, ( Ipp8u** BitStream_curr_ptr, Ipp32s* BitStream_bit_offset, IppiDecodeInterSpec_MPEG2* pquantspec, Ipp32s quant, Ipp8u* psrcdst, Ipp32s srcdststep)) // VLC decode & dequantize for one block for (;;) if ((code & 0xc0000000) == 0x80000000) break; else if (code >= 0x08000000) tbl = MPEG2_VLC_TAB1[UHBITS(code - 0x08000000, 8)]; common: i++; UNPACK_VLC1(tbl, run, val, len) i += run; i &= 63; // just in case j = scanmatrix[i]; q = pquantmatrix[j]; val = val * quant; val = (val * q) >> 5; sign = SHBITS(code << len, 1); APPLY_SIGN(val, sign); SKIP_BITS(BS, (len+1)); pdstblock[j] = val; mask ^= val; SHOW_HI9BITS(BS, code); continue; else if (code >= 0x04000000)...... pdstblock[63] ^= mask & 1; 16

    SKIP_BITS(BS, 2);
    COPY_BITSTREAM(*BitStream, BS)
    IDCT_INTER(pDstBlock, i, idct, psrcdst, srcdststep);
    return ippstsok;

    #define FUNC_DCT8x8 ippidct8x8inv_16s_c1
    #define FUNC_DCT4x4 ippidct8x8inv_4x4_16s_c1
    #define FUNC_DCT2x2 ippidct8x8inv_2x2_16s_c1
    #define FUNC_DCT8x8Intra ippidct8x8inv_16s8u_c1r
    #define FUNC_ADD8x8 ippiadd8x8_16s8u_c1irs

    #define IDCT_INTER(SRC, NUM, BUFF, DST, STEP) \
      if (NUM < 10) \
        if (!NUM) \
          IDCTAdd_1x1to8x8(SRC[0], DST, STEP); \
        else \
          IDCT_INTER_1x4(SRC, NUM, DST, STEP) \
          /*if (NUM < 2) \
            FUNC_DCT2x2(SRC, BUFF); \
            FUNC_ADD8x8(BUFF, 16, DST, STEP); \
          else*/ \
            FUNC_DCT4x4(SRC, BUFF); \
            FUNC_ADD8x8(BUFF, 16, DST, STEP); \
      \
      else \
        FUNC_DCT8x8(SRC, BUFF); \
        FUNC_ADD8x8(BUFF, 16, DST, STEP); \

Figure 4 Alternate MPEG-2 Decoding on an Inter Macroblock

The Intel IPP DCT functions also support an alternative layout for YUV data: a hybrid layout with two planes, Y and UV, in which the UV plane consists of interleaved U and V data. In this case, there is one 16 x 8 block of UV data per macroblock. The Intel IPP functions ippiDCT8x8Inv_AANTransposed_16s_P2C2R for inter frames and ippiDCT8x8Inv_AANTransposed_16s8u_P2C2R for intra frames support this layout, and the ippiMC16x8UV_8u_C1 and ippiMC16x8BUV_8u_C1 functions support motion compensation on it.
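To make the two-plane layout concrete, here is a small addressing sketch for the arrangement just described (commonly called NV12). The type and helper names are hypothetical, not Intel IPP definitions.

    /* NV12-style two-plane layout for 4:2:0: a full-resolution Y plane plus a
       half-height plane of interleaved U and V samples (one UV pair per 2x2
       luma block). Illustrative addressing helpers only. */
    typedef struct {
        unsigned char *y;    /* width x height luma samples           */
        unsigned char *uv;   /* width x (height/2) interleaved chroma */
        int width;
        int height;
    } TwoPlaneFrame;

    /* Top-left of the 16x16 luma block for macroblock (mbx, mby). */
    static unsigned char *mb_luma(TwoPlaneFrame *f, int mbx, int mby)
    {
        return f->y + (mby * 16) * f->width + mbx * 16;
    }

    /* Top-left of the corresponding 16x8 interleaved UV block. */
    static unsigned char *mb_chroma(TwoPlaneFrame *f, int mbx, int mby)
    {
        return f->uv + (mby * 8) * f->width + mbx * 16;
    }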

On the encoding side, the functions are mostly analogous to the decoding functions listed above. For intra blocks, the forward DCT function ippiDCT8x8Fwd_8u16s_C1R converts a block of Ipp8u pixels into Ipp16s DCT coefficients. Then the function ippiQuantIntra_MPEG2 performs quantization, and the function ippiPutIntraBlock calculates the run-level pairs and Huffman-encodes them. The parameters for these last two functions are very similar to those of their decoding counterparts. For inter blocks, the function ippiDCT8x8Fwd_16s_C1R converts the difference information into DCT coefficients, the function ippiQuant_MPEG2 quantizes, and the function ippiPutNonIntraBlock calculates and encodes the run-level pairs.

Motion Estimation and Compensation

Motion estimation by the encoder is very computationally intensive, since it generally requires repeated evaluation of the effectiveness of candidate motion compensation vectors. However the possible motion vectors are chosen, using a fast evaluation function speeds up the algorithm. The Intel IPP functions ippiSAD16x16 and ippiSqrDiff16x16 compare blocks from one frame against motion-compensated blocks in a reference frame: ippiSAD16x16 calculates the sum of absolute differences between the pixels, while ippiSqrDiff16x16 calculates the sum of squared differences. The Intel IPP sample uses the former. Once the encoder has finished searching the space of possible motion vectors, it can use the many ippiGetDiff functions to find the difference between the current frame and the reference frame after motion compensation. Both the encoder and decoder need a motion compensation algorithm. Intel IPP-based algorithms can use ippiMC or ippiAdd functions to combine the reference frame with the decoded difference information. Figure 5

shows such an algorithm for a macroblock from a 4:2:0 B-frame.

    // Determine whether shift is half or full pel
    // in horizontal and vertical directions
    // Motion vectors are in half-pels in bitstream
    // The bit code generated is:
    // FF = 0000b; FH = 0100b; HF = 1000b; HH = 1100b
    flag1 = pcontext->macroblock.prediction_type |
        ((pcontext->macroblock.vector[0] & 1) << 3) |
        ((pcontext->macroblock.vector[1] & 1) << 2);
    flag2 = pcontext->macroblock.prediction_type |
        ((pcontext->macroblock.vector[0] & 2) << 2) |
        ((pcontext->macroblock.vector[1] & 2) << 1);
    flag3 = pcontext->macroblock.prediction_type |
        ((pcontext->macroblock.vector[2] & 1) << 3) |
        ((pcontext->macroblock.vector[3] & 1) << 2);
    flag4 = pcontext->macroblock.prediction_type |
        ((pcontext->macroblock.vector[2] & 2) << 2) |
        ((pcontext->macroblock.vector[3] & 2) << 1);

    // Convert motion vectors from half-pels to full-pel;
    // also convert for chroma subsampling
    // down, previous frame
    vector_luma[1] = pcontext->macroblock.vector[1] >> 1;
    vector_chroma[1] = pcontext->macroblock.vector[1] >> 2;
    // right, previous frame
    vector_luma[0] = pcontext->macroblock.vector[0] >> 1;
    vector_chroma[0] = pcontext->macroblock.vector[0] >> 2;
    // down, subsequent frame
    vector_luma[3] = pcontext->macroblock.vector[3] >> 1;
    vector_chroma[3] = pcontext->macroblock.vector[3] >> 2;
    // right, subsequent frame
    vector_luma[2] = pcontext->macroblock.vector[2] >> 1;
    vector_chroma[2] = pcontext->macroblock.vector[2] >> 2;

    offs1 = (pcontext->macroblock.motion_vertical_field_select[0] +
        vector_luma[1] + pcontext->row_l) * pitch_y +
        vector_luma[0] + pcontext->col_l,
    offs2 = (pcontext->macroblock.motion_vertical_field_select[1] +
        vector_luma[3] + pcontext->row_l) * pitch_y +
        vector_luma[2] + pcontext->col_l,

    i = ippimc16x16b_8u_c1(
        ref_y_data1 + offs1, ptc_y, flag1,
        ref_y_data2 + offs2, ptc_y, flag3,
        pcontext->block.idct, 32,
        frame->y_comp_data + pcontext->offset_l, ptc_y, 0);
    assert(i == ippstsok);

    offs1 = (pcontext->macroblock.motion_vertical_field_select[0] +
        vector_chroma[1] + pcontext->row_c) * pitch_uv +
        vector_chroma[0] + pcontext->col_c;
    offs2 = (pcontext->macroblock.motion_vertical_field_select[1] +
        vector_chroma[3] + pcontext->row_c) * pitch_uv +
        vector_chroma[2] + pcontext->col_c;

    i = ippimc8x8b_8u_c1(
        ref_u_data1 + offs1, ptc_uv, flag2,
        ref_u_data2 + offs2, ptc_uv, flag4,
        pcontext->block.idct + 256, 16,
        frame->u_comp_data + pcontext->offset_c, ptc_uv, 0);
    assert(i == ippstsok);

    i = ippimc8x8b_8u_c1(
        ref_v_data1 + offs1, ptc_uv, flag2,
        ref_v_data2 + offs2, ptc_uv, flag4,
        pcontext->block.idct + 320, 16,
        frame->v_comp_data + pcontext->offset_c, ptc_uv, 0);
    assert(i == ippstsok);

Figure 5 MPEG-2 Bidirectional Motion Compensation

The first step is to convert the motion vectors from half-pel accuracy to full-pel accuracy, because the half-pel information is passed into the ippiMC functions as a flag. The code drops the least-significant bit of each motion vector and uses it to generate this flag. The starting point of each reference

block is then offset vertically and horizontally by the amount of the motion vector. Because this code handles bidirectional prediction, it repeats all these steps for two separate motion vectors and two separate reference frames. This is the last decoding step, so the code places the result directly in the YCbCr output frame.

Color Conversion

The standard Intel IPP color conversion functions include conversions to and from YCbCr 4:2:2, 4:2:0, and 4:4:4. Because they are in the general color conversion set, these functions are called RGBToYUV422 / YUV422ToRGB, RGBToYUV420 / YUV420ToRGB, and RGBToYUV / YUVToRGB. These functions support interleaved and planar YCbCr data. Figure 6 shows a conversion of decoded MPEG-2 pixels into RGB for display.

    src[0] = frame->y_comp_data +
        pcontext->video[0].frame_buffer.video_memory_offset;
    src[1] = frame->v_comp_data +
        pcontext->video[0].frame_buffer.video_memory_offset/4;
    src[2] = frame->u_comp_data +
        pcontext->video[0].frame_buffer.video_memory_offset/4;
    srcstep[0] = frame->y_comp_pitch;
    srcstep[1] = pitch_uv;
    srcstep[2] = pitch_uv;
    ippiyuv420torgb_8u_p3ac4r(src, srcstep,
        video_memory + pcontext->video[0].frame_buffer.video_memory_offset/4,
        roi.width<<2, roi);

Figure 6 Converting YCbCr 4:2:0 to RGB for Display
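For reference, the per-pixel arithmetic behind such a conversion can be written out as below, using the common full-range BT.601 coefficients. This is an illustration only; the exact coefficients and range handling used by a particular Intel IPP function variant may differ.

    /* Convert one full-range BT.601 YCbCr sample to RGB (illustrative only). */
    static unsigned char clamp255(double v)
    {
        return (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v + 0.5);
    }

    void ycbcr_to_rgb(unsigned char y, unsigned char cb, unsigned char cr,
                      unsigned char *r, unsigned char *g, unsigned char *b)
    {
        double Y = y, Cb = cb - 128.0, Cr = cr - 128.0;
        *r = clamp255(Y + 1.402 * Cr);
        *g = clamp255(Y - 0.344136 * Cb - 0.714136 * Cr);
        *b = clamp255(Y + 1.772 * Cb);
    }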

H.264

The two series of video codec nomenclature, H.26x and MPEG-x, overlap. MPEG-2 is named H.262 in the H.26x scheme. Likewise, another popular codec, H.264, is a subset of MPEG-4 also known as MPEG-4 Advanced Video Coding (AVC). Its intent, like that of all of MPEG-4, was to produce video compression of acceptable quality at a very low bit rate, around half that of its predecessors MPEG-2 and H.263. This section describes the H.264 components and how each of those components is implemented in Intel IPP.

H.264 Components

Like its predecessors in the H.26x line, H.264 has two encoding modes for individual video frames, intra and inter. In the former, a frame of video is encoded as a stand-alone image without reference to other images in the sequence. In the latter, the previous and possibly future frames are used to predict the values. Figure 7 shows the high-level blocks involved in intra-frame encoding and decoding in H.264, and Figure 8 shows the encoding and decoding process for inter frames. The remainder of this section explains each of these blocks, in the order in which the encoder would execute them.

Figure 7 Intra-Mode Encoding and Decoding in H.264

Figure 8 Inter Mode Encoding and Decoding in H.264

Motion Estimation and Compensation

Blocks in H.264, whether in inter or intra frames, can be expressed relative to previous and subsequent blocks or frames. In inter frames, this is called motion estimation and is relative to blocks in other frames. This is the source of considerable compression. As with other video compression techniques, it exploits the fact that there is considerably less entropy in the difference between similar blocks than in the absolute values of the blocks.

This is particularly true if the difference can be taken between a block and a constructed block at an offset from that block in another frame. H.264 has very flexible support for motion estimation. The estimation can choose from 32 other frames as reference images, and is allowed to refer to blocks that have to be constructed by interpolation. The encoder is responsible for determining a reference image, block, and motion vector. This block is generally chosen using some search among the possibilities, starting with the most likely options. The encoder then calculates and encodes the difference between previously encoded blocks and the new data. On the decoding end, after decoding the reference blocks, the code adds the reference data and the decoded difference data. The blocks and frames are likely to be decoded in non-temporal order, since frames can be encoded relative to forward-looking blocks and frames. H.264 supports sub-pixel resolution for motion vectors, meaning that the reference block is actually calculated by interpolating inside a block of real pixels. The motion vectors for luma blocks are expressed at quarter-pixel resolution, and for chroma blocks the accuracy can be eighth-pixel. This sub-pixel resolution increases the algorithmic and computational complexity significantly. The decoding portion, which requires performing sub-pixel motion compensation only once per block, takes about 10 to 20 percent of the decoding pipeline; the bulk of this time is spent interpolating values between pixels to generate the sub-pixel-offset reference blocks. The cost of performing sub-pixel estimation on the encoding side varies with the encoding algorithm, but it may require performing motion compensation more than once. The interpolation algorithm used to generate offset reference blocks is defined differently for luma and chroma blocks. For luma, interpolation is performed in two steps: half-pixel and then quarter-pixel interpolation. The half-pixel values are created by filtering horizontally and vertically with this kernel: [1, -5, 20, 20, -5, 1] / 32. Quarter-pixel interpolation is then performed by linearly averaging adjacent half-pixel values. Motion compensation for chroma blocks uses bilinear interpolation with quarter-pixel or eighth-pixel accuracy, depending on the chroma format; each sub-pixel position is a linear combination of the neighboring pixels. Figure 9 illustrates which pixels are used for both interpolation approaches. After interpolating to generate the reference block, the algorithm adds that reference block to the decoded difference information to get the reconstructed block. The encoder executes this step to get reconstructed reference frames, and the decoder executes this step to get the output frames.
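To make the luma half-pel step concrete, the six-tap kernel above can be applied horizontally as in the sketch below. It is illustrative only: boundary handling is omitted and the names are made up, though the rounding follows the usual (sum + 16) >> 5 form.

    /* Horizontal half-pel interpolation with the 6-tap kernel
       (1, -5, 20, 20, -5, 1) / 32, evaluated at position x + 1/2.
       Boundary handling is omitted; illustrative only. */
    static unsigned char clip255(int v)
    {
        return (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
    }

    unsigned char half_pel_h(const unsigned char *row, int x)
    {
        int sum = row[x - 2] - 5 * row[x - 1] + 20 * row[x] +
                  20 * row[x + 1] - 5 * row[x + 2] + row[x + 3];
        return clip255((sum + 16) >> 5);   /* divide by 32 with rounding */
    }

The vertical half-pel samples are produced the same way down a column, and the quarter-pel positions are then simple averages of neighboring integer and half-pel samples.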

Figure 9 Sub-pixel Interpolation for Motion Compensation in H.264

Intra Prediction

Intra frames by their nature don't depend on earlier or later frames for reconstruction. However, in H.264 the encoder can use earlier blocks from within the same frame as references for new blocks. This process, intra prediction, can give additional compression for intra macroblocks, and can be particularly effective if a sufficiently appropriate reference block can be found. The reference blocks are not used in the way that inter prediction blocks are, by taking the pixel-by-pixel difference of actual blocks in adjacent frames.

Instead, a prediction of the current block is calculated as an average of some of the pixels bordering it. Which pixels are chosen, and how they are used to calculate the block, depends on the intra prediction mode. Figure 10 shows the directions in which pixels may be used, along with the mode numbers as defined in the H.264 specification [JVT-G050]. This can also be one of the most computationally intensive parts of the encoding process: for the encoder to exhaustively search through all options, it would have to compare each 16x16 luma or 8x8 chroma block against 4 candidate predictions, and each 4x4 or 8x8 luma block against 9 candidate predictions.

Figure 10 Mode Numbers for Intra Prediction in H.264

Because the encoder can consider a variety of block sizes, a scheme that optimizes the trade-off between the number of bits necessary to represent the video and the fidelity of the result is desirable.
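Returning to the prediction modes themselves, the DC mode (mode 2 for 4x4 luma blocks in Figure 10) simply averages the available neighbors above and to the left of the block. The sketch below is illustrative only; the names and interface are assumptions, and DC is just one of the modes.

    /* DC intra prediction for a 4x4 block (illustrative sketch).
       top and left point to the 4 reconstructed neighbor pixels above and to
       the left of the block; have_top / have_left say whether they exist. */
    void predict_dc_4x4(unsigned char pred[4][4],
                        const unsigned char *top, int have_top,
                        const unsigned char *left, int have_left)
    {
        int sum = 0, count = 0, dc;

        if (have_top)  { for (int i = 0; i < 4; i++) sum += top[i];  count += 4; }
        if (have_left) { for (int i = 0; i < 4; i++) sum += left[i]; count += 4; }

        /* Average with rounding; 128 is used when no neighbors are available. */
        dc = count ? (sum + count / 2) / count : 128;

        for (int y = 0; y < 4; y++)
            for (int x = 0; x < 4; x++)
                pred[y][x] = (unsigned char)dc;
    }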

Transformation

Instead of the DCT, the H.264 algorithm uses an integer transform as its primary transform to translate the difference data between the spatial and frequency domains. The transform is an approximation of the DCT that is both exactly invertible in integer arithmetic and computationally simpler. The core transform, illustrated in Figure 11, can be implemented using only shifts and adds. This 4x4 transform is only one flavor of the H.264 transform. H.264 defines transforms on 2x2 and 4x4 blocks in the baseline profile, and additional profiles support transforms on larger block sizes, rectangular or square, with dimensions that are also powers of two. The algorithm applies separate transforms to the DC, or first, coefficients of the chroma and luma blocks. In the baseline profile, H.264 uses a 2x2 transform for the chroma DC coefficients, a 4x4 transform for the luma DC coefficients, and the main 4x4 transform for all other coefficients.
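To show what "only shifts and adds" means in practice, here is a sketch of the forward 4x4 core transform Y = Cf X CfT, where Cf has rows (1,1,1,1), (2,1,-1,-2), (1,-1,-1,1), and (1,-2,2,-1). The post-scaling that H.264 folds into quantization is omitted, the names are made up, and this is not the Intel IPP implementation.

    /* One 4-point butterfly of the H.264 core transform (sketch). */
    static void core4(const int in[4], int out[4])
    {
        int s0 = in[0] + in[3], s1 = in[1] + in[2];   /* sums        */
        int d0 = in[0] - in[3], d1 = in[1] - in[2];   /* differences */
        out[0] = s0 + s1;
        out[1] = (d0 << 1) + d1;   /* 2*d0 + d1 */
        out[2] = s0 - s1;
        out[3] = d0 - (d1 << 1);   /* d0 - 2*d1 */
    }

    /* Forward 4x4 core transform: rows first, then columns. */
    void h264_core_transform_4x4(const int x[4][4], int y[4][4])
    {
        int tmp[4][4], col_in[4], col_out[4];

        for (int r = 0; r < 4; r++)
            core4(x[r], tmp[r]);

        for (int c = 0; c < 4; c++) {
            for (int r = 0; r < 4; r++) col_in[r] = tmp[r][c];
            core4(col_in, col_out);
            for (int r = 0; r < 4; r++) y[r][c] = col_out[r];
        }
    }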

Figure 11 Matrices for Transformation in H.264

Quantization

The quantization stage reduces the amount of information by dividing each coefficient by a particular number, reducing the number of possible values that coefficient can take. Because this makes the values fall into a narrower range, it allows entropy coding to express the values more compactly. Quantization in H.264 is arithmetically expressed as a two-stage operation. The first stage is multiplying each coefficient in the 4x4 block by a fixed coefficient-specific value; this stage allows the coefficients to be scaled unequally according to their importance or information content. The second stage is dividing by an adjustable quantization parameter (QP) value; this stage provides a single knob for adjusting the quality and resultant bit rate of the encoding. The two operations can be combined into a single multiplication and a single shift. The QP is expressed as an integer from 0 to 51. This integer is converted to a quantization step size (QStep) nonlinearly: every 6 steps of QP doubles the step size, and between each pair of power-of-two step sizes N and 2N there are 5 intermediate steps: 1.125N, 1.25N, 1.375N, 1.625N, 1.75N.
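That nonlinear mapping can be sketched as follows (illustrative only; real implementations keep everything in scaled integer arithmetic rather than floating point):

    /* Map an H.264 quantization parameter (0..51) to its quantization step size.
       QStep doubles every 6 QP steps; the table holds the base steps for QP 0..5. */
    double qstep_from_qp(int qp)
    {
        static const double base[6] = { 0.625, 0.6875, 0.8125, 0.875, 1.0, 1.125 };
        return base[qp % 6] * (double)(1 << (qp / 6));
    }

For example, qstep_from_qp(4) returns 1.0, and qstep_from_qp(28), four octaves higher, returns 16.0.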

Reordering

When encoding the coefficients of each macroblock using entropy coding, the codec processes the blocks in a particular order. The order helps increase the number of consecutive zeros. It's natural to handle this ordering when writing the output of the transform and quantization stage.

Entropy Coding

H.264 defines two entropy coding modes, Context Adaptive Variable Length Coding (CAVLC) and Context Adaptive Binary Arithmetic Coding (CABAC). CAVLC can be considered the baseline VLC. It is a conventional variable-length coding algorithm, with a table of uniquely prefixed, variable-bit-length codes, but for additional efficiency the standard specifies additional tables. The selection among these tables, and the length of the fixed-length coefficient value suffix, is based on the local statistics of the current stream, termed the context. CAVLC employs 12 additional code tables: 6 for characterizing the content of the transform block as a whole, 4 for indicating the number of coefficients, 1 for indicating the overall magnitude of a quantized coefficient value, and 1 for representing consecutive runs of zero-valued quantized coefficients. Given the execution efficiency of VLC tables, combined with this limited adaptive coding to boost coding efficiency, CAVLC provides a nice trade-off between speed of execution and compression performance.

The CABAC mode has been shown to increase compression efficiency by roughly 10 percent relative to the CAVLC mode, although CABAC is significantly more computationally complex. In a first step, a suitable model is chosen according to a set of past observations of relevant syntax elements; this is called context modeling. In a second step, if a given symbol is non-binary valued, it is mapped onto a sequence of binary decisions, so-called bins. This binarization is done according to a specified binarization scheme, using a tree structure similar to a VLC code. Then each bin is encoded with an adaptive binary arithmetic coding engine using probability estimates that depend on the specific context. This pipeline is shown in Figure 12.

Figure 12 Arithmetic Coding Pipeline in H.264

Deblocking Filter

The last stage before reconstruction is a deblocking filter. This filter is intended to smooth the visual discontinuities between transform blocks, and as such it is applied only to the pixels nearest those boundaries, at most four on either side of a block boundary. The filter consists of separable horizontal and vertical filters. Figure 13 shows the boundaries in a macroblock and the pixels of interest for a horizontal filter across a vertical boundary. H.264 specifies that the filter be applied to frames after de-quantization and before the image is used as a reference for motion compensation; for intra frames it should be applied after intra prediction. This filtering is a very computationally expensive portion of the decoder, taking 15 to 30 percent of the CPU for low-bitrate streams, which require the most filtering. The deblocking filter is an adaptive filter whose strength is automatically adjusted according to the boundary strength and the differences between pixel values at the border. The boundary strength is higher for intra blocks than for inter blocks, higher when the blocks in question have different reference images, and higher across a macroblock boundary. The pixel value differences must be less than a threshold that decreases with increasing quality. When the quantization parameter is small, increasing the fidelity of the compressed data, any significant difference is assumed to be an image feature rather than an error, so the strength of the filter is reduced. When the quantization step size is very small, the filter is shut off entirely. The encoder can also disable the filter explicitly or adjust its strength at the slice level.
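The per-edge decision the paragraph describes can be sketched roughly as follows. The helper name is hypothetical, and in the real standard the boundary strength bS and the thresholds alpha and beta come from tables indexed by QP and the block properties; this shows only the shape of the test.

    #include <stdlib.h>

    /* Rough sketch of the H.264 deblocking decision for one luma edge position.
       p1, p0 lie on one side of the block edge and q0, q1 on the other.
       alpha and beta are QP-dependent thresholds; bs is the boundary strength
       (0 = no filtering, 4 = strongest, used for intra macroblock edges). */
    int should_filter_edge(int bs, int p1, int p0, int q0, int q1,
                           int alpha, int beta)
    {
        if (bs == 0)
            return 0;                      /* nothing to smooth across this edge */
        return abs(p0 - q0) < alpha &&     /* step across the edge is small...    */
               abs(p1 - p0) < beta  &&     /* ...and both sides are locally flat, */
               abs(q1 - q0) < beta;        /* so the step is likely an artifact   */
    }

When the quantization parameter is small, alpha and beta shrink, so fewer edges pass this test and the filtering weakens or shuts off, exactly as described above.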

Figure 13 Horizontal Deblocking Filter in H.264

H.264 in Intel IPP

This section explains how Intel IPP primitives implement many of the H.264 component blocks. As in the previous section, each block is presented in the order in which the encoder would execute it.

Motion Compensation (Interpolation of Residual and Predicted Blocks)

The most computationally intensive part of motion compensation in H.264 is generating the reference blocks. Since H.264 permits sub-pixel offsets from the actual data, the implementation must use a particular interpolation filter to calculate the blocks. Intel IPP defines a set of interpolation functions to handle interpolation at different locations in the image. The functions are the following:

- ippiInterpolateLuma_H264_[8u|16u]_C1R
- ippiInterpolateLumaTop_H264_[8u|16u]_C1R
- ippiInterpolateLumaBottom_H264_[8u|16u]_C1R

- ippiInterpolateLumaBlock_H264_[8u|16u]_C1R
- ippiInterpolateChroma_H264_[8u|16u]_C1R
- ippiInterpolateChromaTop_H264_[8u|16u]_C1R
- ippiInterpolateChromaBottom_H264_[8u|16u]_C1R
- ippiInterpolateChromaBlock_H264_[8u|16u]_C1R

These functions are divided into those handling the luma, or brightness, plane and those handling the chroma, or color, planes. They are also divided between functions that handle blocks for which all the data is present and those that handle blocks on a frame boundary, outside of which there is no data. The functions that handle all blocks not on the edge of a frame, ippiInterpolateLuma_H264 and ippiInterpolateChroma_H264, do not consider the integral portion of the motion vectors; they only perform the interpolation. The input pointer for the reference data should already point to the integral-offset reference block. The functions then calculate the interpolated reference block, using the 2 or 3 bits specifying the fractional motion vector at quarter- or eighth-pixel resolution. Of the other functions, those with Top or Bottom in the function name interpolate data at the edge of the image. The parameters tell them how far outside the image the reference block is. The functions generate the data outside the image that doesn't exist by replicating the border row, then perform the interpolation as usual. The remaining function type, with Block in the function name, performs the interpolation on a reference block entirely within the image, but also takes the entire motion vector so that it can take care of the offset calculation. Figure 14 shows these functions in action. The function SelectPredictionMethod determines whether the algorithm needs to employ the border versions of the functions. The rest of the code is from another, unspecified function.

The bulk of the function prepares all of the arguments to the interpolation functions. The variables mvx and mvy hold the complete motion vectors. This code sets the variables xh and yh to the low bits of the motion vector, the fractional portion. Then, after clipping the motion vectors to lie within a maximum range, the code sets the variables xint and yint to the integral portion of the motion vector. Finally, it calculates the pointer to the offset reference block and calls the appropriate Intel IPP function. Note that the edge replication seems only to be an issue at the top and bottom and not the sides. This is because the replication at the top and bottom boundaries takes place at the macroblock level, but the left and right boundaries are replicated at the frame level. inline Ipp8s SelectPredictionMethod(Ipp32s MBYoffset,Ipp32s mvy, Ipp32s sbheight,ipp32s height) Ipp32s padded_y = (mvy&3)>0?3:0; mvy>>=2; if (mvy-padded_y+mbyoffset<0) return PREDICTION_FROM_TOP; if (mvy+padded_y+mbyoffset+sbheight>=height) return PREDICTION_FROM_BOTTOM; return ALLOK;... // set pointers for this subblock pmv_sb = pmv + (xpos>>2) + (ypos>>2)*4; mvx = pmv_sb->mvx; mvy = pmv_sb->mvy;... 36

xh = mvx & (INTERP_FACTOR-1); yh = mvy & (INTERP_FACTOR-1); Ipp8u pred_method = 0; if (ABS(mvy) < (13 << INTERP_SHIFT)) if (is_need_check_expand) pred_method = SelectPredictionMethod( mbyoffset+ypos, mvy, roi.height, height); else pred_method = SelectPredictionMethod( mbyoffset+ypos, mvy, roi.height, height); mvy = MIN(mvy, (height - ((Ipp32s)mbYOffset + ypos + roi.height - 1 - D_MV_CLIP_LIMIT))*INTERP_FACTOR); mvy = MAX(mvy, -((Ipp32s)(mbYOffset + ypos + D_MV_CLIP_LIMIT)*INTERP_FACTOR)); if (ABS(mvx) > (D_MV_CLIP_LIMIT << INTERP_SHIFT)) mvx = MIN(mvx, (width - ((Ipp32s)mbXOffset + xpos + roi.width - 1 - D_MV_CLIP_LIMIT))*INTERP_FACTOR); mvx = MAX(mvx, -((Ipp32s)(mbXOffset + xpos + D_MV_CLIP_LIMIT)*INTERP_FACTOR)); mvyc = mvy; xint = mvx >> INTERP_SHIFT; yint = mvy >> INTERP_SHIFT; pref = prefy_sb + xint + yint * pitch; switch(pred_method) 37

    case ALLOK:
        ippiinterpolateluma_h264_8u_c1r(pref, pitch,
            ptmpy, ntmppitch, xh, yh, roi);
        break;
    case PREDICTION_FROM_TOP:
        ippiinterpolatelumatop_h264_8u_c1r(pref, pitch,
            ptmpy, ntmppitch, xh, yh,
            -((Ipp32s)mbYOffset+ypos+yint), roi);
        break;
    case PREDICTION_FROM_BOTTOM:
        ippiinterpolatelumabottom_h264_8u_c1r(pref, pitch,
            ptmpy, ntmppitch, xh, yh,
            ((Ipp32s)mbYOffset+ypos+yint+roi.height) - height, roi);
        break;
    default:
        vm_assert(0);
        break;

Figure 14 Framework for Interpolation in H.264

Intra Prediction

Intel IPP has three functions for prediction as applied to intra blocks: ippiPredictIntra_4x4_H264_8u_C1IR for 4x4 blocks, ippiPredictIntra_16x16_H264_8u_C1IR for 16x16 blocks, and ippiPredictIntraChroma8x8_H264_8u_C1IR for chroma blocks. These functions take as arguments a pointer to the location of the block start and the buffer's step value, the prediction mode as in Figure 10, and a set of flags indicating which data blocks above or to the left are available. Figure 15 lists code using these functions to perform prediction. There are three paths in this code: 16x16, 8x8, and 4x4. The 16x16 blocks call ippiPredictIntra_16x16 immediately; the 8x8 blocks call AddResidualAndPredict_8x8 and the 4x4 blocks call AddResidualAndPredict. The smaller blocks are organized into separate functions because of how relatively complicated they are.

The smaller blocks involve many types of boundaries with other blocks, and a loop within the macroblock. Of these functions, only the 4x4 version is shown; the 8x8 version is nearly identical. These prediction functions use a particular algorithm from the standard to calculate a reference block from previous blocks. The mode determines the direction of the data of interest, and the algorithm then calculates a prediction for each pixel based on an average of one or more available pixels in that direction. This code takes the mode, already calculated elsewhere, as an argument, so the bulk of the code is dedicated to determining which outside reference blocks are available and calculating the block locations in memory. The border blocks are available if the predicted block is not on that border with another macroblock, or if the edge_type variable does not indicate that this macroblock is on a global (frame) edge. After calculating the predicted block, each of the two AddResidualAndPredict functions adds the residual using some flavor of motion compensation function starting with ippiMC, using full-pel resolution.

    void AddResidualAndPredict(Ipp16s ** luma_ac, Ipp8u * psrcdstplane,
        Ipp32u step, Ipp32u cbp4x4,
        const IppIntra4x4PredMode_H264 *pmbintratypes,
        Ipp32s edge_type, bool is_half, Ipp32s bit_depth)
    Ipp32s srcdststep = step;
    Ipp8u * ptmpdst = psrcdstplane;
    /* bit var to isolate cbp for block being decoded */
    Ipp32u ucbpmask = (1 << IPPVC_CBP_1ST_LUMA_AC_BITPOS);
    for (Ipp32s ublock = 0; ublock < (is_half? 8 : 16); ublock++, ucbpmask <<= 1)

ptmpdst = psrcdstplane; Ipp32s left_edge_subblock = left_edge_tab16[ublock]; Ipp32s top_edge_subblock = top_edge_tab16[ublock]; Ipp32s top = top_edge_subblock && (edge_type & IPPVC_TOP_EDGE); Ipp32s left = left_edge_subblock && (edge_type & IPPVC_LEFT_EDGE); Ipp32s top_left = ((top left) && (ublock!= 0)) ((edge_type & IPPVC_TOP_LEFT_EDGE) && (ublock == 0)); Ipp32s top_right = (top && (ublock!= 5)) (!above_right_avail_4x4[ublock]) ((edge_type & IPPVC_TOP_RIGHT_EDGE) && (ublock == 5)); Ipp32s avail = (left == 0)*IPP_LEFT + (top_left == 0)*IPP_UPPER_LEFT + (top_right == 0)*IPP_UPPER_RIGHT + (top == 0)*IPP_UPPER; ippipredictintra_4x4_h264_8u_c1ir(ptmpdst, srcdststep, pmbintratypes[ublock], avail); if ((cbp4x4 & ucbpmask)!= 0) const Ipp8u * ptmp = psrcdstplane; ippimc4x4_8u_c1(ptmp, srcdststep, *luma_ac, 8, psrcdstplane, srcdststep, IPPVC_MC_APX_FF, 0); *luma_ac += 16; psrcdstplane += xyoff[ublock][0] + xyoff[ublock][1]*srcdststep;... Ipp32s availability = ((edge_type & IPPVC_LEFT_EDGE) == 0)*IPP_LEFT + ((edge_type & IPPVC_TOP_LEFT_EDGE) == 0)*IPP_UPPER_LEFT + ((edge_type & IPPVC_TOP_RIGHT_EDGE) == 0)*IPP_UPPER_RIGHT + ((edge_type & IPPVC_TOP_EDGE) == 0)*IPP_UPPER; if (mbtype == MBTYPE_INTRA_16x16) ippipredictintra_16x16( 40

context->pyplane + offsety, rec_pitch_luma, (IppIntra16x16PredMode_H264) pmbintratypes[0], availability); if (luma_ac) AddResidual(luma_ac, context->pyplane + offsety, rec_pitch_luma, sd->m_cur_mb.localmacroblockinfo->cbp4x4_luma, sd->bit_depth_luma); else // if (intra16x16) if (is_high_profile) switch (special_mbaff_case) default: if (pgetmb8x8tsflag(sd->m_cur_mb.globalmacroblockinfo)) AddResidualAndPredict_8x8( &luma_ac, context->pyplane + offsety, rec_pitch_luma, sd->m_cur_mb.localmacroblockinfo->cbp, (IppIntra8x8PredMode_H264 *) pmbintratypes, edge_type_2t, true, sd->bit_depth_luma); AddResidualAndPredict_8x8( &luma_ac, context->pyplane + offsety + 8*rec_pitch_luma, rec_pitch_luma, sd->m_cur_mb.localmacroblockinfo->cbp >> 2, (IppIntra8x8PredMode_H264 *) pmbintratypes + 2, edge_type_2b, true, sd->bit_depth_luma); else AddResidualAndPredict( &luma_ac, context->pyplane + offsety, rec_pitch_luma, sd->m_cur_mb.localmacroblockinfo->cbp4x4_luma, 41