A Motion Vector Predictor Architecture for AVS and MPEG-2 HDTV Decoder

Junhao Zheng 1,3, Di Wu 1, Lei Deng 2, Don Xie 4, and Wen Gao 1,2,3

1 Institute of Computing Technology, Chinese Academy of Sciences, 100080 Beijing, China
2 Department of Computer Science, Harbin Institute of Technology, 150001 Harbin, China
3 Graduate University of Chinese Academy of Sciences
4 Grandview Semiconductor (Beijing) Corporation
{jhzheng, ldeng, dwu, wgao}@jdl.ac.cn, don.xie@grandviewsemi.com

Abstract. In the Advanced Audio Video coding Standard (AVS), many efficient coding tools are adopted in motion compensation, such as a new motion vector prediction scheme, direct mode matching and variable block sizes. However, these features greatly increase the computational complexity and the memory bandwidth requirement, and make the traditional MV predictor more complicated. This paper proposes an efficient MV predictor architecture for both AVS and MPEG-2 decoding. The proposed architecture exploits parallelism to accelerate the operations and uses a dedicated design to optimize memory access. In addition, it can reuse the on-chip buffer to support MV error resilience for MPEG-2 decoding. The design has been described in Verilog HDL and synthesized with a 0.18 μm CMOS cell library using Design Compiler. The circuit costs about 62K logic gates at a working frequency of 148.5 MHz. The design supports real-time MV prediction for HDTV 1080i video decoding in both AVS and MPEG-2.

Keywords: Motion compensation, Motion vector prediction, AVS, MPEG, VLSI architecture.

1 Introduction

The Chinese Audio Video coding Standard [1], known as AVS, is a new national standard for the coding of video and audio. The first version of the AVS video standard [2] was finished in December 2003. AVS defines a hybrid block-based video codec, similar to prior standards such as MPEG-2 [3], MPEG-4 [4] and H.264 [5]. However, AVS is an application-driven coding standard with well-optimized techniques. By adopting many new coding features and functionalities, AVS [6] achieves more than 50% coding gain over MPEG-2 and similar performance at lower cost compared with H.264.

The traditional block-based motion compensation (MC) is improved in the AVS standard. In prior video standards, simple MV prediction schemes are applied. For example, in H.264 [5] the predicted MV is simply the median value selected from three decoded MVs of the spatial neighborhood.

For AVS, however, a more complicated algorithm based on a vector triangle, consisting of a series of multiplication and division operations, is adopted, as further described in Section 2.2. Besides, AVS supports variable block sizes, a new motion vector (MV) prediction scheme, multiple reference pictures, direct and symmetric prediction modes, etc. All of these new features require higher computational capacity and more memory bandwidth, which directly affect the cost effectiveness of a commercial video decoder solution. For HDTV 1080i applications, the time budget is so tight that a pure software implementation on a simple or low-end CPU cannot provide real-time decoding. So for high-end applications such as set-top boxes, dedicated hardware accelerators are necessary. In [7], [8] dedicated MC architectures have been proposed, but they were based on prior video standards. AVS is a new standard, and its own features and the associated new requirements make those older designs unsuitable.

In this paper, we propose an efficient MV predictor architecture which fully supports the MV prediction algorithms of both AVS and MPEG-2. The proposed design employs a pipelined structure to exploit the parallelism in the AVS-specific median prediction algorithm, adopts a line buffer to store the neighboring motion data, and uses a dedicated FIFO to smooth the memory accesses. For AVS, the data in the line buffer are used in the spatial prediction. For MPEG-2, the on-chip line buffer is reused and provides the neighboring motion data to help conceal errors.

The remainder of the paper is organized as follows. The MV prediction algorithms of AVS and MPEG-2 are described in Section 2. Section 3 describes the details of the implemented architecture. Simulation results and the VLSI implementation are shown in Section 4. Finally, we draw a conclusion in Section 5.

2 MV Prediction Algorithm

The aim of MC is to exploit temporal redundancy to obtain higher coding performance. The prediction for an inter-coded macroblock (MB) is determined by the set of MVs associated with the MB. Since significant gains in efficiency can be made by choosing a good prediction, the process of MV prediction becomes quite complicated. In this section, some special functional blocks of the MV prediction algorithm are explained.

2.1 Temporal Prediction of AVS

AVS supports rich MB coding schemes with more than 30 kinds of MB types and a tree-structured MB partition (16x16 down to 8x8). The predictive modes include intra, skip, forward, backward, spatial direct, temporal direct and symmetric. AVS adopts its own particular way to specify the symmetric and direct modes [2]. For symmetric prediction, only the forward MV is transmitted for each partition. The backward MV is derived from the forward one by a symmetric rule.
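The paper does not spell out the symmetric rule itself. Purely as an illustration, the following C sketch assumes the backward MV is obtained by scaling the decoded forward MV by the ratio of the backward and forward reference distances and negating it, using the same 512/BlkDist scaling style as the spatial prediction of Section 2.2; the function names and the omission of the normative sign and rounding handling are assumptions, not details taken from the paper or the standard text.

    typedef struct { int x, y; } MV;

    /* Illustrative only: scale one forward-MV component to the backward
     * reference distance and negate it.  The normative AVS rounding and
     * sign handling are omitted in this sketch. */
    static int sym_scale(int mv_fw, int dist_fw, int dist_bw)
    {
        return -(((512 / dist_fw) * mv_fw * dist_bw + 256) >> 9);
    }

    /* Derive the backward MV of a symmetric-mode partition from the
     * transmitted forward MV; dist_fw and dist_bw are the forward and
     * backward reference picture distances of the current block. */
    MV symmetric_backward_mv(MV mv_fw, int dist_fw, int dist_bw)
    {
        MV mv_bw = { sym_scale(mv_fw.x, dist_fw, dist_bw),
                     sym_scale(mv_fw.y, dist_fw, dist_bw) };
        return mv_bw;
    }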

For direct prediction, both the forward and backward MVs are derived from the MV of the collocated inter-coded block in the backward reference picture. To support the temporal direct MV prediction, all MVs of the latest P-picture need to be stored in memory as the collocated MV buffer. For AVS 1080i video, however, the total amount of motion data in one picture is about 118 KB, which is too large for on-chip storage, so all of it must be kept in external memory.

2.2 Spatial Prediction of AVS

For the spatial prediction, AVS employs a novel median selector: the edge with the median length is selected from a vector triangle [2]. The scaled MVs make up the triangle, as illustrated in Fig. 1.

[Fig. 1. MV spatial prediction: the scaled MVs MVA, MVB and MVC, drawn from the origin (0,0), form a triangle with edges VAB, VBC and VAC; the median edge gives FMV.]

Firstly, calculate the scaled MVA, MVB and MVC using equation (1):

    MVX = ((512 / BlkDistX) * mvx * BlkDistE + 256) >> 9    (1)

where X denotes block A, B or C, mvx is a component of the original MV of the neighboring block, BlkDistX is the reference picture distance of the neighboring block, and BlkDistE is that of the current block E. The vectors with double arrows in Fig. 1 are the scaled MVs.

Secondly, calculate the spatial distance between each pair of scaled MVs, where M and N denote blocks A, B or C:

    VMN = Abs(MVM_x - MVN_x) + Abs(MVM_y - MVN_y)

Thirdly, the temporary parameter FMV is given by the median of the three spatial distances (the dashed line in Fig. 1):

    FMV = Median(VAB, VBC, VAC)

Finally, obtain the MVP as the scaled MV of the vertex opposite the median edge. For example, if FMV is VAB, the MVP is MVC.
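To make the selection procedure concrete, here is a minimal C sketch of the three steps above (scaling, pairwise distances, median-edge selection), in the spirit of the C-code reference model mentioned in Section 4. The function and variable names are illustrative, and the sign and clipping handling of the normative algorithm is omitted.

    #include <stdlib.h>

    typedef struct { int x, y; } MV;

    /* Equation (1): scale a neighbor's MV component to the current block's
     * reference distance.  Sign and rounding corner cases of the normative
     * text are ignored in this sketch. */
    static int scale_mv(int mv, int blk_dist_x, int blk_dist_e)
    {
        return ((512 / blk_dist_x) * mv * blk_dist_e + 256) >> 9;
    }

    /* City-block distance between two scaled MVs (VMN in the text). */
    static int mv_dist(MV m, MV n)
    {
        return abs(m.x - n.x) + abs(m.y - n.y);
    }

    /* Select the MV prediction: the scaled MV of the vertex opposite the
     * median-length edge of the triangle. */
    MV spatial_mvp(MV mva, MV mvb, MV mvc,
                   int dist_a, int dist_b, int dist_c, int dist_e)
    {
        MV a = { scale_mv(mva.x, dist_a, dist_e), scale_mv(mva.y, dist_a, dist_e) };
        MV b = { scale_mv(mvb.x, dist_b, dist_e), scale_mv(mvb.y, dist_b, dist_e) };
        MV c = { scale_mv(mvc.x, dist_c, dist_e), scale_mv(mvc.y, dist_c, dist_e) };

        int vab = mv_dist(a, b), vbc = mv_dist(b, c), vac = mv_dist(a, c);

        /* FMV = Median(VAB, VBC, VAC); the MVP is the vertex not touched by
         * the median edge (e.g. FMV = VAB gives MVP = MVC). */
        if ((vab >= vbc && vab <= vac) || (vab <= vbc && vab >= vac)) return c;
        if ((vbc >= vab && vbc <= vac) || (vbc <= vab && vbc >= vac)) return a;
        return b;
    }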

Three vertexes (see Fig. 1) need to be computed to obtain a single MVP value, which in total requires 3 divisions, 12 multiplications and 15 additions. Furthermore, AVS supports the 8x8 partition, so the maximum number of MVP values in one MB is five (three blocks with unidirectional prediction and one block with spatial direct prediction). A special method is therefore needed to accelerate this process, as described further in Section 3.2. In addition, because the motion data from the upper and left neighbors are required, a specific buffer is needed to store all the relevant neighboring data.

2.3 Concealment MVs of MPEG-2

MPEG-2 supports concealment MVs [3], which are carried by intra MBs. For normal decoding these MVs are useless and can be discarded. However, when data errors occur in the MB that lies vertically below the intra MB, these MVs can be used as candidate MVs to conceal the visual error. Because neighboring MBs are highly correlated, it is reasonable to assume that a lost block has motion similar to its spatial neighbors. The more correct data are available, the better the quality that can be achieved through error concealment. So the motion data of the neighboring MBs should be stored, including the concealment MVs of intra MBs and the real MVs of inter MBs.

3 MV Predictor Architecture

The MV predictor module is responsible for generating all motion data (MVs and reference picture indices). The module consists of the Input/Output Interface, the Main Controller, the public Line Buffer, the MPEG sub-module and the AVS sub-module. Fig. 2 shows the implemented architecture of the MV predictor; the solid lines indicate data flow, and the dashed lines control messages.

[Fig. 2. MV Predictor block diagram: VLD and MIPS interfaces, CMD and DAT FIFOs, register interface, Main Controller, Line Buffer, MPEG MV Calculation, AVS Spatial and Temporal Prediction with Direct and Symmetric modes, MV FIFO to the SDRAM interface, and an output MUX feeding the reference fetch module.]
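The paper does not give the exact storage format of the motion data exchanged between these units. Purely as an illustration, a per-8x8-block motion record of the kind passed to the Line Buffer, the MV FIFO and the downstream stages might look like the following; all field names and widths are assumptions rather than details of the actual design.

    #include <stdint.h>

    /* Illustrative per-8x8-block motion record; the real field widths and
     * packing used by the hardware are not specified in the paper. */
    typedef struct {
        int16_t fw_mv_x, fw_mv_y;   /* forward MV components                  */
        int16_t bw_mv_x, bw_mv_y;   /* backward MV, used by B blocks only     */
        int8_t  fw_ref_idx;         /* forward reference picture index        */
        int8_t  bw_ref_idx;         /* backward reference picture index       */
        uint8_t pred_mode;          /* intra / forward / backward / symmetric /
                                       spatial or temporal direct / skip      */
    } BlockMotion;

    /* One MB is normalized to four 8x8 blocks before being passed to the
     * downstream stages, as described in Section 3. */
    typedef struct {
        BlockMotion blk[4];
    } MacroblockMotion;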

The Main Controller unit first parses the commands sent by the MIPS, which contain the stream type and the MB information such as mb_type and the availability flags of the neighboring MBs. The controller then invokes the corresponding sub-module according to the current MB mode. For example, if the controller finds that mb_type equals the AVS symmetric mode, the Symmetric Prediction module is activated through a handshake protocol.

On the AVS side, the Spatial and Temporal Prediction units perform the spatial and temporal MV predictive operations respectively. The motion data coming from or going to the external memory are first stored in the MV FIFO to avoid trivial memory access requests. The MV data read from the MV FIFO are used by the Direct Prediction as the reference motion data. The output controller delivers the final motion data to the reference fetch module and updates the Line Buffer, whose data are used in the spatial prediction. Besides, in order to support variable prediction block sizes, the MV predictor unit converts all block modes to the uniform 8x8 block, the minimum block size, to simplify the operations in the downstream stages.

On the MPEG-2 side, the MVs are generated in the MV Calculation unit according to the motion type and the MB type. The final motion data of each MB are stored into the Line Buffer. When an error occurs, the firmware can read back all motion data in the Line Buffer through the Register Interface. The firmware can then use specific error concealment algorithms to select or recalculate the MVs and send them to the CMD FIFO. These special MVs are output directly as the final MVs to the downstream stages (see the MUX unit in Fig. 2). In this way the MV predictor supports an error concealment scheme.

Due to limited space, only the spatial prediction, the MV FIFO and the Line Buffer units are described in detail.

3.1 MV FIFO

Direct mode needs the reference MVs from the backward reference picture, and, as analyzed above, the motion data of a reference picture must be stored in the SDRAM. The straightforward way is to access the memory whenever the decoder encounters the direct prediction mode while decoding the current MB. However, requesting the memory controller frequently and irregularly is undesirable. Because the controller has to serve multiple clients and guarantee the schedulability of all critical tasks, e.g. the display feeder, it is possible that a request for motion data issued during the current MB decoding period is not acknowledged in time, and the irregular requests also impact the service for the other clients. So a dedicated FIFO is built to improve the efficiency of the memory accesses.

In P-picture decoding, the MV FIFO works as a cache. Motion data are written into the MV FIFO after each MB is decoded. When the MV FIFO is half full, a write request is sent to the controller, and the data are then read out of the MV FIFO successively and sent to the SDRAM through the interface. In B-picture decoding, the MV FIFO pre-fetches the motion data from the SDRAM. The data flows are shown in Fig. 3.

[Fig. 3. Data flows for the MV FIFO: when storing, the processing unit writes into the MV FIFO and the memory interface drains it to the SDRAM; when loading, the memory interface fills the MV FIFO from the SDRAM and the processing unit reads from it.]
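As a rough software analogue of the behavior just described (not the RTL itself), the two operating modes of the MV FIFO can be sketched as follows; the depth, the half-full threshold expression and the interface function names are assumptions.

    #include <stdint.h>

    #define MV_FIFO_DEPTH 32   /* assumed depth; the paper does not give one */

    typedef struct { int16_t mv_x, mv_y; int8_t ref_idx; } ColMV;

    typedef struct {
        ColMV    entry[MV_FIFO_DEPTH];
        unsigned count;
    } MvFifo;

    /* P-picture decoding: the FIFO acts as a write cache.  Each decoded MB
     * pushes its motion data; when the FIFO becomes half full it is drained
     * to SDRAM in one grouped request instead of one small request per MB. */
    void mv_fifo_store(MvFifo *f, ColMV mv,
                       void (*sdram_write)(const ColMV *, unsigned))
    {
        f->entry[f->count++] = mv;
        if (f->count >= MV_FIFO_DEPTH / 2) {      /* half-full threshold */
            sdram_write(f->entry, f->count);      /* single grouped request */
            f->count = 0;
        }
    }

    /* B-picture decoding: the FIFO pre-fetches collocated MVs from SDRAM so
     * that direct-mode prediction never waits on an ad-hoc memory request. */
    void mv_fifo_prefetch(MvFifo *f,
                          void (*sdram_read)(ColMV *, unsigned))
    {
        sdram_read(f->entry, MV_FIFO_DEPTH);
        f->count = MV_FIFO_DEPTH;
    }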

3.2 Pipelined Spatial Prediction

The algorithm is described in Section 2.2. The pipelined architecture for the spatial MV prediction is shown in Fig. 4; it contains five stages for the FMV calculation:

S1. Division and first multiplication;
S2. Second multiplication, successive addition and shift (+256, >>9);
S3. Absolute value;
S4. Addition;
S5. Median value.

[Fig. 4. Pipelined spatial prediction: the five stages S1-S5 operate on BlkDistX, mvx, BlkDistE and the constant 512 (division, two multiplications, +256 and >>9, absolute difference, addition, median-edge selection), with intermediate operands of 9 to 25 bits, producing MVX and FMV.]

The 10b/9b division costs 2 cycles in our design, so it takes only 15 cycles to finish all operations for the calculation of one MV prediction, including preparing the input data. In the worst case the total is therefore 15 x 5 = 75 cycles. Because the scaling technique is also applied to the direct and symmetric prediction in the AVS standard, similar pipelined structures are implemented in the temporal prediction unit (see Fig. 2).

3.3 Line Buffer

All motion data that have been decoded are stored in the Line Buffer, which is illustrated in Fig. 5.

There are n MBs in the horizontal direction and b(x,y) denotes the MB at coordinates (x,y). E is the current MB (drawn with a bold edge in Fig. 5), and the MBs with gray background are already decoded. The AVS spatial prediction for MB b(x,y) needs the motion data from b(x-1,y-1), b(x,y-1), b(x+1,y-1) and b(x-1,y). These neighboring motion data are also important for the MPEG-2 error concealment. The Line Buffer consists of the motion data from MB B, b(x,y-1), through to MB A, b(x-1,y).

[Fig. 5. Data flow for the Line Buffer: the current MB E = b(x,y) and its neighbors D = b(x-1,y-1), B = b(x,y-1), C = b(x+1,y-1) and A = b(x-1,y) within an MB row of width n.]

After the MV calculation for one MB is finished, the data move as shown in Fig. 5: the old D is discarded, the old B becomes the new D, and similarly for C and the others. The current motion data of MB E are then written to the Line Buffer. For MPEG-2, the same scheme is adopted, which provides more useful motion data than the scheme specified by the standard [3]. For an intra MB, either the existing concealment MVs or zero MVs are written to the buffer, depending on the bitstream syntax. For an inter MB, the final MVs are likewise recorded. Once the motion data of the currently decoded MB are lost, the firmware can look up any position in the buffer and obtain more neighboring motion data, which helps the decoder make a better decision.

4 Implementation Results

We have described the design in Verilog HDL at the RTL level. Based on the AVS verification model [9] and the MPEG-2 reference codec [10], a C-code model of the MV predictor was also developed to generate the simulation vectors. Testing with 52 HD bitstreams (both AVS and MPEG-2), Synopsys VCS simulation shows that our Verilog code is functionally identical to the MV prediction module of the verification model for the two standards. The validated Verilog code was synthesized using a TSMC 0.18 μm CMOS cell library with Synopsys Design Compiler. The circuit costs about 62K logic gates, excluding the SRAM, at a working frequency of 148.5 MHz. Table 1 summarizes the synthesized results.

Table 1. Synthesized results

  Technology                  TSMC 0.18 μm
  Working frequency           148.5 MHz
  Gate count (without SRAM)   62K
  SRAM                        34K
  Cycles/MB                   max. 310
  Processing capacity         1920 x 1088, HD interlaced, 60 fields/s;
                              AVS Jizhun Profile 6.2, MPEG-2 MP@HL
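As a quick sanity check on the numbers in Table 1 (our own arithmetic, not taken from the paper), the cycle budget available per macroblock at 148.5 MHz can be computed as below; the worst case of 310 cycles/MB uses roughly half of that budget, assuming 60 fields/s corresponds to 30 frames/s of 1920 x 1088 interlaced video.

    #include <stdio.h>

    int main(void)
    {
        /* Figures from Table 1: 1920x1088, HD interlaced, 60 fields/s
         * (i.e. 30 frames/s), 148.5 MHz working frequency. */
        const double clock_hz      = 148.5e6;
        const double mbs_per_frame = (1920.0 / 16.0) * (1088.0 / 16.0); /* 8160 */
        const double frames_per_s  = 30.0;

        double mbs_per_s = mbs_per_frame * frames_per_s;   /* 244,800 MB/s  */
        double budget    = clock_hz / mbs_per_s;           /* ~606 cycles/MB */

        printf("available cycles per MB: %.1f (design worst case: 310)\n", budget);
        return 0;
    }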

The logic gates for the SRAM amount to about 34K. The line buffer occupies 18K logic gates and can store 1,830 bytes for the maximum picture width of 1920 pixels. The implemented architecture needs at most 310 cycles to perform the MV calculation for each MB, which is sufficient to realize real-time MV prediction for AVS Jizhun Profile 6.2 bitstreams. The proposed design also meets the real-time requirement of MPEG-2 MP@HL bitstreams.

5 Conclusion

In this paper, we contribute an efficient VLSI architecture for the MV predictor of the AVS and MPEG-2 standards. We first described the MV prediction algorithms and then proposed the architecture. Our main idea is to employ a pipelined structure to accelerate the new median prediction algorithm and to use a dedicated MV FIFO to smooth the memory accesses. Besides, a special line buffer is adopted to store the motion data, which provides either the neighboring motion information for AVS or the error concealment MVs for MPEG-2. Finally, we presented the simulation results. The architecture was synthesized using a TSMC 0.18 μm CMOS cell library. The synthesized results show that our design supports real-time MV prediction for HDTV 1080i AVS and MPEG-2 video. The proposed design can easily be embedded into an AVS and MPEG-2 codec SoC.

Acknowledgments. This work has been supported by the National Hi-Tech Development Programs (863) of China under grant No. 2003AA1Z1290.

References

1. AVS working group official website. http://www.avs.org.cn
2. Information technology - Advanced coding of audio and video - Part 2: Video. AVS-P2 Standard draft (2005)
3. Information technology - Generic coding of moving pictures and associated audio information: Video. ITU-T Recommendation H.262 | ISO/IEC 13818-2 (MPEG-2) Standard (1994)
4. Information technology - Coding of audio-visual objects - Part 2: Visual. ISO/IEC 14496-2 (MPEG-4) Standard (2001)
5. Advanced video coding for generic audiovisual services. ITU-T Recommendation H.264 | ISO/IEC 14496-10 (AVC) Standard draft (2005)
6. Fan, L., Ma, S., Wu, F.: Overview of AVS Video Standard. In: 2004 IEEE International Conference on Multimedia and Expo, Taipei, China (2004) 423-426
7. He, W., Mao, Z., Wang, J., Wang, D.: Design and Implementation of Motion Compensation for MPEG-4 AS Profile Streaming Video Decoding. In: Proceedings of the 5th International Conference on ASIC, Beijing, China (2003) 942-945
8. Chien, C.-D., Chen, H.-C., Huang, L.-C., Guo, J.-I.: A Low-power Motion Compensation IP Core Design for MPEG-1/2/4 Video Decoding. In: IEEE International Symposium on Circuits and Systems, Kobe, Japan (2005) 4542-4545
9. AVS1.0 Part 2 reference software model, RM52r1 (2004)
10. MPEG-2 codec, V1.2a. MPEG Software Simulation Group (1996)