Validation of a Real-time AVS Encoder on FPGA


Sensors & Transducers, 2014, IFSA Publishing, S. L. (http://www.sensorsportal.com). Article number P_1811.

Validation of a Real-time AVS Encoder on FPGA

1 Qun Fang Yuan, 2 Xin Liu, 3 Yao Li Wang
1 Student Recruitment and Work Placement Office, Taiyuan University of Technology, Taiyuan, 030024, China
2, 3 College of Information Engineering, Taiyuan University of Technology, Taiyuan, 030024, China
3 Tel.: 00863516010042, fax: 00863516010042
3 E-mail: willingwan@gmail.com

Received: 12 November 2013 / Accepted: 12 December 2013 / Published: 31 January 2014

Abstract: A whole-I-frame AVS real-time video encoder is designed and implemented on an FPGA platform in this paper. The system uses a stream-processing structure, with dual-port RAM buffers coupling the functional modules. Module reuse and pipelining are applied to optimize the individual encoding modules and to keep the pipeline running efficiently. Simulation with the ISE toolchain and verification on a Xilinx Virtex-4 platform show that the design reaches a maximum working frequency of 110 MHz, meeting the requirements of whole-I-frame real-time AVS encoding at CIF resolution. Copyright 2014 IFSA Publishing, S. L.

Keywords: AVS, Encoder, FPGA.

1. Introduction

In 2006, AVS was officially approved as China's new-generation audio and video coding standard [1-2]. The AVS video coding standard strikes a balance between coding performance and implementation complexity, representative of the internationally advanced level. At present, AVS decoders and decoding chips are relatively mature and widely applied, but AVS encoding chips are still in the research phase and rarely reach the market. FPGA platforms offer abundant register and logic resources, and their parallel hardware processing can meet the design requirements of most high-speed electronic circuits and implement complex digital video signal processing. This makes the FPGA one of the best choices for implementing or verifying an AVS encoder.

This article studies the whole-I-frame coding algorithm of AVS in depth, aiming to realize a stream-processing structure for whole-I-frame real-time AVS encoding based on the characteristics of the FPGA hardware [3]. Because of the constraints on resource occupation and running speed, the system applies appropriate multiplexing and pipelining techniques and organizes the encoder into algorithm processing modules that take the macro block as the processing unit. Dual-port RAM buffers are coupled between the functional modules to ensure efficient pipeline operation and optimal utilization of hardware resources.

2. System Analysis

Real-time capture of CIF-resolution images, whole-I-frame AVS encoding and network transmission are all carried out on the FPGA-based platform. The system mainly consists of a video capture module, a data scheduling module, an I-frame encoding module and an Ethernet transmission module. The realization diagram is shown in Fig. 1.

Fig. 1. System Block Diagram.

3. Data Acquisition and Transmission

The video capture module decodes the composite video into YUV (4:2:0) digital video, preparing video data for the whole-I-frame AVS encoding. The data scheduling module provides the I-frame encoding system with original data and passes the encoded bit-stream to the Ethernet transmission module. Video data have high throughput and high bandwidth, and massive data exchanges take place between low-speed memory (such as DDR) and high-speed memory (such as the FPGA's internal RAM), so a sound data-scheduling strategy is one of the key technologies for real-time encoding. This part of the system consists of a self-designed DDR controller IP core and a data scheduling IP core.

The whole-I-frame encoding system implements the I-frame real-time encoding of AVS video. The whole system, designed independently in a hardware description language, is composed of an intra prediction module, a transformation and quantization module, and an entropy coding module.

The Ethernet transmission module packages the AVS bit-stream and transmits it to the PC, using a self-designed Ethernet controller IP core and network transmission protocol IP core. The AVS player on the PC adopts the DirectShow architecture; a real-time AVS player built on the AVS decoder completed in our laboratory is used to verify the real-time encoding capability of the AVS coding system.

A TVP5150PBS video decoder converts the input PAL video signal into a YUV (4:2:0) digital signal; the output format is ITU-R BT.656. When the system is powered on, the FPGA initializes the TVP5150 decoder over the I2C bus. Once connected to a composite video signal, the TVP5150 outputs 8-bit digital YUV video to the FPGA's front-end video capture module.
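The BT.656 stream delivered by the TVP5150 carries its timing in-band rather than on separate sync pins. The following Python sketch is a software model, not the VHDL capture logic of the paper, of how the embedded timing reference codes can be recognized; the function name and the event format are illustrative.

```python
def parse_bt656_timing(stream):
    """Scan an 8-bit ITU-R BT.656 byte stream for its timing reference
    codes. Each code is the four-byte sequence FF 00 00 XY, where XY
    carries F (field) in bit 6, V (vertical blanking) in bit 5 and H in
    bit 4 (H = 1 marks an EAV code, H = 0 an SAV code)."""
    events = []
    for i in range(len(stream) - 3):
        if stream[i] == 0xFF and stream[i + 1] == 0x00 and stream[i + 2] == 0x00:
            xy = stream[i + 3]
            events.append({
                "pos": i,
                "field": (xy >> 6) & 1,
                "vblank": (xy >> 5) & 1,
                "kind": "EAV" if (xy >> 4) & 1 else "SAV",
            })
    return events

# SAV for field 0, active video: XY = 0x80 (only the mandatory MSB set)
sample = bytes([0x10, 0x80, 0xFF, 0x00, 0x00, 0x80, 0x10, 0x10])
print(parse_bt656_timing(sample))
```

In hardware the same detection is a four-byte shift register and a comparator; the XY bits then drive the horizontal/vertical sync signals that the text says must be extracted from the stream.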
Since the received video is in ITU-R BT.656 format, in which the 8-bit video stream carries no separate horizontal or vertical sync signals, the FPGA must extract the horizontal and vertical sync information from the 8-bit stream itself and combine adjacent 8-bit samples (luminance and chrominance) into a 16-bit video signal. The converted 16-bit video signal is written to the external DDR SDRAM through the DDR SDRAM controller. The capacity of the DDR SDRAM is 16 M × 16 bit. The system uses burst mode and stores each received line of video as one row of the DDR SDRAM, so one frame of video fits into one bank of the DDR SDRAM. During encoding, the system reads data from the DDR SDRAM in macro block order: the first macro block occupies rows 0 to 15 and columns 0 to 15 of the DDR SDRAM, the second occupies rows 0 to 15 and columns 16 to 31, and so on.

4. Overall Design

The block diagram of the overall design of the whole-I-frame AVS coding system [4-9] is shown in Fig. 2. It consists of the intra prediction module, the transformation and quantization module, and the entropy encoding module. The whole-I-frame encoding of AVS is realized once such peripheral circuits as residual computation, reconstruction and CBP writing are added.

Fig. 2. Block Diagram of Whole I Frame Encoding.
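The macro-block-order read pattern from the line-by-line frame store can be modelled in a few lines. This Python sketch is illustrative, not the RTL of the DDR scheduling IP core; it maps a macro block coordinate to the origin of each of the 16 burst reads that fetch it.

```python
CIF_W, CIF_H, MB = 352, 288, 16
MB_COLS, MB_ROWS = CIF_W // MB, CIF_H // MB   # 22 x 18 macro blocks per frame

def mb_read_bursts(mb_x, mb_y):
    """Return the (line, first_column) origin of each of the 16 burst
    reads that fetch one 16x16 macro block from a frame stored
    line-by-line in a DDR SDRAM bank."""
    return [(mb_y * MB + r, mb_x * MB) for r in range(MB)]

print(mb_read_bursts(0, 0)[0], mb_read_bursts(1, 0)[0])   # -> (0, 0) (0, 16)
```

Each tuple corresponds to one burst of 16 pixels, matching the burst-mode access the text describes.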

The intra prediction module takes reference data from the reconstructed data of the neighboring macro blocks to predict the current macro block, and performs the prediction per 8x8 block to obtain the prediction data and the prediction mode. The transformation and quantization module performs the DCT transformation, quantization, inverse quantization and inverse DCT transformation. The quantized data, after zig-zag scanning, are sent to the entropy encoder, which outputs the encoded stream. The output of the inverse DCT transformation is added to the prediction data to obtain the reconstructed data of the sub-macro-block, which is fed back to the intra prediction module to predict the next sub-macro-block.

The entropy coding module performs the zig-zag scanning and variable-length coding. The CBP value of the macro block is obtained through the all-zero judgment of the scanned data. The stream of each sub-macro-block is stored separately; after the coding of the luminance and chrominance is complete, the code stream of one macro block is assembled by writing the CBP information in front of the variable-length-coded bit stream, and the whole macro block stream is placed into a RAM with 8-bit alignment. The algorithm is controlled by a state machine, and each aligned part of the code word is shifted into the RAM. After the stream of one macro block is finished, the last code word of fewer than 8 bits and the remaining bits are retained; these leftover bits must be filled first before the stream of the next macro block is written. When the coding of one macro block is completed, the data of its rightmost column and its bottom row are saved, serving respectively as the 16 left reference samples for the next macro block and the top reference samples for the macro block below.
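The 8-bit alignment logic described above — shift each codeword in, emit a byte whenever eight bits are available, and carry the remainder over to the next codeword — can be sketched as a small software model. The names w and t follow the text; the class itself is illustrative, not the VHDL state machine.

```python
class BitPacker:
    """Software model of the codeword-alignment logic: variable-length
    codewords are shifted into an accumulator w; whenever the bit count t
    reaches 8, one aligned byte is written out to the stream RAM."""
    def __init__(self):
        self.w = 0      # bit accumulator
        self.t = 0      # number of valid bits held in w
        self.out = bytearray()

    def put(self, code, numbits):
        self.w = (self.w << numbits) | (code & ((1 << numbits) - 1))
        self.t += numbits
        while self.t >= 8:              # emit every complete byte
            self.t -= 8
            self.out.append((self.w >> self.t) & 0xFF)

    def flush(self):
        """Pad the last partial byte with zeros, as at a macro block boundary."""
        if self.t:
            self.out.append((self.w << (8 - self.t)) & 0xFF)
            self.t = 0
        return bytes(self.out)

p = BitPacker()
p.put(0b1, 1); p.put(0b0101, 4); p.put(0b110, 3)   # exactly one byte
print(p.flush().hex())   # -> ae
```

Between macro blocks the hardware keeps w and t instead of flushing, which is the "retained bits" behaviour the text describes.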
Coding then continues with the next line of macro blocks until one frame is completed.

5. I Frame Encoding Module

The whole-I-frame encoding of AVS must fully account for the characteristics of the FPGA hardware platform. The entire system is designed independently in a hardware description language.

5.1. Intra Prediction

The processing unit of the intra prediction module [10-16] is the macro block, and the module comprises two main parts: reference sample acquisition and prediction. For the prediction of a 16x16 macro block, the 25 reconstructed samples of the line above and the 16 reconstructed samples of the rightmost column of the previous macro block are obtained from the neighboring macro blocks. The macro block prediction diagram is shown in Fig. 3. The prediction data of each sub-macro-block are processed by the transformation and quantization module, and the boundary samples are extracted from the reconstructed data for the prediction of the next sub-macro-block. Before sub-macro-block coding, the prediction data are subtracted from the original data, the resulting residual data are passed to the transformation and quantization module, and the best prediction mode is output to the CBP writing module.

Fig. 3. Macro block Prediction Diagram.

The prediction module implements the various prediction modes in parallel. One pixel is predicted per clock cycle; the module computes the absolute difference for the currently predicted pixel and adds it to the SAD register, so the prediction of a block completes in 64 clock cycles with the SAD computed at the same time. All available prediction modes run in parallel, their SAD values are then compared for the mode decision, the best prediction mode is selected, and the corresponding prediction data are output. The arithmetic modules are coupled and cascaded through dual-port RAMs, without extra synchronization signals or mutual interference.
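The parallel mode decision can be sketched as a toy software model. Only the vertical, horizontal and DC candidates are built below, and the DC rounding is a plausible assumption rather than the normative AVS formula; the real module evaluates all the AVS intra modes side by side in hardware.

```python
def sad(a, b):
    """Sum of absolute differences, accumulated pixel by pixel as in the
    module's 64-cycle pass over an 8x8 block."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def intra_mode_decision(orig, top, left):
    """Toy model of the parallel intra mode decision for one 8x8 block.
    top/left are the reconstructed border samples above / to the left."""
    dc = (sum(top) + sum(left) + 8) >> 4      # rounded mean of the borders
    preds = {
        "vertical":   [list(top) for _ in range(8)],        # copy row above
        "horizontal": [[left[r]] * 8 for r in range(8)],    # copy left column
        "dc":         [[dc] * 8 for _ in range(8)],         # flat prediction
    }
    sads = {mode: sad(orig, p) for mode, p in preds.items()}
    best = min(sads, key=sads.get)            # the hardware's SAD comparison
    return best, sads[best]

# A block that repeats its top border is predicted perfectly vertically.
grad = [[10 * c for c in range(8)] for _ in range(8)]
print(intra_mode_decision(grad, [10 * c for c in range(8)], [0] * 8))
# -> ('vertical', 0)
```

In the RTL every candidate runs concurrently, so the comparison costs no extra cycles beyond the 64-cycle accumulation.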
Algorithms share common computing units, and multi-stage pipeline operation forms a stream-oriented computing architecture with high resource utilization, taking both processing speed and implementation cost into account. The ModelSim simulation result of the prediction module is shown in Fig. 4. The signals sum_v, sum_h, sum_dc, sum_dc_left, sum_dc_top, sum_ddl, sum_ddr and sum_p are the SAD values of the various prediction modes; the best prediction mode (logout) and the prediction data (dataout_intra) are obtained by comparing them.

5.2. Transformation and Quantization

The transformation and quantization module includes four parts: DCT, quantization, inverse quantization and inverse DCT transformation [17-19]. The DCT transformation uses a butterfly algorithm, processing one row of the 8x8 block at a time; multiplications are replaced by shifts and additions, saving a large amount of computing time. Quantization is performed by table lookup: the quantization table is stored in ROM, so the desired result is obtained simply by addressing it. After DCT transformation and quantization, the quantized coefficients of the residual data are sent to the entropy coding module, while the quantized data are processed with inverse quantization and inverse DCT transformation to obtain the reconstructed data of the 8x8 sub-macro-block, which is fed back to the intra prediction module for the prediction of the next 8x8 sub-macro-block.

Since the luminance and chrominance use the same algorithm, this module can be shared between them, reducing resource consumption. The transformation and quantization module involves a large number of operations and is the most time-consuming part of the design. Appropriate parallelism is therefore applied: the inverse quantization, the inverse DCT transformation and the quantization are executed in parallel to improve the processing speed. In addition, the data bits are widened so that a whole row of the 8x8 matrix is assigned at the same time, saving data-read time. The ModelSim simulation result of the transformation and quantization module is shown in Fig. 5, in which qp is 36 and n0_o is the quantized data.

Fig. 4. Simulation Result of Intra Prediction Module.

Fig. 5. Simulation Result of Transformation and Quantization Module.

5.3. Entropy Coding

The entropy coding module includes zig-zag scanning of the data, run-length encoding, code table query, code table switching, Exp-Golomb encoding and stream output [20-22]. First the quantized data are scanned. If the scanned data are not all zero, they are processed with run-length encoding and Exp-Golomb encoding and the code stream is output; if the scanned data are all zero, the entropy coding terminates and the module waits for the quantized data of the next sub-macro-block. When a (level, run) pair is encoded, the codeword length is variable while the output stream must be aligned to 8 bits, so the codeword is combined with the residual bits of the previous codeword to form the final code stream. A variable w serves as the output cache for the codeword, and t is the number of valid bits in w. Whenever t is not less than 8, one byte of the bit stream is output and 8 is subtracted from t.
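Section 5.2's table-lookup quantization turns the division implied by a quantization step into one multiply and shift, with the multiplier fetched from ROM by qp. The sketch below models that idea in Python; the scale values and the shift are illustrative placeholders, NOT the normative AVS quantization tables.

```python
# Illustrative scale table indexed by qp -- placeholder values, not the
# AVS ROM contents. Each entry is a multiplier paired with a fixed right
# shift, so quantization becomes a single multiply-and-shift, and the
# entry is fetched by addressing the table with qp, as the ROM is in
# hardware.
SCALE = [32768 >> (qp // 8) for qp in range(64)]
SHIFT = 15

def quantize(coeff, qp):
    """Quantize one transform coefficient via table lookup: fetch
    SCALE[qp], multiply, round, shift, and restore the sign."""
    s = -1 if coeff < 0 else 1
    return s * ((abs(coeff) * SCALE[qp] + (1 << (SHIFT - 1))) >> SHIFT)

print(quantize(-200, 36))   # -> -13
```

The same structure runs in reverse for inverse quantization, which is why the paper can execute the forward and inverse paths in parallel from separate table ports.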
Otherwise the next codeword (code_num) and its length (numbits) are read and the process continues. The state machine implementing the algorithm is shown in Fig. 6.

Fig. 6. State Transition Diagram.
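An Exp-Golomb codeword consists of a run of leading zeros, a marker '1' and a suffix, so it is fully described by the (code_num, numbits) pair named above. A minimal sketch for k-th order codes of non-negative integers follows; it is illustrative, and the paper's coder additionally maps (level, run) pairs through its code tables before this step.

```python
def exp_golomb(value, k=0):
    """Return (code_num, numbits) for the k-th order Exp-Golomb code of a
    non-negative integer: numbits counts the implicit leading zeros, the
    marker bit and the suffix bits; code_num is the codeword's value."""
    v = value + (1 << k)                  # shift into the k-th order range
    prefix_len = v.bit_length() - 1 - k   # number of leading zero bits
    numbits = 2 * prefix_len + k + 1
    return v, numbits

# 0th order: 0 -> '1', 1 -> '010', 2 -> '011'
print(exp_golomb(0), exp_golomb(1), exp_golomb(2))   # -> (1, 1) (2, 3) (3, 3)
```

Feeding each (code_num, numbits) pair into the alignment buffer w/t described in the text yields the byte-aligned output stream.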

When designing the entropy coding module, a method is used to reduce the storage space occupied by the code tables. The code table query, the code table optimization and the Exp-Golomb encoding are combined into one pipeline unit for parallel processing, saving the storage space required for intermediate results, and the entropy coding tasks are processed in parallel to speed up execution. The ModelSim simulation result of the entropy encoding module is shown in Fig. 7, in which the state machine cycles through states 4, 5 and 6 and outputs the sub-macro-block code stream (dataout) in state 6.

6. Simulation and Verification

The design is implemented in the VHDL hardware description language, synthesized and verified with Xilinx ISE, and simulated with ModelSim 6.2b. The device adopted is the xc4vsx25-10ff668 from Xilinx. The highest operating frequency reaches 110 MHz. The synthesis resource utilization is shown in Fig. 8.

The first macro block of the I frame was encoded with the encoding and decoding software at qp = 36. The code stream of the output macro block is placed in memory 2 starting from address 0x00980067 in Fig. 9 (a). Fig. 9 (b) is the ModelSim 6.2b simulation result of the system; the encoded macro block stream is output from datain_u5_top_o. The figure shows that the two outputs are consistent.

For a 4:2:0 image in CIF format, if the real-time requirement of 25 frames/s is to be met, the number of cycles available for processing one macro block at a frequency of 100 MHz is limited to N = (16 x 16 x 10^8) / (352 x 288 x 25) = 10101. The ModelSim simulation shows that one macro block takes about 7000 cycles at 100 MHz, fully satisfying the requirements of real-time encoding.
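The cycle budget above follows directly from the frame geometry and can be checked with a few lines of arithmetic:

```python
# Cycle budget for real-time whole-I-frame encoding of CIF video at
# 100 MHz, as derived in the text.
clock_hz  = 100_000_000
mbs_per_s = (352 // 16) * (288 // 16) * 25    # 22 x 18 macro blocks, 25 fps
budget    = clock_hz // mbs_per_s             # cycles available per macro block

measured = 7000                               # cycles observed in ModelSim
print(mbs_per_s, budget, measured < budget)   # -> 9900 10101 True
```

The roughly 30 % headroom between 7000 and 10101 cycles is what lets the design meet real time with margin.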
When the system runs, the PetaLinux embedded operating system and a SIP protocol stack are ported to the MicroBlaze processor in the FPGA, realizing transmission to and from the PC through the RTP and RTCP protocols. Based on the AVS decoder software designed by our laboratory, a real-time AVS player is built on the DirectShow architecture on the PC, and the real-time encoding performance of the entire AVS coding system is verified. The system participated in the 2012 Xilinx OpenHW open-source hardware and embedded design competition and won the second prize.

7. Conclusions

The design and implementation of an AVS I-frame encoder on an FPGA platform is completed in this paper. Simulation and verification show that the real-time encoding requirement for CIF-resolution video is met. Appropriate multiplexing and pipelining techniques are used to optimize the design, ensuring efficient pipeline operation and optimal use of the hardware resources. The system features low development cost, high performance, flexibility and easy upgrading.

Fig. 7. Simulation Result of Entropy Encoding Module.

Fig. 8. Resource Utilization.

Fig. 9. Output Stream: (a), (b).

Acknowledgements

This work was supported by the Natural Science Foundation of Shanxi (No. 2013011015-1).

References

[1]. Bi Houjie, Video coding standard: H.264/AVC, People's Posts and Telecommunications Press, Beijing, 2005.
[2]. Yan Ming, Hu Guo-Rong, Sub-pixel Interpolation Algorithm Design in AVS Video Standard, Journal of Communication University of China Science and Technology, 13, 4, 2006, pp. 44-48.
[3]. Wang Daoxian, Application and development of CPLD/FPGA programmable logic devices, National Defense Industry Press, 2004.
[4]. Hu Qian, Zhang Ke, Lu Yu, A structure design of AVS video decoder and hardware implementation, Journal of Zhejiang University (Engineering Science), 12, 2006.
[5]. Dong Bin, Jiang Yu-Ming, Optimization of AVS software decoder, Computer Engineering and Design, 27, 4, 2006, pp. 618-621.
[6]. Zhuang Huan, Analysis of Audio Video Coding Standard, Computer Study, 4, 2008, pp. 2-3.
[7]. Li Ji, Chen Ying-Qi, Wang Ci, Software Optimization of AVS Video Decoder on PC, Video Engineering, 34, 11, 2010, pp. 40-42, 50.
[8]. Xie Li, Zhang Rui, Interpolation Optimization of Sub-pixel in AVS Decoder based on XSBase270 Platform, Computer Applications and Software, 27, 7, 2010, pp. 219-222, 267.
[9]. Li Yan, Research on AVS Video Decoder and Optimization of the Key Modules, Beijing University of Technology, 2010.
[10]. Liu Min, Wei Zhi-Qiang, A fast mode decision algorithm for intra prediction in AVS-M video coding, in Proceedings of the International Conference on Wavelet Analysis and Pattern Recognition (ICWAPR'07), 1, 2008, pp. 326-331.
[11]. Cao Ming, Jiang Cheng, Zhang Chong-Yang, Sub-pixel Interpolation Optimization by SSE2 in AVS Video Coding, Video Engineering, 34, 11, 2010, pp. 30-32.
[12]. Dong Zhi-Ping, Chen Shui-Xian, Ai Hao-Jun, The Loop Filter Optimization by SSE2 in Video Coding, Computer Engineering and Applications, 42, 8, 2006, pp. 34-36, 140.
[13]. Wen Bin, He Ming-Hua, Huang Min-Qi, AVS Exponential-Golomb Code Algorithm, Computer Engineering, 37, 15, 2011, pp. 218-220.
[14]. Wang Fei, Li Yuan, Jia Huizhu, Xie Xiaodong, Gao Wen, An efficient fractional motion estimation architecture for AVS real-time full HD video encoder, in Proceedings of the IEEE International Conference on Imaging Systems and Techniques (IST'12), 2012, pp. 279-284.
[15]. Ding Dan-Dan, Research on Reconfigurable Video Coding, Zhejiang University, Hangzhou, 2011.
[16]. Ding Dan-Dan, Yu Lu, Overview of MPEG Reconfigurable Video Coding, Video Engineering, 33, 7, 2009, pp. 12-15.
[17]. Huang You-Wen, Chen Yong-En, An optimal design of AVS anti-scanning, inverse quantization and inverse transformation module, Computer Engineering and Applications, 44, 19, 2008, pp. 93-95.
[18]. Li Xiao-Yu, Chen Lei-Ting, Lu Guang-Hui, Optimize frame-interpolation of AVS by linear assembly language, Application Research of Computers, 26, 6, 2009, pp. 2319-2321.
[19]. Dandan Ding, Honggang Qi, Lu Yu, Wen Gao, Reconfigurable Video Coding Framework and Decoder Reconfigurable Instantiation of AVS, Signal Processing: Image Communication, 24, 4, 2009, pp. 287-299.
[20]. Liu Qun-Xin, The hardware design of the variable length decoder in AVS, Modern Electronic Technology, 23, 2007.

Sensors & Transducers, Vol. 163, Issue 1, January 2014, pp. 128-134.

[21]. Wang Wen-Xiang, Shen Hai-Hua, A Reconfigurable Sub-Pixel Interpolation Architecture Design for Multi-standard Video Decoding, Journal of Computer-Aided Design & Computer Graphics, 23, 9, 2011, pp. 1603-1613.
[22]. Zhao Jing, Zhou Li, Yu Qing-Dong, et al., Research of audio video standard (AVS) HD decoding based on a reconfigurable processor, Journal of Harbin Engineering University, 33, 2, 2012, pp. 226-233.

2014 Copyright, International Frequency Sensor Association (IFSA) Publishing, S. L. All rights reserved. (http://www.sensorsportal.com)