Module Introduction. Content 15 pages 2 questions. Learning Time 25 minutes



Purpose: The intent of this module is to introduce you to the multimedia features and functions of the i.MX31. You will learn about the Imagination PowerVR MBX-Lite hardware core, which provides high-performance 3D graphics rendering for less power and bandwidth than many traditionally architected accelerators. You will also learn about video processing, video encoding, and video decoding.

Objectives:
- Identify the key features of PowerVR MBX-Lite.
- Describe the multimedia capabilities of the i.MX31.
- Identify the features of the IPU (Image Processing Unit).
- Describe MPEG-4 video encoding.
- Describe the role of H.264 decoding during video playback.

Note that the i.MX31L does not have 2D/3D graphics acceleration; otherwise, unless specifically mentioned, all information in this module applies to both the i.MX31 and the i.MX31L.

Key features of the PowerVR MBX-Lite:
- Tile-based renderer: allows lower bandwidth to system memory than traditional architectures, and allows high-precision color and depth operations
- PowerVR Texture Compression (PVR-TC)
- 3D performance: up to 1 million triangles per second and 118 million pixels per second

Standard features:
- Flat and Gouraud shading
- Perspective texturing
- Specular highlights
- Two-layer multitexturing
- 32-bit Z support
- Full tile blend buffer
- Alpha test
- Full-scene anti-aliasing
- Per-vertex fog
- 16-bit, 32-bit, and YUV video textures
- Point, bilinear, trilinear, and anisotropic filtering
- Full range of blend modes

The MBX-Lite uses a tile-based rendering technique to achieve high performance while keeping power and bandwidth low. The MBX-Lite is also able to yield higher-precision color and depth processing. It further reduces bandwidth and memory consumption by providing PowerVR Texture Compression (PVR-TC), which shrinks the memory footprint of textures and the overall size of applications. In addition to these key features, the MBX-Lite supports up to 1 million triangles per second and 118 million pixels per second, allowing developers to create compelling 3D applications. Lastly, the MBX-Lite provides the host of standard 3D features listed above to support industry APIs and developers.
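To see why PVR-TC shrinks the memory footprint, compare the storage for one texture at the standard PVRTC rates of 4 and 2 bits per pixel against uncompressed 32-bit color. This is a rough sketch only; mipmap chains and block alignment are ignored.

```python
def texture_bytes(width, height, bits_per_pixel):
    """Storage for a single mip level at the given per-pixel rate."""
    return width * height * bits_per_pixel // 8

# A 256x256 texture at three rates:
rgba8888 = texture_bytes(256, 256, 32)  # uncompressed 32-bit color
pvrtc4   = texture_bytes(256, 256, 4)   # PVRTC 4 bpp mode
pvrtc2   = texture_bytes(256, 256, 2)   # PVRTC 2 bpp mode

print(rgba8888, pvrtc4, pvrtc2)  # 262144 32768 16384
```

The 4 bpp mode is one eighth the size of the uncompressed texture, which is why compressed textures also cut the bandwidth spent fetching texels from system memory.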

[Diagram: a traditional 3D renderer sends all 3D data, including intermediate data, to system memory; the MBX-Lite renderer on the i.MX31 keeps intermediate data in low-latency on-chip tile buffers and writes only the resulting data to system memory.]

In tile-based rendering, the system divides the 3D data into blocks that refer to rectangular regions of the display. This division allows rendering to occur one region at a time and uses far fewer resources than if the whole screen were considered at once. In traditional 3D rendering systems, all 3D data is saved to system memory. The MBX-Lite uses a set of small on-chip buffers that replaces the large, fast buffers of the traditional 3D renderer. Due to the order of rendering, only the resulting rendered scene is written out to system memory, and the on-chip memory absorbs the intermediate accesses. In addition, the deferred aspect of a tile-based approach allows the renderer to read from system memory only the texture data that the end scene requires. For the i.MX31 unified memory architecture, this results in lower system bandwidth usage and less power drain. The higher bandwidth and lower latency of the on-chip buffers allow the system to afford higher-precision calculations than those available in traditional architectures. This results in more accurate color values and fewer depth-based artifacts.
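The way a tile-based renderer walks the screen can be sketched in a few lines. The 32x32 tile size below is illustrative only, not the MBX-Lite's actual tile dimensions.

```python
import math

TILE_W, TILE_H = 32, 32  # illustrative tile size, not the real hardware value

def tiles(width, height):
    """Yield the rectangular screen regions a tile-based renderer
    processes one at a time using its small on-chip buffers."""
    for ty in range(math.ceil(height / TILE_H)):
        for tx in range(math.ceil(width / TILE_W)):
            x0, y0 = tx * TILE_W, ty * TILE_H
            yield (x0, y0, min(x0 + TILE_W, width), min(y0 + TILE_H, height))

# For each tile: rasterize only the triangles touching it into the on-chip
# buffer, resolve depth and blending there, then write the finished tile
# once to system memory. Intermediate traffic never leaves the chip.
tile_list = list(tiles(640, 480))
print(len(tile_list))  # 300 tiles for a VGA frame
```

With these assumptions a VGA frame becomes a 20x15 grid of tiles, each small enough that its color and depth values can live entirely on chip.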

[Diagram: graphics pipeline partitioning. Scene management, lighting, and geometry processing run on the ARM11 with VFP; rasterization runs on the MBX-Lite.]

To render a 3D image, the data must pass through a set of standard stages of processing. Let's look at the hardware and software partitioning of these stages. The ARM1136 is partitioned to handle the scene management, lighting, and geometry processing stages in software. These stages are accelerated by the vector floating point (VFP) unit on the processor, which eliminates the need to do costly floating-point conversions and emulation. The MBX-Lite 3D acceleration hardware handles the rasterization portion of the pipeline, which is traditionally the most bandwidth-intensive portion. This stage handles the interpolation of triangles, blending of colors, and occlusion checking. In addition, the tile partitioning is executed as a pre-processing step in hardware just prior to rasterization. Lastly, the IPU handles the final compositing and display of the resulting 3D rendered image.
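The geometry-processing stage that runs on the ARM11 is essentially per-vertex floating-point matrix arithmetic, which is exactly the workload the VFP accelerates. A toy sketch of that operation, with an entirely hypothetical transform matrix:

```python
def mat_vec(m, v):
    """4x4 matrix times 4-vector: the core per-vertex operation of the
    geometry stage, run in software on the ARM11 with VFP acceleration."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Hypothetical scale-and-translate transform:
M = [[2, 0, 0, 1],
     [0, 2, 0, 2],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

x, y, z, w = mat_vec(M, [1.0, 1.0, 5.0, 1.0])
# Perspective divide before handing the vertex to the rasterizer:
print(x / w, y / w, z / w)  # 3.0 4.0 5.0
```

Doing this in hardware floating point rather than software emulation is what makes the ARM-side stages fast enough to feed the MBX-Lite rasterizer.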

Graphics software APIs:
- OpenGL ES: low-level graphics API; open standard developed by the Khronos Group; available for non-Microsoft platforms on the i.MX31
- Direct3D Mobile: low-level graphics API; Microsoft's mobile 3D API; available only for WinCE 5.0 devices
- M3G / JSR-184: high-level (scene-graph-based) Java API; available for the i.MX31 JVM

Depending on the platform, the i.MX31 provides one of three application programmer interfaces for accessing the capabilities of the MBX-Lite. OpenGL ES provides a low-level hardware-abstraction API for native programming on most operating systems. Based on a subset of desktop OpenGL, this API is an open, royalty-free standard developed by the Khronos Group. Direct3D Mobile is also a low-level API for 3D graphics accelerators. Similar to Direct3D version 8 for personal computers, Direct3D Mobile provides a comprehensive interface to 3D hardware for WinCE-based platforms. For Java-based platforms, M3G provides a higher-level scene-graph interface to 3D accelerators. While commonly criticized for its floating-point usage, M3G excels on the i.MX31 due to the integrated VFP unit.

Question: Here is a question to check your understanding of the MBX-Lite. Which of the following statements about the tile-based rendering scheme of the MBX-Lite are true? Click all that apply, and then click Done.
a. Tile-based rendering allows lower system bandwidth.
b. Tile-based rendering allows better scene management.
c. Tile-based rendering allows higher texture compression.
d. Tile-based rendering allows higher precision color operations.
Correct: tile-based rendering allows lower system bandwidth and higher-precision color operations (a and d).

Multimedia capabilities:
- Still picture capture at up to 16-megapixel resolution
- Up to 60 hours of MP3 playback at 128 Kbps
- 6 hours (3 full movies) of MPEG-4 decoding and playback at VGA 30 fps
- Up to 10 hours of real-time video capture and encoding at VGA 30 fps
- Up to 37 hours of viewfinder operation
- Synchronization speed up to 480 Mbps (USB HS)

[Diagram: the i.MX31 (ARM11 with VFP and MPEG-4 hardware) connecting to two sensors, a TV encoder, an 18-bit display, stereo DACs, WLAN, baseband, USB HS, and storage such as MMC/SDIO cards, Memory Stick PRO, and ATA HDD.]

The i.MX31 processor is optimized to support a variety of image and video applications. It offers power-efficient image and video processing, pre- and post-processing in hardware, simultaneous MPEG-4 Simple Profile (SP) video encoding and decoding, real-time video decode in advanced formats, and image capture of up to 30 megapixels per second. The video implementation in the i.MX31 processor is the result of a smart trade-off between performance and flexibility. With a VFP co-processor and L2 cache, the i.MX31 is designed for any wireless device running computationally intensive multimedia applications such as digital video broadcast and videoconferencing. The i.MX31 has many multimedia highlights, including up to 60 hours of MP3 playback at 128 Kbps. It provides versatile connectivity to a variety of image sensors and display devices, as well as many peripherals and expansion ports for devices such as MultiMediaCard, Flash cards, SDIO, Memory Stick PRO, and HDDs. The synchronization speed is up to 480 Mbps. Image capture in the i.MX31 can reach up to 30 megapixels per second, supporting VGA at 30+ fps in real time, 3 megapixels at 10 fps, and 16 megapixels for still picture capture. Image and video processing is very power efficient in the i.MX31. In particular, pre- and post-processing is performed fully in hardware, and the viewfinder, with up to 37 hours of operation, does not involve the ARM CPU.
The i.MX31 supports simultaneous MPEG-4 SP video encoding and decoding at up to VGA at 30 fps and 3 Mbps. Encoding is accelerated in hardware (approximately 1300 MHz of equivalent ARM11 performance), and decoding is performed in software. Pre- and post-processing is performed fully in hardware, adding considerable processing power to the system (approximately 1200 MHz of equivalent ARM11 performance). Pre- and post-processing includes functions such as resizing, inversion, rotation, de-blocking, de-ringing, blending, and color space conversion. The i.MX31 supports six hours of real-time video decoding and playback at VGA and 30 fps. Other features of MPEG-4 video decoding include hardware-accelerated post-filtering for MPEG-4 and hardware-accelerated in-loop de-blocking for H.264. The i.MX31 supports real-time video decode in the following advanced formats: MPEG-4 Simple Profile (SP), H.264, Windows Media Video (WMV), RealVideo (RV), MPEG-2, and DivX. Video conference calling is supported on the i.MX31 at up to VGA at 30 fps and 1 Mbps.
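A back-of-the-envelope check of the encoding figures quoted above (VGA at 30 fps compressed to about 3 Mbps), assuming a YUV 4:2:0 source at 12 bits per pixel, which is typical for MPEG-4 SP:

```python
# Raw VGA video in YUV 4:2:0 (12 bits/pixel) at 30 frames per second:
raw_bps = 640 * 480 * 12 * 30   # uncompressed bits per second
mpeg4_bps = 3_000_000           # the quoted 3 Mbps encoded rate

print(raw_bps)               # 110592000 (about 110 Mbps raw)
print(raw_bps // mpeg4_bps)  # roughly 36x compression
```

The encoder is therefore squeezing the camera stream by well over an order of magnitude, which is why doing it in hardware (rather than burning ARM11 cycles) matters so much for power.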

[Diagram: the video processing chain. Capture path: image sensor, then Bayer format conversion and YUV quality enhancement (performed by the camera's image signal processing or ARM11 software), then image conversion (IPU), then compression (MPEG-4 encoder) and combining with audio (ARM11 software), out to memory or a communication network, with a tap to the viewfinder window. Playback path: decompression and separation from audio, then post-filtering, then image conversion to RGB for display.]

Let's examine the video processing chain and its implementation. Images are captured by a camera and input directly to the Image Processing Unit (IPU) via the sensor interface. The IPU performs some very processing-intensive image manipulations, adding considerable processing power to the system: approximately 1200 MHz of equivalent ARM11 performance. The IPU includes all the functionality required for image processing and display management. It allows a camera preview function to be performed fully in hardware, allowing the CPU to be powered down in this stage. It performs post-filtering for MPEG-4, including de-blocking and de-ringing, and it also performs in-loop de-blocking for H.264 as specified in that standard. Video and graphics can be combined, with transparency specified by a key color, a global alpha value, or per-pixel alpha values interleaved with the pixel components. With regard to image conversion, the IPU provides a fully flexible resizing ratio, essentially between any two resolutions. Pixel format conversion features include fully flexible conversion coefficients, color space conversion, and color adjustments. Other functions include filtering; 90-, 180-, and 270-degree rotation; and horizontal/vertical inversion. The pre-processor is part of the IPU; it resizes the data and performs color space conversion. The pre-processor can send data to a small viewfinder display, which provides visual feedback to the user to ensure that the desired data is being captured. The pre-processor then sends data to the MPEG-4 encoder, which performs data compression according to the MPEG-4 video standard.
The encoded data can be stored to file or sent to a communication network for later retrieval and playback. Later, when the user wants to view the recorded video, the encoded data is retrieved and passed through the MPEG-4 decoder, which decompresses the data. The decompressed data is then sent to the post-processing module for quality enhancement, image resizing, and color space conversion. The data is then viewable on a display such as an LCD or TV monitor.
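The color space conversion step in playback post-processing maps decoded YUV pixels to RGB for the display. Here is a plain sketch using the textbook BT.601 full-range equations; the hardware uses fully flexible conversion coefficients, so these constants are just the common defaults, not the device's fixed values.

```python
def yuv_to_rgb(y, cb, cr):
    """BT.601 full-range YCbCr to RGB: the kind of conversion the
    post-processing step performs in hardware before display."""
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, round(v)))
    return clamp(r), clamp(g), clamp(b)

# Chroma at its midpoint (128) means zero color difference, so any
# luma value maps to a neutral gray of the same intensity:
print(yuv_to_rgb(128, 128, 128))  # (128, 128, 128)
```

Each output pixel is a small fixed set of multiply-accumulates, which is why the conversion is cheap to pipeline in hardware alongside resizing and blending.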

Video processing:
- Pre/post-processing: performed fully in hardware; includes resizing, rotation and inversion, color conversion, de-blocking, de-ringing, and blending with graphics.
- Encoding: MPEG-4 SP, fully HW accelerated; high performance, up to VGA @ 30 fps, with image quality not compromised; very power efficient, leaving the CPU totally free to perform other tasks; sufficient for most purposes, since MPEG-4 SP is used for video conferencing and is supported by most video players; other standards are left to software.
- Decoding: post-filtering (de-blocking and de-ringing) is HW accelerated, providing significant acceleration; for H.264, the most processing-intensive standard, the de-blocking filter is HW accelerated; other standards are implemented in software, enabling full flexibility to support a variety of algorithms and future extensions, which is made possible by the powerful ARM11 MCU and multilevel cache system.

The i.MX31 has built-in pre- and post-processing in hardware that includes all the functionality required for image processing and display management, including de-blocking, de-ringing, color space conversion, independent horizontal and vertical resizing, blending of graphics and video planes, and rotation in parallel with video decoding. For video encoding, the MPEG-4 SP and H.263 baseline formats are fully hardware accelerated, supporting resolutions up to VGA at 30 fps. This achieves a high degree of power efficiency and frees the CPU to perform other tasks. It is sufficient for most purposes, as video conferencing and most video players use MPEG-4 SP. Software performs the encoding for other video standards. Based on a mixture of software and hardware, this implementation provides the greatest flexibility to support a variety of algorithms and future extensions. The advanced ARM11 instruction set and multilevel cache system optimize the software implementation.
For MPEG-4, hardware accelerates the post-filtering (de-blocking and de-ringing), which results in a 75 percent load reduction on the ARM11 core. For the H.264 baseline format, the most processing-intensive format, hardware also performs the de-blocking filter, which provides a 30 percent acceleration improvement. Software implements the other standards, which enables full flexibility to support a variety of algorithms and future extensions. The powerful ARM11 processor, including its multilevel cache system, provides the flexibility to decode any currently relevant format at a high rate (up to HVGA at 30 fps), as well as possible future extensions.
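The independent horizontal and vertical resizing mentioned above can be illustrated with the simplest possible scheme, nearest-neighbor sampling. The real hardware applies proper filtering; this sketch only shows the coordinate mapping a resizer performs, with the image stored as a flat row-major list.

```python
def resize_nearest(src, src_w, src_h, dst_w, dst_h):
    """Nearest-neighbor resize: map each destination pixel back to the
    closest source pixel. Horizontal and vertical ratios are independent."""
    out = []
    for dy in range(dst_h):
        for dx in range(dst_w):
            sx = dx * src_w // dst_w   # horizontal mapping
            sy = dy * src_h // dst_h   # vertical mapping
            out.append(src[sy * src_w + sx])
    return out

# Downscale a 4x2 image to 2x1: both ratios differ (4:2 and 2:1).
src = [1, 2, 3, 4,
       5, 6, 7, 8]
print(resize_nearest(src, 4, 2, 2, 1))  # [1, 3]
```

Because the two ratios are computed separately, any source resolution can map to any destination resolution, which is the "fully flexible ratio" the hardware advertises (with better filtering than this sketch).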

[Diagram: the IPU at the center of the i.MX31, connecting cameras, displays, a graphics accelerator, and TV encoders/decoders to the CPU complex (ARM11), the MPEG-4 encoder, and memory via the EMI.]

As you saw earlier, the IPU is at the heart of the video processing chain. It offers an integrative approach, including all the functionality required for image processing and display management. The IPU supports connectivity to a wide range of external devices, including cameras, displays, graphics accelerators, and TV encoders and decoders. To support all these devices, the IPU has a synchronous interface and an asynchronous interface. The synchronous interface is for the transfer of display data in synchronization with the screen refresh cycle. This interface is for memory-less displays and TV encoders, and it also transfers video to smart displays that have a video port. The asynchronous interface is for random read/write access to the memory and registers of smart displays and graphics accelerators. The data bus is 18 bits wide (or less), and it can transfer pixels of up to 24-bit color depth. The interface with cameras and TV decoders is much more systematic than the interface with displays and requires much less flexibility. The interface receives one data sample per bus cycle, with 8 to 16 bits per sample. There is one exception, a nibble mode, in which 8-bit samples are received through a 4-bit bus, each over two cycles. Synchronization signals (Vsync, Hsync) are either embedded in the data stream, following the BT.656 protocol, or transferred through dedicated pins. The main pixel formats are YUV (4:4:4 or 4:2:2) and RGB. Any other format, such as Bayer or JPEG, can be received as generic data, which is transferred without modification to system memory.

[Diagram: IPU layout. Sensor port: interface to smart image sensors, raw image sensors, and camera flash support. Video processing: de-blocking and de-ringing, resizing, color conversion, combining with graphics, and inversion and rotation. Display port: interface to a smart/memory-less display, a TV encoder, and a graphics accelerator. Also shown: synchronization and control logic, two AHB master ports and an AHB slave port to system memory, and an IP port to the ARM11.]

The IPU is equipped with powerful control and synchronization capabilities to perform its tasks with minimal involvement of the ARM CPU. The integrated DMA controller (with two AHB master ports) allows autonomous access to system memory. An integrated display controller performs screen refresh of memory-less displays. A page-flip double buffering mechanism synchronizes read and write accesses to system memory to avoid tearing. The IPU also offers internal synchronization. Here you can see the layout of the IPU. The sensor port provides the interface to smart image sensors, raw image sensors, and camera flash support. Video processing provides de-blocking and de-ringing, resizing, color conversion, combining with graphics, and inversion and rotation. The display port provides the interface to a smart/memory-less display, a TV encoder, and a graphics accelerator. With the ARM platform powered down, the IPU performs the following activities completely autonomously: screen refresh of a memory-less display, periodic update of the display buffer in a smart display, and display of a viewfinder window. When the system is idle, the user may want to display on the screen a changing image such as an animation or a running message. In the i.MX31, this can be performed automatically: the CPU stores in system memory all the data to be displayed, and the IPU performs the periodic display update without further CPU intervention. Integration, combined with internal synchronization, avoids unnecessary access to system memory, reducing the load on the memory bus and power consumption.
In particular, input from a smart sensor (in YUV or RGB pixel formats) can be processed on the fly before being stored in system memory, and output to a smart display can be processed on the fly while being read from system memory. In some cases, input from a sensor can be sent directly to a display without passing through system memory at all. The integrative approach enables efficient hardware design in which the hardware is reused whenever possible for different applications. For example, the DMA controller is used for video capture, image processing and data transfer to display. In addition, the image conversion hardware is used both for captured video (from camera) and for video playback (from memory).

Question: Let's review the functions of the components of the IPU. Label the components in the diagram to show that you recognize the function of each. Drag the letters from the left to the corresponding positions on the right. Click Done when you are finished.
A. Interface to smart image sensors, raw image sensors, and camera flash support
B. Autonomous access to system memory
C. Interface to a smart/memory-less display, a TV encoder, and a graphics accelerator
Correct: the sensor port (A) is the interface to smart image sensors, raw image sensors, and camera flash support; the two AHB master ports (B) provide autonomous access to system memory; and the display port (C) is the interface to a smart/memory-less display, a TV encoder, and a graphics accelerator.

MPEG-4 encoding in hardware:
- IPU processing: for compression, de-interleaving; for display (viewfinder), color conversion and combining with graphics; for both (independently), resizing, inversion, and rotation
- Encoder processing: motion estimation, DCT and quantization; inverse quantization, IDCT, and motion compensation; scan, run-length coding, and Huffman coding; rate control
- ARM processing: MPEG-4 stream forming

[Diagram: camera into the i.MX31 (IPU, ARM11 CPU, MPEG-4 encoder, EMI) and out to memory, showing the MPEG-4 stream, the VLC-encoded frame, the reference frame buffer, the video input double buffer, and the graphics overlay double buffer.]

Here you can see how data flows for video capture using MPEG-4 encoding. IPU processing takes care of de-interleaving for compression; color conversion and combining with graphics for display (viewfinder); and resizing, inversion, and rotation for both compression and display (independently). Next, the encoder performs motion estimation, discrete cosine transform (DCT) and quantization, inverse quantization, inverse DCT (IDCT) and motion compensation, scan, run-length coding and Huffman coding, and rate control. Finally, the ARM takes care of MPEG-4 stream forming. The video encoding hardware accelerator of the i.MX31 processor supports MPEG-4 SP (all levels) and H.263 baseline, and it enables pixel rates up to VGA at 30 fps and compressed bit rates up to 4 Mbps. This adds up to 1300 MHz of equivalent ARM11 performance. Two methods can detect that the encoding of one frame is finished: either poll register 1 or catch the interrupt signal (IP Indigo IF). The VGA MPEG-4 encoder in the i.MX31 has motion estimation capabilities with a motion vector length of up to 32 pixels. VGA MPEG-4 encoding also includes error resilience tools as defined in the MPEG-4 standard. Additional features of the VGA MPEG-4 encoder include pre-processing for picture smoothing using a low-pass filter and camera movement stabilization, both of which are patented technologies.
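The encoder's "scan, run-length coding" step exploits the fact that after the DCT and quantization, most high-frequency coefficients are zero. Here is a toy run-length coder over an already-scanned coefficient list; real MPEG-4 scans the 8x8 block in zigzag order and Huffman-codes (run, level) events, so this only illustrates the principle.

```python
def run_length(coeffs):
    """Encode a scanned coefficient list as (zero_run, level) pairs,
    as done after DCT and quantization; trailing zeros collapse into
    a single end-of-block (EOB) marker."""
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1          # count zeros preceding the next level
        else:
            pairs.append((run, c))
            run = 0
    pairs.append("EOB")       # the trailing zeros need no pairs at all
    return pairs

# A typical quantized block after scanning: a large DC value, a few
# low-frequency AC values, then a long run of zeros.
print(run_length([42, 0, 0, -3, 1, 0, 0, 0]))
# [(0, 42), (2, -3), (0, 1), 'EOB']
```

Eight coefficients shrink to three pairs plus a marker; on real blocks, where 50 or more of the 64 coefficients quantize to zero, this is where most of the compression in the entropy-coding stage comes from.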

Video playback (H.264):
- ARM processing: decoding, except in-loop de-blocking
- IPU processing: in-loop de-blocking, resizing, color conversion, combining with graphics, inversion, and rotation

[Diagram: H.264 stream and reference frame buffer in memory, via the EMI, to the ARM11 CPU for decoding and to the IPU for in-loop de-blocking and post-processing, with a video output double buffer and a graphics overlay double buffer.]

Here you can see the data flow of video playback using H.264 decoding. ARM processing takes care of decoding, except in-loop de-blocking. IPU processing takes care of in-loop de-blocking, resizing, color conversion, combining with graphics, inversion, and rotation.

Module Summary
- Imagination PowerVR MBX-Lite: high-performance 3D graphics with less power and bandwidth than traditional architectures
- Three graphics software APIs: OpenGL ES, Direct3D Mobile, and M3G/JSR-184
- i.MX31 processor multimedia capabilities: power-efficient image and video processing; simultaneous MPEG-4 SP video encoding and decoding; real-time video decode in advanced formats; image capture of up to 30 megapixels per second

In this module, you learned about the features and functions of the Imagination PowerVR MBX-Lite hardware core, which provides high-performance 3D graphics for less power and bandwidth than many traditionally architected accelerators. You also learned about the three graphics software APIs: OpenGL ES, Direct3D Mobile, and M3G/JSR-184. Next, you examined the multimedia capabilities of the i.MX31 processor, which include power-efficient image and video processing, simultaneous MPEG-4 SP video encoding and decoding, real-time video decode in advanced formats, and image capture of up to 30 megapixels per second. Finally, you learned about the features of the IPU.