Digital Television DVB-C


Digital Television DVB-C
- Digital headend series of encoders, MUX, scramblers and modulators for both SD and HD
- Integrated digital headend: one carrier in one RU
- CAS / SMS: Conditional Access System / Subscriber Management System
- EPG: Electronic Program Guide
- Pay-per-view (PPV)
- Remote CAS
- STBs: cable, terrestrial, satellite, and MMDS, both SD and HD

Overview
Purpose: to gain a basic understanding of MPEG video transmission systems.
Objectives:
- To understand the basics of the MPEG protocol
- To learn about the MPEG frame types and elementary streams
- To understand PTS, DTS, and PCR
- To understand the PSI table structure

What does MPEG stand for?

Introduction
In 1988, the ISO Moving Picture Experts Group (MPEG) began developing a set of coding (audio and video) and transport (multiplexing and synchronization) standards for the efficient storage and delivery of video, audio and data. These standards (with additional extensions by the DVB and ATSC organizations) have pushed MPEG into cable systems everywhere. MPEG offers:
- Superior video quality at substantial bandwidth savings
- Increased utilization of satellite and cable distribution networks
- Cost savings for storage and retrieval of programming
- Further savings due to MPEG standardization and interoperability

MPEG-1
MPEG-1 Audio and Video (first started in July 1989), ideally 500 kbps to 1.5 Mbps.
IS 11172-1 (Systems): describes synchronization and multiplexing of video and audio.
IS 11172-2 (Video): describes compression of non-interlaced video signals.
IS 11172-3 (Audio): describes compression of audio signals using high-performance coding.
The MPEG-1 standard, established in 1992, is designed to produce reasonable-quality images and sound at low bit rates. For audio, MPEG-1 specifies a family of three coding schemes, simply called Layer-1, -2 and -3, with increasing encoder complexity and performance (sound quality per bit rate). MPEG-1 Layer-3 is more popularly known as MP3 and has revolutionized the digital music domain. MPEG-1 is intended to fit the bandwidth of CD-ROM, Video-CD and CD-i. MPEG-1 video usually comes in Standard Interchange Format (SIF), 352x240 pixels at 1.5 megabits per second, a quality level about on par with VHS. MPEG-1 can be encoded at bit rates as high as 4-5 Mbps, but its strength is a high compression ratio with relatively high quality. MPEG-1 is also used to transmit video over ADSL, for VOD, video kiosks and corporate presentations, and as audio only (MP3).

MPEG-2
MPEG-2 Audio and Video (first started in July 1991), 2-8 Mbps for SDTV, up to 19 Mbps for HDTV.
Established in 1994, MPEG-2 is designed to produce higher-quality images at higher bit rates. MPEG-2 streams at the lower MPEG-1 bit rates won't look as good as MPEG-1. But at its specified bit rates of 3-10 Mbps, MPEG-2 at the full CCIR-601 NTSC resolution of 720x480 pixels delivers true broadcast-quality video. MPEG-2 was engineered so that any MPEG-2 decoder will play back an MPEG-1 stream, ensuring an upgrade path for users who enter into MPEG with lower-priced MPEG-1 encoding hardware. The primary users of MPEG-2 are broadcast and cable companies who demand broadcast-quality digital video and use satellite transponders and cable networks for delivery of cable television and direct broadcast satellite. It is also the standard specified for DVD encoding.

MPEG-3
MPEG-3 was initially intended to cover HDTV, providing larger sampling dimensions and bit rates between 20 and 40 Mbps. It was later found that MPEG-2 could be used to cover the requirements of HDTV, so the MPEG-3 standard was dropped.

MPEG-4
MPEG-4 [H.264/AVC] (first started in July 1995), 384 kbps to 8+ Mbps.
The MPEG-4 standard was initiated in 1995 and finalized at the end of 1998. It was initially specified for very low bit rates but now supports up to 8+ Mbps. MPEG-4 is designed for use in broadcast, interactive and conversational environments. The way MPEG-4 is built allows it to be used in television and Web environments simultaneously, not just one after the other, and also facilitates integration of content coming from both channels in the same multimedia scene. MPEG-4 is based on the MPEG-1 and MPEG-2 standards and on VRML (Virtual Reality Modeling Language). MPEG-4 adds the following to MPEG-1, MPEG-2 and VRML:
- Support for 2D and 3D content
- Support for several types of interactivity
- Coding at very low rates (2 kbps for speech, 5 kbps for video) up to very high ones (5 Mbps for transparent-quality video, 64 kbps per channel for CD-quality audio)
- Standardized Digital Rights Management signaling, known in the MPEG community as Intellectual Property Management and Protection (IPMP)
- Native support for natural content and real-time streamed content, using URLs
- Several forms of support for networks whose bandwidth is unknown at the time of encoding
AAC (Advanced Audio Coding) was standardized as an adjunct to MPEG-2 (as Part 7) before MPEG-4 was issued.

MPEG-4 Parts
MPEG-4 consists of several standards, termed "parts", including the following:
Part 1 (ISO/IEC 14496-1), Systems: describes synchronization and multiplexing of video and audio, e.g. the transport stream.
Part 2 (ISO/IEC 14496-2), Visual: a compression codec for visual data (video, still textures, synthetic images, etc.). One of the many "profiles" in Part 2 is the Advanced Simple Profile (ASP).
Part 3 (ISO/IEC 14496-3), Audio: a set of compression codecs for perceptual coding of audio signals, including some variations of Advanced Audio Coding (AAC) as well as other audio/speech coding tools.
Part 4 (ISO/IEC 14496-4), Conformance: describes procedures for testing conformance to other parts of the standard.
Part 5 (ISO/IEC 14496-5), Reference Software: provides software for demonstrating and clarifying the other parts of the standard.
Part 6 (ISO/IEC 14496-6), Delivery Multimedia Integration Framework (DMIF).
Part 7 (ISO/IEC 14496-7), Optimized Reference Software: provides examples of how to make improved implementations (e.g., in relation to Part 5).
Part 8 (ISO/IEC 14496-8), Carriage on IP networks: specifies a method to carry MPEG-4 content on IP networks.
Part 9 (ISO/IEC 14496-9), Reference Hardware: provides hardware designs for demonstrating how to implement other parts of the standard.
Part 10 (ISO/IEC 14496-10), Advanced Video Coding (AVC): a codec for video signals, technically identical to the ITU-T H.264 standard.
Part 11 (ISO/IEC 14496-11), Scene description and application engine, also called BIFS: can be used for rich, interactive content with multiple profiles, including 2D and 3D versions.
Part 12 (ISO/IEC 14496-12), ISO Base Media File Format: a file format for storing media content.
Part 13 (ISO/IEC 14496-13), Intellectual Property Management and Protection (IPMP) Extensions.
Part 14 (ISO/IEC 14496-14), MPEG-4 File Format: the designated container file format for MPEG-4 content, based on Part 12.
Part 15 (ISO/IEC 14496-15), AVC File Format: for storage of Part 10 video, based on Part 12.
Part 16 (ISO/IEC 14496-16), Animation Framework extension (AFX).
Part 17 (ISO/IEC 14496-17), Timed Text subtitle format.
Part 18 (ISO/IEC 14496-18), Font Compression and Streaming (for OpenType fonts).
Part 19 (ISO/IEC 14496-19), Synthesized Texture Stream.
Part 20 (ISO/IEC 14496-20), Lightweight Application Scene Representation (LASeR).
Part 21 (ISO/IEC 14496-21), MPEG-J Graphical Framework extension (GFX) (not finished; at "FCD" stage in July 2005, FDIS January 2006).
Part 22 (ISO/IEC 14496-22), Open Font Format Specification (OFFS), based on OpenType (not finished; reached "CD" stage in July 2005).
Part 23 (ISO/IEC 14496-23), Symbolic Music Representation (SMR) (not yet finished; reached "FCD" stage in October 2006).
Profiles are also defined within the parts, so an implementation of a part is not always an implementation of the entire part.

MPEG Compression
MPEG offers superior video/audio quality at significant bandwidth savings. Digitizing a standard U.S. movie without any compression produces a transport stream of about 168 Mbps. Using 256 QAM modulation, it would take 5 separate 6 MHz channels to transmit one movie. Using an MPEG-2 transport stream, this bit rate can be reduced to a 3-4 Mbps elementary stream (a roughly 50:1 reduction) without significantly affecting perceived video quality. This allows an MSO to transmit this movie plus 9 or more movies in just one 256-QAM 6 MHz channel.
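The arithmetic above can be checked with a short sketch. The exact source parameters are an assumption here: 8-bit 4:2:2 CCIR-601 video at 30 fps, which is roughly where the 168 Mbps figure comes from.

```python
import math

# Back-of-the-envelope check of the figures above (illustrative only).
# Assumed source: CCIR-601 720x480, 30 fps, 4:2:2 at 8 bits per sample,
# i.e. 16 bits per pixel (8 luma + 8 chroma on average).
width, height, fps, bits_per_pixel = 720, 480, 30, 16
uncompressed_bps = width * height * fps * bits_per_pixel
print(uncompressed_bps / 1e6)          # ~166 Mbps, close to the 168 Mbps quoted

qam256_bps = 38.8e6                    # one 256-QAM, 6 MHz channel
channels_needed = math.ceil(uncompressed_bps / qam256_bps)
print(channels_needed)                 # 5 channels for one uncompressed movie

compressed_bps = 4e6                   # ~3-4 Mbps MPEG-2 elementary stream
movies_per_channel = int(qam256_bps // compressed_bps)
print(movies_per_channel)              # 9-10 movies in a single channel
```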

Analog versus Digital
In the world of analog, video is nothing more than a collection of still pictures which, when presented in quick succession on a television, appear to be fluid motion. Video is captured at 30 pictures (called frames) per second and presented at the same rate. The world of digital works much the same way. A stream of 1s and 0s (bits) is sent to a decoder, which reassembles them to again create 30 frames per second (29.97 to be exact). The difference is in the bits: each point (pixel) in the picture must be represented by these bits, which happens when the initial pictures are coded. Factoid: movies viewed at the movie theater are projected at only 24 frames per second.

Intra-frame Coding
In this technique, a frame is encoded using only information from within that frame itself. Typical images contain large areas in which adjacent samples are spatially correlated. Macroblocks are 16x16-pixel areas of the original image (a pixel being an individual point in a picture). A macroblock usually consists of 4 Y blocks, 1 Cr block, and 1 Cb block. Research into the Human Visual System (HVS) has shown that the eye is most sensitive to changes in luminance and less sensitive to chrominance variations. Since compression is key, MPEG operates on a color space that can effectively take advantage of the eye's different sensitivity to luminance and chrominance information. As such, MPEG uses the YCbCr color space to represent the data values instead of RGB, where Y is the luminance signal, Cb is the blue color difference signal, and Cr is the red color difference signal.
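The RGB-to-YCbCr conversion alluded to above can be sketched as follows. The coefficients are the standard full-range BT.601-derived ones (as used in JPEG/JFIF); MPEG encoders use the same luma weights, though studio-range scaling differs slightly, so treat this as illustrative.

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range RGB (0-255) to YCbCr (0-255), BT.601 luma weights.
    Y carries luminance; Cb/Cr carry the color-difference signals that
    MPEG subsamples more aggressively than Y."""
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr(255, 255, 255))  # white -> (255, 128, 128): all luma, neutral chroma
print(rgb_to_ycbcr(0, 0, 0))        # black -> (0, 128, 128)
```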

Intra-frame Coding Conclusion
The blocks within the macroblocks described previously are compressed using a mathematical technique known as the Discrete Cosine Transform (DCT). A macroblock can be represented in several different ways in the YCbCr color space. The 3 formats used are known as 4:4:4, 4:2:2, and 4:2:0 video. 4:4:4 is full-bandwidth YCbCr video; each macroblock consists of 4 Y blocks, 4 Cb blocks, and 4 Cr blocks. Being full bandwidth, this format contains as much information as the data would in the RGB color space. 4:2:2 contains half as much chrominance information as 4:4:4, and 4:2:0 contains one quarter of the chrominance information. Although MPEG-2 has provisions to handle the higher chrominance formats for professional applications, most cable and satellite feeds use the normal 4:2:0 mode.
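A quick sanity check of the chroma-format arithmetic above, using the per-macroblock block counts just described (each 8x8 block holds 64 samples):

```python
# Samples per 16x16 macroblock for the three chroma formats above.
# Tuples are (Y blocks, Cb blocks, Cr blocks) per macroblock.
blocks = {"4:4:4": (4, 4, 4), "4:2:2": (4, 2, 2), "4:2:0": (4, 1, 1)}
samples = {fmt: 64 * (y + cb + cr) for fmt, (y, cb, cr) in blocks.items()}
for fmt, total in samples.items():
    print(fmt, total, "samples per macroblock")
# 4:2:0 carries 128 chroma samples vs. 512 for 4:4:4, i.e. the
# "one quarter of the chrominance information" mentioned above.
```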

Macroblocks in review
4:4:4: no chroma subsampling; each pixel has Y, Cr and Cb values.
4:2:2: Cr and Cb signals horizontally subsampled by a factor of 2.
4:1:1: horizontally subsampled by a factor of 4.
4:2:0: subsampled in both the horizontal and vertical dimensions by a factor of 2. Theoretically, the chroma pixel is positioned between the rows and columns as shown in the figure.
4:1:1 and 4:2:0 are mostly used in JPEG and MPEG.

Inter-frame Coding
Intra-frame coding techniques are limited to processing the video signal relative only to information within the current video frame. Considerably more compression efficiency can be obtained, however, if time-based redundancies are exploited as well. With televisions displaying pictures at 30 frames per second, very little information changes from frame to frame in the span of a second or less, provided no scene changes occur. Temporal processing to exploit this redundancy uses a technique known as block-based motion compensated prediction, using motion estimation. Starting with an I frame, the encoder can forward predict a future frame, commonly referred to as a P frame; P frames may also be predicted from other P frames, although only in a forward-time manner. As an example, consider a group of pictures that lasts for 6 frames. In this case, the frame ordering is I,P,P,P,P,P,I,P,P,P,P,P. Each P frame in this sequence is predicted from the frame immediately preceding it, whether that is an I frame or a P frame. As a reminder, I frames are coded spatially with no reference to any other frame in the sequence.
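The block-based motion estimation mentioned above can be sketched in miniature: for one block of the current frame, find the best-matching block in the previous frame (minimum sum of absolute differences), so that only a motion vector and a small residual need to be coded. The frame data, block size and search window here are toy assumptions; real encoders work on 16x16 macroblocks with much faster search strategies.

```python
def sad(cur, ref, bx, by, dx, dy, n=4):
    """Sum of absolute differences between the nxn block at (bx,by) in
    cur and the block displaced by (dx,dy) in ref."""
    return sum(abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(n) for i in range(n))

def motion_search(cur, ref, bx, by, search=2, n=4):
    """Exhaustive search in a small window; returns (cost, dx, dy)."""
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if 0 <= bx + dx <= len(ref[0]) - n and 0 <= by + dy <= len(ref) - n:
                cost = sad(cur, ref, bx, by, dx, dy, n)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
    return best

# Previous frame: a bright 4x4 square at (1,1); current frame: the same
# square moved to (3,1), i.e. two pixels to the right.
ref = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for j in range(4):
    for i in range(4):
        ref[1 + j][1 + i] = 200
        cur[1 + j][3 + i] = 200

cost, dx, dy = motion_search(cur, ref, 3, 1)
print((dx, dy), cost)  # best match at displacement (-2, 0) with zero residual
```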

Statistical Redundancy
Statistical redundancy (entropy or Huffman coding): statistical methods are used to code the most frequently occurring sequences with fewer bits (pulled from a lookup table) than those which are less likely. Factoid: in 1951, David Huffman and his MIT information theory classmates were given the choice of a term paper or a final exam. The professor, Robert M. Fano, assigned a term paper on the problem of finding the most efficient binary code. Huffman, unable to prove any codes were the most efficient, was about to give up and start studying for the final when he hit upon the idea of using a frequency-sorted binary tree, and quickly proved this method the most efficient.
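A minimal Huffman coder illustrates the idea above: frequent symbols get shorter codes. This is the standard textbook heap construction, not MPEG's actual tables; MPEG uses fixed, pre-computed variable-length code tables rather than building a tree per stream.

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code for the symbols of `text` (needs >= 2 symbols)."""
    freq = Counter(text)
    # Heap entries: (frequency, unique tiebreak, {symbol: code-so-far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        # Merge the two least-frequent subtrees, prefixing their codes.
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman_codes("aaaabbc")
print(sorted(codes.items()))  # 'a', the most frequent symbol, gets the 1-bit code
```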

MPEG Video Frames in review
I Frames: intra-coded only. A reference frame for future predictions. Moderate compression (on the order of 10:1); limits the propagation of transmission errors; supports random access and fast forward/fast reverse.
P Frames: forward prediction from either previous I frames or previous P frames. A reference for future P or B frames. Good compression savings (20:1).
B Frames: bi-directional interpolated prediction from two sources: previous reference I or P frames (forward prediction) and future reference I or P frames (backwards prediction). Highest compression (50:1).

MPEG Video Structure
Group of pictures (GOP): the first frame of a group and all additional frames dependent on each other for prediction. The number of frames in a group is user selectable (typically 15).
Video sequence: a series of frames.

System Timing
Digital video is all about timing, with 3 sources of timing:
PCR (Program Clock Reference): inserted into the transport stream at least every 100 ms. Based upon a 27 MHz clock at the encoder. Synchronizes program timing.
DTS (Decode Time Stamp): instructs the decoder (i.e. the STB) when to decode specific frames.
PTS (Presentation Time Stamp): instructs the decoder when to present specific frames.
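How a PCR value relates to the encoder's 27 MHz clock can be sketched from the MPEG-2 Systems spec (ISO/IEC 13818-1): the PCR is carried as a 33-bit base counting in 90 kHz units plus a 9-bit extension counting the remaining 27 MHz ticks (0-299). A minimal sketch:

```python
def split_pcr(ticks_27mhz):
    """Split a 27 MHz tick count into PCR base (90 kHz units) + extension."""
    base = (ticks_27mhz // 300) % (1 << 33)   # 33-bit base, 90 kHz units
    ext = ticks_27mhz % 300                   # 9-bit residual, 0-299
    return base, ext

def join_pcr(base, ext):
    """Recombine into 27 MHz ticks, as a decoder does to drive its clock."""
    return base * 300 + ext

one_second = 27_000_000                        # 27 MHz ticks in one second
base, ext = split_pcr(one_second)
print(base, ext)  # 90000 0: PTS/DTS also count in the same 90 kHz units
```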

MPEG Audio Overview
Audio can be synchronized with video or be standalone.
MPEG-1 Audio: 2-channel (mono, dual mono or stereo), with sampling rates of 44.1 kHz (CD), 48 kHz (DAT) and 32 kHz (digital voice), and compressed bit rates of 32-192 kbps per channel.
MPEG-2 BC (Backward Compatible): multi-channel (up to 5.1), with additional lower sampling rates of 16, 22.05, and 24 kHz.
MPEG-2 AAC (Advanced Audio Coding): a very high-quality audio coding standard for 1 to 48 channels at sampling rates of 8 to 96 kHz, with multi-channel, multi-lingual, and multi-program capabilities.

Dolby AC-3 Audio
Dolby AC-3 is not part of the MPEG standard, although it can be carried in an MPEG-2 transport stream as a private data stream. It has been adopted by the ATSC (for HDTV) and North American digital cable (most STBs now support both MPEG-2 audio and Dolby AC-3), and is part of the DVD audio spec (along with MPEG-1 and -2 audio and linear PCM). The compression algorithm is proprietary in nature and must be licensed from Dolby Labs, Inc. Compressed bit rates run from 32 to 640 kbps, from mono to full 5.1 channels. Factoid: most feeds found in a cable headend or off satellite are encoded using AC-3 with a 48 kHz sampling rate at a bit rate of 192 or 384 kbps.

MPEG-2 System Multiplex
[Figure: video and audio encoders feed packetizers that produce video and audio PES; a MUX combines these with data PES into a single-program transport stream.]
Sources are made up of elementary streams. Each packetized elementary stream (PES) is encoded as a separate stream, then synchronized through a common program PCR.

MPEG Transport Streams
A transport stream is a multiplex of one or more programs, each of which may have its own independent time base. It provides a synchronization mechanism for dealing with delay variations in packet delivery, and provides for more sophisticated handling of data (private data, CA). Packets are fixed length: 188 or 204 bytes. Most North American digital cable systems are designed for 188-byte packets. Factoid: the byte size of MPEG transport packets was chosen because it aligned with DES encryption blocks and ATM payloads. The small size of the packets (188 or 204 bytes) also makes them less prone to errors during transport.
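The fixed 188-byte packet makes parsing straightforward. A sketch of reading the 4-byte header fields (layout per ISO/IEC 13818-1; the sample packet below is hand-built for illustration):

```python
def parse_ts_header(packet: bytes):
    """Decode the 4-byte header of a 188-byte transport packet."""
    assert len(packet) == 188 and packet[0] == 0x47, "lost sync"
    return {
        "transport_error": bool(packet[1] & 0x80),
        "payload_unit_start": bool(packet[1] & 0x40),
        "pid": ((packet[1] & 0x1F) << 8) | packet[2],   # 13-bit PID
        "scrambling": (packet[3] >> 6) & 0x3,
        "adaptation_field": (packet[3] >> 4) & 0x3,
        "continuity_counter": packet[3] & 0x0F,
    }

# Made-up packet: sync 0x47, payload_unit_start set, PID 0x0000 (the PAT).
pkt = bytes([0x47, 0x40, 0x00, 0x10]) + bytes(184)
hdr = parse_ts_header(pkt)
print(hdr["pid"], hdr["payload_unit_start"])  # 0 True
```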

MPEG-2 Transport Packet Overview
[Figure: each 188-byte transport packet carries a 4-byte header (sync, PID, ...) followed by payload (PCR, video/audio/data, stuffing bytes); a video or audio program elementary stream is a sequence of PES packets, each a PES header with PTS followed by encoded video or audio.]
Sync is a unique code that indicates the start of every transport packet. Note that PES packets can be split across transport packet boundaries.

MPEG Program Specific Information (PSI) Tables
The MPEG-2 standard defines a number of tables which help in the reassembly of individual elementary streams after packet transport across a network:
Program Association Table (PAT): carried on PID 0x0000; contains a list of all programs in the transport stream and their associated PID values, which are used to index into the Program Map Tables.
Program Map Table (PMT): carried on a user-assigned PID. Each program carried in the transport stream has a program map table entry associated with it. This entry contains the PID values of each of the elementary streams (video, audio, data) which make up the program.
Conditional Access Table (CAT): carried on PID 0x0001. If any of the elementary streams are scrambled/encrypted, a CAT entry is usually present. It provides the PID values for the transport packets that contain the conditional access management and entitlement information.
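Walking a PAT to map program numbers to PMT PIDs can be sketched as below. The section layout follows ISO/IEC 13818-1; the input bytes are a hand-built example, and the CRC32 is a placeholder that this sketch does not verify.

```python
def parse_pat(section: bytes):
    """Map program_number -> PMT PID from one PAT section."""
    assert section[0] == 0x00, "table_id of a PAT is 0x00"
    section_length = ((section[1] & 0x0F) << 8) | section[2]
    programs = {}
    # 4-byte program entries start after the 8-byte fixed header and
    # run up to the 4-byte CRC32 at the end of the section.
    pos, end = 8, 3 + section_length - 4
    while pos < end:
        program_number = (section[pos] << 8) | section[pos + 1]
        pid = ((section[pos + 2] & 0x1F) << 8) | section[pos + 3]
        programs[program_number] = pid  # program 0 would be the network PID
        pos += 4
    return programs

# Example PAT: one program (number 1) whose PMT lives on PID 0x0100.
pat = bytes([0x00, 0xB0, 0x0D,                  # table_id, flags + section_length
             0x00, 0x01, 0xC1, 0x00, 0x00,      # ts_id, version, section numbers
             0x00, 0x01, 0xE1, 0x00,            # program 1 -> PID 0x0100
             0x00, 0x00, 0x00, 0x00])           # CRC32 placeholder (unchecked)
print(parse_pat(pat))  # {1: 256}
```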

DVB Service Information (SI)
The Digital Video Broadcasting (DVB) consortium has extended the basic MPEG PSI tables (in EN 300 468 and other documents) to support a services model and additional data types such as teletext, subtitles and EPGs (Electronic Program Guides). DVB streams are typically found in Europe and other countries outside North America. The extensions include:
Network Information Table (NIT): carried on PID 0x10; contains private data detailing items such as channel frequencies, satellite transponder details, modulation characteristics, service origin, service name and details of any alternative networks. The PAT (program number 0) also indicates the PID for the NIT.
Service Description Table (SDT): carried on PID 0x11; contains data describing the services in the system.
Event Information Table (EIT): carried on PID 0x12; contains data concerning events or programmes (e.g. names, start times, durations) which is used to construct the electronic program guide.

Required PSIP Tables

MPEG Transport Multiplex
[Figure: an example MPTS (Multiple Program Transport Stream). Several SPTSs, each carrying a video PID, audio PIDs and a data PID, are combined with other data (teletext, CA, etc.) and with MPEG PSI plus ATSC PSIP or DVB SI tables (PAT, CAT, PMT; STT, MGT, VCT, RTT, EIT, ETT; TDT, SDT, BAT, EIT) into one MPEG-2 MPTS.]

Digital Modulation
Using a 6 MHz channel for transmission, the data rate depends on the modulation:
Using 8-VSB (off-air), the maximum data rate is 19.39 Mbps.
Using 64 QAM (cable), the maximum data rate is 26.97 Mbps.
Using 256 QAM (cable), the maximum data rate is 38.8 Mbps.
IMPORTANT: if the cumulative data rate of all programs exceeds the total maximum allowed, all pictures break up!

Typical data rates of individual programs
A program can be encoded as a CBR or a VBR program.
CBR (Constant Bit Rate): the bit rate is constant regardless of scene complexity, resulting in either wasted bandwidth or a low-quality program.
VBR (Variable Bit Rate): the bit rate constantly changes based on the complexity of each scene; quality remains constant.
Typical bit rates (everything except locally encoded material is VBR, and there are often exceptions):
Locally encoded programs: 3-6 Mbps CBR (depending on settings)
Most satellite-delivered DC-II (DigiCipher II) programs: approx. 3-4 Mbps average
Most satellite-delivered PowerVu programs: approx. 4-5 Mbps average
Sports programming: 4-6 Mbps average
Non-sports HD programs over satellite: 11-15 Mbps
Sports HD programs over satellite (and HDNet): 17-19 Mbps
HD programs broadcast off-air: 11-19 Mbps
This means that a 256-QAM 6 MHz channel (38.8 Mbps) can comfortably carry about 9-12 programs without loss of quality.
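The capacity figures above can be checked with a quick division. The average rates chosen here are illustrative picks from the ranges listed:

```python
# Rough capacity check: how many programs of a given average rate fit
# in one 256-QAM, 6 MHz channel (38.8 Mbps)?
channel_bps = 38.8e6
rates = {"SD average (~3.5 Mbps)": 3.5e6,
         "SD sports (~5 Mbps)": 5e6,
         "HD non-sports (~13 Mbps)": 13e6}
fits = {label: int(channel_bps // bps) for label, bps in rates.items()}
for label, n in fits.items():
    print(label, "->", n, "programs")
# A mixed SD lineup lands in the 9-12 program range quoted above;
# HD services fit only two or three to a channel.
```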

Optimizing your Programming
[Figure: selected programs from the original transports pass through a statistical remultiplexer to form a new transport stream.]
Statistical remultiplexing involves two functions:
1. It allows a cable operator to pick and choose only the desired services from multiple MPTSs (Multiple Program Transport Streams) and create a brand new MPTS from the chosen services.
2. Since most services are VBR (variable bit rate), their bit rates are constantly changing. However, the transport stream each service rides in is always CBR (often 27 or 38.8 Mbps), so the statistical remultiplexer must manage the bit rate of each service to avoid exceeding the transport stream bandwidth.
Managing service bit rates (transrating) is done through several methods:
- Stripping out null padding from encoder-generated Constant Bit Rate (CBR) streams. This method does not affect picture quality.
- Time shifting. Because the transport streams are being buffered, recreating the mux allows the opportunity to manage the buffers to ensure no two services peak in bit rate at the same moment. This method does not affect picture quality.
- Re-quantizing the video as required. This is usually achieved through proprietary algorithms.
These methods can often achieve a 25-40% reduction in bit rate without significant loss in quality, depending on the original content.
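The first transrating method above, dropping null packets, can be sketched directly: null packets carry the reserved PID 0x1FFF (per ISO/IEC 13818-1), so removing them shrinks a padded CBR stream without touching picture quality. The input here is a hand-built toy stream.

```python
NULL_PID = 0x1FFF  # reserved PID for null (stuffing) packets

def pid_of(packet: bytes) -> int:
    """Extract the 13-bit PID from a transport packet header."""
    return ((packet[1] & 0x1F) << 8) | packet[2]

def strip_nulls(stream: bytes) -> bytes:
    """Drop every 188-byte packet whose PID is the null PID."""
    out = bytearray()
    for i in range(0, len(stream), 188):
        pkt = stream[i:i + 188]
        if pid_of(pkt) != NULL_PID:
            out += pkt
    return bytes(out)

def make_packet(pid: int) -> bytes:
    """Build a minimal 188-byte packet for the given PID (toy payload)."""
    return bytes([0x47, (pid >> 8) & 0x1F, pid & 0xFF, 0x10]) + bytes(184)

stream = make_packet(0x100) + make_packet(NULL_PID) + make_packet(0x101)
print(len(stream), "->", len(strip_nulls(stream)))  # 564 -> 376
```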

Digital TV Thank You