Multimedia. Sape J. Mullender, Huygens Systems Research Laboratory, Universiteit Twente, Enschede. 1

What is Multimedia? Videophone; CD-Interactive; one socket, many services: (video) phone, TV, video on demand, Teletext, computer, computer-supported cooperative work. 2

Good Multimedia Two classes of media: static (text, images, 3D graphics) and continuous (audio, video, sensor data). Media are integrated: the operating system supports all media, and applications can process all media. 3

Continuous Media Issues in processing continuous media: minimize latency, minimize jitter, achieve the necessary throughput. Late data is useless data. 4

Bad Data Corrupted data (transmission errors) or late data (retransmissions) is useless and will be dropped on the floor. But if the amount of data lost is small, e.g., one video frame or 20 ms of audio, it is barely noticeable. 5

Small Data Units There are two reasons for using small units of data: they reduce end-to-end latency, and the loss of an occasional unit goes unnoticed. Error recovery through retransmission is not useful: retransmitted data would arrive too late. 6

Some Numbers
Acceptable end-to-end latency for audio: 40 ms
Acceptable length of missing audio unit: 10 ms
Acceptable end-to-end latency for video: 40 ms
Acceptable length of missing video (one frame): 25 ms
Typical data rates:
Telephone-grade audio, 8-bit samples at 8 kHz: 64 kbit/s
CD-quality stereo, 2 × 16-bit samples at 44.1 kHz: 1.4 Mbit/s
Uncompressed video, 25 fps, 640 × 480, 24-bit pixels: 200 Mbit/s
JPEG-compressed video, 25 frames per second: 8 Mbit/s
MPEG-compressed video, 25 frames per second: 4 Mbit/s 7
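
The audio and raw-video rates above follow directly from the sampling parameters; the compressed rates are representative figures rather than derived ones. A minimal sketch of the arithmetic, added here purely for illustration:

```python
# Back-of-the-envelope check of the rates in the table above.

def audio_rate(sample_rate_hz, bits_per_sample, channels=1):
    """Raw PCM bit rate in bit/s."""
    return sample_rate_hz * bits_per_sample * channels

def video_rate(width, height, bits_per_pixel, fps):
    """Raw (uncompressed) video bit rate in bit/s."""
    return width * height * bits_per_pixel * fps

print(audio_rate(8_000, 8))          # 64_000      -> 64 kbit/s telephone audio
print(audio_rate(44_100, 16, 2))     # 1_411_200   -> ~1.4 Mbit/s CD stereo
print(video_rate(640, 480, 24, 25))  # 184_320_000 -> ~184 Mbit/s, rounded to 200 on the slide
# The JPEG and MPEG figures are typical compressed rates, not derivable this way.
```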

Audio and Video over ATM Networks An ATM cell carries 48 bytes of payload.
Telephone-grade audio: 6 ms/cell
CD-quality stereophonic audio: 275 µs/cell
Uncompressed full-colour video: 2 µs/cell
JPEG-compressed video, 25 frames per second: 60 µs/cell
MPEG-compressed video, 25 frames per second: 200 µs/cell 8
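
The per-cell intervals are simply the 48-byte (384-bit) payload divided by each stream's bit rate. A small sketch using the rates from the previous slide (the JPEG and MPEG entries above imply compressed rates somewhat below the 8 and 4 Mbit/s quoted earlier, so only the first three rows are recomputed here):

```python
CELL_PAYLOAD_BITS = 48 * 8  # 384 bits of payload per ATM cell

def cell_interval(bit_rate_bps):
    """Seconds of media needed to fill one cell payload."""
    return CELL_PAYLOAD_BITS / bit_rate_bps

for name, rate in [("telephone-grade audio", 64e3),
                   ("CD-quality stereo", 1.4e6),
                   ("uncompressed video", 200e6)]:
    t = cell_interval(rate)
    print(f"{name}: {t * 1e6:8.1f} µs/cell, {1 / t:>10,.0f} cells/s")
# Uncompressed video fills a cell roughly every 2 µs, i.e. about half a
# million cells per second -- the interrupt-rate problem of the next slide.
```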

Cells and Frames If cells were delivered to the host one at a time, it might have to deal with up to half a million interrupts per second. Not a good idea. ATM Adaptation Layers group cells into larger units called frames. There were originally four types of AAL: AAL1 for constant bit-rate transmission, AAL2 for variable bit-rate transmission, AAL3 for connection-oriented data, and AAL4 for connectionless data. AAL3 and AAL4 have been merged into AAL3/4. AAL5, the Simple and Efficient Adaptation Layer (SEAL), has been standardized. The need for AAL2 is unclear, so it never got standardized. 9

Buffering Necessary for jitter elimination. Smaller buffers improve latency; bigger buffers improve throughput. 10
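
To illustrate the trade-off, a minimal playout-buffer sketch (the class and its fixed playout delay are illustrative, not taken from Pegasus): each unit is held until a fixed delay after the start of the stream, so a larger delay, and hence a deeper buffer, absorbs more jitter at the cost of added end-to-end latency.

```python
import heapq

class PlayoutBuffer:
    """Minimal jitter-elimination sketch: a unit with media timestamp t is
    released at base_time + playout_delay + t.  A larger playout_delay
    (deeper buffer) absorbs more jitter but adds end-to-end latency."""

    def __init__(self, playout_delay_s):
        self.playout_delay = playout_delay_s
        self.base_time = None   # arrival time of the first unit, minus its timestamp
        self.heap = []          # (media timestamp, sequence number, unit)
        self.seq = 0

    def insert(self, arrival_time, media_ts, unit):
        if self.base_time is None:
            self.base_time = arrival_time - media_ts
        heapq.heappush(self.heap, (media_ts, self.seq, unit))
        self.seq += 1

    def pop_due(self, now):
        """Return the units whose playout time has arrived; anything that
        shows up after its playout time is late data, i.e. useless data."""
        due = []
        while self.heap and self.base_time + self.playout_delay + self.heap[0][0] <= now:
            due.append(heapq.heappop(self.heap)[2])
        return due
```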

Workstation Architecture 11

Problems with Bus-Based Architectures Processing audio and video implies very low cache hit rates; memory is a bottleneck (Pegasus File Server); the CPU is involved in all data transfers. 12

The Desk-Area Network 13

Using the Desk-Area Network Memory is no longer a central resource; the CPU manages connections, not transfers; devices can interact directly, without CPU mediation. 14

The Pegasus Project Sept 1993 to Sept 1996: Pegasus I (University of Twente and University of Cambridge). Nov 1996 to Nov 1999: Pegasus II (University of Cambridge, University of Glasgow, University of Twente, Swedish Institute of Computer Science, and APM Ltd., Cambridge). 15

Pegasus Architecture 16

ATM Camera 17

ATM Camera [Diagram: CCD array → scan/digitise → output buffers → optional compression → ATM fabric] 18

Tiles 19

ATM Display 20

ATM Display [Diagram: video frame buffer connected to the ATM fabric, CPU, and memory; data rates of 160 Mbit/s and 960 Mbit/s] 21

Display Management 22

Experience with ATM Devices All positive: simple, potentially cheap hardware; does not interfere with processors on the bus; window-management mechanisms are in hardware, while policies stay under the control of the user; integrates well with existing systems (using ATM networks); ATM will become more standardized than processor buses. 23

Multimedia and Real Time
Real-time systems: hard deadlines; bounded load; run times known (and bounded); the schedule guarantees deadlines.
Multimedia systems: soft deadlines; load not known a priori; run times often estimated; statistical guarantees at best. 24

Example Application Federico is an application that runs on Nemesis and shows off its capabilities: process multimedia streams in real time; synchronize multimedia streams; provide Quality-of-Service (QoS) control to applications; capture, transport and render multimedia streams with no discernible latencies. 25

Goal Federico is an application that controls multiple cameras in a room. Based on the camera input and the input from several microphones, Federico controls the cameras (pan/tilt/zoom) and, in a continuous process, selects one of the camera outputs for transmission to a remote site. It is Federico's goal to provide remote viewers, through the lenses of alternating cameras, with an effective overview of what is happening in the room. To achieve this goal, Federico tracks people as they move about and locates speakers in space by analysing the microphone inputs. 26

Situation 27

Federico Architecture [Diagram: four ATM audio streams → audio filtering & timing → stream of sightings → audio skeptic → better stream of sightings; camera trackers → streams of sightings → video skeptics → better streams of sightings; a geometry server answers geometry queries from the other components; Federico performs camera selection and issues camera commands to the pan/tilt/zoom controls; edges are labelled as MM stream, data stream, or RPC] 28

Face Tracking 29

Face Tracking 30

Face Tracking 31

Quality of Service Applications tell the system what resources they need as a function of the quality they can deliver. The system provides resources to applications. The allocation may change over time, but the applications are notified of these changes beforehand. Applications adapt to the resources allocated to them. 32

Adaptation Minor: frame rate, frame size, stereo/mono. Major: compression, hardware support. Continuous adaptation is hardly useful. 33

Multimedia Streams A multimedia stream is usually composed of synchronized substreams, e.g., lip-synchronized audio + video. These can be coded in a single data stream, but that makes adaptation to changing resources harder. 34

Separate Streams Pegasus encodes a set of synchronized multimedia data streams separately and adds a single control/synchronization stream. 35
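
A sketch of what separately coded sub-streams plus one synchronization stream could look like (the record layouts and names are illustrative guesses, not the actual Pegasus encoding): each sub-stream carries its own timestamped units, and the control stream maps those timestamps onto a shared presentation clock, so either sub-stream can be adapted or re-coded without disturbing the other.

```python
from dataclasses import dataclass

@dataclass
class MediaUnit:
    stream_id: int          # e.g. 0 = audio, 1 = video
    media_ts: int           # timestamp in this sub-stream's own tick units
    payload: bytes

@dataclass
class SyncRecord:           # carried on the single control/synchronization stream
    stream_id: int
    media_ts: int           # a tick on the sub-stream's clock ...
    presentation_ts: float  # ... and the matching time on the shared clock (seconds)

def presentation_time(sync: SyncRecord, unit: MediaUnit, ticks_per_second: int) -> float:
    """Map a unit's timestamp onto the shared presentation clock, relative
    to the most recent sync record seen for its sub-stream."""
    assert unit.stream_id == sync.stream_id
    return sync.presentation_ts + (unit.media_ts - sync.media_ts) / ticks_per_second
```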

How applications adapt Typically, each stream type has just a few QoS settings; audio, for example, could have 44 kHz stereo, 20 kHz mono, and 8 kHz mono. The applications have code for each setting and switch between them when the QoS settings change. Applications consist of management threads, which are scheduled conventionally, and media threads, which are scheduled periodically. 36
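
A minimal sketch of that per-setting switching for the audio example (class and method names are hypothetical): the media thread does one period of work under whatever discrete setting is current, and the management thread flips the setting when the allocator announces a change.

```python
# The three discrete audio settings named on the slide.
AUDIO_SETTINGS = {
    "hi":  {"rate_hz": 44_000, "channels": 2},
    "mid": {"rate_hz": 20_000, "channels": 1},
    "lo":  {"rate_hz": 8_000,  "channels": 1},
}

class AudioSource:
    def __init__(self, setting="hi"):
        self.setting = setting

    def qos_changed(self, new_setting):
        """Called from the management thread when the allocator announces a
        new allocation; the media thread picks it up at its next period."""
        if new_setting not in AUDIO_SETTINGS:
            raise ValueError(f"unknown setting {new_setting!r}")
        self.setting = new_setting

    def produce_period(self, period_s):
        """One period of media-thread work under the current setting."""
        cfg = AUDIO_SETTINGS[self.setting]
        n_samples = int(cfg["rate_hz"] * period_s) * cfg["channels"]
        return bytes(2 * n_samples)   # placeholder payload: 16-bit silence
```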

Resource Allocation Each application has a small number of QoS settings. Each QoS setting has a value indicating its QoS and a list of the resource quantities necessary to achieve that QoS. Typical resources: CPU bandwidth, network bandwidth, special devices (compression device, rendering device, etc.), and device bandwidth (e.g., for an ATM camera). The best resource allocation is the one that maximizes the sum of the applications' Qualities of Service. 37

Calculating Resource Allocation Typically, a machine will have no more than eight applications with, perhaps, three or four QoS settings each. Calculating the optimal resource allocation for a machine is straightforward. However, in a distributed setting, there are complicating factors. 38
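
With at most about eight applications and three or four settings each, there are no more than 4^8 = 65,536 combinations, so an exhaustive search is entirely affordable. A sketch of that search, with hypothetical data structures:

```python
from itertools import product

def best_allocation(apps, capacity):
    """apps: {app_name: [(qos_value, {resource: amount, ...}), ...]}
    capacity: {resource: amount available}
    Picks one setting per application so that no resource is oversubscribed
    and the sum of the QoS values is maximal (None if nothing fits)."""
    names = list(apps)
    best, best_value = None, float("-inf")
    for choice in product(*(apps[n] for n in names)):
        used = {}
        for _, needs in choice:
            for res, amount in needs.items():
                used[res] = used.get(res, 0) + amount
        if any(used.get(res, 0) > cap for res, cap in capacity.items()):
            continue
        value = sum(q for q, _ in choice)
        if value > best_value:
            best_value, best = value, dict(zip(names, choice))
    return best, best_value

# e.g. best_allocation({"video": [(1, {"cpu": 20}), (3, {"cpu": 60})],
#                       "audio": [(1, {"cpu": 5}),  (2, {"cpu": 15})]},
#                      {"cpu": 70})
# picks the bigger video setting plus the smaller audio setting (total value 4).
```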

Resource Allocation in a Distributed System Different resource allocators are linked together by applications. The closure may become very large, and may involve multiple management domains. 39

Deliverable and Desirable Applications operating under different resource allocators are responsible for obtaining allocations in each allocation domain that can be combined. A setting in an allocation domain is deliverable if the resource allocator can provide it (given its obligations to other applications). A setting is desirable if the application can make use of that setting, given what is deliverable in the other allocation domains of concern. The application tells the allocator which settings are desirable; the allocator tells the application which settings are deliverable (and which one is currently delivered). 40

Deliverable and Desirable A setting is desirable in an allocation domain if it is deliverable in all of the others. A setting is deliverable in an allocation domain if the requested resources can be made available to the application. The resources actually allocated are the maxima of the settings that are both desirable and deliverable. 41
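
A small sketch of that rule (function and argument names are hypothetical): a setting that is deliverable in every domain of concern is, by the definition above, also desirable in each of them, so the setting actually used is the best one that every allocator can deliver.

```python
def agreed_setting(settings_by_quality, deliverable_by_domain):
    """settings_by_quality: setting names ordered from lowest to highest QoS.
    deliverable_by_domain: {domain: set of settings its allocator can deliver}.
    The allocation is the highest-quality setting that is deliverable in
    every domain (and therefore desirable everywhere as well)."""
    usable = [s for s in settings_by_quality
              if all(s in deliverable for deliverable in deliverable_by_domain.values())]
    return usable[-1] if usable else None

# Example: the network domain can only deliver "lo" and "mid", so "mid" is
# chosen even though the local CPU allocator could deliver "hi":
# agreed_setting(["lo", "mid", "hi"],
#                {"cpu": {"lo", "mid", "hi"}, "net": {"lo", "mid"}})  # -> "mid"
```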

Matchmaking between theory and practice Here, the choice of a resource-allocation algorithm was dictated to a large extent by practical factors: QoS settings are discrete rather than continuous, and a few settings suffice; separate audio and video streams make separate QoS management possible; each management domain wants to have its own QoS manager (a.k.a. resource allocator); the system must end up with reasonable APIs. 42