REAL-TIME STREAMING VIDEO FOCUSED ON MOBILE DEVICES


Pablo Ibañez Verón, pin08002@student.mdh.se
Javier Martinez Garcia, jmz09001@student.mdh.se
School of Innovation, Design and Engineering, Mälardalen University
Högskoleplan 1, 722 18 Västerås, Sweden

ABSTRACT

The purpose of this paper is to give an overview of the basic concepts of streaming, as well as the protocols and standards involved, with a particular focus on mobile devices and their constraints and limitations in the context of a packet-switched network. One of the emerging services offered by third-generation mobile devices is the ability to play real-time streaming video, along with other broadband services, with an acceptable quality of service. When considering streaming services, three elements have to be taken into account: streaming servers, media compression (video and audio), and players. Each of them is analyzed in this paper.

1. INTRODUCTION

The term streaming refers to a technique for transferring data such that it can be processed as a steady and continuous stream. Video and sound can be watched or listened to without downloading them to the device beforehand. Basically, streaming takes files, fragments them into smaller packets, and sends the packets out to their destination, much as computers send information across a network or the Internet in general. The player is able to read the file stream as it comes in and begin playing it long before the rest of the file arrives. The incoming data is stored in a buffer so it can be played straight away. During buffering, a batch of packets is collected before playback begins; as the player plays the file, it continues to collect packets in reserve. This means that even if there are minor delays in getting packets to the device, the media experience remains continuous instead of stuttering along erratically.

Streaming is used more and more on the Internet. This is possible thanks to the great increase in broadband Internet connections, which allows high-quality, reliable streaming.

In the context of queuing theory, we can describe streaming in terms of two variables: the arrival rate λ and the service rate μ [SB02]. The former is the amount of information arriving at the server, expressed in packets, bits, or bytes per second, while the latter represents how much information the server can process; here the "server" is represented by the bandwidth of the link. For the data flow to be continuous, the service rate has to be greater than the arrival rate, and the larger the service rate, the better the streaming performance. Thus, to improve the flow, we either increase the service rate or decrease the arrival rate. Streaming servers deal with bandwidth performance, while encoders and media compression reduce the throughput, and hence the arrival rate. However, compressing the media lowers video and audio quality, so there is a trade-off between media quality and streaming QoS (Quality of Service). A small numerical sketch of this condition follows the list below.

As Figure 1 shows, the process has several stages. Each media item is divided into two flows, video and sound, which are compressed by codecs. Media compression is essential for obtaining a good-quality streaming flow. The flow can be static, or dynamic so as to suit the state of a network offering QoS services. Apart from its quality, each codec also has its own coding speed.
The elements needed to set up a streaming service are:

- Media compression
- Quality of Service (QoS)
- Streaming servers
- Streaming protocols
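To make the λ/μ condition concrete, the following minimal sketch (ours, for illustration; all figures are assumed, not taken from any cited source) checks the continuity condition and sizes a startup buffer for a short bandwidth dip:

    # Continuity condition from queuing theory: playback is sustainable
    # only if the service rate mu (link bandwidth) exceeds the arrival
    # rate lambda (stream bitrate). All numbers below are illustrative.

    def is_sustainable(stream_kbps: float, link_kbps: float) -> bool:
        """mu > lambda: the link delivers data at least as fast as it is played."""
        return link_kbps > stream_kbps

    def buffer_for_dip(dip_seconds: float) -> float:
        """Seconds of media to pre-buffer so that a dip during which the
        link delivers nothing does not stall playback; the buffer drains
        at exactly the playback rate, so dip_seconds of media suffice."""
        return dip_seconds

    stream_kbps = 384.0  # assumed bitrate of a mobile video stream
    link_kbps = 512.0    # assumed available downlink bandwidth
    print(is_sustainable(stream_kbps, link_kbps))  # True, since 512 > 384
    print(buffer_for_dip(2.0))                     # 2.0 s of pre-roll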

Figure 1: Stages of streaming.

In this paper we focus on mobile streaming: first its general aspects, then streaming servers, and finally a study of streaming standards.

2. MOBILE STREAMING

The term covers streaming from a server to a mobile device (phone, laptop, PDA, etc.) over a wireless network. A mobile device must be able to connect to a packet-switched network [FH07] and must have software capable of playing a streaming flow. Streaming over short-range wireless links such as Bluetooth or WLAN (Wireless Local Area Network) is not considered in this study.

Mobile devices have the following constraints:

- Low-capacity CPU
- Limited memory
- Limited energy budget
- Different input/output devices, such as the screen and keyboard

Mobile networks also have their restrictions:

- Low bandwidth
- Longer latency
- A less stable connection
- Unpredictable availability
- Terminal mobility

These restrictions make mobile streaming harder [JB06]. One of the major problems in this type of streaming is the limited network bandwidth, which can be mitigated by codecs with more efficient compression algorithms. Furthermore, mobile devices are severely constrained by their limited processor capacity, which is tied to energy consumption, so the computing power available for decoding is restricted. Another problem of these wireless networks is that reliability is not guaranteed. Mobile phone networks are cellular networks, and in cellular networks there is a phenomenon called handover: the transition from one cell to another. The connection is not cut, but there is a considerable delay, which can affect the flow with delays or losses in packet delivery. In addition, in an environment with many small cells supporting a large number of users, e.g. a city, continuous mobility means frequent cell changes that streaming must survive. When the network state changes like this, the flow can be adapted; a minimal sketch of such a selection policy follows.
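The sketch below is our illustration of adapting the delivered stream to the measured network state; the bitrate ladder and safety margin are invented values, not taken from any cited system:

    # Minimal sketch of stream-variant selection under mobile bandwidth
    # constraints. The ladder of pre-encoded bitrates and the safety
    # margin are hypothetical; real services tune them per codec/network.

    LADDER_KBPS = [64, 128, 256, 384]  # hypothetical encodings of one clip

    def pick_variant(measured_kbps: float, margin: float = 0.8) -> int:
        """Choose the highest encoding that fits within a safety margin
        of the measured bandwidth, falling back to the lowest one."""
        usable = measured_kbps * margin
        fitting = [b for b in LADDER_KBPS if b <= usable]
        return max(fitting) if fitting else LADDER_KBPS[0]

    print(pick_variant(400.0))  # 256, since 400 * 0.8 = 320 kbps usable
    print(pick_variant(100.0))  # 64, the lowest variant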

3. STREAMING SERVERS

Web Server vs. Streaming Server

There are two important methods for streaming media over the web. The first uses a standard web server to deliver the video and audio to the media player. The second uses a separate streaming media server that is specialized for the video and audio streaming task.

Streaming with a web server

Streaming from this kind of server does not differ much from the download-and-play model. The initially uncompressed audio and video are first compressed into a single media file for delivery over a specific network bandwidth (usually a low bitrate). This media file is then hosted on a standard web server. The next step is to create, on the server, a web page containing the media file's URL (Uniform Resource Locator). This page launches the client-side player and downloads the media file. The difference from the download-and-play model is how the client behaves: the streaming client starts playing the media while it is still downloading, although it has to wait for buffering first, which allows playback without interruptions. Note that not all media file formats support this progressive playback.

The protocols used in this kind of streaming are HTTP (HyperText Transfer Protocol) and TCP (Transmission Control Protocol). HTTP operates on top of TCP, which handles all the data transfers. The main aim of TCP is to maximize the data transfer rate while ensuring overall stability and high throughput of the entire network, so it is not the best fit for a real-time application like ours. Its algorithm, known as slow start, begins by sending data at a low rate and probes the bandwidth limit by increasing the rate until the destination reports packet loss. Either way, it cannot ensure that all packets will arrive at the client in time to be played.

Streaming with a media streaming server

At first this resembles streaming from a web server, except that the compressed media file is produced and copied to a specialized streaming media server, and a web page with a reference to the media file is placed on a web server. From there on, the process is very different from the web server approach. The data is actively and intelligently sent to the client: the server streams the content at exactly the data rate associated with the compressed video and audio streams. The server and the client stay in close touch during the delivery process, and the server can respond to any feedback from the client.

Although streaming media servers can use the HTTP and TCP protocols, they usually use other protocols, such as UDP (User Datagram Protocol), to improve the streaming experience. UDP is much faster than TCP because it is a lightweight protocol without any retransmission or data-rate management functionality. This makes UDP well suited to transmitting real-time video and audio, which can tolerate some loss of packets (see the sketch after the list below).

The only advantage of streaming with a web server is that there are fewer components (i.e. no streaming media server) to learn and manage, meaning that no new software infrastructure needs to be installed. Streaming with a streaming server, on the other hand, has several advantages:

- More efficient network throughput
- Better video and audio quality for the user
- Support for advanced features
- Cost-effective scalability to a large number of users
- Protection of content copyright
- Multiple delivery options
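To illustrate why loss tolerance matters, here is a minimal, self-contained sketch (ours; the payload and the deliberately skipped packet are contrived, and it runs entirely on the loopback interface) of UDP-style delivery, where a missing datagram is skipped rather than retransmitted:

    # Minimal sketch of UDP media delivery: datagrams carry a sequence
    # number; the receiver detects gaps but never asks for retransmission,
    # in contrast to TCP. Loss is simulated by simply not sending seq 2.

    import socket

    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))          # let the OS pick a free port
    addr = rx.getsockname()

    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for seq in (0, 1, 3):              # packet 2 is "lost" on purpose
        tx.sendto(seq.to_bytes(4, "big") + b"frame-data", addr)

    expected = 0
    for _ in range(3):
        packet, _ = rx.recvfrom(2048)
        seq = int.from_bytes(packet[:4], "big")
        if seq != expected:
            print(f"gap: packets {expected}..{seq - 1} lost, playing on")
        expected = seq + 1             # skip ahead; no retransmission

A TCP client in the same situation would stall until the missing bytes were retransmitted, which is exactly the behavior a real-time player wants to avoid.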
Most popular server analysis

In this paper we analyze the most popular servers on the market, including some servers specific to mobile streaming: DSS (Darwin Streaming Server) and QTSS (QuickTime Streaming Server) by Apple, Helix by RealNetworks, VLC by VideoLAN, and PvServer by PacketVideo Network Solutions, the last of which is specific to mobile streaming. There are other, proprietary servers, such as Microsoft Windows Media Services 9 Series; it has not been included in the analysis because it does not support the 3GPP standard. Another proprietary server, Flash Media Server 2 from Adobe, offers solutions for mobile streaming, but it has been ruled out because it is heavily constrained by its dependence on Flash Player. Table 1 shows the formats supported by each server [PR08]. Note that all the analyzed servers support the 3GPP (3rd Generation Partnership Project) standard.

Format          Helix  QTSS  DSS  VLC  pvserver
MPEG-4          YES    YES   YES  YES  YES
MP3             YES    NO    NO   YES  NO
3GPP            YES    YES   YES  YES  YES
Windows Media   YES    NO    NO   YES  YES
RealMedia       YES    NO    NO   NO   NO
QuickTime       YES    YES   YES  YES  NO

Table 1: Formats supported by the servers.

As we can see, all the servers support the RTP (Real-time Transport Protocol) and RTSP (Real-Time Streaming Protocol) protocols (a minimal sketch of an RTSP exchange follows). Nevertheless, only VLC and pvserver can perform encoding and recording of a dynamic flux. It should also be noted that the proprietary license of the AMR (Adaptive Multi-Rate) codec is incompatible with the GPL (General Public License); this matters because VLC is licensed under the GPL, so VLC must be compiled manually with AMR included.
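RTSP, like HTTP, is a plain-text request/response protocol. The following minimal sketch (ours; the host, clip path, and session identifier are hypothetical placeholders, and a real client takes the SETUP target and session id from the server's replies) shows the basic DESCRIBE/SETUP/PLAY sequence a client issues:

    # Minimal sketch of an RTSP client handshake: DESCRIBE fetches the
    # session description (SDP), SETUP negotiates an RTP transport, and
    # PLAY starts the stream. Host, path, and session id are placeholders.

    import socket

    HOST = "streaming.example.com"                 # hypothetical server
    URL = f"rtsp://{HOST}/clip.3gp"                # hypothetical clip

    def request(sock, method, cseq, extra=""):
        msg = f"{method} {URL} RTSP/1.0\r\nCSeq: {cseq}\r\n{extra}\r\n"
        sock.sendall(msg.encode("ascii"))
        return sock.recv(4096).decode("ascii", errors="replace")

    sock = socket.create_connection((HOST, 554))   # 554: default RTSP port
    print(request(sock, "DESCRIBE", 1, "Accept: application/sdp\r\n"))
    print(request(sock, "SETUP", 2,
                  "Transport: RTP/AVP;unicast;client_port=4588-4589\r\n"))
    # A real client would echo the Session id returned by SETUP here:
    print(request(sock, "PLAY", 3, "Session: 12345678\r\n"))
    sock.close()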

Darwin Streaming Server (DSS) and QuickTime Streaming Server (QTSS)

Both have been developed by Apple; the main difference between them is that DSS is the open-source version of QTSS.

Darwin Streaming Server allows streaming media to be sent to clients across the Internet using the industry-standard RTP and RTSP protocols. Based on the same code base as QTSS, DSS provides a high level of customizability and runs on a variety of platforms, allowing the code to be manipulated to fit one's needs. DSS is a good choice for users who need to stream QuickTime and MPEG-4 media on alternative platforms such as Windows, Linux, and Solaris, or for developers who need to extend and/or modify the existing streaming server code. DSS is only supported by the open-source community and is not eligible for technical support from Apple. Source code is available as a release download or as development code via CVS (Concurrent Versions System).

QuickTime Streaming Server is Apple's commercial streaming server, delivered as part of Mac OS X Server. QTSS provides users with enhanced administration and media management tools, the result of its tight integration with Mac OS X Server; these tools are not available as part of the open-source project. Technical support is available for QTSS as part of the AppleCare support plans provided for Mac OS X Server and Xserve.

Both DSS and QTSS are built on a core server that provides state-of-the-art quality-of-service features, with Skip Protection and Instant-On, and support for the latest digital media standards, MPEG-4 (Moving Picture Experts Group 4) and 3GPP.

Helix DNA Streaming Server

The Helix DNA Server is a universal delivery engine supporting the real-time packetization and network transmission of any media type to any device. It contains a wide array of features and capabilities that make it simple, cost-effective, and reliable to use. It has a robust streaming media engine, along with IP (Internet Protocol) access control lists and IP interface binding capabilities. It supports the most important audio and video formats, such as MP3 (MPEG-1 Audio Layer 3), MPEG-4, 3GP, RealAudio, and RealVideo. It uses several streaming protocols: RTSP and RTP are used to support standards-compliant clients and proxies, and the RTCP (Real-Time Control Protocol) cloaked protocol is supported via HTTP for compliant clients. Media data is delivered via TCP, UDP unicast, and UDP multicast transports. Another important feature of this server is that local (native OS) file system access is supported for on-demand media delivery. It also includes a comprehensive API for the development of application extensions and additional services.

4. STANDARDS FOR STREAMING

There are basically two standards relevant to mobile streaming. The most common is 3GPP PSS (Third Generation Partnership Project Packet-Switched Streaming Service); the other important one is MPEG-4 (Moving Picture Experts Group 4). There are other video formats available for mobile streaming, such as RealVideo, but they are not as prominent as 3GPP.

Third Generation Partnership Project Packet-Switched Streaming Service (3GPP PSS)

The 3GPP Packet-Switched Streaming Service (PSS) [MK04] is a standard for audio and video streaming to handheld 2.5G and 3G terminals and provides a complete streaming and download framework for commercial content. 3GPP PSS provides an entire end-to-end streaming and download framework for mobile networks, spanning from server files for the storage of streaming sessions, to streaming servers for delivery, to streaming clients for reception.
The main specification defines the protocols for servers and clients and all media codecs, whereas the 3GP file format defines a storage format for servers. The PSS standard also includes separate specifications defining the timed text format for subtitling and the 3GPP SMIL (Synchronized Multimedia Integration Language) Language Profile for scene descriptions.

The main scope of PSS is to define an application providing synchronized streaming of timed media, such as speech/audio, video, and text. However, PSS also defines a SMIL-based application for multimedia presentations, combining the above-mentioned audiovisual streams with downloaded images, graphics, and text, as well as an application for progressive download of 3GP files containing audiovisual presentations. Figure 2 provides an overview of a PSS client [PF06].

In addition to defining transport mechanisms, the scope of PSS includes defining a set of media codecs. AMR and H.263 are the required codecs for clients supporting speech and video, respectively. For higher quality, PSS recommends more advanced codecs such as Extended AMR-WB (Adaptive Multi-Rate Wideband) and Enhanced aacPlus for audio, and the Advanced Video Codec (AVC), also known as H.264, for video. Other codecs include AMR-WB for wideband speech, MPEG-4 Visual for video, JPEG, GIF (Graphics Interchange Format), and PNG (Portable Network Graphics) for images and graphics, timed text for visual annotations of timed media, SVG (Scalable Vector Graphics) for vector graphics, and SP-MIDI (Scalable Polyphony MIDI) for synthetic audio.

PSS also standardizes an optional mechanism to provide confidentiality and integrity protection for commercial content. Encryption and integrity protection prevent unauthorized access or attacks, while OMA DRM (Open Mobile Alliance Digital Rights Management) 2.0 is used for digital rights management. Other features of PSS include media selection, where a client may choose from alternative bit rates and languages, and quality-of-experience reporting, which gives service providers the means to evaluate the end-user experience.

Figure 2: Overview of a 3GPP streaming client.

Moving Picture Experts Group 4 (MPEG-4)

MPEG-4 is a standard for storing and delivering multimedia content. It was first designed as a successor to the MPEG-1 and MPEG-2 standards, with the goal of creating a standard for low-bit-rate applications. The MPEG-4 standard was developed by the Moving Picture Experts Group, a working group of ISO/IEC (International Organization for Standardization/International Electrotechnical Commission). The MPEG-4 standard is open, which means that anyone can acquire the specifications and implement them. This leads to competition among implementations, which should, ideally, lower prices and increase the quality of the products. An open standard also avoids the pitfalls of single sourcing, such as a lack of updates and bug fixes, or the bundling of unwanted features. The whole specification of the standard can be bought from ISO.

Video Compression

A video file is made up of many consecutive images called frames, or VOPs (Video Object Planes) in MPEG-4 terms. MPEG-4 Visual codecs are block-based, meaning that the VOPs are divided into small, equally sized squares (8x8 or 16x16 pixels) called macroblocks. Macroblocks are encoded using an algorithm based on the DCT (Discrete Cosine Transform). The DCT transforms each block into the frequency domain, where about 90% of the information is gathered in the first coefficients. This allows most of the upper coefficients to be discarded, reaching very high compression rates [GW02] (a small sketch of this energy compaction follows this subsection).

On the other hand, two consecutive frames in a video often look very similar, with only objects moving between them. For this reason, modern codecs use coding techniques that exploit these similarities in a series of frames; this inter-coding is used to achieve higher compression ratios. Predictive coding, for example, stores only the differences between two images. However, predictive coding performs poorly when large portions of the image have moved between two frames. In that case, motion-compensated prediction is used: it assigns a motion vector to each macroblock and tries to find the best possible representation of the frame using macroblocks from a reference frame, displaced in some direction by the motion vector. A predicted VOP (P-VOP) is a frame that uses an earlier frame as a reference.
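The energy-compaction claim can be checked numerically. This sketch is our illustration (it needs NumPy and SciPy, and the smooth 8x8 test block is synthetic, standing in for a patch of natural image):

    # Minimal sketch of DCT-based block coding: 2-D DCT of an 8x8 block,
    # keep only the low-frequency 4x4 corner (discarding 75% of the
    # coefficients), invert, and measure the error. Test block is synthetic.

    import numpy as np
    from scipy.fftpack import dct, idct

    def dct2(b):
        return dct(dct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

    def idct2(b):
        return idct(idct(b, axis=0, norm="ortho"), axis=1, norm="ortho")

    x, y = np.meshgrid(np.arange(8), np.arange(8))
    block = 128 + 40 * np.cos((x + y) / 4.0)   # smooth, image-like block

    coeffs = dct2(block)
    kept = np.zeros_like(coeffs)
    kept[:4, :4] = coeffs[:4, :4]              # low-frequency corner only

    energy = (coeffs[:4, :4] ** 2).sum() / (coeffs ** 2).sum()
    error = np.abs(idct2(kept) - block).mean()
    print(f"energy kept: {energy:.3f}, mean abs reconstruction error: {error:.2f}")

For a smooth block like this one, nearly all the energy sits in the kept corner, which is why discarding the upper coefficients costs so little quality.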
Audio Compression

MPEG-4 uses Advanced Audio Coding (AAC), a standardized lossy compression scheme for digital audio. Like MP3, it is based on a psychoacoustic model of human hearing. Since the human hearing range has a maximum frequency of around 20 kHz, the audio signal is sampled at the Nyquist rate, i.e. twice the maximum frequency, in order not to lose information. Thus, the standard sample frequency is 44.1 kHz, although the standard also covers other frequencies: 16, 32, and 48 kHz. Bit depth is also variable, allowing a wide range of bitrates.

Given the raw digitized audio signal, it is split into different sub-bands distributed logarithmically along the human hearing range and passed through an optimal filter for each band. These filters, referred to as filter banks, transform the signal by eliminating redundancies and certain harmonics that are beyond the auditory resolution of human hearing. This method is called perceptual coding. So, despite the use of lossy compression, there is no perceptible impact on sound quality.
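The sampling parameters above translate directly into raw bitrates, which shows why perceptual compression is needed at all; in this small worked sketch (ours), the 128 kbps AAC target is an illustrative assumption:

    # Raw PCM bitrate = sample_rate * bit_depth * channels. The 128 kbps
    # AAC target below is an assumed, typical perceptual-coding bitrate.

    sample_rate = 44_100   # Hz, the standard sampling frequency
    bit_depth = 16         # bits per sample (one possible choice)
    channels = 2           # stereo

    raw_kbps = sample_rate * bit_depth * channels / 1000
    aac_kbps = 128         # assumed compressed target bitrate
    print(f"raw PCM: {raw_kbps:.1f} kbps")                    # 1411.2
    print(f"compression ratio: {raw_kbps / aac_kbps:.1f}:1")  # about 11:1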

The three main components of streaming media are encoders, servers, and players: encoders compress the video, servers distribute the compressed video to players, and the players then decode and render the video. The three converge as Figure 3 shows.

Figure 3: Streaming components needing standardization.

There are 15 different Visual profiles. The profile for mobile devices is the Simple profile, which requires neither a fast processor nor much memory for decoding. The profiles are divided into levels. The Simple Visual profile has a special level that conforms to the ITU (International Telecommunication Union) H.263 baseline specification, which was incorporated into the MPEG-4 standard and is used in mobile devices; H.263 baseline is the "short header" mode of the MPEG-4 standard. All the Visual profiles include some basic tools, which provide the mechanism for natural video coding with low memory. The only tools the Simple Visual profile contains are I-VOP and P-VOP.

The composition of scenes is handled by BIFS (Binary Format for Scenes). It is used to define the location and rotation of a media object in 2D or 3D space, and to modify the parameters of existing objects during the progress of the presentation. MPEG-4 also includes a tool called MPEG-J, which is compiled Java byte code included in the presentation that allows the scenes to be manipulated. Since Java can be run on several operating systems without compiling a different binary for each platform, it can run on Windows, Linux, and Mac.

Part 1 defines the ISO base media format, a general file format for time-based media, and the MP4 file format. The ISO base media format is the basis for both the 3GP and MP4 file formats; the 3GP file format is a simplified version of the MPEG-4 format with different codecs and reduced functionality. There are several methods for the delivery of rich media over IP networks, such as downloading and progressive downloading, but the important one for this work is streaming. The content in the ISO base media format is stored in tracks, which are stored inside the MP4 file. The MP4 file cannot be streamed directly, because one whole track would have to be downloaded before the download of the second track could start. Streaming servers that support MPEG-4 therefore deliver only the content inside the MP4 file to the client, not the file itself. The delivery of MPEG-4 content is handled by the Delivery Multimedia Integration Framework defined in Part 6 of the standard. (The sketch below shows the box structure such files are built from.)

The most important use of MPEG-4 is in natural video and audio coding. One of the best additions to MPEG-4 was Part 10, AVC (Advanced Video Coding). It was developed jointly with the ITU, which calls the codec H.264, while MPEG calls the same codec MPEG-4 Part 10 or AVC. The purpose of AVC is to achieve higher compression ratios than those achieved with MPEG-4 Visual: AVC is 2-4 times more complex and 1.5-2 times more efficient than MPEG-2.

MPEG-4 general audio coding is based on MPEG-2 AAC (Advanced Audio Coding) technology, but is not backwards compatible with it. MPEG-4 added new features to AAC in order to improve the efficiency of the codec, particularly in low-bit-rate applications. There are six different object types.
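The ISO base media layout described above is simple to walk: the file is a sequence of boxes, each headed by a 4-byte big-endian size and a 4-character type code. This sketch is ours; the file name is a placeholder, and 64-bit sizes (size == 1) and nested boxes are deliberately not handled:

    # Minimal sketch of listing the top-level boxes of an ISO base media
    # (MP4/3GP) file. Each box header: 4-byte big-endian size + 4-char
    # type. Placeholder path; size==0/1 special cases are not handled.

    import struct

    def list_boxes(path: str) -> None:
        with open(path, "rb") as f:
            while (header := f.read(8)) and len(header) == 8:
                size, box_type = struct.unpack(">I4s", header)
                if size < 8:
                    break              # size==0/1 variants not supported here
                print(box_type.decode("ascii", errors="replace"), size)
                f.seek(size - 8, 1)    # skip the payload to the next box

    list_boxes("clip.3gp")  # typically prints: ftyp, moov (tracks), mdat, ...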
MPEG-2 and MPEG-4 are the most important standards for streaming video on the Internet and in mobile networks. This field has been dominated by proprietary solutions such as RealNetworks' RealMedia, Apple QuickTime, and Microsoft Windows Media. Even so, the most important standards in mobile networks are MPEG-4 and 3GPP. RealNetworks is a technology provider in mobile networks too, but its use is smaller than that of the other two.

5. FUTURE WORK

As noted before, mobile streaming is still not very developed. The basis for it exists, and there is ongoing research on the topic, so it will not remain an untouched area for long. In this context, it could be interesting to extend the study by analyzing other standards, such as DVB-H (Digital Video Broadcasting - Handheld) or DMB (Digital Multimedia Broadcasting), which have not been examined in this paper for reasons of length.

6. SUMMARY AND CONCLUSION

This paper has introduced the concepts of streaming media and their implementation on mobile devices. After a short overview of streaming in general, we introduced mobile streaming techniques. Having analyzed the different types of servers, we can say that, in general, streaming servers should be preferred over web servers in terms of QoS. However, when studying the different standards, we cannot single out one of them a priori, since the choice depends on the system requirements. In any case, this is only the beginning of the research on mobile streaming; it is a new field that will be widely developed in the coming years.

7. REFERENCES

[GW02] Gregory K. Wallace. The JPEG Still Picture Compression Standard. 2002.
[JB06] Jens Brandt. Adaptive Video Streaming for Mobile Clients. 2006.
[FH07] Frank Hartung. Delivery of Broadcast Services in 3G Networks. 2007.
[MK04] Markus Kampmann. Status of the 3GPP PSS (Packet-Switched Streaming) Standard. 2004.
[PF06] Per Fröjdh. Adaptive Streaming within the 3GPP Packet-Switched Streaming Service. 2006.
[PR08] Pablo Riegos Ferreiro. Plate-forme de streaming pour les dispositifs mobiles avec adaptation dynamique. 2008.
[SB02] Sanjay K. Bose. An Introduction to Queueing Systems. 2002.