EFFICIENT ARCHITECTURES AND CACHING STRATEGIES FOR MULTIMEDIA DELIVERY SYSTEMS


EFFICIENT ARCHITECTURES AND CACHING STRATEGIES FOR MULTIMEDIA DELIVERY SYSTEMS

A Thesis Submitted by
M. DAKSHAYINI

For the award of the degree of
DOCTOR OF PHILOSOPHY
in Computer Science and Engineering

Dr. MGR Educational and Research Institute (Deemed University)
N.H. 4, Periar E.V.R. Road, Maduravoyal, Chennai
May 2010

Dr. M.G.R. EDUCATIONAL AND RESEARCH INSTITUTE (DEEMED UNIVERSITY)
(Declared U/S-3 of the UGC Act-1956)
CHENNAI

BONAFIDE CERTIFICATE

Certified that this thesis titled "EFFICIENT ARCHITECTURES AND CACHING STRATEGIES FOR MULTIMEDIA DELIVERY SYSTEMS" is the bonafide work of Mrs. M. Dakshayini, who carried out the research under my supervision. Certified further that, to the best of my knowledge, the work reported herein does not form part of any other thesis or dissertation on the basis of which a degree or award was conferred on an earlier occasion on this or any other candidate.

SIGNATURE
Dr. T. R. Gopalakrishnan Nair
Supervisor
Director, Research and Industry Incubation Centre,
Dayananda Sagar Institutions, Bangalore, Karnataka, India.

DECLARATION

This is to certify that the thesis titled "EFFICIENT ARCHITECTURES AND CACHING STRATEGIES FOR MULTIMEDIA DELIVERY SYSTEMS" submitted by me to the Dr. MGR Educational and Research Institute (Deemed University) for the award of the degree of Doctor of Philosophy is a bonafide record of research work carried out by me under the supervision of Dr. T. R. Gopalakrishnan Nair. The contents of this thesis, in full or in parts, have not been submitted to any other Institute or University for the award of any degree or diploma.

Signature of the Research Scholar
[M. Dakshayini]

CERTIFICATE

This is to certify that the thesis/dissertation entitled "Efficient Architectures and Caching Approaches for Multimedia Delivery Systems", being submitted by Mrs. M. Dakshayini in partial fulfillment for the award of Ph.D. in Computer Science and Engineering to the Dr. M.G.R University, is a record of bonafide work carried out by her at our organization/institution.

Signature of the Director of Organization/Institution
Dr. T. R. Gopalakrishnan Nair,
Research and Industry Incubation Centre,
Dayananda Sagar Institutions, Bangalore, Karnataka, India.

ACKNOWLEDGEMENT

I take great pleasure in expressing my deep sense of gratitude to my research supervisor, Dr. T. R. Gopalakrishnan Nair, Director, Research and Industry Incubation Centre, Dayananda Sagar Institutions, Bangalore, for his valuable guidance and discussion throughout my research. I am indebted to him for his timely discussions, exemplary patience, perseverance and guidance, which have gone a long way towards the completion of this thesis. He has been a constant source of inspiration. His clarity of thought, logical analysis and taste for perfection are qualities worth emulating. I would like to express my sincere thanks to Dr. V Cyril Raj, Prof. and Head, Dept. of CSE, and Dr. S Ravi, Prof. and Head, Dept. of E&C, for their valuable suggestions and comments. I would like to thank the Management and Staff of Dr. MGR University for their cooperation. I extend my deep sense of gratitude to the Management, the Principal, Mr. H. S. Guruprasad, Asst. Prof. and HOD, and all my colleagues in the Dept. of ISE, BMS College of Engineering, Bangalore, for encouraging me to carry out the research work. I am indebted to Dr. P. Geethavani, Prof., Dept. of Mathematics, BMSCE, Bangalore, and Dr. A. S. Manjunath, CEO, Manvish Pvt. Ltd, Bangalore, for their valuable guidance and encouragement during this research work. I am extremely grateful to Dr. R. Selvarani, Dean - Research, Mrs. M. Vaidehi, Mrs. N. Sowjanya Rao and Mrs. K. Lakshmi Madhuri from the Research and Industry Incubation Centre, Dayananda Sagar Institutions, Bangalore, for their vital support and encouragement during the course of my research work. I would like to render my heartfelt gratitude to my mother Smt. Chennammamallaiah, my husband Mr. D Shanthakumar, and my daughters Ms. Varshitha S and Ms. Vanditha S for their kind cooperation, valuable moral support and encouragement, and for the role they have played in completing my research work.
Finally, I am grateful to all my friends and others who were directly or indirectly involved, for their inspiration, encouragement and support during my research work.

ABSTRACT

M. DAKSHAYINI

The demand for multimedia services is increasing dramatically with the popularity of the World Wide Web (WWW). The streaming of high-quality video consumes a significant amount of network resources, so this service may cause a bottleneck in the communication network due to its high bandwidth demand. Proxy Servers have been suggested as an intermediate buffering solution; they are used to speed up transactions and to reduce the remote server bandwidth demand, service delay and network errors. The streaming of video and audio data over communication networks has become very popular. The demand for video storage is now very high, and it poses several challenges for video data management. Hence, the design of efficient Video-on-Demand architectures and optimal caching and streaming strategies for video content at the Proxy Server has become increasingly important. The large size of videos and the storage limitation of Proxy Servers make caching a complete video on the Proxy Server impossible. This work proposes innovative strategies for efficient proxy caching and buffer management to overcome these challenges. Efficient distributed VoD architectures, load sharing algorithms and effective streaming approaches are developed here to improve Video-on-Demand engineering. Simulation results show that the proposed strategies greatly improve performance when compared to commonly used strategies.

Chapter one presents an introduction to multimedia applications and, in particular, provides an insight into Video-on-Demand (VoD) concepts. The various needs and the current challenges of Video-on-Demand systems are discussed. General VoD architectures and the various components needed to implement a VoD application are also discussed, as are the protocols required for Proxy Servers to communicate with each other. The chapter also deals with the motivation and objective of this research work.
Chapter two reviews the current literature on algorithms for efficient utilization of Proxy Server storage, on balancing the load among Proxy Servers, and on distributed architectures that improve the overall performance of a VoD system. A detailed survey of proxy caching techniques and load sharing mechanisms is made, and an in-depth review of streaming approaches such as batching, patching, broadcasting, piggybacking and chaining is presented. As Proxy Server storage capacity is limited, dynamic buffer allocation algorithms are required for efficient utilization of the Proxy Server buffer to achieve high video availability. Chapter three deals with the design of such algorithms. Another buffer allocation algorithm, using the VBR characteristics of the video, is also presented; it uses a Frame Differencing Technique to improve the storage rate at the Proxy Server. To improve the performance of the system, efficient VoD architectures are proposed in which closely located Proxy Servers are interconnected to achieve high availability through the increased aggregate

storage of video prefixes among the Proxy Servers. This architecture combines the advantages of both client-server and peer-to-peer approaches with the help of a central coordinator called the Tracker. An efficient load sharing algorithm, combined with the dynamic buffer allocation technique, is also presented for the proposed architecture; it achieves an increased service rate at the Proxy Server, reduced client waiting time and reduced bandwidth demand from the Central Multimedia Server. Chapter four presents the following algorithms: Regional popularity based Replication and Placement (RPR-P), Regional Popularity Based Proxy prefix caching and Load sharing (RPPCL), and a Stochastic model based Prefix Placement Strategy to achieve reduced transmission cost. It is not efficient to cache an entire video at the Proxy Server, as the size of the video is huge and the cache size of the Proxy Server is limited. Instead, the initial portion (prefix) of the video can be stored in the Proxy Server so that client requests can be served immediately, avoiding the download of the complete video from the remote Central Multimedia Server. The proposed algorithms efficiently partition the video, determine the prefix and the prefix size to be cached at the Proxy Server, and provide efficient prefix distribution schemes. These algorithms reduce the client response time, the network traffic and the main server load. Chapter five presents algorithms which combine peer-to-peer techniques with the current client-server streaming approach to build a new system that is both scalable and robust. Specifically, we propose C2C-Chain, a Client-to-Client chaining protocol for VoD applications. Here we explore the combination of proxy prefix caching and a load sharing scheme to chain the end points in the proposed coordinator-based cooperative Proxy Servers' architecture.
This architecture uses proxy-to-proxy and client-to-client streaming to cooperatively stream the video using a chaining technique with unicast communication among the clients. The approach addresses two major issues of VoD: 1) a prefix caching scheme that accommodates more videos closer to the client, maximizing the service rate at the Proxy Server and minimizing the load on the remote main server; and 2) a cooperative proxy and client chaining scheme for streaming the videos using unicast. This minimizes the request-service delay, the client rejection rate and the bandwidth requirement on the server-to-proxy and proxy-to-client paths. Chapter six presents a concluding summary and discusses the scope for further research in this field.

Finally, a list of references and a list of publications made are presented.

TABLE OF CONTENTS

ACKNOWLEDGEMENT
ABSTRACT
LIST OF TABLES
LIST OF FIGURES

1. INTRODUCTION
   1.1. MULTIMEDIA SYSTEMS
   VIDEO-ON-DEMAND
      Types of Interactive Services
      Proxy Caching
      Partial Caching
      Buffer Management at Proxy Server
         Digitized Video
         Buffer Management
      Video-on-Demand Network Configurations
      Single Level vs. Hierarchical Caches
      Caching Protocol-ICP
      Load Balancing
         Local vs. Remote Load Balancing Technique
         Performance Overhead
         Relocation of the Request
         Centralized versus Decentralized Models
      Streaming Approaches
         Batching
         Patching
         Periodical Broadcasting
         Piggybacking
         Chaining
   MOTIVATION
   ORGANIZATION OF THE THESIS

2. LITERATURE SURVEY
   2.1. OVERVIEW
   PROXY CACHING METHODOLOGIES
   BUFFER MANAGEMENT SCHEMES
   PROXY CACHING FOR DISTRIBUTED VoD ARCHITECTURES AND LOAD BALANCING TECHNIQUES
   STREAMING APPROACHES

3. BUFFER MANAGEMENT METHODS AND APPROACHES FOR VoD
   3.1 OVERVIEW
      Introduction
      Motivation
      Contribution
   BUFFER MANAGEMENT FOR DISTRIBUTED VoD ARCHITECTURE
      Distributed VoD Architecture
      Efficient Buffer Allocation and Reallocation Method
         LRU-k Replacement Technique
         Proposed Algorithm
         Simulation Model
         Performance Evaluation with Results
      Improved Buffer Allocation and Reallocation Method
         Proposed Algorithm
         Simulation Model
         Performance Evaluation with Results and Discussion
      Summary
   LOAD SHARING FOR VIDEO CACHING BROTHER NETWORK (VCBN) ARCHITECTURE
      Distributed VCBN Architecture
      Load Sharing Strategy
      Proposed Algorithm
      Experimentation
         Simulation Model
         Performance Evaluation of Load Sharing Strategy for VCBN
      Summary
   COMBINATION OF BUFFER MANAGEMENT AND LOAD SHARING FOR COORDINATOR BASED PROXY SERVERS ARCHITECTURE
      Coordinator Based VoD Architecture (LPSG/CLOPS)
      Dynamic Buffer Allocation (DBA) Based on Scene Change (SC)
         Improved Cache Utilization
         Dynamic Buffer Allocation for Real Time MPEG Videos
         Reducing the Number of Renegotiations
      Load Sharing Approach for LPSG/CLOPS Architecture
         Introduction
         Load Sharing Strategy for LPSG/CLOPS
         Dynamic Video Replacement Algorithm
      Proposed Integrated Algorithm
         Proposed Load Sharing with DBA+SC Algorithm
         Scene Change-Based Caching Algorithm using DBA
      Experimentation
         Simulation Model
         Results and Discussion
      Summary

4. OPTIMAL PREFIX CACHING AND DISTRIBUTION POLICIES FOR PROXY SERVERS CLUSTER
   4.1 OVERVIEW
      Introduction
      4.1.2 Motivation
      Contribution
   OPTIMAL PREFIX REPLICATION STRATEGY
      Introduction
      Video Partitioning
      Problem Definition
      Stochastic Model
         Zipf-Distribution
      Prefix Replication and Placement with Load Sharing
      Proposed Algorithms
         Algorithm - Regional Popularity Based Replication and Placement of Prefix-1 (RPR-P)
         Algorithm - Regional Popularity Based Proxy Prefix Caching and Load Sharing Algorithm (RPPCL)
      Experimentation
         Algorithm - RPR-P: Simulation Model; Performance Evaluation of RPR-P with Results
         Algorithm - RPPCL: Simulation Model; Performance Evaluation of RPPCL with Results
      Summary
   STOCHASTIC MODEL BASED TRANSMISSION COST REDUCTION STRATEGY FOR PROXY SERVERS
      Introduction
      Problem Definition
      Stochastic Model
      Prefix Placement Strategy to Achieve Reduced Transmission Cost
      Proposed Algorithm
      Experimentation
         Simulation Model
         Performance Evaluation and Results Analysis
      Summary

5. EFFICIENT PREFIX BASED STREAMING SCHEME FOR DISTRIBUTED VoD
   5.1 OVERVIEW
      Introduction
      Motivation
      Contribution
   OPTIMAL STREAMING APPROACH
      Efficient Video Streaming Problem
      System Model
   PROPOSED ARCHITECTURE AND ALGORITHM
      Introduction
      Overview of the Architecture
      Proposed C2C-Chain Algorithm
         Client Admission Phase (Algorithm - C2C-Chain)
         Prefix Streaming Phase (Algorithm - Streaming)
         Closing Phase (Client Termination Algorithms)
      Experimentation
         Simulation Model
         Performance Evaluation of C2C-Chain with Results
      Drawback of Normal Chaining Scheme
      Client Failure Recovery Protocol to Represent the C2C-Chain System
      Experimentation
         Simulation Model
         Results and Analysis
   SUMMARY

6. CONCLUSIONS
   6.1 SUMMARY
   CONTRIBUTION
   SCOPE FOR FUTURE WORK

REFERENCES
LIST OF PUBLICATIONS

LIST OF TABLES

1.1 Interactive Multimedia Applications
Video Formats
The Simulation Model
The Simulation Model
Parameters for DBA+SC Simulation
Parameters for LPSG/CLOPS
Input/Output Stochastic Variables Used in the Simulation Model
Simulation Parameters Used for the Model
Parameters of the System Model
C2C-Chain Algorithm
Streaming Algorithm
Client Termination Algorithm Case
Client Termination Algorithm Case
Simulation Model
Simulation Model
Simulation Results

LIST OF FIGURES

1.1. Components of a VoD System
Centralized VoD Architecture
Distributed VoD Architecture
1.4. Hierarchical Caching
ICP Caching Protocol
Distributed VoD Architecture
Total No. of Videos Cached at Proxy Server
Avg. Cache Blocks Allocation for All the Videos
Avg. Cache Blocks Allocated for Most Popular Videos
Total No. of Videos Cached Vs Time
Avg. Buffer Allocation for All the Videos
Avg. Buffer Allocation for Most Popular Videos
Video Caching Brother VoD Architecture
Total No. of Videos Vs Time (mins)
No. of Videos Cached and Streamed from PS
No. of Videos Streamed from LPS
No. of Videos Streamed from RPS
No. of Videos Streamed from CMS
Total No. of Videos Streamed from (PS+LPS+RPS) Vs CMS
Coordinator Based VoD Architecture of a Group of Proxy Servers (LPSG/CLOPS)
A Video Trace
Scenes and Frames of a Video
Average Number of Requests Served from CLOPS Using DBA with Scene Change
Video Hit Ratio in CLOPS Using DBA with Scene Change
Average Waiting Time for the Client in CLOPS Using DBA with Scene Change
Average Transmission Cost with CLOPS Using DBA with Scene Change
Average Number of Accesses to CMS with CLOPS Using DBA with Scene Change
Average Number of Replacements in CLOPS with DBA+SC Vs Without DBA+SC
Different Parts of a Video
Stochastic Simulation Model
Part of VoD Architecture
Modules of Proxy Server and Tracker
Average I_ad with RPR-P, ZipfR-SLFA and CR-RR Algorithms
Average Cache Miss Rate of RPR-P, ZipfR-SLFA and CR-RR Algorithms
Average Rejection Rate of RPR-P, ZipfR-SLFA and CR-RR Algorithms
Average Cache Utilization in RPR-P Algorithm
Average Waiting Time with RPPCL, GWQ and PRLS Algorithms
Average Waiting for Videos from PS-Client, PS-PS, TR-PS, TR-TR and CMS-PS
4.11 Average Video Hit Ratio with RPPCL, GWQ and PRLS Algorithms
Average Network Bandwidth Usage by RPPCL, GWQ and PRLS Algorithms
Total Number of Requests Served from PSq, LPSG and CMS
Stochastic Simulation Model
Average Amount of Video Data Streamed from PS, LPSG+NBR(LPSG), CMS Vs Time (hrs)
Average Video Hit Ratio Vs Time (hrs)
Average Number of Accesses to Main Server Vs Time (hrs)
Average Network Transmission Cost Vs Time (hrs)
System Simulation Model
Proposed VoD Architecture
Modules of Tracker, Proxy Server and Client
Active Client Chain of Video 56: C1-C2-C3-C4
Active Client Chain of Video 14: C1-C2-C3-C4-C
Average Client Rejection Ratios with Time
Average Request-Service Delays Vs Time
Average Bandwidth Usage Vs Time
Detection of Failure of C
C2C-Chaining Protocol
No. of Requests Served Vs Prefix Size of Popular Videos
Video Hit Ratio at LPSG, PC+Chaining Vs PC-Chaining
Reduction of Client Rejection Ratio as the Size of the Prefix Increases (PC with Chaining Vs Without Chaining)
Average Prefix Size [(pref+1)+(pref+2)] Vs WACN Bandwidth Usage
Average Client Waiting Time Vs Time
Reduction of Main Server Load as the No. of Videos Cached at LPSG Increases

CHAPTER 1
INTRODUCTION

1.1 MULTIMEDIA SYSTEMS

Multimedia systems have become one of the prominent modes of information exchange. Multimedia applications such as Video-on-Demand have now entered a stage of rapid growth, and Video-on-Demand is one of the major streams in which significant development activity is progressing. This growth is driven by two forces. The first is the fast-falling cost of computer and network hardware, which has made multimedia applications accessible to a large number of users. The second is the increasing capacity of communication networks, which has made it possible to distribute multimedia content comparatively inexpensively to a larger audience [52]. The exploitation of multimedia applications involves three activities. The first is the creation of the actual audio-video clips and images, with special effects, that will be viewed by the user. The second is application creation, in which audio and video clips are combined into applications before being presented to the user. The third is the design and building of systems for operating multimedia applications and delivering multimedia content efficiently; this is the focus of this research work. Multimedia servers are differentiated from conventional application servers in many ways that influence every aspect of their design [67]. Capturing even a small multimedia object demands a large amount of bandwidth and storage. Generally, this data is stored in a server or storage system from which it is streamed continuously to the client at a given playback rate. Delivery of multimedia data is therefore time-sensitive: clients will notice glitches if audio or video data is not delivered on time. This implies that the management of every component of the multimedia system must consider the time criticality of the data [128].
A second factor is that managing this large amount of data requires resources that differ from those of conventional systems. Both of these factors must therefore be taken into account when designing policies for managing system resources such as the storage hierarchy, caching and network bandwidth.

As the use of multimedia applications increases, the demand for the resources required to support them also increases, which may limit the number of users a server can support simultaneously. For instance, at least 6 Gbps of server I/O and network bandwidth is required if a thousand users simultaneously access a 6 Mbps MPEG-2 (DVD-quality) video [56]. Such extensive bandwidth requirements are not affordable on almost any network platform. Hence, providing scalable Video-on-Demand streaming services over communication networks has become a challenging issue, and resource allocation must be done in a cost-effective manner. Bandwidth requirements and QoS demands on both the server and client side have increased. Providing good-quality service to numerous groups of users while keeping the server and network resources within feasible operating limits has therefore become a major challenge. The straightforward solution is to provide a dedicated connection to each client whenever a request arrives at the server. However, server resources such as bandwidth and buffer space are finite; as more clients demand service, more bottlenecks are created in the server I/O bandwidth [51]. Advances in computer and communication technologies have resulted in a number of new multimedia applications. Some of the basic interactive multimedia applications are listed in Table 1.1.

Table 1.1 Interactive Multimedia Applications

Video-on-Demand: Customers can select and play videos with full VCR capabilities.
Interactive video games: Customers can play downloadable computer games without having to buy a physical copy of the game.
Interactive news television: Newscasts tailored to customer tastes, with the ability to see more detail on selected news.
Catalogue browsing: Interactive selection and retrieval; the customer examines and purchases commercial products.
Distance learning: Customers subscribe to courses being taught at remote sites.
Interactive advertising: Customers respond to advertiser surveys and are rewarded with free services and samples.
Video conferencing: Customers can negotiate with each other. This service can integrate audio, video, text and graphics.

1.2 VIDEO-ON-DEMAND

Video-on-Demand (VoD) is an important multimedia application. This service is commercially viable due to a huge market and is one of the first multimedia services to enter the home environment. Customers can watch any video of their choice at any time and also enjoy VCR-like functionalities. The main goal of a VoD system is to support the

maximum number of users possible with minimal waiting time, as waiting can often be annoying to users [47]. A typical VoD system consists of a video server with digitally stored videos on high-capacity storage devices such as optical disks, and a communication network connecting the users to the server. The components of a typical VoD system are shown in Figure 1.1. The Customer Premises Equipment (CPE) consists of a set-top box and a monitor [41]. The functions of a set-top box include decoding compressed video, demodulation, descrambling, program storage and so on.

Figure 1.1 Components of a VoD System

Video-on-Demand is similar to an electronic video rental store. It allows clients who are geographically apart to use their television sets interactively. They connect to the video server using a set-top box and can start viewing a video at any time, independently of others. Video-on-Demand uses a client-server architecture with a large video server capable of storing and delivering a large number of videos simultaneously, and a high-bandwidth distribution network consisting of a set of switches and transmission lines between the server and the clients, who may be situated worldwide.

Types of interactive services

Based on the level of interaction, interactive services can be classified into several categories [19].

Broadcast (No-VoD) service: This is similar to broadcast TV, in which the user is a passive participant and has no control over the session.

Pay-Per-View (PPV) service: The user signs up and pays for specific programming.

Quasi Video-on-Demand (Q-VoD) service: Users are grouped based on a threshold of interest.

Near Video-on-Demand (N-VoD) service: If different clients request the same video and the requests are separated by a small time interval, then by delaying the first client's request, the same channel can be used to serve all the clients [95]. This reduces the cost of the system by sharing the bandwidth required by one connection among several clients. It is, however, difficult to provide VCR controls with this technology.

True Video-on-Demand (T-VoD) service: A True VoD system allocates a dedicated channel to every user to achieve short response times [13]. The user can select which video to play and when to play it, and can perform interactive VCR-like controls at will. Such a system is very expensive, but it ensures a high quality of service to the clients and reduces the delay before viewing starts. These requests put a considerable load on the server and the network resources because of the high bandwidth requirement and long duration of video content. Network resources may not be used cost-effectively when multiple streams are required for different customers even when they are watching the same video. This is very similar to using a TV as a VCR without physically having one.

The demand for server and network bandwidth has increased with the rapid growth of multimedia data traffic over the networked world. The growing popularity of streaming media places an increasing strain on both network and server resources: streaming media requires a large amount of network and disk bandwidth to successfully transmit streams from a server to the client. The characteristics of continuous media are very different from those of traditional text-based or image-based files. Typically, continuous media imposes high bandwidth and real-time requirements.
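The N-VoD batching idea described above (delaying the first request so that later requests for the same video, arriving within a small interval, can share one channel) can be sketched as follows. This is a minimal illustrative sketch, not code from the thesis; the NVoDBatcher class name, the 60-second window and the stream naming are assumptions made for the example.

```python
from collections import defaultdict

class NVoDBatcher:
    """Groups requests for the same video that arrive within a batching
    window, so that a single shared stream can serve all of them (N-VoD)."""

    def __init__(self, window_secs=60):
        self.window_secs = window_secs    # illustrative batching interval
        self.pending = defaultdict(list)  # video_id -> [(arrival_time, client)]

    def request(self, video_id, client, now):
        """Register a request; returns a new stream id only when the
        request opens a fresh batch, else None (the client piggybacks)."""
        batch = self.pending[video_id]
        if batch and now - batch[0][0] < self.window_secs:
            batch.append((now, client))   # join the waiting batch
            return None                   # no extra channel needed
        # first request of a new batch opens (and names) a stream;
        # in a real system the batch would be dispatched when the window closes
        self.pending[video_id] = [(now, client)]
        return f"stream-{video_id}-{now}"

    def dispatch(self, video_id):
        """Start playback: every batched client shares the one channel."""
        return [c for _, c in self.pending.pop(video_id, [])]
```

With a 60-second window, a request at t=0 opens a stream, requests at t=30 and t=59 share it, and all three clients are served by one channel at dispatch time; this is exactly the bandwidth-sharing trade-off (one channel, delayed start) the text describes.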
Although audio and video can be used to present information and to entertain and educate people, the use of media applications is not yet widespread, largely due to limited system and network resources. The most common solution is to place a Proxy Server between the main server and the client [34], [36]. Even though proxy caching effectively delivers static text-based content, it has difficulty delivering streaming media content. One main reason is the size of a multimedia object, which is usually much larger than that of a text-based object [24]. Consequently, caching entire multimedia objects can rapidly exhaust the storage space of the Proxy Server, making this approach unfeasible. The other reason is that any client request for a multimedia object demands continuous streaming delivery. The occasional delays that occur when transferring data over communication networks are acceptable for text-based Web browsing; for streaming multimedia data like audio and video, however, this transmission delay

causes the client to experience playback jitter. This is frustrating and could make clients move away from the streaming service. A download-before-watching solution definitely provides continuous playback, but it introduces a tremendous startup delay and requires the client to have a huge buffer space. To overcome these hurdles in streaming multimedia delivery, people have resorted to purchasing the services of proprietary content delivery networks (CDNs). CDNs can easily deliver multimedia content with their dedicated high-bandwidth communication networks and large storage capacities, but they are costly [28]. At the same time, the success of proxy caching for text-based Web objects has led to a number of Proxy Servers being deployed across the Internet. These intermediate Proxy Servers have plenty of resources, such as computing power, storage and bandwidth, and can cache common-interest content to serve different clients more quickly than if the clients directly accessed the main servers. As an alternative to expensive CDNs, these existing proxy resources can deliver media content inexpensively through effective resource management strategies, especially since the content of a multimedia object does not change with time [29], [19], [20].

Proxy Caching

The Proxy Server is used to cache videos, in turn reducing the bandwidth occupied towards the remote main servers and consequently the volume of video data transmitted over the Wide Area Network [44]. In a VoD system, a central video server provides a list of prerecorded videos and delivers video content to the user. A video Proxy Server residing close to the client can assist the delivery by taking advantage of its storage and its proximity to the client. The Proxy Server stores a complete video or a portion of it and transmits the cached data to the client using its abundant and less expensive bandwidth [40].
A major problem in an end-to-end video delivery system is reducing the WAN bandwidth requirement [16]. A Proxy Server can be considered an intermediate node along the server-client path, in effect partitioning it into a server-proxy path and a proxy-client path. When a video is accessed by a client, the whole video or a portion of it may already have been cached in the Proxy Server. If the video is not stored in the Proxy Server, it must be fetched from the Central Multimedia Server and delivered to the client; at the same time, the video can be cached partially or completely in the Proxy Server to improve future accesses. Transmission of videos requires high network bandwidth, which is one of the most expensive resources in a Video-on-Demand system. Therefore, a critical part of Video-on-Demand is optimizing network bandwidth.
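The server-proxy versus proxy-client split described above can be sketched as a simple cache lookup: a hit is served over the cheap proxy-client path only, while a miss travels the full server-client path and is cached for future accesses if space permits. This is a minimal sketch; the ProxyCache class and its interface are illustrative assumptions, not an algorithm from the thesis.

```python
class ProxyCache:
    """Minimal proxy lookup: serve from the cache on a hit; on a miss,
    fetch from the central server and cache the video if space permits."""

    def __init__(self, capacity_mb, fetch_from_central):
        self.capacity_mb = capacity_mb
        self.store = {}                    # video_id -> cached video bytes
        self.used_mb = 0.0
        self.fetch_from_central = fetch_from_central  # server-proxy path

    def serve(self, video_id):
        """Returns (video data, source): 'proxy' on a hit, 'central' on a miss."""
        if video_id in self.store:
            return self.store[video_id], "proxy"      # proxy-client path only
        data = self.fetch_from_central(video_id)      # full server-client cost
        size_mb = len(data) / 1e6
        if self.used_mb + size_mb <= self.capacity_mb:
            self.store[video_id] = data               # cache for future accesses
            self.used_mb += size_mb
        return data, "central"
```

The first request for a video costs a central fetch; repeat requests are then absorbed by the proxy, which is exactly the WAN bandwidth saving the text argues for.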

Partial Caching

A user can request any video at any time, and huge storage space would be required to cache all the videos at the Proxy Server [43]. But the storage capacity of the Proxy Server is limited, so caching complete videos is neither feasible nor efficient [42]. To illustrate, a one-hour standard MPEG-1 video occupies about 675 MB [79]; storing several such large videos will quickly exhaust the cache space of a Proxy Server. Hence, it is necessary to design partial caching algorithms or to group proxies to enlarge the cache space [14], [18], [33], [38]. However, these approaches must adapt to the dynamically changing popularity of videos and users' access patterns [35]. Because of the static nature of video content and localized access interests, a partial proxy caching scheme [75] can help streaming systems achieve significant performance improvements.

Buffer Management at Proxy Server

Digitized Video

In the physical world, video and audio data is continuous in nature. To store this data on the client storage system, the audio and video data must be digitized. The digitized data obtained by this process is typically too large to be stored or delivered over a network. For example, consider a monochrome display where the brightness at each pixel is represented by a single byte, so each pixel can have 256 brightness levels. Storing a single picture will then require about 1 MB, and delivering video will require 30 MB per second. The resources required for storage and delivery can be reduced by exploiting the large degree of redundancy in the digitized data as well as the presence of fine, unnoticeable details [52].

Buffer Management

When a video is requested by a client, the complete video or a portion of it may already have been cached in the Proxy Server. In that case, the user request can be serviced immediately from the Proxy Server.
Otherwise, if the video is not stored at the Proxy Server, it must be fetched from the Central Multimedia Server and then delivered to the client. At the same time, the initial portion of the video, or the complete video, can be cached at the Proxy Server to

improve the efficiency of the system. This is possible if sufficient cache space is available at the Proxy Server; otherwise, some replacement technique must be used immediately to make room for the new video. This can lead to frequent replacements, which in turn increase the frequency of access to the remote main server, the amount of data transmitted, and the bandwidth demand on the main-server-to-Proxy-Server path. Efficient allocation of the Proxy Server's scarce storage space to the videos being cached is therefore very important. There are two possible approaches to this problem. 1) The static buffer management method allocates a fixed number of cache blocks to each video regardless of the system's load and the video's status [58]. Whenever there are not enough cache blocks for a newly downloaded video, a replacement technique must be used to make space for it. This is not an efficient approach, as it increases the frequency of replacements and does not allow the Proxy Server to cache a larger number of videos; consequently, the Proxy Server may have to contact the central server directly, increasing the bandwidth demand and network traffic of the system. 2) The dynamic buffer allocation technique [27] allocates cache space to downloaded videos based on their request-arrival probabilities and the load of the system. This increases video availability at the Proxy Server and decreases both the replacement frequency and the frequency of access to the remote main server [58], [59].

Video-on-Demand Network Configurations

To achieve higher user capacity and lower network transmission cost, a distributed-servers architecture can be used, in which multiple local servers (Proxy Servers) are placed close to user pools and, according to their local demands, dynamically cache the content streamed from the repository [48].
By adjusting the buffer size, such caching achieves a better tradeoff between network bandwidth and local storage than traditional caching, in which a movie is treated as a single entity. In a Video-on-Demand system, a video repository server stores all the video content of interest to a large number of geographically distributed users. If videos were streamed directly to the users, the user capacity of the system would be limited by the streaming capacity of the repository. Capacity can be increased by using a hierarchy of servers, in which multiple streaming servers cache the movies delivered from the repository and stream them to the users [49]. If the streaming servers were co-located with the repository, the transmission cost incurred in streaming videos to remote users might be high. To overcome this problem, and to take advantage of access-locality features along with the demand characteristics of the user pool, the streaming Proxy

Servers may be placed close to the user regions, forming a distributed-servers architecture. Such a system achieves scalable storage and streaming capacity by introducing more local Proxy Servers as traffic increases. A Video-on-Demand system can be designed using any of three major network configurations: centralized, networked, and distributed. As shown in Figure 1.2, in a centralized configuration all clients are connected to one central server, which stores all the videos and controls, manages, and serves all client requests. If there are 1000 requests at a given time, the server must be able to transfer 1000 video streams simultaneously, which also means the network bandwidth has to be very high. For MPEG-1 video encoded at 1.5 Mb/s, sending 1000 streams requires a total bandwidth of 1.5 Gb/s. In a networked configuration, as shown in Figure 1.3, many video servers exist within the network; each is connected to a small set of clients and manages a subset of the videos. In a distributed configuration, a central multimedia server stores all the videos, and smaller servers are located near the network edges. When a client requests a particular video, the video server responsible for the request ensures continuous playback of the video [2]. A traditional file system considers Proxy Servers at a single level: if the Proxy Server does not contain the data requested by the client, it attempts to retrieve the data directly from the server [11], [23]. An alternative is to organize Proxy Servers in hierarchical levels, where proxy caches can retrieve needed multimedia data from other proxy caches at the next level [52].
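The bandwidth figure quoted for the centralized configuration follows from a one-line calculation; the sketch below (function name assumed, not from the thesis) makes it explicit:

```python
# Aggregate link bandwidth for a centralized VoD server is simply the number
# of concurrent streams times the per-stream encoding rate.

def aggregate_gbps(num_streams, rate_mbps):
    return num_streams * rate_mbps / 1000.0  # convert Mb/s to Gb/s

# 1000 concurrent MPEG-1 streams at 1.5 Mb/s each need 1.5 Gb/s.
print(aggregate_gbps(1000, 1.5))  # 1.5
```

This linear growth in the central server's bandwidth demand is the main argument for the networked and distributed configurations discussed above.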

Figure 1.2 Centralized VoD Architecture
Figure 1.3 Distributed VoD Architecture

Single-Level vs. Hierarchical Caches

In a hierarchical design of Proxy Servers, as shown in Figure 1.4, the proxy cache closest to the client is preferred for data retrieval. The most frequently accessed multimedia data therefore tends to be cached at the Proxy Server closest to the client, while less frequently accessed data is cached at Proxy Servers of higher levels. However, if the hit rate at the Proxy Servers higher in the hierarchy is low, hierarchical caching may not be worthwhile [53].

Caching Protocol ICP

To enable Proxy Servers to exchange video data with one another, inter-proxy-server communication is required [50], [52]. Caching protocols have therefore been developed to allow Proxy Servers to communicate with each other. The same functionality can also be achieved with the protocols used by clients to retrieve multimedia data.

Figure 1.4 Hierarchical Caching
Figure 1.5 ICP Caching Protocol

The most commonly used inter-Proxy-Server caching protocol today is the Internet Cache Protocol (ICP).

The Real-Time Streaming Protocol (RTSP) is an important protocol for controlling Video-on-Demand applications over a communication network. It is used by clients to request playback of video presentations or video clips from multimedia servers, and it contains commands for opening and closing sessions with the main multimedia server as well as playback-control commands [52]. Figure 1.5 illustrates how Proxy Servers that support RTSP can use ICP. Initially, the client establishes a TCP connection with proxy cache PC1 and then sets up an RTSP session through the Setup command. The client then sends a Describe command requesting a description of the video file to be played. If the requested data is not present in PC1's cache, PC1 sends an ICP_OP_QUERY command to PC2. If PC2 has the requested data in its cache, it replies with ICP_OP_HIT; it could also respond with ICP_OP_MISS if it does not have the data, or with ICP_OP_HIT_OBJ, enclosing the object in the response. Subsequently, PC1 relays the client's requests to PC2 and relays the responses back to the client. In particular, the Play command is relayed to PC2, in response to which PC2 sends multimedia data that is relayed to the client and possibly cached by PC1 [52].

Load Balancing

Load balancing is another important mechanism for improving the efficiency of distributed VoD systems, and it has two main objectives. One is to improve overall system performance by utilizing the video objects cached at all the closely located Proxy Servers. The other is to keep the load across servers balanced by assigning users of a fully loaded Proxy Server to a lightly loaded Proxy Server.
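The second objective — diverting a request from a fully loaded Proxy Server to a lightly loaded one — can be sketched as follows. This is a simplified illustration; the data structures and field names are assumptions, not taken from any cited scheme:

```python
# Sketch of load-balanced request assignment: serve locally when the proxy has
# spare stream capacity, otherwise hand the request to the least-loaded peer.
# 'load' counts active streams; 'capacity' is the proxy's stream limit.

def assign_request(local_proxy, peers):
    """Return the name of the proxy that admits the request, or None."""
    if local_proxy["load"] < local_proxy["capacity"]:
        local_proxy["load"] += 1
        return local_proxy["name"]
    candidates = [p for p in peers if p["load"] < p["capacity"]]
    if not candidates:
        return None                      # all proxies saturated: request waits
    target = min(candidates, key=lambda p: p["load"])
    target["load"] += 1
    return target["name"]
```

Choosing the least-loaded peer keeps the imbalance between servers small, at the cost of having to know every peer's current load.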
The distributed VoD system [81] has several characteristics, such as local versus remote service, the load balancing technique, performance overhead, and relocation of requests, that necessitate different load balancing and load sharing algorithms.

Local vs. Remote

In a distributed Video-on-Demand system, serving a local request from a remote server incurs service overheads: additional resources (e.g., bandwidth) are consumed while servicing the remote request. This is because video object retrieval involves reading video data frames from a server and conveying them over a communication network to the requesting client. Serving a client request remotely therefore generates traffic on the Central-Multimedia-Server-to-Proxy-Server network (typically a MAN or a WAN). Thus, a distributed Video-on-Demand system must be careful about assigning local requests to remote servers.

Load Balancing Technique

In a conventional distributed system, load can be balanced by shifting requests from a fully loaded server to a lightly loaded server, because those requests can then be served without much delay. In a multimedia system, however, client requests must be serviced at fixed rates; for example, an MPEG video should be played back at 30 frames/sec. If adequate resources are not assigned to a request, playback jitters. When a Proxy Server is not operating at full capacity and there are remote requests to be served, it begins to serve them. Once a server starts servicing remote requests, it cannot simply preempt them when new local requests arrive, so local requests may be delayed by remote ones. Servicing remote requests thus incurs commitments that complicate evening out the workload.

Performance Overhead

In a multimedia system, when a request arrives at a local Proxy Server that does not have enough bandwidth to serve it, the request cannot be serviced immediately. The system can instead assign the incoming local request to a remote Proxy Server that has spare bandwidth; otherwise, the user may have to wait an unacceptably long time until the local Proxy Server finishes one of its existing requests. Assigning a local request to a remote Proxy Server incurs scheduling overhead, but this overhead is negligible compared to the large amount of system resources required to service the video request from the remote main server. Performance is therefore considered more important than scheduling overhead.
A multimedia system can afford the scheduling overheads needed to achieve load sharing that is optimal (i.e., that minimizes waiting time); hence there is a trade-off between performance and scheduling [1], [3], [8], [12], [15], [34].

Relocation of the Request

When new local requests arrive at a Proxy Server that is currently busy servicing remote requests (fully loaded), the local requests cannot be serviced immediately, resulting in high service delay. A multimedia system can therefore improve performance by reassigning executing remote requests among Proxy Servers that are not fully loaded, according to changing load conditions (request relocation). This relocation mechanism must identify an alternate Proxy Server that has sufficient bandwidth to service the request, and then transfer the request from the current Proxy Server to the new one.

Centralized Versus Decentralized Models

The load balancing algorithm of a distributed system can adopt either a centralized or a decentralized model. In the centralized model, a single coordinator maintains complete information and controls all user-request assignments. All servers report their load status to the coordinator, and to handle user requests in case of a cache miss, the Proxy Servers communicate with the coordinator rather than with each other; information sharing across the Proxy Servers is done through the coordinator. If the coordinator fails, the system may have to undergo tedious recovery mechanisms. In the decentralized model, all Proxy Servers participate equally in the load balancing protocol, and each Proxy Server can communicate with every other Proxy Server to exchange information. This increases the work each Proxy Server does in coordinating with the others to assign a local request to a remote Proxy Server, and it also increases network traffic. A decentralized model is more robust, however, as there is no single point of failure.

Streaming Approaches

One streaming approach uses a peer-to-peer communication paradigm in which commodity clients contribute their local resources (storage space and bandwidth) to streaming. Specifically, the video data originally provided by a server is spread among clients with asynchronous demands, and each client can store a full or partial copy of the video stream in its local cache. One or more clients can then collectively supply cached data to other clients, amplifying the system capacity as the number of suppliers grows over time.
However, in contrast to reliable, dedicated servers or proxies, these loosely coupled autonomous end-hosts are not highly reliable: they can fail or leave the network without notice. Given that media playback lasts a long time and consumes substantial resources, a pure peer-to-peer system may not provide the desired availability in the Internet environment. Another reason for not adopting a pure peer-to-peer approach is the absence of authoritative parties; it is also difficult to identify and penalize clients who intentionally insert forged data [105], [114]. The server's bandwidth limitation sets a hard limit on the number of users the server can support simultaneously. Whenever a video is requested, if it is available at the Proxy Server, service is given immediately; otherwise the video must be downloaded from the main server. This increases the bandwidth

demand, transmission cost, service delay, and the load on the central main server. Many VoD schemes have been proposed to address this problem: batching [54], [100], patching [102], [107], periodical broadcasting [111], [112], piggybacking, prefix caching [38], [92], and chaining [120], [122].

Batching

In the batching scheme, the server batches together requests for the same video whose arrival times are close, and serves them over a single multicast channel to allow resource sharing. Arriving requests are queued until their associated batch is initiated; the duration between a request's arrival time and the batch start time is its service delay. Once a batch is launched, an I/O stream is set up to retrieve the video from the storage subsystem, and with the network multicast facility the data is streamed to the group of clients using the packet store-and-forward mechanism [123]. Batching, however, has the following limitations: (1) the number of batches that can be served at a time is still constrained by the network-I/O bandwidth; (2) requests arriving early in a batch are unfairly made to wait for the late arrivals, so many users of such a system are likely to experience long service delays; (3) viewers kept waiting too long may lose interest and cancel their requests (renege), and the gains of batching are leveled off by such reneging behavior.

Patching

Patching is another on-demand multicast scheme. When a client requests a video, it joins an ongoing multicast of that video. The multicast data is temporarily cached on the local disk while the client plays back the leading portion of the video, which arrives on a separate channel called the patching channel. When playback of the patching data is completed, the client switches to playing the multicast data cached in its local buffer. This approach can also offer true on-demand service.
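The patching timeline can be expressed with simple arithmetic: a client arriving t seconds after the multicast began needs exactly t seconds of patch data, after which it switches to the buffered multicast. A sketch (function and field names are assumptions):

```python
# Sketch of the patching schedule. A client arriving 'arrival_s' seconds after
# the multicast began plays the first 'arrival_s' seconds from a unicast
# patching channel while buffering the ongoing multicast, then switches to the
# buffered multicast data.

def patching_schedule(arrival_s, video_len_s):
    patch_len = arrival_s                      # leading portion already missed
    return {
        "patch_channel_seconds": patch_len,
        "multicast_seconds": video_len_s - patch_len,
        "switch_at": patch_len,                # playback point of the switch
    }
```

The later a client arrives, the longer its unicast patch, which is why practical systems periodically restart the full multicast rather than patch indefinitely.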
It is simpler than chaining because clients do not have to serve other clients. However, since the server is the only source of the data, this approach cannot utilize in-network bandwidth as effectively as chaining can [103].

Periodical Broadcasting

Periodical broadcasting is another innovative technique. In this approach, popular videos are partitioned into a series of segments, and these segments are continually broadcast on several

dedicated channels. To receive service, a client tunes to the appropriate channels to download the desired video. Before playback starts, clients usually have to wait for a time equal to the length of the first segment; to keep delays acceptable, this segment can be made small. A major advantage of this approach is that the required server bandwidth is independent of the number of users the system is designed to support. Each video, however, requires substantial bandwidth, which makes the approach suitable only for very popular videos; consequently, only near-VoD service is provided.

Piggybacking

This technique merges users on separate transmission channels by slightly increasing the playback rate of the latecomer (and/or slightly decreasing that of the early starter) so that one eventually catches up with the other, after which both can be served on the same multicast channel. It exploits users' tolerance of playback-rate variations and, unlike patching, requires no additional buffer on the client side.

Chaining

Chaining is another efficient streaming technique. In chaining, each client caches a small portion of recently received video content. A newly arriving client can stream from an earlier client as long as that client still has the first block of the video cached, forming a chain of clients (an active chain). Clients of the active chain cooperate to create a sharable video cache that serves as the primary source of video content for subsequent client requests. Proxy-Server-assisted chaining schemes can be developed to select peers and adapt dynamically to network fluctuations.

1.3 MOTIVATION

The main objective of this research work is to achieve optimal utilization of VoD system resources such as buffer and bandwidth.
To realize this objective, our current research work focuses on:
- A new architecture of interconnected Proxy Servers for a distributed environment
- Load sharing strategies for the proposed architectures
- Efficient buffer allocation mechanisms
- Video prefix caching and distribution approaches
- Chaining: an efficient streaming technique

1.4 ORGANIZATION OF THE THESIS

The thesis is organized as follows. Chapter One gives a brief introduction to multimedia applications and discusses the limitations of Video-on-Demand systems; general multimedia architectures with proxy caching, load sharing, and streaming approaches are also discussed. Chapter Two presents a consolidated literature survey on existing models relating to these topics and introduces the architecture and models we have developed. Chapter Three discusses the proposed architecture of interconnected Proxy Servers with a coordinator, together with the proposed algorithms for load sharing and buffer management at the Proxy Server. Chapter Four discusses the proposed algorithms for distributing video prefixes among the Proxy Servers of the proposed architecture. Chapter Five presents the efficient streaming approach proposed for this architecture. Chapter Six contains a brief summary, the contributions of the work, and directions for future research.

This thesis deals with the implementation and analysis of an effective VoD system using Proxy Servers and a Central Multimedia Server, emphasizing minimization of the bandwidth requirement on the costly line and maximization of storage for client distribution on lower-cost lines. We emphasize the management of videos in currently popular formats such as MPEG-2. Notwithstanding the influence of the emergence of MPEG-4 in the compression arena, the analysis presented here sticks to MPEG-2-based calculations, as given in Table 1.2. The same model could, however, also be applied to an MPEG-4-based video system, with the advantages scaled up by increasing the number of video programs.

Table 1.2 Video Formats

Media type            | Bit rate            | Storage requirement
MPEG encoded audio    | 384 Kb/s            |
MPEG-1 encoded video  | 512 Kb/s - 1.5 Mb/s | 675 MB (1-hour video)
MPEG-2 encoded video  | Mb/s                | 1.4 GB (90-minute video)
MPEG-4 encoded video  | 40 Kb/s - <1 Mb/s   |
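The storage figures in Table 1.2 follow from storage = bit rate x duration / 8. A short consistency check (a sketch, not part of the thesis; function names are assumed):

```python
# Storage needed for a constant-bit-rate video: rate (Mb/s) * duration (s),
# divided by 8 bits per byte, expressed in decimal megabytes.

def storage_mb(rate_mbps, seconds):
    return rate_mbps * 1e6 * seconds / 8 / 1e6

# Inverse check: the average bit rate implied by a storage figure.
def implied_rate_mbps(storage_gb, seconds):
    return storage_gb * 1e9 * 8 / seconds / 1e6

print(storage_mb(1.5, 3600))          # 675.0 MB: one-hour MPEG-1 at 1.5 Mb/s
print(implied_rate_mbps(1.4, 5400))   # ~2.07 Mb/s average for 1.4 GB / 90 min
```

Both results agree with the table: 1.5 Mb/s over one hour gives exactly 675 MB, and 1.4 GB over 90 minutes corresponds to an average rate of roughly 2.07 Mb/s.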

CHAPTER 2
LITERATURE SURVEY

2.1 OVERVIEW

In this chapter, a review of the current literature on the limitations of Video-on-Demand systems and possible solutions is presented. It covers video caching methodologies at the Proxy Server, buffer management schemes, distributed VoD architectures, load sharing techniques, and various streaming approaches.

2.2 PROXY CACHING METHODOLOGIES

In a Video-on-Demand system, the limited capacity of the multimedia server and the unpredictable network environment make it a challenging task to propose and deploy an efficient, scalable on-demand media streaming service [1], [2], [3], [4]. An effective approach to reducing server and network loads is to cache frequently used data at Proxy Servers close to the clients [6], [7]. Proxy caching for video streaming plays an important role in reducing service delay, network traffic, transmission cost, main-server load, and the client rejection rate. This scheme has therefore attracted much attention over the past decade, and many algorithms have been proposed. Some of them store complete videos at the Proxy Server, considering the static nature of video content and its intensive I/O demands [5], [21], [81]. However, video objects require huge caching resources because of their high data rates and long playback durations, so storing the entire contents of even a few videos would exhaust the capacity of the proxy cache. Instead, partial caching mechanisms that store only a portion of each video stream can be used. Many algorithms, such as segment caching [28], [33] and prefix caching [6], [8], [9], [10], [11], [16], [17], [29], [42], employ a semi-static partial caching approach [79], [12], in which popular video portions are cached over a relatively long time period. Bing Wang et al. [12] proposed an optimal proxy prefix-cache allocation to videos that minimizes the aggregate network bandwidth cost.
This work integrates proxy caching with traditional server-based reactive transmission schemes such as batching, patching, and stream merging to develop a set of proxy-assisted delivery schemes. Li Zhu et al. [13] proposed a wide-scale cost model for proxy caching that takes bandwidth consumption into consideration over an entire network for different multicast tree topologies; the new cost model quantifies the overall usage of network resources more accurately.

Zhi-Li Zhang et al. [16] developed a rate-split caching algorithm called video staging, which makes intelligent use of the disk bandwidth and storage space available at Proxy Servers. With video staging, only part of a video stream is retrieved directly from the central video server across the backbone WAN, while the rest is delivered to users locally from Proxy Servers attached to the LANs. As the request arrival rates for videos change over time, static caching of videos may increase the user rejection rate; hence, to reduce the client rejection rate and improve system throughput, many dynamic partial caching algorithms [14], [15], [18], [17], [24], [40] have been proposed. S. Chen et al. [55] proposed an algorithm that caches a sliding interval of a media object to exploit the sequential access pattern of streaming media. As the cached portion is dynamically updated with playback, sliding-interval caching involves high disk bandwidth demands in the worst case and can double the disk I/O due to concurrent read/write operations. To utilize available cache resources effectively, Tewari et al. [15] proposed a resource-based caching (RBC) policy. The policy characterizes each object by its space and bandwidth requirements and models the cache as a two-constraint knapsack; a heuristic algorithm dynamically selects the caching granularity of an object with the objective of balancing its bandwidth and space usage. Depending on the object's characteristics and the available resources, the selected granularity can be a sliding interval or the full object. Huang et al. [17] proposed a layered cache scheme and a replacement scheme for a video proxy in which the client can specify the quality of the requested video.
To meet QoS concerns, a delay factor is considered in this cache scheme to reduce client-side waiting time, and a corresponding scheme that caches according to a media object's deserved storage size is devised. Lin Wujuan et al. [21] proposed a novel caching strategy, referred to as the client-assisted interval caching (CIC) scheme, to balance the requirements of I/O bandwidth and cache capacity in a cost-effective way. The CIC scheme uses the cache memory available in clients to serve the first few blocks of streams, dramatically reducing the demand on the server's I/O bandwidth. Jussara M. Almeida et al. [20] proposed simple cost models for provisioning content distribution networks that use the simple and highly scalable bandwidth-skimming protocol for streaming; they concentrated on the cost-effectiveness of Proxy Servers in multicast streaming systems, an effective streaming protocol, and optimization of proxy content. Subhabrata Sen et al. [22] propose a prefix caching technique in which a proxy stores the initial frames of popular clips. Upon receiving a request for a stream, the proxy initiates

transmission to the client and simultaneously requests the remaining frames from the server. In addition to hiding the delay, throughput, and loss effects of a weaker service model between the server and the proxy, this novel yet simple prefix caching technique helps the proxy perform work-ahead smoothing into the client playback buffer: by transmitting large frames in advance of each burst, work-ahead smoothing substantially reduces the peak and variability of the network resource requirements along the path from the proxy to the client. Javadtalab et al. [23] propose a new caching scheme for Video-on-Demand systems that improves network throughput by storing video files in the proxy with scalable file sizes. Wei Tu et al. [25] proposed a novel proxy caching scheme based on the observation that streaming-video users searching for specific content or a scene pay most attention to the initial delay, while a small shift of the starting point is acceptable. Based on the dynamically changing popularity of video segments, an efficient segment-based caching algorithm is also proposed, which maximizes user satisfaction by trading off the initial delay against the deviation of the starting point. Hyung Rai et al. [27] proposed a novel dynamic and scalable cache-replacement algorithm for a Proxy Server with finite storage for multimedia objects; in the fast caching process, caching sequences for videos are computed in advance to decrease both the buffer size and the required bandwidth, and are saved into metafiles. Shudong Jin et al. [32] present a novel caching architecture and associated cache-management algorithms that turn edge caches into accelerators of streaming media delivery. A salient feature of these algorithms is that they allow partial caching of streaming media objects and joint delivery of content from caches and origin servers.
The caching algorithms are both network-aware and stream-aware: they take into account the popularity of streaming media objects, their bit-rate requirements, and the available bandwidth between clients and servers. Taeseok Kim et al. [37] proposed an efficient buffer-management scheme for multimedia streaming servers that exploits the reference popularity of multimedia objects as well as the time interval between two consecutive requests for the same object; trace-driven simulations show that the scheme significantly improves the performance of multimedia servers. Gerassimos et al. [39] propose a multi-server, multi-installment (MSMI) approach to the delivery problem (sending the document in several installments from each server) that minimizes client waiting time. By using multiple spatially distributed servers, they exploit slow connections that would otherwise prevent the deployment of Video-on-Demand-like services, and offer such services in an optimal manner.

Additionally, the delivery and playback schedule computed by this approach is loss-aware, in the sense that it is flexible enough to accommodate packet losses without interruptions. J. Wang et al. [43] have surveyed web caching. In case of a cache miss at the Proxy Server, most existing algorithms execute a replacement method [26], [30], [31], [35], [36] to accommodate the newly downloaded video. This can increase the frequency of communication with the remote main server and the number of replacements, resulting in increased request-to-service delay and inefficient use of the Proxy Server's buffer.

2.3 BUFFER MANAGEMENT SCHEMES

The client service rate at the Proxy Server is mainly influenced by the Proxy Server's storage capacity, and many algorithms have been proposed for efficient buffer management there. S. Chen et al. [55] propose two techniques based on shared running buffers in the Proxy Server. Considering user access patterns and the characteristics of the requested media objects, the proposed techniques adaptively allocate memory buffers to fully utilize the currently buffered data of streaming sessions. Chen-Lung Chan et al. [56] propose a new multicast infrastructure, called buffer-assisted on-demand multicast, that allows receivers to access a multicast stream asynchronously. A timing-control mechanism is integrated on intermediate routing nodes (e.g., routers, proxies, or peer nodes in a peer-to-peer network) to branch time-variant multicast sub-streams to the corresponding receivers; in addition, an optimal routing path and the corresponding buffer allocations maximize the throughput of the multicast stream. Sungyoung Lee et al. [57] present a network-channel buffer scheduling algorithm to support a VoD server, based on the dynamic Critical Task Indicating algorithm developed by the same authors.
The goal of the proposed algorithm is to achieve fast response times for non-continuous media (NM) while guaranteeing the deadlines of continuous media (CM). Sang-Ho Lee et al. [58] propose a dynamic buffer allocation scheme that allocates buffers of the minimum size to user requests in a partially loaded state as well as in the fully loaded state. In this work, the size of the buffer to be allocated is determined from the number and sizes of the buffers to be allocated in the next service period; a predict-and-enforce strategy is used, in which the number and sizes of future buffers are predicted based on inertia assumptions that are enforced at runtime. Chong Leng Goh et al. [59] address the problem of supporting extensibility of buffer replacement policies and propose a framework for modeling such policies.

This work concentrates on two aspects. First, by providing a uniform and generic specification of buffer replacement policies, the proposed framework unifies existing work in this area. Second, the work introduces a new level of extensibility at the buffer management level. Cyrus C. Y. et al. [60] have described and evaluated a highly scalable VoD system with a low per-user cost. This has been done in two steps: first, the performance degradation problems of recently proposed VoD systems, namely batched and centralized-buffer VoD systems, that occur during the handling of interactions are analyzed; then a new system, called the Multi-Batch Buffer (MBB) system, is proposed to solve these problems. Zongming Fei et al. [61] have proposed an active buffer management technique to provide interactive functions in broadcast VoD systems. In this scheme, the client can selectively prefetch segments from broadcast channels based on the observation of the play point in its local buffer. The content of the buffer is adjusted in such a way that the relative position of the play point is kept in the middle part of the buffer. Gary D. Schultz [62] has developed a stochastic model for the process of dynamic buffering of inbound messages in a computer communications system. Two buffer assignment schemes have been proposed; both are dynamic but differ in binding strategy. Tsang-Ling Sheu et al. [63] have presented a multiple-class buffer allocation scheme for ATM networks with VBR (Variable Bit Rate) traffic. The dynamic buffer allocation scheme proposed in this work consists of two parts: the first estimates the buffer size required for each class of traffic, and the second dynamically rearranges the buffer space among the different priorities of traffic classes. Sangdon Lee et al. [64] propose a new buffer allocation technique to improve the system's global performance.
This work mainly aims at maximizing the effectiveness of buffer allocation. Hence, in order to reduce the waiting-time portion of query processing through flexible buffer allocation, the less utilized buffers of queries are partially preempted and reallocated to other queries that can utilize them more effectively. Paul Bocheck et al. [65] have proposed a new policy for dynamic resource allocation to increase link utilization and decrease the required network buffering. In this work, the visual content of the video is used to determine the bandwidth required for its transmission. Hence, in our research work we have proposed dynamic cache block allocation algorithms to achieve high availability at the Proxy Server close to the client by a) reducing communication with the remote main server and b) reducing network usage on the path from the Central Multimedia Server to the Proxy Server.
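As an illustration of the general idea behind popularity-driven cache block allocation (not the thesis algorithms themselves, which are developed in Chapter 3), the following sketch divides a proxy's cache blocks in proportion to observed video popularity; the function and parameter names are hypothetical.

```python
def allocate_blocks(popularity, total_blocks, min_blocks=1):
    """Distribute proxy cache blocks in proportion to video popularity.

    popularity:   dict video_id -> access frequency (non-negative weights)
    total_blocks: total cache blocks available at the Proxy Server
    Returns a dict video_id -> number of cache blocks.
    Every video keeps at least `min_blocks` so its prefix stays cached.
    """
    total_weight = sum(popularity.values())
    remaining = total_blocks - min_blocks * len(popularity)
    alloc = {}
    for vid, w in popularity.items():
        share = int(remaining * w / total_weight) if total_weight else 0
        alloc[vid] = min_blocks + share
    # hand out any blocks lost to integer truncation, most popular first
    leftover = total_blocks - sum(alloc.values())
    for vid in sorted(popularity, key=popularity.get, reverse=True):
        if leftover <= 0:
            break
        alloc[vid] += 1
        leftover -= 1
    return alloc
```

Reserving `min_blocks` per video reflects the prefix-caching goal discussed above: even an unpopular video keeps a small cached prefix, so a request need not wait on the remote server for playback to start.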

2.4 PROXY CACHING FOR DISTRIBUTED VoD ARCHITECTURES AND LOAD BALANCING TECHNIQUES

Almost all of the techniques discussed above are based on a relatively simple proxy architecture with no collaboration or cooperation among the Proxy Servers. Although hierarchical/distributed [71], [72], [73], [74] web caching techniques work efficiently for traditional data (caching a whole file), they do not work well in support of continuous media. It is well recognized that Proxy Servers grouped together can achieve better performance than independent standalone proxies [68], [69]. Load balancing is another effective mechanism for achieving a high service rate, by servicing requests from the Proxy Server located closest to the client. The main objective of load balancing algorithms in a distributed VoD system is to minimize the waiting time for a service to begin and to maximize the service rate at the Proxy Server. This is achieved by allowing Proxy Servers with spare retrieval bandwidth to help out Proxy Servers that are temporarily overloaded. Media caching in MiddleMan [70] operates a collection of Proxy Servers as a scalable cache cluster: media objects are segmented into equal-sized segments and stored across multiple proxies, where they can be replaced at the granularity of a segment. Several local proxies are responsible for answering client requests by locating and relaying the segments. To achieve better load balance and fault tolerance, a data layout is suggested in [76] which partitions a media object into segments of increasing sizes, stores more copies of popular segments, and yet guarantees at least one stored copy of each segment. P. W. Lie et al. [45] have focused on the design of an efficient collaborative proxy caching architecture for streaming media objects.
To achieve a much higher byte-hit ratio with less cache-management overhead, Anna Satsiou et al. [77] focus on an environment of more than one Proxy Server serving homogeneous or even heterogeneous client preferences for the streaming of video files. Under a hierarchical tree topology of proxies, the prefixes of the videos are stored in small proxy caches, each located very close to the corresponding client community, while larger caches located further from the client communities are used to cache the later segments of videos requested by more than one client community. Frequency-based cache management policies are used to efficiently and dynamically cache the content of the most popular videos among the various proxies. To minimize service delays and to reduce the load placed on network resources, Wallapak Tavanapong et al. [78] have proposed a Video Caching Network (VCN) that utilizes the aggregated cache space of distributed systems along the delivery path between the server and

the users for caching popular videos. A VCN is set up and adjusted dynamically according to users' locations and request patterns. Alan T. S. et al. [79] discuss another approach to reduce the aggregate transmission cost by considering cooperation among the Proxy Servers. This work proposes caching the prefix at the Proxy Server and the prefix of the suffix at the client. Since clients are not trustworthy, and can fail or leave the network at any time without notice, an additional mechanism is adopted to verify the client and the data cached at the client, which adds verification overhead. Both the search for the video across the whole cluster of Proxy Servers and the verification process increase the client's waiting time. In order to provide users with low-latency and high-quality video-streaming services, Naoki Wakamiya et al. [80] have investigated mechanisms in which Proxy Servers cooperate with each other. The proxy is capable of adapting incoming or cached video blocks to the user's request by means of transcoders and filters. On receiving a request from a user, the proxy checks its own cache. If an appropriate block is not available, the proxy retrieves a block of higher quality from the video server or a nearby proxy. The retrieved block is cached, its quality is adjusted to that requested as necessary, and it is then sent to the user. Each proxy communicates with the others and takes transfer delay and video quality into account in finding the appropriate block to retrieve. Y. C. Tay et al. [81] have explored the feasibility of linking up several small multimedia servers to a (limited-capacity) network and allowing servers with idle retrieval bandwidth to help out servers that are temporarily overloaded. The goal is to minimize the waiting time for service to begin. This work has introduced an algorithm called the Global Waiting Queue (GWQ) load balancing algorithm.
It puts all pending requests in a global queue, from which a server with idle capacity obtains additional jobs. They also propose an enhanced GWQ+L algorithm that allows a server to reclaim active local requests that are being serviced remotely. The main characteristics of the GWQ algorithm are:

1. A server places a request in the global queue only when it is fully loaded, i.e., when it receives a local request and does not have enough resources to attend to it.
2. The first priority of every server is to service requests from its local queue. Remote requests from the global queue are attended to only when the local queue is empty.
3. Each request is serviced by the same server throughout its lifetime.

Some disadvantages of this algorithm are:

1. Each video object is replicated at every server. Replication of the complete video object at every server is not actually a prerequisite for this scheme, but the case of partial replication is not analyzed.
2. All videos are assumed to be requested with the same probability; the popularity of the videos is not considered.
3. When there are several servers, a remote request is assigned to the first server that returns a positive answer. Thus, the current load of each candidate server is not considered. The assigned request may therefore increase the load on that server and also increases the possibility of generating new remote requests from it.

With the goal of minimizing the waiting time for a service to begin, S. Gonzalez et al. [82] considered a distributed VoD system in which only the most popular videos are replicated in all the servers, whereas the rest are distributed through the system following some allocation scheme. This work also presents an algorithm to efficiently balance the load in the proposed system. This balancing, by assigning jobs to other servers, increases the network traffic. In order to accomplish low-delay, high-quality video distribution without imposing extra load on the system, a video streaming system has been proposed by Yoshiaki Taniguchi et al. [83]. This system consists of a video server and multiple Proxy Servers. In this mechanism, the proxies communicate with each other and retrieve missing video data from an appropriate server, taking into account transfer delay and offerable quality. In addition, the quality of cached video data is adapted appropriately at a proxy to cope with client-to-client heterogeneity in terms of available bandwidth, end-system performance, and user preferences on perceived video quality. Minseok Song et al. [84] propose an adaptive data retrieval scheme for load sharing in clustered video servers.
They have analyzed how the data retrieval period affects the utilization of disk bandwidth and buffer space, and then developed a robust period management policy to satisfy the real-time requirements of video streams. The work also proposes a new data retrieval scheme in which the period can be dynamically adjusted so as to increase the disk bandwidth capacity of heavily loaded clusters and increase the number of clients admitted. Yin-Fu Huang et al. [85] focus on dynamic load balancing among the servers in a VoD system with clustered servers, in order to support more clients with reduced average request response time. Load balancing among the servers is achieved by triggering a load balancing mechanism that performs file migration and request migration when a load imbalance is detected.
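The GWQ behaviour summarized above can be modelled with a small sketch: each server serves its local queue first, overflow requests become globally visible, and servers with idle capacity drain the shared global queue. This is an illustration under simplifying assumptions, not the algorithm from [81]; the class and method names are invented, and the GWQ+L reclaiming step is omitted.

```python
from collections import deque

class Server:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity          # max concurrent streams
        self.active = []                  # requests currently being served
        self.local_queue = deque()

class GWQSystem:
    """Toy model of global-waiting-queue (GWQ) load sharing."""
    def __init__(self, servers):
        self.servers = servers
        self.global_queue = deque()

    def submit(self, server, request):
        server.local_queue.append(request)

    def schedule(self):
        # 1. Local requests take priority at each server.
        for s in self.servers:
            while s.local_queue and len(s.active) < s.capacity:
                s.active.append(s.local_queue.popleft())
            # A fully loaded server pushes its overflow to the global queue.
            while s.local_queue:
                self.global_queue.append(s.local_queue.popleft())
        # 2. Servers with idle capacity pull pending requests globally.
        for s in self.servers:
            while self.global_queue and len(s.active) < s.capacity:
                s.active.append(self.global_queue.popleft())
```

Note that once a request enters `active` at some server it stays there, mirroring the GWQ property that a request is serviced by the same server throughout its lifetime.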

Jun Guo et al. [86] have established a conjecture on how to balance the movie traffic load among combination groups of disks to maximize the level of disk resource sharing. For a given file replication instance, the conjecture predicts, in general, an effective lower bound on the blocking performance of the system. The work also proposes a numerical index that quantitatively measures the goodness of disk resource sharing in the allocation of multi-copy movie files, and a greedy file allocation method that finds a good-quality heuristic solution for each feasible file replication instance. Chen-Lung Chan et al. [87] propose a cooperative cache framework to automatically select appropriate cache sources and paths for various network environments. Jonathan Dukes et al. [88] describe in detail the implementation of dynamic replication in a server cluster environment. They also describe the architecture of the HammerHead multimedia server cluster. HammerHead has been developed as a cluster-aware layer that can exist on top of existing commodity multimedia servers; the prototype takes the form of a plug-in for the multimedia server in Microsoft Windows Server 2003™. Replicated state information is maintained using the Ensemble group communication toolkit. Xiaobo Zhou and Cheng-Zhong Xu [91] have proposed a video replication and placement algorithm (zipfr-slfa) that utilizes information about the Zipf-like video popularity distribution. They replicate the complete videos evenly across all the servers, for which the storage capacity of each individual Proxy Server must be very large in order to store all the videos. This may not allow each server to store replicas of a large number of videos. C. F. Chou et al. [79] have proposed an algorithm, Classical Replication with Round Robin (CR-RR), to replicate the videos. In this algorithm, a Proxy Server remains idle until it gets its turn.
Also, they replicate the complete video, which requires large storage space. Our work is motivated by these cooperative systems, and we enhance them by integrating a proxy prefix caching technique, a load sharing approach, and efficient prefix replication and distribution schemes with a dynamic buffer allocation algorithm, which greatly expands the aggregated cache storage with contributions from the cooperative Proxy Servers and the Tracker.

2.5 STREAMING APPROACHES

Peer-to-peer communication has recently become a popular alternative to the traditional client/server paradigm. To address the issues of Proxy Server caching limitations and limited service scalability, many peer-to-peer streaming approaches have been proposed as an

alternative to proxy caching over the past few years. These schemes include multicasting, batching, patching, periodic broadcasting, and chaining. In multicasting, the server is able to serve all user requests that arrive at the same time for the same video stream [94]. Multicast offers an efficient means of distributing a video program to multiple clients, greatly improving VoD performance. However, there are many problems to overcome in the development of multicast VoD systems. Huadong Ma and Kang G. Shin [93] evaluate and discuss the recent progress in developing multicast VoD systems. They present the concept and architecture of multicast VoD and introduce the techniques used in multicast VoD systems. They also analyze and evaluate problems related to multicast VoD service. In order to optimize the number of multicast streams, many batching techniques [123], [125], [127] have been proposed. In batching, the server batches together requests for the same video if their times of arrival are close, and multicasts the video to these requests to save network I/O bandwidth. Vincent C. H. Lee et al. [95] have proposed a unified Video-on-Demand (UVoD) architecture generalizing the traditional true Video-on-Demand (TVoD) and near Video-on-Demand (NVoD) architectures. In a traditional video server, the available resources are divided into a number of video channels. In a TVoD system, each user is allocated a dedicated channel for the entire viewing duration and can perform interactive VCR controls. By contrast, in an NVoD system multiple users share a multicast video channel. This reduces the resource requirement at the expense of long startup latency and limited interactive controllability. The UVoD architecture divides the available channels into unicast and multicast channels. Using intelligent client buffering, UVoD tries to achieve latency similar to TVoD while at the same time reducing the resource requirement.
A further integration of batching into the unicast channels to reduce startup latency is also suggested. Meng Guo et al. [96] note that batching is one of the key techniques for reducing server resources as well as server access and network bandwidth. VoD server replication is another approach that can allow a VoD service to handle a large number of clients, though at the additional cost of providing more servers. While replication is an effective way to increase service capacity, it needs to be coupled with appropriate selection techniques in order to make efficient use of the increased capacity. Hence, they investigate the design of server selection techniques for a system of replicated batching VoD servers. They have evaluated a range of selection algorithms applied to three batching approaches: Batching with Persistent Channel Allocation, Patching, and Hierarchical Multicast Stream Merging (HMSM). They also

have shown that server replication combined with an appropriate server selection scheme can increase the capacity of the service, leading to improved performance. The disadvantage of this batching scheme, however, is that requests arriving early in a batch are made to wait unfairly for late-arriving requests. It is a challenge for VoD system designers and developers to keep the time to service within the user's reneging tolerance under limited server resources. As a solution, Gennaro Boggia et al. [97] report the results of a study analyzing batching and buffering techniques. This involves serving all video requests issued during a short interval of time with a single stream. Using a mathematical model based on queuing networks, they have evaluated the main system performance as a function of load and batching interval duration. Vrinda et al. [98] have studied this and propose a model that analyzes the batching policy under different user reneging behaviors and derives the optimum value of the batching interval that maximizes the average number of users serviced and minimizes the reneging probability. Another contribution, from Charu et al. [99], analyzes system performance with explicit constant batching and demonstrates that a system without explicit constant batching performs better in terms of delays. They also propose a dynamic batching policy to improve system performance in both mean and maximum serving times. Hadas Shachnai et al. [100] have analyzed two classes of scheduling schemes, the maximum batch and minimum idle schemes, which provide two alternative ways of using a given stream capacity for effective batching. This work considers the next stream completion time as well as the viewer wait tolerance.
They have compared these schemes with two previously studied schemes: (1) first-come-first-served (FCFS), which schedules the video with the longest-waiting request, and (2) the maximum queue length (MQL) scheme, which selects the video with the maximum number of waiting requests. To make streaming services economically viable, many patching algorithms have also been proposed for minimizing the incremental cost of serving a new client, particularly for popular content. Conventional patching [102] reduces server and network overhead by allowing a client to receive (part of) a multimedia stream by listening to an ongoing transmission of the same clip, without increasing client playback delay. However, some of the patching schemes [103], [104] do not fully exploit the client buffer space, or the ability to listen to more than one ongoing transmission, for reducing bandwidth overheads. Subhabrata Sen et al. [105] have introduced Periodic Buffer Reuse (PBR) patching, which maximizes the amount of data that a client can retrieve from the ongoing transmission. Similar to other patching schemes, PBR employs a threshold to determine when to start a new complete transmission of the stream. They have derived a closed-form expression for the transmission bandwidth requirements for PBR

patching, and show how to determine the optimal threshold value. Another algorithm, called Greedy Buffer Reuse (GBR), is proposed to allow clients to patch to multiple ongoing transmissions so as to minimize the server and network transmission bandwidth requirements. S. M. Farhad et al. [106] have proposed a multicast communication technique for an enterprise network in which multimedia data are stored in distributed servers. A novel patching scheme called Client-Assisted Patching is presented, in which the buffers of clients in a multicast group can be used to patch the missing portion for clients who request the same movie immediately afterwards. This scheme significantly reduces the server load without requiring larger client cache space than conventional patching schemes. Clients can join an existing multicast session without waiting for the next available server stream, which reduces service latency. Dongliang Guan et al. [107] have proposed a two-level patching scheme, in which patching channels are rearranged through merging and further patching. Zhi-Wen Xu et al. [108] have proposed a method to economize Internet resources, observing that the cache policies influence the effectiveness of the proxy cache. Policies for batching and batch patching using a dynamic cache are presented based on the clients' request rate. This scheme enlarges the width of the batch and the patching window by combining segment-based dynamic caching with the strengths of patching. Huadong Ma et al. [109] have proposed a new patching scheme, called Best-Effort Patching (BEP), that offers a TVoD service in terms of both request admission and VCR interactivity. They use a novel dynamic merging algorithm with BEP to improve the efficiency of TVoD interactivity for popular videos. To provide Video-on-Demand service over the Internet in a scalable way, Yang Guo et al.
[110] have proposed P2Cast, an architecture that uses a peer-to-peer approach to cooperatively stream video using patching techniques, while relying only on unicast connections among peers. P2Cast addresses two key technical issues: (1) construction of an application overlay appropriate for streaming, and (2) provision of continuous stream playback (without glitches) in the face of disruption from an early-departing client. To address the scalability issue in Video-on-Demand systems, many broadcasting schemes [111], [112], [113] have been proposed. In these schemes, each video is partitioned into a number of segments, each repeatedly broadcast on its own communication channel (e.g., multicast group). To receive service, a client tunes to the appropriate channels to download the desired video. This strategy guarantees a service delay of no more than the broadcast period of the first segment. To ensure acceptable delays, this segment can be made small. A major advantage of this approach is that the required server bandwidth is independent of the number of users the system is designed to support. Each video, however, requires substantial bandwidth.

This requirement renders the approach suitable only for very popular videos; broadcasting thus requires relatively high bandwidth and buffer space at the client. Edward Mingjun Yan et al. [111] have proposed a new broadcast scheme, named Generalized Fibonacci Broadcasting (GFB), to address the issue of limiting the user-side bandwidth requirement. For any given combination of server and user bandwidths, the authors aim to achieve the least user waiting time with GFB. Yang Guo et al. [112] have proposed a scalable and flexible framework that integrates proxy-based prefix caching with periodic broadcast of the suffix of a video from the server, for efficiently streaming a set of popular videos to a large number of asynchronous clients. They have developed a methodology for (i) determining appropriate prefix and suffix transmission schemes based on the principle of decoupling the two transmissions from each other, and (ii) optimally allocating proxy buffer space among the set of videos. To take advantage of the skewed popularity of videos, Salahuddin et al. [113] have proposed a hybrid transmission scheme. It delivers the most popular videos through periodic broadcasting and the least popular videos through on-demand multicasting. While videos delivered through multicasting usually share a pool of server channels, broadcasting each video demands one or more channels dedicated to it. In both multicasting and batching, every user has to wait for the next multicast to get service. Moreover, the number of concurrent multicasts is still constrained by the bandwidth limitation of the server. Another approach that allows a media server to support a number of client requests simultaneously is chaining [121], [124], [126]. In this approach, each client is capable of caching a portion of the video and forwarding (streaming) it to other clients on demand at a later time.
This strategy allows the application to scale far beyond the physical limitations of the video server by avoiding a new server stream for each new request in well-connected networks. Geun Jeong et al. [114] have proposed a new scalable proxy caching scheme, called P2Proxy, for efficient multimedia streaming service in a P2P environment. The proposed P2Proxy scheme is composed of a group of clients that request the same media stream from a server. Each client in the group stores a different part of the stream, received through a regular channel into its local buffer from the beginning of the request until the client buffer becomes full. A client receives the requested stream from other clients as long as the parts of the stream are available within the client group. Only the missing parts of the stream, those not held in the client group, are received directly from the server through another patching channel. All clients in the group share the parts of the media stream. Jehan-Francois et al. [115] have presented a cooperative distribution protocol that requires clients watching a video to forward it to the next client. As a result, the video server will only

have to distribute the parts of a video that no client can forward. This protocol works best when clients have sufficient buffer capacity to store each video they are watching until they are done: when this is the case, the instantaneous server bandwidth never exceeds the video consumption rate. They have also shown how multicasting can further reduce the server and network bandwidth requirements of the protocol. Chitra Venkatramani et al. [116] have addressed the problem of efficiently streaming video assets to end clients over a distributed infrastructure consisting of origin servers and proxy caches. They have proposed a scheme incorporating the known server scheduling algorithms (batching/patching/batch-patching) and proxy caching algorithms (full/partial/no caching, with or without caching of patch bytes), and analyzed the minimum backbone bandwidth consumption under the optimal joint scheduling and caching strategies. Su Te-Chou et al. [117] have proposed an approach of forwarding the server stream client by client, and have proved that the minimum number of required server streams in such schemes is n - k + 1, where n is the number of client requests and k is a value determined by client buffer sizes and the distribution of requests. In addition, this work presents an optimal chaining algorithm using a dynamic buffer allocation strategy. This scheme utilizes the backward (basic chaining) and/or forward (adaptive chaining) buffer, and also exploits the buffers of other clients in order to extend the chain as much as possible. Jen-Kai et al. [118] extend the basic chaining scheme with two new techniques: two-way bridging and multicast chaining. The two-way bridging method employs video buffers as forward and/or backward bridges to extend each video chain as long as possible. Santosh Kulkarni et al. [119] have presented a stream tapping protocol that involves clients in the video distribution process.
As in conventional stream tapping, this protocol allows new clients to tap the most recent broadcast of the video they are watching. While conventional stream tapping required the server to send these clients the part of the video they missed, this protocol delegates the task to clients that are already watching the video, thus greatly reducing the workload of the server. The protocol works with clients that can only upload video data at a fraction of the video consumption rate, and includes a mechanism to control its network bandwidth consumption. Chen-Lung et al. [56] propose a new multicast infrastructure, called buffer-assisted on-demand multicast, that allows receivers to access a multicast stream asynchronously. A timing control mechanism is integrated on intermediate routing nodes (e.g., routers, proxies, or peer nodes in a peer-to-peer network) to branch time-variant multicast sub-streams to the corresponding receivers.
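The core admission rule shared by the chaining schemes surveyed above can be sketched as follows: a newcomer is fed by the chain tail only while the tail's buffer still holds the start of the video; otherwise the chain is broken and a new server stream is opened. This is a deliberately minimal illustration under simplifying assumptions (single video, fixed buffer size), and all names are hypothetical rather than taken from any cited scheme.

```python
class ChainClient:
    """A client in a chaining scheme: it keeps the most recent
    `buffer_len` blocks it has played and can forward them."""
    def __init__(self, buffer_len=30):
        self.buffer_len = buffer_len
        self.play_point = 0            # index of the next block to play

    def holds_start(self):
        # Block 0 is still buffered iff playback has not yet advanced
        # past the buffer window.
        return self.play_point <= self.buffer_len

def admit(chain, server_streams):
    """Chain a newcomer to the tail client if that client still
    buffers the start of the video; otherwise open a server stream."""
    if chain and chain[-1].holds_start():
        source = chain[-1]             # fed peer-to-peer by the chain tail
    else:
        source = "server"              # chain broken: new server stream
        server_streams.append(source)
    chain.append(ChainClient())
    return source
```

With closely spaced arrivals the chain keeps growing off a single server stream, which is exactly why the number of required server streams in [117] falls to n - k + 1 rather than n.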

Panayotis Fouliras et al. [120] have proposed a new scalable application-layer protocol, LEMP, specifically designed for data streaming applications with large client sets. It is based on a control hierarchy of successive levels for the clients, reducing the overhead to a constant number of messages per client. LEMP also provides a client chaining mechanism and a solution for handling client failures, although this involves many messages and increases the waiting time before playback starts. As the need to support VCR operations has increased, Hyunjoo Kim et al. [121] have proposed an approach for supporting VCR operations on the Internet. This service scheme is based on chaining, in which clients as well as the server provide streaming services. In this scheme, services are provided by unicast and managed locally using node lists. The scheme mainly supports frequent VCR operations without incurring significant overhead in the server workload. Joel Jeffry et al. [122] propose a chaining-based media content delivery algorithm that supports VCR operations. The algorithm explicitly balances the client-side requirements to support VCR functionality while preserving the advantages of using the client playback device as a proxy video server. This work also investigates the effect of supporting VCR operations on chaining. Te-Chou et al. [117] prove that the minimum number of required server streams in such schemes is n - k + 1, where n is the number of client requests and k is a value determined by client buffer sizes and the distribution of requests; in addition, they present an optimal chaining algorithm using a dynamic buffer allocation strategy. All these existing schemes have tried to reduce the duration of broadcasting or the number of additional remote video server channels for the same request from a set of closely located clients. They have also demonstrated the superior scalability of shifting all functionality to end-hosts.
Yet, in contrast to reliable and dedicated servers or proxies, loosely coupled autonomous end-hosts can easily crash, leave the network without notice, or even refuse to share their own data. Given that media playback lasts a long time and consumes substantial resources, dedicated proxies can still play an important role in building high-quality media streaming systems. Many of the existing chaining techniques are based on a single Proxy Server, and some of them have not considered the case of client failure. Hence, in this work we propose an approach, C2C-Chain, that integrates proxy prefix caching and load sharing approaches with a dynamic buffer allocation strategy to chain a set of clients sharing the video stream. The client failure case is also discussed.
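The client-failure concern can be made concrete with a toy repair rule (an illustration only, not the C2C-Chain algorithm itself, which is developed later in the thesis): when a forwarding client leaves, each surviving client is re-fed by its nearest live upstream neighbour, falling back to the Proxy Server at the head of the chain. All names here are hypothetical.

```python
def next_source(chain, failed, proxy="PS"):
    """For each surviving client, pick a stream source: the nearest
    live upstream client in the chain, else the proxy.

    chain:  list of client ids ordered from head (earliest) to tail
    failed: set of ids of clients that crashed or left the network
    Returns a dict mapping each live client to the node feeding it.
    """
    sources = {}
    upstream = proxy                   # the chain head is fed by the proxy
    for client in chain:
        if client in failed:
            continue                   # a departed client feeds nobody
        sources[client] = upstream
        upstream = client              # this client feeds the next live one
    return sources
```

This sketch ignores whether the upstream neighbour's buffer actually still covers the downstream play point; a real scheme must check that, and patch the gap from the proxy prefix otherwise.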

CHAPTER 3

BUFFER MANAGEMENT METHODS AND APPROACHES FOR VoD

3.1 OVERVIEW

Introduction

Transmission of videos over a communication network requires high network bandwidth, which is one of the most expensive and scarce resources in a Video-on-Demand system [75]. Therefore, one of the most important problems in a Video-on-Demand system is to optimize the network bandwidth. Placing a Proxy Server between the remote Central Multimedia Server (CMS) and the client can reduce the load on the Central Multimedia Server, the service delay for the user, and the bandwidth demand between the CMS and the Proxy Server (PS). A Proxy Server can be considered an intermediate node along the server-client path; in effect it partitions the server-client path into a server-proxy path and a proxy-client path [16]. When a video is accessed by a client, the whole video or a portion of it may already have been cached at the Proxy Server. If the video is not stored at the Proxy Server, it needs to be fetched from the Central Multimedia Server and then delivered to the client. At the same time, the initial portion of the video, or the complete video, can be cached at the Proxy Server to improve the efficiency of the system. This is possible if sufficient cache space is available at the Proxy Server; otherwise some replacement technique must be used to make room for the new video. This may increase the frequency of replacements, which in turn increases communication with the remote Central Multimedia Server.

Motivation

The motivation for the study of continuous video delivery systems, and the objective of this work, is to reduce the Wide Area Communication Network (WACN) bandwidth required from the remote Central Multimedia Server and the load on the Central Multimedia Server by increasing the video availability at the Proxy Server.
However, the storage space of the Proxy Server is limited, and hence efficient allocation of storage space for the videos to be cached at the Proxy Server is essential. There are two possible solutions to this problem.

1) Static buffer management: a fixed number of cache blocks is allocated to each video. This does not allow the Proxy Server to cache many videos and hence may not be an efficient approach [58]. Whenever the cache blocks available at the Proxy Server are not sufficient to cache a new video, some replacement technique must be used, which increases the frequency of replacements. The Proxy Server may then have to contact the Central Multimedia Server directly, increasing the bandwidth demand and network traffic of the system.

2) Dynamic buffer allocation [27]: when the cache blocks available at the Proxy Server are not sufficient to cache the new video, the required cache space is provided by reallocating the cache blocks among the videos. This increases the video availability at the Proxy Server.

Contribution

Repeated replacement of videos at the Proxy Server increases the frequency of access to the remote Central Multimedia Server and hence its load [46], which in turn increases the network traffic. To address these problems, we have developed three buffer allocation algorithms:

1. Efficient Dynamic Buffer Allocation and Reallocation Algorithm
2. Improved Buffer Allocation and Reallocation Algorithm
3. Scene Change based Dynamic Buffer Allocation Algorithm

These algorithms provide cache space for a new video by efficiently reallocating the buffer among the videos based on their popularity. Hence, video availability at the Proxy Server increases, reducing the bandwidth demand by reducing direct communication with the Central Multimedia Server.

3.2 BUFFER MANAGEMENT FOR DISTRIBUTED VoD ARCHITECTURE

This work initially proposes an efficient buffer allocation algorithm based on the popularity of the videos. The algorithm aims to allocate more cache blocks to more popular videos and fewer cache blocks to less popular videos, which in turn maximizes the cache hit rate and proxy buffer utilization, irrespective of the load on the Video-on-Demand system.

Distributed VoD Architecture

The Video-on-Demand system architecture considered in this work is shown in Figure 3.1. The system consists of a Central Multimedia Server, which is connected to a group of

Proxy Servers (PSs), and each Proxy Server is in turn connected to a large number of users. The Central Multimedia Server stores all the video contents. The Proxy Server caches the videos that are frequently in demand by its users. The Central Multimedia Server is connected to each Proxy Server through a fiber optic cable.

Figure 3.1 Distributed VoD Architecture

Efficient Buffer Allocation and Reallocation Method

When a request for a video arrives at the Proxy Server, there are three possibilities:

- The requested video is present in the Proxy Server, partially or completely
- The requested video is already being streamed from the Central Multimedia Server
- The requested video is not present in the Proxy Server

If the requested video is present in the Proxy Server, streaming to the client starts immediately from the Proxy Server. If the requested video is already being streamed to some client from the Proxy Server, the video is likewise streamed immediately from the Proxy Server to the client. If the requested video is not present in the Proxy Server, it must be downloaded from the Central Multimedia Server and then streamed to the requesting client. While streaming the downloaded video to the client, the initial portion of the video is cached at the Proxy Server. This is possible if sufficient space is available in the Proxy Server cache. Otherwise, the following popularity-based buffer allocation mechanism can be used to provide the required storage space for the new video.

Popularity of a video is defined as the number of hits to that video; that is, the popularity of a video is directly proportional to its number of hits. Initially, when all the cache blocks at the PS are free, the required number of cache blocks is allocated to the new video. If the required number of cache blocks is not available, the available number of cache blocks is allocated. If these allocated cache blocks are not sufficient to stream the video, we find the least popular video that is completely present in the Proxy Server, is currently not being streamed (completely offline), and holds more than the minimum number of blocks. If the requested video is more popular than this least popular completely offline video, all of its blocks except the minimum number are deallocated and given to the requested video; otherwise, only the minimum number of blocks is deallocated from it and given to the requested video. If no such completely offline video exists, we find the least popular video that is partially present in the Proxy Server, is currently not being streamed (partially offline), and holds more than the minimum number of blocks. If the requested video is more popular than this least popular partially offline video, all of its blocks except the minimum number are deallocated and given to the requested video; otherwise, only the minimum number of blocks is deallocated from it and given to the requested video.
If we cannot find such a completely or partially offline video in the Proxy Server, we find the least popular video that is completely or partially present in the Proxy Server, is currently being streamed (completely or partially online), and holds more than the minimum number of blocks. If the requested video is more popular than this least popular online video, all of its blocks except the minimum number are deallocated and given to the requested video; otherwise, only the minimum number of blocks is deallocated from it and given to the requested video. If no completely or partially online video with more than the minimum number of blocks can be found in the Proxy Server, the LRU-k replacement algorithm is used to make room for the new video.
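The tiered victim search described above can be sketched as follows. This is a minimal illustration, assuming simple per-video records; the class and field names are hypothetical, B_MIN mirrors the minimum-block floor of Table 3.1, and the candidate test (more than twice the minimum) follows the pseudocode's condition Blksall(V) - B_min > B_min.

```python
B_MIN = 30  # minimum blocks a cached video must keep (Table 3.1)

class Video:
    """Hypothetical per-video cache record."""
    def __init__(self, name, popularity, blocks, state):
        self.name = name
        self.popularity = popularity
        self.blocks = blocks  # cache blocks currently held
        self.state = state    # 'offline_complete', 'offline_partial', or 'online'

def reclaim_blocks(videos, requested_popularity):
    """Free blocks for a new video, searching victims tier by tier:
    completely offline first, then partially offline, then online."""
    for state in ('offline_complete', 'offline_partial', 'online'):
        candidates = [v for v in videos
                      if v.state == state and v.blocks - B_MIN > B_MIN]
        if not candidates:
            continue
        victim = min(candidates, key=lambda v: v.popularity)
        if requested_popularity > victim.popularity:
            freed = victim.blocks - B_MIN  # take everything above the floor
        else:
            freed = B_MIN                  # take only the minimum amount
        victim.blocks -= freed
        return freed
    return 0  # nothing reclaimable: fall back to LRU-k replacement
```

A caller would invoke `reclaim_blocks` repeatedly until enough blocks are collected, falling back to LRU-k when it returns 0.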

LRU-k Replacement Technique

The LRU-k algorithm maintains a history of the previous k accesses to each video in the cache [70]. The k-distance of a video at a given time is defined as the difference between the current time and the time at which the k-th most recent access to that video was made. LRU-k chooses to replace the end-most block of the video with the largest k-distance. LRU itself is the special case of LRU-k where k is 1.

Proposed Algorithm

Nomenclature:
  B_free        : Number of free blocks
  B_min         : Minimum number of blocks
  B_req         : Required number of blocks
  V_m           : Video m
  R_m           : Request for V_m
  P(R_m)        : Popularity of the requested video
  Sz(V_i)       : Size of the i-th video
  Blksall(V_i)  : Number of blocks allocated to the i-th video
  Flag_m        : = 0, video m is not present in the Proxy Server
                  = 1, video m is present in the Proxy Server
  V_lpco        : Least popular completely offline video
  V_lppo        : Least popular partially offline video
  V_lps         : Least popular currently-being-streamed video
  CB_proxy      : Proxy cache blocks

When a request R_m arrives at time t for video V_m, the following steps are executed:

if (CB_proxy is completely free)
    allocate B_req to V_m
else if (Flag_m == 0 and B_free > Sz(V_m))
    allocate B_req to V_m
else if (Flag_m == 0 and B_free > B_min)
    allocate B_min to V_m
else  [reallocate CB_proxy based on popularity]
    find the set of V_lpco

    if (found)
        for all these videos do the following
        {
            if ((Blksall(V_lpco) - B_min) > B_min)
                if (P(V_lpco) > P(V_m))
                {
                    deallocate B_min from V_lpco
                    allocate these B_min blocks to V_m
                }
                else
                {
                    deallocate all except B_min from V_lpco
                    allocate these blocks to V_m
                }
        }
    else
        find the set of V_lppo
        if (found)
            for all these videos do the following
            {
                if ((Blksall(V_lppo) - B_min) > B_min)
                    if (P(V_lppo) > P(V_m))
                    {
                        deallocate B_min from V_lppo
                        allocate these B_min blocks to V_m
                    }
                    else
                    {
                        deallocate all except B_min from V_lppo
                        allocate these blocks to V_m
                    }
            }
        else
            find the set of V_lps
            if (found)
            {
                if ((Blksall(V_lps) - B_min) > B_min)
                {
                    if (P(V_lps) > P(V_m))
                    {
                        deallocate B_min from V_lps
                        allocate these B_min blocks to V_m
                    }
                    else
                        deallocate all except B_min from V_lps and allocate these blocks to V_m
                }
                else
                {
                    if (P(V_lps) < P(V_m))
                        deallocate all the blocks from V_lps and allocate these blocks to V_m

                }
            }
            else
                use the LRU-k replacement technique

Simulation Model

The simulation model used here consists of a single multimedia server and 5 Proxy Servers. The following assumptions are made in the model:

- The size of each cache block and the number of cache blocks are the same for every Proxy Server.
- The user requests are distributed randomly among the Proxy Servers.
- The user request arrival pattern is randomly distributed over a range for all the Proxy Servers.
- The size of each video is uniformly distributed over a range.

The performance parameters are the average buffer allocation for each video, the average buffer allocation for the most popular videos, and the popularity of the video. The values used for the simulation are shown in Table 3.1.

Table 3.1 Simulation parameters

  Size of one cache block: 10 MB
  Total cache blocks in the Proxy Server (CB_proxy): 18000 MB
  Size of i-th video (MPEG-2) (Sz(V_i)): U(300 MB, 934 MB)
  Minimum number of blocks allocated to a video (B_min): 30
  Simulation duration: 30,000 secs / 500 mins

Performance Evaluation with Results

The results presented below are an average of several simulations conducted on the model. Figure 3.2 shows the total number of videos cached, Figure 3.3 the average buffer allocation for each video, and Figure 3.4 the average buffer allocation for the most popular videos. Initially, when few videos are cached, the average buffer allocation per video is high; as the number of cached videos increases, it decreases. Since the minimum number of blocks allocated to a video is 30, the average buffer allocation per video does not go below 30, as shown in Figure 3.3.
Figure 3.4 shows that initially, when few videos are cached, the average buffer allocation for the most popular videos is high, and it exceeds the average buffer allocation per video, as can be seen by comparing Figure 3.3 and Figure 3.4.

As the number of cached videos increases, the average buffer allocation for the most popular videos decreases, but it remains higher than the average buffer allocation per video.

Figure 3.2 Total No. of Videos Cached at Proxy Server
Figure 3.3 Average Cache Block Allocation for All Videos
Figure 3.4 Average Cache Blocks Allocated for Most Popular Videos

Improved Buffer Allocation and Reallocation Method

We propose an improved version of the dynamic buffer allocation algorithm based on the popularity of the videos. Here, the number of cache blocks allocated to the most popular videos is further increased, resulting in an increased client acceptance rate at the PS, while fewer cache blocks are allocated to the less popular videos. Hence, an increased client acceptance rate and a reduced client rejection rate are achieved at the Proxy Server, and buffer utilization is maximized irrespective of the load on the Video-on-Demand system. In this algorithm, if the cache blocks available at the Proxy Server are not sufficient to accommodate a new video, a set of completely or partially offline videos that have been allocated more than the minimum number of cache blocks is found. The additional cache blocks, beyond the minimum, are collected from these videos based on relative popularity. This process continues until the collected cache blocks are sufficient to cache the new video. If the collected cache blocks are still not sufficient, then a set of completely or partially online videos which are allocated more than the minimum number of

cache blocks are found. The additional cache blocks, beyond the minimum, are collected from all those videos that are less popular than the new video to be cached. This process continues until the collected cache blocks are sufficient to cache the new video. If the collected cache blocks are still not sufficient, the LRU-k replacement algorithm is used to cache the new video data. Hence, in the beginning more cache blocks are allocated to the most popular videos and fewer to the less popular videos. Finally, as the number of cached videos increases, only the minimum number of cache blocks required to stream each video is maintained. Compared to static buffer allocation of partial videos at the proxy with LRU-k replacement, the proposed algorithm allows more partial videos to be stored dynamically at the Proxy Server according to popularity, achieving a maximum video hit rate. Buffer utilization is also maximized irrespective of the load on the Video-on-Demand system.

Proposed Algorithm

When a request Req arrives at time t for a video V (V_req):

if (no. of free cache blocks > no. of cache blocks required by V_req)
    assign the required no. of cache blocks, based on popularity, to V_req
else
    assign the available no. of cache blocks to V_req
    if (the assigned cache blocks are not sufficient to stream V_req)
        identify the set of videos which are partially or completely offline
            and hold more than the minimum no. of blocks
        if (found)
            while (cache blocks are not completely allocated to V_req and
                   there are still videos with popularity < popularity of V_req)
            {
                consider the least popular video V_lpop
                if (popularity of V_req > popularity of V_lpop)
                    free all except the minimum no. of blocks from V_lpop
                    add these blocks to V_req
            }
        if (V_req does not get the required no.
            of cache blocks based on popularity)
        {
            identify the set of videos {V_i} which are being streamed and
                hold more than the minimum no. of blocks
            if (found)
            {
                while (cache blocks are not completely allocated to V_req and
                       there are still videos with popularity < popularity of V_req)
                {
                    consider the least popular video V_lpop
                    if (popularity of V_req > popularity of V_lpop)

                        free all except the minimum no. of blocks from V_lpop
                        add these blocks to V_req
                }
            }
        }
        else if (V_req gets at least the minimum no. of blocks)
            assign the collected blocks to V_req
        else
            use the LRU-k replacement method to cache the new video

Simulation Model

The simulation model for this approach consists of a single Central Multimedia Server and 5 Proxy Servers. The following assumptions are made in the model:

- The size of each cache block and the number of cache blocks are the same for every Proxy Server.
- The user requests are distributed randomly among the Proxy Servers.
- The user request arrival pattern is randomly distributed over a range for all the Proxy Servers.
- The size of each video is uniformly distributed over a range.

The performance parameters are the average buffer allocation for each video, the average buffer allocation for the most popular videos, and the popularity of the video. The values used for the simulation are shown in Table 3.2.

Table 3.2 Simulation parameters

  Size of one cache block: 10 MB
  Total cache blocks in the Proxy Server (CB_proxy): 18000 MB
  Size of i-th video (MPEG-2) (Sz(V_i)): U(300 MB, 934 MB)
  Minimum number of blocks allocated to a video (B_min): 30
  Simulation duration: 30,000 secs / 500 mins

Performance Evaluation with Results and Discussion

The results presented below are an average of several simulations conducted on the model. Figure 3.5 shows the total number of videos cached, Figure 3.6 the average buffer allocation for each video, and Figure 3.7 the average buffer allocation for the most popular videos.

In Figure 3.6, initially, when few videos are cached, the average buffer allocation per video is high; as the number of cached videos increases, it decreases. Since the minimum number of blocks allocated to a video is 30, the average buffer allocation per video does not go below 30. Figure 3.7 shows that initially, when few videos are cached, the average buffer allocation for the most popular videos is high, and it exceeds the average buffer allocation per video shown in Figure 3.6. As the number of cached videos increases, the average buffer allocation for the most popular videos decreases, but it remains higher than the average per-video allocation. Finally, when the number of cached videos reaches the maximum limit, the average buffer allocation even for the most popular videos reaches 30. Thus, more cache blocks are allocated to the most popular videos, and the utilization of the cache blocks in the Proxy Server is almost 100%.

Figure 3.5 Total No. of Videos Cached vs Time
Figure 3.6 Average Buffer Allocation for All Videos
Figure 3.7 Average Buffer Allocation for Most Popular Videos
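Both allocation algorithms above fall back to LRU-k replacement when reallocation cannot free enough blocks. A minimal sketch of the k-distance bookkeeping follows; the data structure and names are hypothetical, assuming only that per-video access timestamps are recorded.

```python
# Hypothetical sketch of LRU-k victim selection. `access_histories` maps each
# cached video to the timestamps of its past accesses.

def k_distance(history, k, now):
    """Time since the k-th most recent access; infinite if fewer than k accesses."""
    if len(history) < k:
        return float('inf')
    return now - sorted(history)[-k]

def lru_k_victim(access_histories, k, now):
    """Choose the video with the largest k-distance for replacement."""
    return max(access_histories,
               key=lambda v: k_distance(access_histories[v], k, now))
```

With k = 1 this reduces to ordinary LRU, as noted earlier.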

Summary

This work has concentrated on dynamic buffer allocation techniques to utilize the limited Proxy Server cache space more efficiently. The simulation results are promising. The proposed algorithm reduces the bandwidth requirement on the Central Multimedia Server to Proxy Server path by maximizing video availability at the Proxy Server, close to the client. It also allocates the maximum number of blocks to the most popular videos, increasing the cache hit rate. However, a video has high data-rate requirements and a long playback duration, which together require a huge caching resource. Storing such videos exhausts the limited cache space of a standalone Proxy Server very quickly. Hence, when the load on the above distributed VoD system increases, the number of rejected user requests also increases. In order to improve the video hit rate and reduce client rejections, a new VoD architecture of connected Proxy Servers with a load sharing algorithm is proposed.

3.3 LOAD SHARING FOR VIDEO CACHING BROTHER NETWORK (VCBN) ARCHITECTURE

The performance of the Proxy Server cache is affected not only by cache management at the proxy but also by effective cooperation and coordination among Proxy Servers. An improved video hit rate and reduced network traffic can be achieved with a scalable distributed VoD architecture of interconnected neighboring Proxy Servers. If these interconnected Proxy Servers cooperate to share the videos they hold, the desired video may be obtained from the left or right neighboring Proxy Server, significantly reducing the client's waiting time for service to begin, the quantity of data transmitted from the remote server, and the load on the Central Multimedia Server.

Distributed VCBN Architecture

A new VoD architecture with an efficient load sharing algorithm is proposed, as shown in Figure 3.8.
In this VoD architecture, each Proxy Server is connected to its left and right neighboring Proxy Servers (LPS, RPS), enabling it to share the videos available at its neighbors (brothers). Together with the proposed load sharing algorithm, this architecture reduces the communication demand at the Central Multimedia Server by increasing the service rate at the Proxy Server.

Figure 3.8 Video Caching Brother VoD Architecture

Storage of redundant videos at a Proxy Server is also reduced by sharing the videos present at the neighboring Proxy Servers and utilizing the bandwidth between them. The proposed proxy caching algorithm considers the popularity of the videos. This VoD architecture consists of a Centralized Multimedia Server, which contains all the videos and is connected to a group of Proxy Servers. The Proxy Servers are interconnected in a ring fashion, and each Proxy Server serves many users. Each Proxy Server caches the video content currently requested by its users. The Centralized Multimedia Server is connected to all the Proxy Servers, and all the Proxy Servers are assumed to be interconnected through fiber optic cables.

Load Sharing Strategy

When a request arrives for a particular video at the Proxy Server, one of the following three possibilities may occur:

- The requested video is present in the Proxy Server
- The requested video is not present in the Proxy Server, but is present in the left Proxy Server, in the right Proxy Server, or in both Proxy

Servers
- The requested video is not present in any of the Proxy Servers (the Proxy Server and its left and right neighbors)

If the requested video is present in the PS, the video is streamed immediately to the client from the PS. If not, we check whether it is present in the left or right neighboring Proxy Server. If the requested video is present only in the left neighboring Proxy Server, streaming starts from the left neighboring Proxy Server to the PS and then to the client. If it is present only in the right neighboring Proxy Server, streaming starts from the right neighboring Proxy Server to the PS and then to the client. If it is present at both the left and right neighboring Proxy Servers, streaming starts from whichever neighbor has the greater availability of buffer and bandwidth. If the requested video is not present in the Proxy Server, the left neighboring Proxy Server (LPS), or the right neighboring Proxy Server (RPS), we check for buffer availability at the PS. If sufficient buffer is available at the Proxy Server to cache the new video, the requested video is downloaded from the Central Multimedia Server to the PS and streamed from the PS to the client. If sufficient buffer is not available at the Proxy Server, we check the buffer and bandwidth availability at the left and right neighboring Proxy Servers. If sufficient buffer and bandwidth are not available at either neighbor, the dynamic buffer allocation algorithm is applied to provide space for the new video. If sufficient buffer and bandwidth are available only at the LPS, the buffer is allocated at the LPS, the video is downloaded from the Central Multimedia Server to the LPS, and the video is then streamed from the LPS to the client through the PS.
If sufficient buffer and bandwidth are available only at the RPS, the buffer is allocated at the RPS, the video is downloaded from the Central Multimedia Server to the RPS, and the video is then streamed from the RPS to the client through the PS. If sufficient buffer and bandwidth are available at both neighbors, the neighbor with more buffer and bandwidth is selected; the buffer is allocated there, the video is downloaded from the Central Multimedia Server to the selected neighbor, and the video is immediately streamed from it to the client through the PS. If sufficient bandwidth and buffer are found at neither the LPS nor the RPS, the request is rejected.
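The load-sharing decision above can be sketched as a single dispatch function. This is a hedged illustration, assuming each server is described by a small dictionary of its cached videos, free buffer, and available bandwidth; the names, fields, and the combined buffer-plus-bandwidth score used to pick between neighbors are assumptions, not the thesis's implementation.

```python
def choose_source(video, size, ps, lps, rps):
    """Return which server streams `video`: 'PS', 'LPS', 'RPS',
    'CMS->X' (download from CMS into server X), or 'REJECT'."""
    if video in ps['videos']:
        return 'PS'
    at_lps = video in lps['videos']
    at_rps = video in rps['videos']
    if at_lps and at_rps:
        # Present at both neighbors: pick the one with more bandwidth.
        return 'LPS' if lps['bandwidth'] >= rps['bandwidth'] else 'RPS'
    if at_lps and lps['bandwidth'] > 0:
        return 'LPS'
    if at_rps and rps['bandwidth'] > 0:
        return 'RPS'
    # Video absent everywhere locally: fetch from CMS, preferring the PS.
    if ps['free_buffer'] >= size:
        return 'CMS->PS'
    best = None
    for name, srv in (('LPS', lps), ('RPS', rps)):
        if srv['free_buffer'] >= size and srv['bandwidth'] > 0:
            score = srv['free_buffer'] + srv['bandwidth']
            if best is None or score > best[1]:
                best = (name, score)
    return ('CMS->' + best[0]) if best else 'REJECT'
```

In the full strategy, the 'REJECT' path would first attempt the dynamic buffer reallocation described earlier before finally rejecting the request.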

3.3.3 Proposed Algorithm

Nomenclature:
  PS      : Proxy Server
  LPS     : Left neighboring Proxy Server
  RPS     : Right neighboring Proxy Server
  V_req   : Requested video
  B_req   : Buffer required
  B_PS    : Buffer of the Proxy Server
  BW_LPS  : Bandwidth availability at the LPS
  BW_RPS  : Bandwidth availability at the RPS
  V_off   : Offline video

When a request R_m for a video m arrives at time t, the following steps are executed:

if (V_req is present at PS)
    start streaming from PS
else if (V_req is present at both LPS and RPS)
    if (BW_LPS > BW_RPS) and (B_req for V_req is sufficient at PS)
        start streaming from LPS to the user
    else
        start streaming from RPS to the user
else if (V_req is present only at LPS)
    if (BW_LPS and B_req for V_req at PS are sufficient)
        start streaming from LPS to the user
else if (V_req is present only at RPS)
    if (BW_RPS and B_req for V_req at PS are sufficient)
        start streaming from RPS to the user
else if (B_req for V_req is available at PS)
{
    start downloading from CMS to PS
    start streaming from PS to the user
}
else  [reallocate B_PS based on the popularity of videos]
    find a V_off at PS such that popularity(V_off) < popularity(V_req)
    if (found)
    {
        deallocate the buffer from this V_off and

        allocate the deallocated buffer to V_req
    }
    else
        reject the request

Experimentation

Simulation Model

Our simulation model consists of a single Central Multimedia Server and five Proxy Servers. The following assumptions are made in the model:

- The size of each cache block and the number of cache blocks are assumed to be the same for every Proxy Server.
- The user requests are distributed randomly among the Proxy Servers.
- The user request arrival pattern is randomly distributed over a range for all the Proxy Servers.
- The size of each video is uniformly distributed over a range.

The performance parameters considered are the reduction of load on the Central Multimedia Server and the bandwidth and buffer utilization at the neighboring Proxy Servers.

Performance Evaluation of the Load Sharing Strategy for VCBN

Figure 3.9 Total No. of Videos vs Time (min)
Figure 3.10 No. of Videos Cached and Streamed from PS
Figure 3.11 No. of Videos Streamed from LPS
Figure 3.12 No. of Videos Streamed from RPS

Figure 3.13 No. of Videos Streamed from CMS
Figure 3.14 Total No. of Videos Streamed from (PS+LPS+RPS) vs CMS

Figure 3.9 shows the total number of videos streamed from the Proxy Server with respect to time. Figure 3.10 shows the total number of videos cached at and streamed from the PS; this does not include the videos streamed from the LPS and RPS through the PS. Figure 3.11 and Figure 3.12 show the total number of videos streamed from the LPS through the PS and from the RPS through the PS, respectively. Figure 3.13 shows the number of videos streamed from the CMS through the PS; this number remains minimal throughout, indicating reduced communication with the CMS and hence a reduced load on the Centralized Multimedia Server. Figure 3.14 compares the total number of videos streamed from (PS+LPS+RPS) with the number streamed from the CMS, reflecting the increased cache hit rate.

Summary

In this VoD architecture, neighboring Proxy Servers are interconnected. Together with the proposed load sharing algorithm, the architecture reduces the communication demand at the Central Multimedia Server, which in turn reduces its load and the traffic on the Central Multimedia Server to Proxy Server path. Storage of redundant videos at a Proxy Server is also reduced by sharing the videos present at the neighboring Proxy Servers and utilizing the bandwidth and buffer between them. New architectures can be designed with a group of interconnected, closely located Proxy Servers to enlarge the storage space, further increasing video availability close to the client.

3.4 COMBINATION OF BUFFER MANAGEMENT AND LOAD SHARING FOR COORDINATOR BASED PROXY SERVERS ARCHITECTURE

3.4.1 Coordinator Based VoD Architecture (LPSG/CLOPS)

A promising way to achieve an increased service rate and high availability is to interconnect closely located Proxy Servers into a clustered network of Proxy Servers. This

creates a distributed VoD system of loosely coupled Proxy Servers, in which partial video objects can be stored and shared among the Proxy Servers based on the local demand for the videos [89], [90]. This reduces the client's waiting time for service to begin and the network traffic, and increases the storage capacity of the system, the video availability, and the service rate. The proxy caching technique used with this architecture uses the popularity of the videos to select which videos to cache at the Proxy Servers of the cluster. However, interconnected Proxy Servers may increase the client waiting time due to the delay involved in searching for the requested video among the other Proxy Servers of the cluster. Hence, in order to maintain information about the presence of videos among the cluster of interconnected Proxy Servers, a novel three-layer architecture of distributed Proxy Servers with a coordinator called a Tracker is proposed in this work, as shown in Figure 3.15. The coordinator also manages the distribution of videos among the Proxy Servers by periodically updating the popularity of the videos. This architecture consists of a remote Central Multimedia Server (CMS), far away from the users, which is connected to a set of Trackers (TRs). Each Tracker is in turn connected to a group of Proxy Servers, and these Proxy Servers are assumed to be interconnected in a ring pattern. A group of clients is connected to each Proxy Server. This cluster of Proxy Servers is called a Local Proxy Servers Group (LPSG) or Cluster of Proxy Servers (CLOPS). Each such LPSG, connected to the CMS, is in turn connected to its left and right neighboring LPSGs in a ring fashion through its Tracker. This enables an LPSG to share the videos available at the Proxy Servers of other LPSGs with the coordination of the Tracker, thus avoiding frequent access to the remote Central Multimedia Server.
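The Tracker's role as a video directory for its LPSG can be sketched as follows. This is a minimal illustration, assuming the Tracker simply maps video identifiers to the proxies that cache them; the class, method names, and structure are hypothetical, as the thesis does not specify the Tracker's internal data layout.

```python
class Tracker:
    """Hypothetical LPSG coordinator: a directory of which proxy holds which video."""

    def __init__(self):
        self.directory = {}  # video id -> set of proxy ids holding (part of) it

    def register(self, proxy_id, video_id):
        """A proxy reports that it now caches (a prefix of) a video."""
        self.directory.setdefault(video_id, set()).add(proxy_id)

    def locate(self, video_id):
        """Return the proxies holding the video, or None, signalling a
        fall-back to a neighboring LPSG's Tracker or, finally, the CMS."""
        holders = self.directory.get(video_id)
        return sorted(holders) if holders else None
```

A lookup at the local Tracker replaces a search across all proxies of the cluster, which is the delay the three-layer design aims to eliminate.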
Arranging the Proxy Servers in the form of a cluster (LPSG) provides the following advantages.

Reduced service delay and transmission cost: distributing videos among the PSs of an LPSG based on the local popularity of the videos, and sharing those videos among the PSs, enables the system to store more videos and to serve the maximum number of requests from within the LPSG itself. This integrated approach reduces the service delay for the user and the communication demand at the CMS. Hence, a significant reduction in client waiting time, network traffic, and transmission cost can be achieved.

Increased aggregate storage space: by distributing a large number of videos across the PSs and TR of an LPSG, an increased overall storage capacity and a high cache hit rate can be achieved. For example, if 10 PSs within an LPSG can manage 500 Mbytes

each, then the total space available would be 5 GB, and 200 such proxies could store about 100 GB of movies.

Figure 3.15 Coordinator Based VoD Architecture of a Group of Proxy Servers (LPSG/CLOPS)

Reduced load on the CMS: caching videos at the PSs of an LPSG based on their local demand enables the system to serve more clients, avoiding the download of complete videos from the Central Multimedia Server and therefore reducing the network bandwidth requirement on the CMS to PS path and the load on the CMS.

Scalability: the capacity of the system can be expanded by adding more PSs, and the interconnected TRs increase the system throughput.

Load balancing between fully loaded and lightly loaded PSs increases the performance overhead and the network traffic across the PSs. Hence, an efficient load sharing algorithm is proposed to achieve a higher client acceptance rate with a lower request-service delay. This architecture serves videos with the goal of optimizing the request-service delay for the user. The scheme provides increased scalability and availability of videos in the cluster of Proxy Servers, and can provide terabytes of content to thousands of clients by utilizing the aggregate storage space of the Proxy Servers of an LPSG. In order to achieve better quality and high video availability close to the client, we also propose an efficient dynamic buffer allocation algorithm based on a scene change technique.
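The aggregate-capacity figures quoted above follow from simple arithmetic and can be checked directly (decimal units assumed):

```python
# Quick check of the aggregate storage figures quoted above (decimal units).
per_proxy_mb = 500                     # storage managed by one PS

lpsg_gb = 10 * per_proxy_mb / 1000     # 10 PSs in one LPSG -> GB
total_gb = 200 * per_proxy_mb / 1000   # 200 proxies overall -> GB

print(lpsg_gb, total_gb)
```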

3.4.2 Dynamic Buffer Allocation (DBA) Based on Scene Change (SC)

This is the second buffer allocation method proposed. It comprises two approaches to improve buffer utilization at the Proxy Server, and is used in combination with the load sharing strategy for the proposed coordinator based architecture shown in Figure 3.15. The first approach is based on scene change identification, in which the buffer is allocated based on the maximum and mean size of the scene. In the second approach, the buffer is dynamically allocated based on the current sizes of the I, B and P frames (MPEG) using the Frame Differencing Technique (FDT). [FDT takes advantage of the similarity between successive video frames. If two successive frames have the same background, there is no need to store the background again; only the differences between the frames need be stored. Instead of describing every pixel in every frame, this technique describes all of the pixels in the first frame and then, for every frame that follows, describes only the pixels that differ from the previous frame. If most of the pixels in a frame differ from the previous frame, the scene identifier indicates a new scene and the need to describe the new frame in full, allowing every pixel to be described or rendered. Each completely rendered frame is referred to as a key frame.] When the difference between the cache requirements of two consecutive frames is very large, the process of buffer reallocation is initiated. This scheme improves buffer utilization significantly compared to static buffer allocation, irrespective of the frame size. Each video consists of a number of scenes, and each scene consists of a number of frames. Figure 3.16 shows a portion of the sequence of frames and scenes of a practical video trace; the vertical axis represents the number of bits per frame, and the horizontal axis represents the corresponding frame index.
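The scene change test of Eq. (1) below can be sketched as follows; this is a hedged illustration, with the frame-size trace and the threshold value chosen for demonstration only:

```python
# Sketch of the scene change test: a scene change is declared when the
# bit count of successive frames jumps by more than a threshold T_min.
from typing import List

def scene_changes(frame_bits: List[int], t_min: int) -> List[int]:
    """Return the indices n at which T_n = 1, i.e. frame n starts a new scene."""
    changes = []
    for n in range(1, len(frame_bits)):
        if frame_bits[n] - frame_bits[n - 1] > t_min:
            changes.append(n)
    return changes

# A toy trace: two stationary segments with an abrupt jump at frame 4.
trace = [1000, 1050, 980, 1020, 5200, 5100, 5150]
print(scene_changes(trace, t_min=2000))  # [4]
```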
It is clear that the data record is composed of stationary segments. The average bit rate of each segment changes abruptly from one segment to another, and visually these abrupt transitions coincide with scene changes. A scene change is declared if the change in the number of bits between successive frames exceeds a certain threshold value in a continuous manner. The scene change identification function T_n is calculated as follows, with the parameters given in Table 3.3:

T_n = 1 if (B × F_n) − (B × F_{n−1}) > T_min; T_n = 0 otherwise    (1)

Table 3.3 Parameters for DBA+SC

Parameter | Meaning
F_n   | Size of the n-th frame
B     | The number of bits in the n-th frame
T_min | Represents the threshold
T_n   | Scene change indication function

The ratio of the peak rate to the average rate may vary significantly from segment to segment for MPEG coded videos. If a constant bit rate (CBR) approach is used to cache MPEG video segments, it is difficult to guarantee QoS while keeping buffer utilization high. For instance, cache utilization will be low if the cache allocated matches the peak rate, while the delay or the data loss rate (DLR) will be high if only the average number of cache blocks is allocated [66]. It is evident from Figure 3.16 that the bit rate does not vary much during a scene. Hence, the following dynamic buffer allocation algorithm is proposed:

Algorithm: Dynamic Buffer Allocation Based on Scene Change (DBA+SC)
1. Identify the scene change
2. Determine the maximum buffer needed
3. Initiate a negotiation process for allocating a buffer size equal to the maximum size
4. Go to step 1

The maximum number of cache blocks required for each scene can be determined and stored beforehand. During the retrieval process, if a scene change is detected, the maximum cache blocks required for the new scene can be read and the buffer allocated for this scene.

Figure 3.16 A Video trace

Improved Cache Utilization

The algorithm proposed above achieves zero data loss rate (DLR) at the expense of cache utilization. The cache utilization can be improved with the following procedure.

Algorithm: Dynamic Buffer Allocation Based on Scene Change with Improved Cache Utilization
1. Identify the scene change { (B × F_n) − (B × F_{n−1}) > T_min }
2. If a scene change occurs, allocate the required number [B × F_I] of cache blocks for the I frame and allocate the average number of cache blocks needed (which can be determined in advance for stored videos) for the B and P frames
3. Initiate a process of negotiation to acquire [B × F_I] for the I frame and the mean number of cache blocks for the B and P frames
4. Go to step 1

The average number of cache blocks required for each scene should be determined offline. For stored videos, it is easy to find the number of cache blocks required by the frames of each scene.

Figure 3.17 Scenes and frames of Video

Suppose the video has N_sce scenes, N_f frames in each scene, and B bits in each frame, as shown in Figure 3.17. Then the total number of bits (M) of the video and the maximum number of bits (C) that can be cached (for K scenes) are determined as follows:

M = Σ_{j=1}^{N_sce} Σ_{i=1}^{N_f} B_i × F_i    (2)

C = ( Σ_{j=1}^{N_sce} Σ_{i=1}^{N_f} B_i × F_i ) / K    (3)

and the cache utilization is

ρ = C / M = 1 / K    (4)

where
F_i : the i-th frame size
M : total number of bits of the video
C : maximum number of bits that can be cached
K : number of scenes
ρ : cache utilization

It is clear that the cache utilization is the reciprocal of K.

Dynamic Buffer Allocation for Real Time MPEG Videos

If the total size of the video, M = Σ_{j=1}^{N_sce} Σ_{i=1}^{N_f} B_i × F_i, is calculated and associated with the stream by the video source, it is easier to realize a more effective buffer allocation algorithm, since the size of the next frame is known prior to buffer allocation. Suppose the number of cache blocks currently allocated is R blocks per scene, the size of the frame to be cached is S bits, and there are N frames per scene in total. Then, if the current allocation of R blocks can accommodate the frame, the number of cache blocks allocated is kept unchanged; otherwise, a renegotiation is initiated for the required number of blocks.

Reducing the Number of Renegotiations

Although buffer utilization is improved by this dynamic buffer allocation, the renegotiation is still a significant burden on the buffer management modules. To reduce the renegotiation frequency while keeping buffer utilization high, an algorithm based on I frames and B frames is introduced in this section. From a detailed analysis of the MPEG video trace, we find that I frames are generally large, while B frames (B_f) are small. Most of the time, when the size of the I frame changes significantly, the sizes of the P and B frames change accordingly, implying that any increase or decrease in the size of the I frame indicates a corresponding increase or decrease in the sizes of the P and B frames. Therefore, we can allocate the buffer based on I frames to improve QoS and buffer utilization. From the analysis of the observed trace we can also see that the difference between B frames within each Group of Pictures (GOP) is not large. We can use this characteristic to increase buffer utilization: first, index all the B frames to make retrieval easier.
Then form n groups (G_i, i = 1..n) of B frames of almost equal size, find the B frame of maximum size (B_f^max(i)) in each group, and then allocate a buffer of this maximum size to all B frames in the respective group G_i. Compared to the algorithm of the previous section, this scheme drastically reduces the number of buffer reallocations.
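The grouping rule above can be sketched as follows; this is a hedged illustration in which the group count, the frame sizes and the function name are our own choices, not the thesis implementation:

```python
# Sketch of the renegotiation-reduction idea: B frames are split into
# groups of similar size, and every frame in a group is given a buffer
# equal to the largest frame in that group, so the allocation only
# changes between groups rather than per frame.
from typing import Dict, List

def group_b_frames(b_frame_sizes: List[int], n_groups: int) -> Dict[int, int]:
    """Map each B-frame index to its allocated buffer size (the group maximum)."""
    order = sorted(range(len(b_frame_sizes)), key=lambda i: b_frame_sizes[i])
    group_len = -(-len(order) // n_groups)  # ceil division
    allocation = {}
    for g in range(n_groups):
        members = order[g * group_len:(g + 1) * group_len]
        if not members:
            continue
        group_max = max(b_frame_sizes[i] for i in members)
        for i in members:
            allocation[i] = group_max
    return allocation

sizes = [120, 130, 125, 400, 410, 390]
alloc = group_b_frames(sizes, n_groups=2)
print(alloc)  # small frames share one allocation, large frames another
```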

Algorithm: To Reduce the Number of Renegotiations
1. Index all the B frames
2. Form n groups G_i (i = 1..n) of B frames of almost equal size
3. For i = 1 to n, find the B frame of maximum size in each group: B_f^max(i) = max over B_f Є G_i of Sz(B_f)
4. For i = 1 to n, allocate a buffer of size B_f^max(i) to all B frames of the i-th group G_i
5. Go to step 1

Load Sharing Approach for the LPSG/CLOPS Architecture

Introduction

In a distributed VoD system with an interconnected cluster of Proxy Servers, how to service more requests locally with very few communications with the remote Central Multimedia Server is a critical problem. Frequent communication with the Central Multimedia Server increases the network traffic and cost, and also the waiting time for the client. To overcome these problems, a better solution is to dynamically share the videos stored among the Proxy Servers of the interconnected Local Proxy Servers group (LPSG CLOPS). Efficient load sharing among the cluster of interconnected Proxy Servers can be achieved with the help of a centralized coordinator, the TR, which maintains a database with complete information such as which video is present at which Proxy Server and its size. Hence, the coordinator can allow every Proxy Server to service a request even in the case of a cache miss, by exploiting the video data available at other Proxy Servers of the cluster. This avoids the frequent downloading of complete videos from the remote Central Multimedia Server, and consequently the bandwidth usage on the CMS-PS path is reduced. The proposed dynamic buffer management algorithm can be used to provide the cache space for a new video to be downloaded.
This increases the number of videos available at the cluster close to the client, which in turn decreases the load on the Central Multimedia Server, the network traffic, the cost and the service delay for the user.

Load Sharing Strategy for LPSG CLOPS

In this section, we explain how the proposed load sharing algorithm helps the Proxy Server to service a request immediately even in the case of a cache miss. Whenever a client wishes to play a video, it first sends a request V_req to its parent Proxy Server (PS_q). If the requested video is present at the PS_q at which the request has arrived, the streaming of the video starts immediately, and hence the initial startup latency is very low. Otherwise, the

request is forwarded to the TR. The TR then checks whether V_req is present at any PS in that LPSG CLOPS. If so, the TR initiates the streaming of V_req from that PS to the requesting PS_q and intimates PS_q accordingly. PS_q streams this video to the requesting client, and hence the request-service delay is very small. If V_req is not present at any of the PSs in that LPSG CLOPS, the Tracker of the CLOPS (TR(CLOPS)) passes the request to the Tracker of the neighboring CLOPS, TR(NBR(CLOPS)). The TR(NBR(CLOPS)) then checks whether V_req is present in its CLOPS using perfect hashing. If V_req is present at any of its PSs, the TR(NBR(CLOPS)) initiates the streaming of V_req from its CLOPS to the TR(CLOPS_req) through the optimal path found. The TR(CLOPS_req) in turn continues the streaming of V_req to the requesting PS_q and intimates it accordingly. PS_q streams this video to the requesting client; the request-service delay is relatively higher in this case, but since it reduces the bandwidth usage between the Central Multimedia Server and the Proxy Server, it is acceptable and maintains high QoS. If V_req is not present in the NBR(CLOPS) either, the TR(CLOPS_req) decides to download V_req from the CMS and stream it to PS_q. Here, the TR initiates the downloading of V_req from the CMS to the user through PS_q. While downloading, the initial portion of the video can be cached at PS_q, provided cache space is available; otherwise, the dynamic buffer allocation method is applied to cache the initial portion of the video. Hence, the request-service delay and the bandwidth usage between the CMS and the PS are high in this case, but very few requests are served from the CMS.
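The lookup cascade described above can be sketched as follows; this is a hedged illustration whose catalog layout, names and return strings are our own, not the thesis API:

```python
# Sketch of the load sharing lookup cascade: a request is served from
# the parent PS, then any PS in the local CLOPS (via the TR), then a
# neighboring CLOPS, and only as a last resort from the CMS.
from typing import Dict, Set

def locate_video(v_req: str,
                 parent_ps: str,
                 clops: Dict[str, Set[str]],          # PS name -> cached videos
                 neighbor_clops: Dict[str, Set[str]]) -> str:
    """Return which tier serves the request."""
    if v_req in clops.get(parent_ps, set()):
        return f"stream from parent {parent_ps}"          # lowest delay
    for ps, videos in clops.items():                      # TR-coordinated lookup
        if v_req in videos:
            return f"stream from {ps} via TR"
    for ps, videos in neighbor_clops.items():             # neighboring CLOPS
        if v_req in videos:
            return f"stream from neighbor {ps} via trackers"
    return "download from CMS and cache prefix at parent" # rare case

clops = {"PS1": {"v1"}, "PS2": {"v2"}}
nbr = {"PS9": {"v3"}}
print(locate_video("v2", "PS1", clops, nbr))  # stream from PS2 via TR
```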
Whenever sufficient buffer and bandwidth are not available in the above operation, the user request is rejected, which is a very rare possibility, as shown by the simulation results.

Dynamic Video Replacement Algorithm

We search for the least frequently accessed video at the CLOPS which has been present at the PS for at least T_1 minutes, is currently not being streamed (off-line), and has not been requested in the last T_2 minutes. If such a video is found, the new V_req replaces this off-line video. If no such video is found, DBA can be used to collect (deallocate) the cache blocks required for V_req. These blocks are collected from the set of comparatively less popular videos, leaving the minimum buffer required to stream those videos; the collected buffer is then allocated to V_req.

Proposed Integrated Algorithm

Proposed Load Sharing with DBA+SC Algorithm

[Nomenclature:

PS_q : q-th Proxy Server
PS(CLOPS) : Proxy Server at which V_req is found in the CLOPS
V_req : Requested video
BUF_req : Required buffer based on popularity
Req-Ser : Request-to-service delay
W_1(V_req) : First W_1 minutes of V_req
Pop(V_req) : Popularity of the requested video]

When a request for a video V (V_req) arrives at a particular PS_q under a TR, do the following:

if (V_req is present at PS_q)          // present at PS_q
    Stream V_req to the user immediately from PS_q
else
    Pass the request to the TR
    if (V_req Є TR CLOPS)
        TR initiates the streaming of V_req to the user from the PS at which V_req is found
    else
        Pass the request to the TR of the neighboring CLOPS
        if (V_req Є NBR-CLOPS)
            TR initiates the streaming of V_req to the user from the NBR-CLOPS
        else {
            TR initiates the downloading of V_req from the CMS to the user through PS_q;
            while downloading, it stores W_1 min(V_req) α Pop(V_req) at the PS using DBA+SC
            if free cache space is available, otherwise executes the video replacement
            algorithm to make room for V_req
        }

Scene Change-Based Caching Algorithm using DBA (DBA+SC)
1. Identify the scene change: { (B × F_n) − (B × F_{n−1}) > T_min }
2. If a scene change occurs, determine the cache blocks for the I (CB_I), B (CB_B) and P (CB_P) frames
3. For i = 1 to N_f:
       CB_I = [B × F_I]   (F_I : I frame)
       CB_B = [B × F_B]   (F_B : B frame)
       CB_P = [B × F_P]   (F_P : P frame)
4. Initiate a process of negotiation to acquire the required number of cache blocks for the I, B and P frames
5. If sufficient cache blocks are found
       Allocate the required number of cache blocks to the I, B and P frames

   else
       Allocate the cache blocks using the Dynamic Buffer Allocation (DBA) algorithm described earlier

Experimentation

Simulation Model

The simulation model consists of a single CMS and a set of TRs. All the TRs are interconnected among themselves in a ring fashion. Each TR is in turn connected to a set of PSs, which are also interconnected among themselves in a ring fashion. Each Proxy Server is connected to a group of users. To measure the performance of the proposed approach accurately, we use the video availability, i.e. the total number of videos served from the CLOPS, and the video hit ratio (VHR), i.e. the ratio of the number of requests served immediately to the number of requests that arrived. The parameters considered for the simulation are shown in Table 3.4. In addition, we also use the average client waiting time, the average number of replacements, the average number of accesses to the Central Multimedia Server and the network transmission cost as performance metrics.

Results and Discussion

The simulation results presented below are an average of several simulations conducted on the model. Consider Figure 3.18, which shows that the average number of requests served from the various proxies of the CLOPS and NBR(CLOPS) is about 84%, while the average number of requests served from the CMS is only about 16%.

Table 3.4 Simulation Parameters for LPSG CLOPS

Parameter | Value
Number of Central Multimedia Servers | 1
Number of Proxy Servers | 6
Number of users | 600
Number of videos | 200
Request distribution | Zipf-like
Transmission delay between the proxy and the client | 100 ms
Transmission delay between proxy and proxy, and TR and PS | 200 ms
Transmission delay between the TR and the proxy | 300 ms
Transmission delay between the CMS and the proxy | 1200 ms
Size of a cached video | 375 MB to 1868 MB (25 min to 2 hr)

Figure 3.18 Average number of requests served from CLOPS using DBA with scene change
Figure 3.19 Video Hit Ratio in CLOPS using DBA with scene change
Figure 3.20 Average waiting time for the client in CLOPS when DBA with scene change is used

Figure 3.21 Average Transmission cost when DBA with scene change is used in CLOPS
Figure 3.22 Average Access rate to CMS when DBA with scene change is used in CLOPS
Figure 3.23 Average Number of Replacements in CLOPS with DBA+SC vs. without DBA+SC

As the maximum amount of the most frequently requested video data has been cached and streamed from the CLOPS with the cooperation of the PSs and the coordination of the TR of the CLOPS, this scheme has increased the video hit ratio and reduced the access rate to the CMS by 35%-45%, as shown in Figure 3.19 and Figure 3.22. Hence, the network traffic along the Central Multimedia Server-client path is reduced; in turn, the transmission cost and transmission delay are also reduced significantly when compared to a system with a single proxy and to a CLOPS without DBA, as shown in Figure 3.21. The use of the DBA method and the load sharing technique achieves a 35%-61% reduction in the average number of replacements at the Proxy Servers when compared to a single proxy and to a CLOPS without DBA and load sharing, as shown in Figure 3.23. More blocks of frequently requested videos are cached and shared among the proxies of the CLOPS, so when there is a request for the i-th video, streaming starts from one of the PSs immediately; hence the service delay and the network bandwidth usage on the CMS-to-Proxy Server path are very low. If the requested videos are present at OTR(PS_q), these videos are streamed from OTR(PS_q) to the client through PS_q, so the client's waiting time for these videos is relatively higher. Even otherwise, a good number of videos are served from the NBR(CLOPS), which reduces the frequent downloading of requested videos from the CMS to PS_q and in turn reduces the initial playout delay for clients requesting videos that are not present at PS_q. Very few blocks of video are streamed from the CMS, only when V_req is present neither in that CLOPS nor in the NBR(CLOPS).
Even though the initial startup delay and transmission cost appear higher, as shown in Figure 3.20 and Figure 3.21, this is acceptable because on average 68%-84% of the videos are cached and streamed from the CLOPS and NBR(CLOPS) while assuring high QoS, as shown in Figure 3.18 and Figure 3.19. Only 16%-32% of the videos are downloaded from the CMS; this reduces the number of accesses to the CMS, and hence the transmission cost and time are also reduced significantly.

Summary

This work proposes the integration of efficient popularity-based dynamic buffer allocation algorithms with a load sharing strategy for the proposed scalable VoD architecture of a cluster of Proxy Servers. The proposed buffer allocation schemes utilize the limited Proxy Server cache space more efficiently and maximize the amount of video data cached and served directly from the Proxy Server cache compared to a static buffer allocation scheme. With these approaches, we have also achieved a higher cache hit rate, lower client latency, a very low request rejection rate and fewer replacements compared with schemes that have no buffer management technique. These algorithms enable VoD services to meet the ever increasing consumer demand for large scale multimedia programs efficiently with available technology.


CHAPTER 4

OPTIMAL PREFIX CACHING AND DISTRIBUTION POLICIES FOR PROXY SERVERS CLUSTER

4.1 OVERVIEW

Introduction

Complete videos cannot be stored at the Proxy Server, as the size of the buffer at the Proxy Server is limited. Hence, a partial caching approach with good cooperation among the Proxy Servers has become increasingly important. In a distributed VoD system of loosely coupled Proxy Servers, partial video objects can either be replicated [90] at all the Proxy Servers based on their local demand, reducing the waiting time for the client, or videos can be cached uniquely at any one of the closely located Proxy Servers, increasing the video availability and service rate and also reducing the transmission cost and the network traffic. The request rate for a particular video may vary with time, and the relative popularities of the videos may also vary across different Proxy Servers. Hence, there are two main challenges in achieving a reduced service delay for the user and a high video hit rate under the resource constraints:
1) Determining the size of the video prefix to be cached at the Proxy Server
2) Efficiently distributing the video prefixes among the cluster of Proxy Servers [101]

Motivation

The following challenges were identified during the study of proxy prefix caching techniques:
- Partitioning the video
- Determining the size of the video prefixes against the available Proxy Server resources
- Distribution of the video prefixes among the Proxy Servers of the LPSG
- Searching for and retrieving the cached video prefix at different Proxy Servers
- Minimizing the service delay, transmission rate and cost of the overall VoD system
- Minimizing the user request rejection rate
- Dynamic reconfiguration of the distribution of the video prefixes based on the popularity of the videos

A detailed study of the existing techniques and the scope for further improvement was the motivation for our research work to develop efficient video prefix placement and caching strategies.

Contribution

In this work, we propose efficient video prefix caching and placement algorithms for the proposed dynamic and scalable architecture of coordinator-based cooperative Proxy Servers. Specifically:
1. We formulate the caching and placement of video prefixes among the Proxy Servers of the LPSG as a combinatorial optimization problem.
2. We propose an optimal algorithm that integrates the regional popularity-based prefix distribution and caching approach with the proposed load sharing technique to improve the efficiency of the system.
3. We design efficient algorithms for video prefix distribution with an efficient prefix caching technique. The first algorithm increases the video hit rate at each Proxy Server and reduces the service delay for the user. The second algorithm increases the storage capacity of the system and the video availability close to the client, reducing the bandwidth requirement and transmission cost of the system; it also minimizes the redundancy of video prefixes among the group of Proxy Servers.
4. We conduct a performance evaluation of the algorithms and demonstrate their efficiency through simulations.

4.2 OPTIMAL PREFIX REPLICATION STRATEGY

Introduction

Efficient distribution of the video prefixes among the interconnected Proxy Servers based on local demand can reduce the waiting time for the client substantially. Whenever there is a request for a particular video, service can be initiated immediately if the requested video is available at the Proxy Server; otherwise, the request has to be queued until its service is initiated and the video has to be downloaded from the remote Central Multimedia Server.
As a result, users of the system are likely to experience a long delay in their service [123]. The duration between the arrival time and the service start time of a request is known as the service latency. Users may cancel their requests (renege) if they are made to wait too long. To reduce this service delay, we propose a regional popularity based replication

strategy. This scheme partitions the video into different parts and replicates these parts at different levels of the LPSG, increasing the video hit rate and service capacity of the system and thereby reducing the waiting time for the client.

Video Partitioning

Each video V_i of size S_i is partitioned into 3 parts, as shown in Figure 4.1:
1. Part 1: The first W_1 minutes of a video V_i, referred to as prefix-1 of V_i [(pref-1)_i], is cached at the Proxy Server (PS) level.
2. Part 2: The next W_2 minutes of the video V_i, referred to as prefix-2 of V_i [(pref-2)_i], is cached at the Tracker (TR) of the LPSG.
3. Part 3: The remaining portion of the video V_i, referred to as the suffix of V_i [S_i − ((pref-1)_i + (pref-2)_i)], is stored at the Central Multimedia Server (CMS).

Figure 4.1 Different parts of Video

Problem Definition

The major problems to be considered are:
- Determination of the average number of replicas (replication degree) of the videos
- Determination of the size of the replica (W_1 min(V_i)) to be cached
- Formulation of the distribution of the video replicas among the Proxy Servers of the LPSG

In this work, the video prefix (pref-1) is replicated among the Proxy Servers of the LPSG to enhance the availability of videos and the request-servicing ability of the proposed system. Increasing the size of pref-1 achieves high service quality but decreases the replication degree due to the storage constraint. Hence, we consider the following approaches to tackle the above challenges. As a first approach, a video prefix replication technique integrated with the proposed dynamic buffer management approach and load sharing algorithm is proposed to achieve optimal utilization of buffer and bandwidth among the Proxy Servers of the LPSG. Depending on the regional demand for the videos at a PS, the popularity of the videos and the sizes of the (pref-1) and (pref-2) to be cached at the Proxy Server and Tracker respectively are determined for each video as follows.

d_1 ∝ n_i and d_2 ∝ n_i, where d_1 = size of (pref-1) and d_2 = size of (pref-2). So

d_1 = x_i × S_i, where 0 < x_i < 1
d_2 = x_i × (S_i − d_1), where 0 < x_i < 1

After the sizes of the prefixes d_1 and d_2 are found, pref-1 (the replica) is distributed among the Proxy Servers of the LPSG, so that a request for a particular video at a Proxy Server can be serviced immediately with minimum service delay. We consider a cluster of M Proxy Servers, PS = {PS_1, PS_2, ..., PS_M}, and a set of N videos V = {V_1, V_2, ..., V_N} for distribution in the LPSG. We use a stochastic variable X (X_1, X_2, ...) to represent the requests (input). The output stochastic variable I_ad represents the average initial access delay for all the requested videos at the LPSG. Thus I_ad is a sample mean of the response times I_ad_1, I_ad_2, ..., I_ad_M at the PSs PS_1, PS_2, ..., PS_M.

Table 4.1 Input-Output Stochastic variables used in the Simulation Model

Parameter | Meaning
d_1 | Size of prefix-1
d_2 | Size of prefix-2
n_i | Total number of requests for the i-th video V_i
x_i | The probability of request arrival for the i-th video
S_i | The size (duration) of the complete video V_i
Q_i | The total number of requests served at the i-th PS
(I_ad)_i | The initial access delay for the i-th video
R | The total number of requests arrived at the LPSG
(N_rej)_i | The number of requests rejected at the i-th PS
Q_LPSG | The average service rate at the LPSG
rp_i | Number of replicas of the video V_i

The main objective of our proposed combinatorial algorithm is to maximize the system throughput by maximizing the service ability of the system, minimizing the request rejection ratio R_rej, minimizing the average initial access delay I_ad, and minimizing the average cache miss rate. This can be formulated with the parameters shown in Table 4.1 as follows.
Max objective:

Average Q_LPSG = (1/M) Σ_{i=1}^{M} Q_i    (1)

Min objectives:

Average I_ad_LPSG = (1/M) Σ_{i=1}^{M} (1/Q_i) Σ_{j=1}^{Q_i} (I_ad)_j, and

Average (R_rej)_LPSG = ( Σ_{i=1}^{M} (N_rej)_i ) / (R × M)    (2)

These objectives are subject to the following constraints: (1) the storage capacities of the PS and TR, and (2) the distribution of all replicas of an individual video to different Proxy Servers of the LPSG.

Let V_i^j be the index of the Proxy Server (j) on which a replica of the i-th video is placed. Specifically, we give the storage constraints from the perspective of the Proxy Server and the Tracker. The Tracker and each Proxy Server in the cluster have storage capacities large enough to cache a total of P and B minutes of H and K videos respectively:

B = Σ_{i=1}^{K} (pref-1)_i,  P = Σ_{i=1}^{H} (pref-2)_i,  with d_1, d_2 > 0    (3)

The second constraint is the requirement that all replicas (pref-1) of an individual video be distributed to different Proxy Servers. That is, all rp_i replicas of video V_i must be placed on rp_i distinct PSs. Specifically,

V_i^{j1} ≠ V_i^{j2}  for all j1, j2 ≤ rp_i, j1 ≠ j2    (4)

If multiple replicas of a video are placed on the same Proxy Server, the duplicate copies must be deleted. For this reason, we have one more replication constraint:

1 ≤ rp_i ≤ M for every V_i Є V    (5)

Figure 4.2 Stochastic simulation Model

In summary, this work formulates the video prefix replication and placement problem as the maximization of Eq. (1) and the minimization of Eq. (2) subject to the constraints of Eq. (3) to Eq. (5).

Stochastic Model

In this section, we describe our video distribution framework and the design principles guiding our approach. The sizes of pref-1 and pref-2 of the videos are dynamically adjusted based on the video request frequencies. These prefixes are cached in terms of blocks, so that buffer allocation and reallocation can be done with respect to blocks instead of the entire video, which is a more efficient method. We consider the size of the complete video V_i to range from 30 minutes to 120 minutes. The sizes of (pref-1)_i and (pref-2)_i are denoted d_1 and d_2 respectively. Based on the local popularity of the video, the sizes of the video prefixes d_1 and d_2 can range from 20 minutes to 120 minutes. The popularity of a video is directly proportional to the number of hits to that particular video. One of the main objectives of our prefix replication and caching is to provide high service availability during peak hours. During these hours, regional popularity based replication of the prefix (pref-1) at the Proxy Server provides immediate service to the users. Load sharing is also critical for improving the throughput and service availability during the peak period. The following assumptions are made with respect to the relative video popularity distributions and the request arrival rates.

1. It is assumed that the popularity p_i of a video i is known before the prefix replication and caching. The relative popularity of the video objects follows a Zipf-like distribution with a skew parameter θ, where θ ≤ 1 [54]. The probability of choosing the i-th video is

p_i = i^{−θ} / Σ_{j=1}^{N} j^{−θ}

2. We assume that the clients' requests (X/hr) arrive according to a Poisson process with arrival rate λ, as shown in the simulation model of Figure 4.2. Let S_i be the size of the i-th video, with mean arrival rates λ_1, ..., λ_N respectively.
3. We assume M Proxy Servers in each LPSG.

Zipf-Distribution

To develop popularity-based partial caching algorithms, the term popularity must be defined.
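The Zipf-like popularity model above can be sketched as follows; this is a hedged illustration in which the catalog size is arbitrary and the skew value 0.75 follows the exponent quoted later for video objects:

```python
# Sketch of the Zipf-like popularity model: the probability of the i-th
# most popular of N videos is i**(-theta) normalized over all ranks.
def zipf_popularity(n_videos: int, theta: float = 0.75) -> list:
    """Return [p_1, ..., p_N] with p_i proportional to 1 / i**theta."""
    weights = [1.0 / (i ** theta) for i in range(1, n_videos + 1)]
    total = sum(weights)
    return [w / total for w in weights]

probs = zipf_popularity(200)
print(round(sum(probs), 6))              # 1.0 (a valid distribution)
print(probs[0] > probs[1] > probs[199])  # True: rank 1 is most popular
```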
Hence, a model which describes the popularity of video objects, suitable for mathematical calculations and simulations, is introduced in this section. The study [129] has shown that the popularity of web objects on a Proxy Server accords with the Zipf distribution. The Zipf distribution is an empirical law which originally states that, in a corpus of natural language utterances, the frequency of any word is approximately inversely proportional to its rank in the frequency table. The Zipf distribution has now also been extended to video objects. Specifically, we assume that there are N video objects on the Proxy Server, ranked 1 to N by popularity, so that video object 1 is the most popular and video object N is the least popular. The probability of the n-th video object being requested is

P(n, s, N) = (1/n^s) / Σ_{i=1}^{N} (1/i^s)

where s is the exponent characterizing the distribution and takes the value 0.75 for video object popularity.

Prefix Replication and Placement with Load Sharing

The proposed VoD architecture shown in Figure 3.15 is also considered for this problem; part of it is shown in Figure 4.3. This architecture consists of a remote Central Multimedia Server (CMS) connected to a group of Trackers (TRs). Each TR has various modules, as shown in Figure 4.4:
- Communication Module (CM): Communicates with the PSs and the CMS.
- Service Manager (SM_TR): Handles and manages the requests from the PSs.
- Database: Stores the complete details of the presence and size of the replica (pref-1) of the videos at all the PSs.
- Video Distributing Manager (VDM): Responsible for deciding the video replicas and the sizes of the replica (pref-1) and (pref-2) of the videos to be cached at the PS and TR respectively. It also handles the distribution and management of these video prefixes among the PSs of the LPSG, based on each video's global and local popularity.

Figure 4.3 Part of VoD Architecture

Figure 4.4 Modules of Proxy Server and Tracker

Each TR is in turn connected to a set of PSs, which are connected among themselves in a ring fashion. Each PS has the following modules, as shown in Figure 4.4.

Interaction Module (IM): interacts with the user and the TR.
Service Manager (SM_PS): handles the requests from the users.
Popularity Agent (PA): observes and updates the popularity of videos at the PS as well as at the TR.

Cache Allocator (CA): allocates cache blocks to the various video prefixes.

A large number of users are connected to each of these PSs, and each PS acts as the parent PS for its clients. All the LPSGs are interconnected through their TRs in a ring pattern. The PS caches the (pref-1) of the videos distributed by the VDM, and streams this cached portion of the video to the client upon request. The proposed objectives are achieved in two steps.

1) In the first step, the replica (pref-1) of each video is placed among the PSs and (pref-2) is cached at the TR. This is achieved with the cooperation of the modules of the Proxy Servers and the coordination of the Tracker. If a video V_i is popular at all PSs of the LPSG (globally popular), the VDM of the TR calculates the size d_1 of the replica (pref-1), which is replicated at all M PSs. Otherwise (locally popular), it is replicated only across the selected L PSs of the LPSG in which the frequency of access to V_i is very high. The size d_2 of (pref-2) is then calculated, and (pref-2) is cached at the TR. The cache allocator allocates cache blocks using the proposed buffer management algorithm. This approach allows the PSs to store a larger number of video replicas. The popularity agent keeps updating the popularity of the videos with the Tracker based on the user request frequencies for the videos. The sizes d_1 and d_2 are adjusted (incremented or decremented) dynamically based on popularity.

2) In the second step, this distribution and caching mechanism is integrated with the proposed load-sharing technique. Replicating the popularity-based (pref-1) at the PSs together with load sharing increases the availability of videos, and hence helps the system provide service immediately after a request arrives.
Thus the proposed approach increases the client service rate and service capacity of the system, and significantly decreases the service delay and rejection rate.

Proposed Algorithms

Algorithm RPR-P: Regional Popularity based Replication and Placement of prefix-1.

Nomenclature:
(pop(V_i))_z : popularity of V_i at PS_z
N(rp_i) : number of replicas of the i-th video
x_vi^k : probability of request arrival for video V_i at the k-th PS

d_1^vi : size of the replica (pref-1) of the i-th video
d_2^vi : size of prefix-2 of the i-th video
t_p : threshold popularity
B_avl^z : buffer available at PS_z
X_i : probability of request arrival for the i-th video
List_i : list of replicas of the i-th video

Process of finding the sizes of the prefixes and distributing them:
{
  L = 0;
  for z = 1 to M
  {
    if ((pop(V_i))_z > t_p) { L++; List_i[L] = z }
  }
  if (L = M)  /* V_i is globally popular */
  {
    N(rp_i) = M
    VDM calculates the sizes d_1^vi and d_2^vi:
      d_1^vi = (1/M) Σ_{k=1..M} x_vi^k · x · S_i,  where 0 < x < 1
    Replicate (pref-1)_vi of size d_1 at all M PSs using the buffer management algorithm
      d_2^vi = (1/M) Σ_{k=1..M} x_vi^k · x · (S_i − d_1^vi),  where 0 < x < 1
    Cache (pref-2)_vi of size d_2 at the TR
    V_i = (pref-1)_vi + (pref-2)_vi + (suffix)_vi
  }
  else  /* V_i is popular only in some regions */
  {
    N(rp_i) = L
      d_1^vi = (1/L) Σ_{k=1..L} x_vi^k · x · S_i,  where 0 < x < 1
    Replicate (pref-1)_vi of size d_1 at the selected L PSs of List_i using the buffer management algorithm
      d_2^vi = (1/L) Σ_{k=1..L} x_vi^k · x · (S_i − d_1^vi),  where 0 < x < 1
    Cache (pref-2)_vi of size d_2 at the TR using the buffer management algorithm
  }
  if (B_avl^z < d_1^vi)
    share the (pref-1)_vi stored at the other PSs of the LPSG
}
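Assuming the reconstruction of the RPR-P sizing formulas above, with a single tuning constant x (0 < x < 1), the prefix-size computation can be sketched as follows; `prefix_sizes` and the example values are illustrative, not from the thesis:

```python
def prefix_sizes(arrival_probs, size_i, x=0.5):
    """Compute (d1, d2) for video i from the per-PS request probabilities
    x_vi^k, following the RPR-P formulas:
      d1 = (1/M) * sum_k(x_vi^k) * x * S_i
      d2 = (1/M) * sum_k(x_vi^k) * x * (S_i - d1),  with 0 < x < 1.
    `arrival_probs` holds x_vi^k for the M (or the selected L) proxy servers."""
    m = len(arrival_probs)
    mean_demand = sum(arrival_probs) / m
    d1 = mean_demand * x * size_i
    d2 = mean_demand * x * (size_i - d1)
    return d1, d2

# Example: a 60-minute video requested with probability 0.8 at each of 6 PSs
d1, d2 = prefix_sizes([0.8] * 6, size_i=60, x=0.5)
assert 0 < d2 < d1 + d2 < 60   # the two prefixes never exceed the full video
```

The formulas make both prefix sizes grow with the observed demand, which matches the dynamic increment/decrement behaviour described above.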

Algorithm RPPCL: Regional Popularity based Proxy prefix Caching and Load sharing.

Nomenclature:
L_p : LPSG
wt_Vreq : waiting time to get V_req

When there is a request for a video V_req at a particular proxy PS_q of L_p, the following steps are evaluated:

if (V_req ∈ PS_q)
    (pref-1)_Vreq is streamed immediately to the user (PS-U).
    wt_Vreq = wt(pref-1)_Vreq = time required to stream (pref-1) from the PS to the user
else pass the request to TR(L_p)
    if (V_req ∈ PS(L_p))
        if (PS(L_p) is the left or right NBR(PS_q))
            SM_TR streams (pref-1)_Vreq from NBR(PS_q), (pref-2)_Vreq from its own cache, and the remaining portion from the CMS (PS-PS, PS-U).
            wt_Vreq = wt(pref-1)_Vreq = time required to stream (pref-1) over PS-PS and PS-user
        else
            SM_TR streams (pref-1)_Vreq from OTR(PS_q), (pref-2)_Vreq from its own cache, and the remaining portion from the CMS to the user through PS_q using the optimal path found (PS-PS, PS-U).
            wt_Vreq = wt(pref-1)_Vreq = time required to stream (pref-1) over PS-PS and PS-user
    else pass the request to the left or right TR(NBR(L_p))
        if (V_req ∈ NBR(L_p))
            TR(NBR(L_p)) streams V_req from NBR(L_p) to the user through TR(L_p) (TR-TR, TR-PS, PS-U).
            wt_Vreq = wt[(pref-1)+(pref-2)]_Vreq = time required to stream (pref-1) over TR-TR, TR-PS and PS-user
        else
            TR(L_p) downloads the complete V_req from the CMS and streams it to the user (CMS-TR, TR-PS, PS-U).
            wt_Vreq = wt(pref-1)_Vreq = time required to stream (pref-1) over CMS-TR, TR-PS and PS-user

Call RPR-P(V_req) for replication and placement of V_req; caching of (pref-1) and (pref-2) of V_req is done using the dynamic buffer allocation algorithm.

Experimentation

Algorithm RPR-P: Simulation Model

In our simulation model, the VoD cluster consists of a single CMS and a group of 6 TRs. Each TR is in turn connected to a set of 6 PSs, and each PS is connected to a cloud of 50 clients. The storage capacity of each PS is 30 GB to 35 GB. The LPSG contains around 300 videos with durations (sizes) ranging from 20 minutes to 120 minutes. Request arrivals were generated by a Poisson process with mean arrival rate λ. The request distribution and video popularity distributions follow a Zipf-like distribution with skew parameter θ = 0.75. The user request rate at each PS is requests per minute. The ratio of cache sizes at the different elements is set to C_CMS : C_TR : C_PS = 10:2:1. The transmission delay between the PS and the client, PS to PS, and TR to PS is 100 ms; between the Central Multimedia Server and the PS, 1200 ms; and between Tracker and Tracker, 300 ms. The size of the cached [(pref-1)+(pref-2)] video ranges from 390 MB to 1870 MB (25 min to 2 hr) in proportion to its popularity. The simulation employed a simple admission control: if the requested video is not available at the LPSG and NBR[LPSG], dynamic buffer allocation is done to accommodate the new video. If the minimum buffer required for that video cannot be allocated, the LRU-k replacement technique is used to place the new video. If even then the new video cannot be accommodated, the request is rejected. We use the client rejection rate (N_rej), the cache or video miss rate and the average initial access latency (I_a_d) as the parameters to measure the performance of our proposed approach.

Performance evaluation of RPR-P with Results

The results presented below are an average of several simulations conducted. These results show an improved performance over the existing Classical Replication with Round Robin placement (CR-RR) [129] and Zipf Replication with Smallest Load First (ZipfR-SLFA) [91] algorithms.
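The admission control described above falls back to LRU-k replacement when buffer allocation fails. A simplified sketch of LRU-k victim selection (an illustrative reading of the standard LRU-k policy, not the thesis implementation): evict the video whose k-th most recent reference is oldest, treating videos with fewer than k references as least valuable.

```python
def lru_k_victim(history, k=2):
    """Pick an eviction victim under a simplified LRU-k policy.
    `history` maps video id -> chronologically sorted reference times.
    Videos with fewer than k references get backward k-distance -infinity,
    so they are evicted first."""
    def kth_recent(times):
        return times[-k] if len(times) >= k else float("-inf")
    return min(history, key=lambda v: kth_recent(history[v]))

history = {"v1": [1, 9, 10], "v2": [2, 3], "v3": [8]}
assert lru_k_victim(history, k=2) == "v3"   # only one reference, so evicted first
```

With k = 2 this resists the cache pollution that plain LRU suffers from one-off requests, which is why LRU-k suits a video cache with Zipf-skewed demand.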
Client rejection ratio: Figure 4.7 shows the client rejection rate of our proposed system, which is very low compared to Classical Replication with RR placement (CR-RR) and Zipf Replication with SLFA (ZipfR-SLFA). The proposed Regional Popularity based Replication and Placement (RPR-P) algorithm achieves a 13% reduction in client rejection rate over Zipf replication and nearly a 34% reduction compared to classical replication. This is achieved by replicating the (pref-1) of a larger number of videos based on their local popularity.

Initial access delay and cache miss rate: The regional-popularity-based size d_1 of the replica allows all clients who arrive within the 0 to d_1 interval to be served immediately from the PS. Hence I_a_d, N_rej and the cache miss rate are very low, because as the request arrival rate for video V_i increases, d_1 of V_i also increases.

Figure 4.5 Average I_a_d with RPR-P, ZipfR-SLFA and CR-RR Algorithms

Figure 4.6 Average cache miss rate of RPR-P, ZipfR-SLFA and CR-RR Algorithms

Figure 4.7 Average Rejection rate of RPR-P, ZipfR-SLFA and CR-RR Algorithms

Figure 4.8 Average cache utilization in RPR-P Algorithm

In comparison with Zipf replication and classical replication, this approach reduces the video miss rate by 25%-35%, as shown in Figure 4.6, which in turn reduces the average initial access delay for the users by 3 to 8 seconds, as shown in Figure 4.5.

Cache utilization: Figure 4.8 shows the average cache utilization at the Proxy Server. As the number of requests increases over time, the number of cached prefixes also increases. This enables the system to achieve a high service rate with efficient cache utilization.

Algorithm RPPCL: Simulation Model

The simulation model setup described in the section above is considered here also.

Performance evaluation of RPPCL with Results

The simulation results presented below are an average of several simulations conducted on the model. Figure 4.13 shows the total number of requests that arrived at PS_q (Nreqs(TOT)) and the number of requests served from PS_q: almost 70%-80% of the requests are served from the LPSG (Nreqs(L_P-PS_q)) through PS_q, by sharing the videos among the PSs of the LPSG and NBR[LPSG]. Only about 20%-30% of the requests demanded the attention of the CMS (Nreqs(CMS)), which is very low. Because the (pref-1) of the most frequently requested videos is cached and streamed from PS_q and (LPSG+NBR[L_p]) with the cooperation of the various modules of the PSs and the coordination of the modules of the TR of the LPSG, the proposed scheme

(RPPCL) has achieved a very high video hit ratio compared to the existing algorithms GWQ [81] (Global Waiting Queue) and PRLS [82] (Partial Replication and Load Sharing), as shown in Figure 4.11. The replication of the most frequently accessed videos at the respective PSs has thus significantly reduced the request-service delay for the user compared to the GWQ and PRLS algorithms, as shown in Figure 4.9. Even in the rare case of a cache miss at the Proxy Server, if the requested video is present at an NBR(PS_q) of the LPSG, it is streamed from NBR(PS_q) to the client through PS_q, so the client's waiting time (Wt) for these videos is very small. If the requested video is present at other PSs of the LPSG (LPSG-NBR(PS_q)), it is streamed from LPSG-NBR(PS_q) to the client through PS_q, so the client's Wt for these videos is relatively higher. Otherwise, a good number of videos are served from NBR(LPSG), which reduces the frequent downloading of requested videos from the CMS to PS_q and in turn reduces the communication with the CMS. Hence, the proposed scheme achieves a significant reduction in the network bandwidth demand between the CMS and PS_q and in the initial service delay at the clients, compared to the GWQ and PRLS algorithms, as shown in Figures 4.9, 4.10 and 4.12.

Figure 4.9 Average Request-Service delay with RPPCL, GWQ and PRLS Algorithms

Figure 4.10 Average waiting time for the videos from PS-Client, PS-PS, TR-PS, TR-TR and CMS-PS

Figure 4.11 Average Video Hit Ratio with RPPCL, GWQ and PRLS Algorithms

Figure 4.12 Average Network bandwidth usage by RPPCL, GWQ and PRLS Algorithms

Figure 4.13 Total number of requests served from PS_q, LPSG, and CMS

Summary

We have proposed a regional popularity based video replication and placement algorithm (RPPCL) for the distributed VoD architecture. This algorithm makes the cached data

almost instantly available at the Proxy Server, so a user's request can be served immediately as it arrives. The simulation results showed that the proposed algorithm significantly increases the request service rate and reduces the request-rejection rate and video miss rate compared to the existing algorithms. By sharing the video data present among the PSs of the LPSG, the proposed prefix distribution algorithm allows the system to cache a larger number of replicas of the most popular video prefixes, based on their local popularity, at the respective PSs, which improves the system throughput efficiently.

4.3 STOCHASTIC MODEL BASED TRANSMISSION COST REDUCTION STRATEGY FOR PROXY SERVERS

Introduction

The channel from the remote Central Multimedia Server to the Proxy Server provides a guaranteed, constant-bandwidth service, and this bandwidth is provided at a higher cost. On the other hand, the Proxy Server to client channel is fast and reliable at an even lower, fixed price. Thus, the transmission rate and cost between the Proxy Server and the client are negligible compared to those between the CMS and the PS. Proxy caching is a key technique for reducing the transmission rate and cost of on-demand multimedia streaming. The effectiveness of current proxy caching schemes is, however, limited by insufficient storage space and weak cooperation among the neighboring PSs and the coordinator [79]. Hence, this work proposes a stochastic model based transmission cost reduction strategy with efficient caching and placement of the video prefix for the proposed architecture of cooperative Proxy Servers.
This efficient video prefix placement strategy increases the system capacity by increasing the availability of video data at the LPSG, and hence reduces the load on the remote Central Multimedia Server and the transmission cost between the CMS and the Proxy Servers of the LPSG.

Problem Definition

The transmission cost (TCost) required for the video data streamed from the remote Central Multimedia Server to the Proxy Server is very high, as it uses the bandwidth of the Wide Area Communication Network. It has to be minimized by reducing the amount of video data served directly from the Central Multimedia Server to the client. Proxy caching of the video close to the client can achieve a significant reduction in the bandwidth demand between the Central Multimedia Server and the Proxy Server, which in turn can reduce the transmission cost of the VoD system. Hence, to achieve an increased service rate at the Proxy Server and to reduce the communication demand on the CMS, the amount of video data served from the LPSG through the Proxy

Server to the user, ((pref-1)+(pref-2))_i^(PS_q-U), should be high, and the amount of data served from the CMS to the user (U) through the TR and the PS, (Suff−(pref-1)−(pref-2))_i^(CMS-TR-PS_q-U), should be low. Under the storage constraints at the Proxy Server and the Tracker, the sizes of the prefixes are determined as described earlier. Whenever a user requests a video V_i at PS_q, the streaming cost of V_i varies depending on the availability of the video. If it is present at the PS, the total transmission cost is

TCost_i^(PS_q) = TCost(pref-1)_i^(PS_q-U) + TCost(pref-2)_i^(TR-PS_q-U) + TCost(Suff−(pref-1)−(pref-2))_i^(CMS-TR-PS_q-U),  where i = 1..N, q = 1..M

The left-hand side of this model represents the total transmission cost required to retrieve and stream the complete requested video to the client. The right-hand side represents the costs required to retrieve and stream (pref-1) from the PS [PS-U], (pref-2) from the TR [TR-PS-U], and the remaining suffix (Suff−(pref-1)−(pref-2)) from the CMS to the user [CMS-TR-PS-U] respectively.

The optimization problem is to minimize the amount of video data (Suff−(pref-1)−(pref-2)) streamed from the Central Multimedia Server, which in turn reduces the transmission cost and the bandwidth demand for streaming the suffix from the Central Multimedia Server. This is done by maximizing the availability of the largest possible portion of the requested video at the cluster of Proxy Servers close to the client, and can be formulated as follows.

Figure 4.14 Stochastic Simulation Model

Minimize the WAN bandwidth on the CMS-PS_q path and Σ_{i=1}^{N} {Suff_i − (pref-1)_i − (pref-2)_i}

Minimize TCost_i^(CMS-PS_q) (the transmission cost from the CMS)

Maximize ((pref-1)+(pref-2))^LPSG = Σ_{i=1}^{N} ((pref-1)+(pref-2))_i,  where i = 1..N and q = 1..M

subject to

C_PS = B = Σ_{i=1}^{K} (pref-1)_i,  C_TR = P = Σ_{i=1}^{H} (pref-2)_i,  (pref-1) and (pref-2) > 0

Stochastic Model

The video partitioning technique and the stochastic parameters considered in this section are the same as explained earlier. Let N be a stochastic variable representing the group of videos, which may take the different values (videos) V_i (i = 1, 2, ..., N). The probability of video V_i being chosen is p(V_i); the set of values p(V_i) is the probability mass function. Since the variable must take one of the values, it follows that Σ_i p(V_i) = 1. The estimated probability of requesting video V_i is

p(V_i) = n_i / I,

where I is the total number of requests for all the videos and n_i is the number of requests for video i. The cumulative distribution function gives the probability of a request (a random variable) being less than or equal to a given maximum value. We assume that client requests arrive according to a Poisson process with mean arrival rates λ_1 ... λ_N respectively, and that the videos are streamed to the users using M Proxy Servers, as shown in the stochastic model of Figure 4.14. Let S_i be the size (duration in minutes) of the i-th video V_i.

Prefix Placement Strategy to Achieve Reduced Transmission Cost

The distributed VoD architecture shown in Figure 3.15 is considered for this problem, and part of it is shown in Figure 4.3. The modules of the various components of this architecture are shown in Figure 4.4, and their jobs are the same as explained earlier.
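The empirical popularity estimate p(V_i) = n_i / I used in the stochastic model above can be computed directly from an observed request log; a minimal sketch (the log contents are illustrative):

```python
from collections import Counter

def empirical_popularity(request_log):
    """Estimate p(V_i) = n_i / I from an observed request log, where n_i is
    the number of requests for video i and I the total number of requests."""
    counts = Counter(request_log)
    total = len(request_log)
    return {video: n / total for video, n in counts.items()}

p = empirical_popularity(["v1", "v1", "v1", "v2"])
assert p["v1"] == 0.75 and p["v2"] == 0.25
assert abs(sum(p.values()) - 1.0) < 1e-9   # a valid probability mass function
```

These estimates are what the Popularity Agent would feed back to the Tracker when adjusting the prefix sizes d_1 and d_2.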

Distribution of the replica (pref-1) of each video among the Proxy Servers of the LPSG and caching of (pref-2) at the Tracker are done as follows. The Video Distribution Manager (VDM) of the TR determines the lengths d_1 of (pref-1) and d_2 of (pref-2) for each video as explained earlier. The (pref-1) is then placed at the one Proxy Server of the LPSG in which the frequency of access to the video V_i is highest; it is stored in only one Proxy Server. The (pref-2) is cached at the Tracker. The cache allocator allocates cache blocks using the proposed buffer management algorithm. The Popularity Agent keeps updating the popularity of the videos with the Tracker based on the user request frequencies, and the sizes d_1 and d_2 are adjusted dynamically based on the popularity of the videos. This non-redundant placement of the video prefixes across the cluster of interconnected Proxy Servers increases the number of videos, as well as the amount of video data, cached near the client. Hence, the network usage and the data transmission cost between the Central Multimedia Server and the Proxy Server are reduced significantly. The algorithm to achieve this is given in the next section.

Proposed Algorithm

Nomenclature:
PS_q : q-th Proxy Server
V_req : requested video
TCost_Vreq : transmission cost for V_req

When there is a request for a video V_req at a particular proxy PS_q of the LPSG, the following steps are evaluated:

if (V_req ∈ PS_q)
    (pref-1)_Vreq is streamed immediately to the user, (pref-2)_Vreq from the TR and (suffix)_Vreq from the CMS.
    TCost_Vreq = TCost(pref-1)_Vreq^(PS-U) + TCost(pref-2)_Vreq^(TR-PS, PS-U) + TCost(Suff−(pref-1)−(pref-2))_Vreq^(CMS-TR, TR-PS, PS-U)
else pass the request to TR(LPSG)
    if (V_req ∈ PS(LPSG))
        if (PS(LPSG) is the left or right NBR(PS_q))
            SM_TR streams (pref-1)_Vreq from NBR(PS_q), (pref-2)_Vreq from its own cache and (suffix)_Vreq from the CMS.
            TCost_Vreq = TCost(pref-1)_Vreq^(PS-PS, PS-U) + TCost(pref-2)_Vreq^(TR-PS, PS-U)

            + TCost(Suff−(pref-1)−(pref-2))_Vreq^(CMS-TR, TR-PS, PS-U)
        else
            SM_TR streams (pref-1)_Vreq from OTR(PS_q), (pref-2)_Vreq from its own cache and (suffix)_Vreq from the CMS to the user through PS_q using the optimal path found.
            TCost_Vreq = TCost(pref-1)_Vreq^(PS-PS, PS-U) + TCost(pref-2)_Vreq^(TR-PS, PS-U) + TCost(Suff−(pref-1)−(pref-2))_Vreq^(CMS-TR, TR-PS, PS-U)
    else pass the request to the left or right TR(NBR(LPSG))
        if (V_req ∈ NBR(LPSG))
            TR(NBR(LPSG)) streams V_req from NBR(LPSG) to the user through TR(LPSG).
            TCost_Vreq = TCost((pref-1)+(pref-2))_Vreq^(TR-TR, TR-PS, PS-U) + TCost(Suff−(pref-1)−(pref-2))_Vreq^(CMS-TR, TR-PS, PS-U)
        else
            TR(LPSG) downloads the complete V_req from the CMS and streams it to the user.
            TCost_Vreq = TCost(Suff)_Vreq^(CMS-TR, TR-PS, PS-U)

Experimentation

Simulation Model

The simulation model consists of a single CMS and a few LPSG clusters; each LPSG cluster consists of a few Proxy Servers. The parameters considered for the simulation are shown in Table 4.2. We use the Video Hit Ratio (VHR), the average number of accesses to the remote Central Multimedia Server and the average transmission cost as the parameters to measure the performance of our proposed approach.

Table 4.2 Simulation parameters used for the Model (Parameter : Value)

Number of Trackers : 2
Number of Proxy Servers : 6
Number of videos : 480
Request distribution of the videos : Zipf-like with θ = 0.75
Average user request rate at each PS : 45 req/min
Ratio of cache sizes at CMS, TR and PS : C_CMS : C_TR : C_PS = 10:2:1
Transmission cost between the proxy and the client :
Transmission cost between PS and PS :
Transmission cost between Tracker and Tracker :
Transmission cost between CMS and PS :
Size of the cached video : 390 MB to 1870 MB (25 min to 2 hr)
Storage capacity of PS : 30 GB to 35 GB

Performance evaluation and Results Analysis

The simulation results presented below are an average of several simulations conducted on the model. Figure 4.15 shows the total number of video requests (Nreqs(TOT)), the number of requests served from the LPSG (Nreqs(LPSG+NBR(LPSG))) and the number served from the CMS (Nreqs(CMS)). Almost 82% of the video requests are served from the LPSG through PS_q by sharing the videos present among the PSs of the LPSG and NBR[LPSG]; only about 18% are served from the CMS, which is negligible. The proposed (pref-1) distribution algorithm thus enables the system to serve more than 80% of the users from the LPSG. This is done by sharing the videos present among the Proxy Servers of the LPSG with the cooperation of the various modules of the PSs and the coordination of the modules of the TR of the LPSG. As the number of videos and the amount of video data cached at the LPSG are larger, this scheme achieves a very high video hit ratio, as shown in Figure 4.16. More blocks of the frequently requested videos are cached and shared among the Proxy Servers and TRs of the LPSG and NBR[LPSG], so when any of these videos is requested, streaming starts from one of the PSs of the LPSG, reducing direct communication with the remote CMS. This reduces the network usage, transmission cost and transmission time of the system, as shown in Figure 4.17 and Figure 4.18.
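The per-request cost decomposition of this section can be illustrated numerically. The sketch below assumes per-minute hop rates (the rate values and function name are illustrative, not from the thesis) and shows why enlarging the cached prefixes lowers the total cost: only the suffix traverses the expensive CMS hop.

```python
def total_tcost(size_i, d1, d2, cost_ps_u, cost_tr_ps, cost_cms_tr):
    """Per-request transmission cost when the video is found at the parent PS:
    pref-1 travels PS->U, pref-2 travels TR->PS->U, and the remaining suffix
    travels CMS->TR->PS->U. All costs are per minute of video."""
    suffix = size_i - d1 - d2
    cost_pref1 = d1 * cost_ps_u
    cost_pref2 = d2 * (cost_tr_ps + cost_ps_u)
    cost_suffix = suffix * (cost_cms_tr + cost_tr_ps + cost_ps_u)
    return cost_pref1 + cost_pref2 + cost_suffix

# Caching more of a 60-minute video locally reduces the total cost,
# because the CMS hop (rate 12) dominates the PS and TR hops (rates 1 and 2).
large_prefixes = total_tcost(60, d1=30, d2=20, cost_ps_u=1, cost_tr_ps=2, cost_cms_tr=12)
small_prefixes = total_tcost(60, d1=10, d2=10, cost_ps_u=1, cost_tr_ps=2, cost_cms_tr=12)
assert large_prefixes < small_prefixes
```

This mirrors the optimization objective above: maximizing ((pref-1)+(pref-2)) cached at the LPSG directly minimizes the suffix term, the only term carrying the CMS-TR cost.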

Figure 4.15 Average amount of videos streamed from (PS, LPSG+NBR[LPSG]) and CMS vs Time (hrs)

Figure 4.16 Average Video Hit Ratio vs Time (hrs)

Figure 4.17 Average access rate to CMS vs Time (hrs)

Figure 4.18 Average Network Transmission Cost vs Time (hrs)

Summary

In this work, we have proposed an efficient prefix distribution scheme and video sharing mechanism for the proposed VoD architecture. The prefix distribution scheme significantly increases the aggregate storage capacity of the system, so the maximum portion of a larger number of the most frequently requested videos can be cached among the Proxy Servers of the LPSG. The sharing of these videos among the Proxy Servers of the LPSG also allows more requests to be serviced with the maximum portion of the video streamed from the LPSG itself. This technique greatly increases the service rate and decreases the network usage for transmitting video suffix data, which in turn significantly reduces the transmission cost and the load on the Central Multimedia Server.

CHAPTER 5

EFFICIENT PREFIX BASED STREAMING SCHEME FOR DISTRIBUTED VoD

5.1 OVERVIEW

Introduction

The bandwidth-intensive nature and long-lived characteristics of digital video make transmission bandwidth a major limiting factor in extensive streaming of video content over communication networks [123]. For popular videos, the client population is likely to be large, with different clients asynchronously issuing requests to receive their chosen media streams. Different videos can have different sizes (playback durations) and popularities. A challenging problem is developing techniques for bandwidth-efficient distribution of heterogeneous videos to such a large, asynchronous client population [101]. Many transmission schemes have been proposed to address this challenge, such as batching, periodic broadcasting and patching. These schemes use multicast or broadcast communication to reduce bandwidth usage while providing a guaranteed bound on the playback startup latency of a client.

Motivation

From our study, we identified the following challenges in implementing scalable Video-on-Demand streaming services over communication networks.

1. Network bandwidth limitation is the main operating constraint of most Video-on-Demand systems.
2. Even though the throughput of a public network (e.g., ATM) can be huge, the network bottleneck limits the number of client stations that a VoD server can support simultaneously.
3. A possible solution to this problem is to batch the requests for the same video and multicast the data to these requests to save network bandwidth.
4. A disadvantage of this scheme is that it unfairly forces requests that arrive early in a batch to wait for the late arrivals; hence, the reneging rate can be high in a system which employs this technique.

The above challenges motivated the proposed research work to achieve an overall improvement in VoD system throughput.

Contribution

In this work, we combine peer-to-peer techniques with the current server-client streaming model to build a new system that is both scalable and robust. Specifically, we propose C2C-Chain, a client-to-client chaining protocol for VoD applications. We explore a framework combining proxy-based prefix caching with a load-sharing scheme to chain the end points in the proposed architecture of coordinator-based cooperative Proxy Servers. This architecture uses proxy-to-proxy and client-to-client streaming to cooperatively stream the video using a chaining technique with unicast communication among the clients. A prefix caching scheme is proposed to accommodate more videos closer to the client, so that the request-service delay for the user is minimized, and a cooperative proxy and client chaining scheme is proposed for streaming the videos using unicasting. This approach minimizes the client rejection rate, the load on the CMS and the bandwidth requirement between the CMS and the PS. The simulation results show that the proposed approach achieves reduced request-service delay and optimal prefix caching of videos, minimizing the CMS-to-PS path bandwidth requirement by utilizing the proxy-to-proxy and client-to-client bandwidth, which is only occasionally used, instead of the busy CMS-to-PS path bandwidth.

5.2 OPTIMAL STREAMING APPROACH

Efficient Video Streaming Problem

In this work, to reduce the demand on CMS bandwidth, the proposed algorithm exploits in-network bandwidth, that is, the aggregate bandwidth and buffer of the network elements (Proxy Servers and clients). This is achieved by serving a chain of clients that have requested the same video with a single video stream. In this scheme, every client who uses the service indirectly contributes its own resources (i.e., buffer and bandwidth) to the environment. As a result, each service request can be seen as a virtual contributor, rather than just a burden to the PS or CMS.
This unique characteristic makes chaining much more scalable than the other methods. The scheme in turn reduces the load on the Central Multimedia Server and the client rejection ratio, increasing the service rate. We use R_rej to denote the client request rejection ratio:

R_rej = N_rej / R

where
R_rej : request rejection ratio
N_rej : number of requests rejected
R : total number of requests that arrived at the system

R_rej is the ratio of the number of requests rejected (N_rej) to the total number of requests that arrived at the system (R), and is inversely proportional to the system throughput. The system throughput is the ratio of the number of requests served (Q) to the total number of requests that arrived (R), i.e.

R_rej ∝ 1/S_T,  where S_T = Q/R

S_T : system throughput
Q : number of requests served
R : total number of requests that arrived

The optimization problem is to maximize S_T by minimizing the client rejection ratio R_rej, the average request-service delay [(Req-Ser)_delay] and the average bandwidth usage on the CMS-PS path [BW_CMS-PS]. This optimization problem can be defined as follows.

Maximize the system throughput

S_T = Q/R

minimizing the average network bandwidth usage on the CMS-PS path

Avg BW_CMS-PS = Σ_{i=1}^{Q} BW(Suffix − (pref-1) − (pref-2))_i^(CMS-PS)

the average request-service delay for the user

Avg (Req-Ser)_delay = (1/Q) Σ_{i=1}^{Q} (Req-Ser)_delay_i

and the average request-rejection ratio

Avg R_rej = N_rej / R

subject to

B = Σ_{i=1}^{K} (pref-1)_i,  P = Σ_{i=1}^{H} (pref-2)_i,  (pref-1) = d_1 > 0 (W_1 minutes) and (pref-2) = d_2 > 0 (W_2 minutes)

System Model

The procedure for partitioning a video into its different parts is the same as given earlier. The prefixes are distributed among the Proxy Servers of the LPSG based on the local demand for the videos at each Proxy Server and the storage capacity of the Proxy Server.
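The optimization targets above can all be computed from a request log; a minimal sketch with illustrative inputs (`system_metrics` is not a name from the thesis):

```python
def system_metrics(total_requests, served, service_delays):
    """Compute the quantities used in the optimization above:
    throughput S_T = Q/R, rejection ratio R_rej = N_rej/R, and the
    average request-service delay over the Q served requests."""
    rejected = total_requests - served
    s_t = served / total_requests
    r_rej = rejected / total_requests
    avg_delay = sum(service_delays) / len(service_delays)
    return s_t, r_rej, avg_delay

s_t, r_rej, avg_delay = system_metrics(1000, 900, [2.0, 4.0])
assert s_t == 0.9 and r_rej == 0.1 and avg_delay == 3.0
assert abs(s_t + r_rej - 1.0) < 1e-12   # served and rejected partition the arrivals
```

The last assertion makes the inverse relationship explicit: any reduction in R_rej is exactly a gain in S_T.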

Figure 5.1 System Simulation Model

The parameters considered for the model shown in Figure 3.15 are listed in Table 5.1. The system model considered for this approach is the same as the one given earlier, with different parameters, as shown in Figure 5.1. Under the storage constraints of the Proxy Server and the Tracker, the Video Distribution Manager (VDM) of the TR finds the sizes d_1 and d_2 of (pref-1) and (pref-2) respectively. Every PS caches the (pref-1) of the videos distributed by the VDM based on their local demand, and the VDM caches the (pref-2) of the same videos at the TR. This distribution scheme allows the LPSG to increase the storage capacity of the system and the service rate at the Proxy Server. Hence, the maximum number of requests can be served immediately from the LPSG itself, which significantly reduces the request-service delay [(Req-Ser)_delay] and the network bandwidth requirement on the CMS-PS path [BW_CMS-PS].

Table 5.1 Parameters of the System Model

N : total number of videos
V_i : i-th video (i = 1..N)
S_i : size (minutes) of the i-th video (i = 1..N)
λ_i : mean arrival rate of the i-th video
M : number of Proxy Servers in an LPSG
J : total number of LPSGs
PS_q : q-th Proxy Server
Pref-1 : W_1 minutes of video V_i
Pref-2 : W_2 minutes of video V_i
P : total size (minutes) of the TR buffer
B : total size (minutes) of the Proxy buffer

H : total number of videos at the TR
K : total number of videos at the PS
W : size (minutes)

5.3 PROPOSED ARCHITECTURE AND ALGORITHM

Introduction

The architecture given in Figure 3.15 is considered here also.

Overview of the Architecture

The proposed VoD architecture considered for this scheme is shown in Figure 3.15; part of that architecture is shown in Figure 5.2. Here we assume that:

1. The TR is also a PS, with high computational power and large storage capacity compared to the other Proxy Servers to which the clients are connected. As shown in Figure 5.3, it has various modules, using which it coordinates and maintains a database containing information about the presence of videos, and the sizes of (pref-1) and (pref-2) of each video at each PS and the TR respectively.
2. Proxy Servers and their clients are closely located, with relatively low communication cost. The Central Multimedia Server, in which all the videos are stored completely, is placed far away from the LPSG and involves high-cost remote communication.
3. The CMS, the TR and the PSs of the LPSG are assumed to be interconnected through high-capacity optic fiber cables. All the clients of a PS are also interconnected.

Figure 5.2 Proposed VoD Architecture

Proposed C2C-Chain Algorithm

The proposed scheme, C2C-Chain, is an efficient streaming technique that combines the advantages of proxy prefix caching with both client-server and peer-to-peer approaches to cooperatively stream video using chaining. The main goal of C2C-Chain is to make each client act as a server while it receives the video stream, so that the available memory and bandwidth of the clients can be utilized more efficiently.

The poor scalability of the traditional client-server unicast VoD service lies in the fact that the Central Multimedia Server is the only contributor and can thus become flooded by a large number of clients simultaneously requesting the service. In the client-server service model, the client sets up a direct connection with the server to receive the video. In this case the WACN bandwidth requirement on the CMS-to-PS path is equal to the playback rate, which is very high. As the number of requests increases, the bandwidth demand at the CMS also increases, the network becomes congested, and incoming requests are rejected. In contrast, we propose two schemes to address these issues:

1. A local group of interconnected proxies and clients, with a prefix caching technique and load sharing among the proxies of the group, which reduces the frequency of access to the CMS and in turn reduces both the bandwidth consumption between the client and the CMS and the load on the CMS.

2. C2C-Chain, in which clients not only receive the requested stream but also contribute to the overall VoD service by forwarding the stream to other clients whose requests arrive within the threshold time of W1 minutes (the pref-1 length, i.e. d1).

In C2C-Chain all the clients are treated as potential server points. When a client Ck requests a video Vi at a particular proxy PSq, and the requested video Vi is present at PSq, then, with the coordination of the modules of the TR, the PS and the client shown in Figure 5.3, the service is provided to Ck in the following phases.

Figure 5.3 Modules of Tracker, Proxy Server and Client

Client admission phase
Streaming phase
Closing phase

Client admission phase

When a request arrives at PSq, the Request-handler (Req-handler) checks for the presence of the video in the cache of PSq. If the video is present, it checks the IS-STREAMING flag of the video Vi; if the flag is not set, no client is currently streaming the same video object. The Req-handler then asks the Service-manager (Ser-mgr) to provide the streaming of Vi to Ck. The Ser-mgr starts a new stream, creating a new active chain for Vi, and updates the Streaming Clients List (SCL) by adding a new entry for the video Vi along with its (pref-1) size and setting the IS-STREAMING flag of Vi to true. The SCL is a list that contains chains of
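The admission logic above, together with the W1-minute chaining rule of C2C-Chain, can be sketched as follows. This is a minimal illustrative sketch, not the thesis's implementation: the data layout (the SCL as a map from each video to its most recent chain of (client, start time) entries, a per-video IS-STREAMING flag) and the method names are assumptions, and the Req-handler and Ser-mgr roles are folded into a single method for brevity.

```python
class ProxyServer:
    """Sketch of a PS in C2C-Chain: decides whether an arriving request is
    served by a new stream, chained to an earlier client, or forwarded."""

    def __init__(self, cached_videos, w1):
        self.cache = set(cached_videos)   # videos whose pref-1 is cached here
        self.w1 = w1                      # pref-1 length d1 (W1 minutes)
        self.scl = {}                     # SCL: video -> [(client, start_time), ...]
        self.is_streaming = {}            # IS-STREAMING flag per video

    def admit(self, client, video, now):
        """Req-handler/Ser-mgr logic for a request arriving at time `now`.
        For simplicity only the most recent chain per video is tracked."""
        if video not in self.cache:
            return "forward to TR/CMS"    # handled outside this phase
        if not self.is_streaming.get(video):
            # No client is streaming this video: the Ser-mgr starts a new
            # stream, creates a new active chain, and sets IS-STREAMING.
            self.scl[video] = [(client, now)]
            self.is_streaming[video] = True
            return "new stream from PS"
        last_client, last_start = self.scl[video][-1]
        if now - last_start < self.w1:
            # Within the W1 (pref-1) window: chain to the previous client,
            # which forwards the stream peer-to-peer.
            self.scl[video].append((client, now))
            return f"chained to {last_client}"
        # Outside the window: the PS starts a fresh stream (a new chain).
        self.scl[video] = [(client, now)]
        return "new stream from PS"
```

Under this sketch, a request arriving within W1 minutes of the previous client's start is chained to that client, so the PS serves at most one fresh stream per W1 window per video; later requests start new chains.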


More information

SOLUTION GUIDE FOR BROADCASTERS

SOLUTION GUIDE FOR BROADCASTERS SOLUTION GUIDE FOR BROADCASTERS TV DIRECT TO VIEWERS Deliver live OTT, timeshift and VOD services with an amazing viewing experience, without redesigning your existing system, and save on delivery costs.

More information

Network Design. Overview. CDS with Vaults and Streamers CHAPTER

Network Design. Overview. CDS with Vaults and Streamers CHAPTER 2 CHAPTER This chapter describes the different network topologies for the Cisco TV CDS, the different network connections of the CDS servers, the CDS workflow, and network configuration considerations.

More information

QUALITY of SERVICE. Introduction

QUALITY of SERVICE. Introduction QUALITY of SERVICE Introduction There are applications (and customers) that demand stronger performance guarantees from the network than the best that could be done under the circumstances. Multimedia

More information

Technology Insight Series

Technology Insight Series IBM ProtecTIER Deduplication for z/os John Webster March 04, 2010 Technology Insight Series Evaluator Group Copyright 2010 Evaluator Group, Inc. All rights reserved. Announcement Summary The many data

More information

Networking Applications

Networking Applications Networking Dr. Ayman A. Abdel-Hamid College of Computing and Information Technology Arab Academy for Science & Technology and Maritime Transport Multimedia Multimedia 1 Outline Audio and Video Services

More information

Multicast and Quality of Service. Internet Technologies and Applications

Multicast and Quality of Service. Internet Technologies and Applications Multicast and Quality of Service Internet Technologies and Applications Aims and Contents Aims Introduce the multicast and the benefits it offers Explain quality of service and basic techniques for delivering

More information

INF5071 Performance in distributed systems Distribution Part II

INF5071 Performance in distributed systems Distribution Part II INF5071 Performance in distributed systems Distribution Part II 5 November 2010 Type IV Distribution Systems Combine Types I, II or III Network of servers Server hierarchy Autonomous servers Cooperative

More information

GEO BASED ROUTING FOR BORDER GATEWAY PROTOCOL IN ISP MULTI-HOMING ENVIRONMENT

GEO BASED ROUTING FOR BORDER GATEWAY PROTOCOL IN ISP MULTI-HOMING ENVIRONMENT GEO BASED ROUTING FOR BORDER GATEWAY PROTOCOL IN ISP MULTI-HOMING ENVIRONMENT Duleep Thilakarathne (118473A) Degree of Master of Science Department of Electronic and Telecommunication Engineering University

More information

MPI Optimizations via MXM and FCA for Maximum Performance on LS-DYNA

MPI Optimizations via MXM and FCA for Maximum Performance on LS-DYNA MPI Optimizations via MXM and FCA for Maximum Performance on LS-DYNA Gilad Shainer 1, Tong Liu 1, Pak Lui 1, Todd Wilde 1 1 Mellanox Technologies Abstract From concept to engineering, and from design to

More information

Best Practices. Deploying Optim Performance Manager in large scale environments. IBM Optim Performance Manager Extended Edition V4.1.0.

Best Practices. Deploying Optim Performance Manager in large scale environments. IBM Optim Performance Manager Extended Edition V4.1.0. IBM Optim Performance Manager Extended Edition V4.1.0.1 Best Practices Deploying Optim Performance Manager in large scale environments Ute Baumbach (bmb@de.ibm.com) Optim Performance Manager Development

More information

IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 17, NO. 9, SEPTEMBER

IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 17, NO. 9, SEPTEMBER IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 17, NO. 9, SEPTEMBER 1999 1599 A Generic Platform for Scalable Access to Multimedia-on-Demand Systems Raouf Boutaba and Abdelhakim Hafid, Member,

More information

Impact of Frequency-Based Cache Management Policies on the Performance of Segment Based Video Caching Proxies

Impact of Frequency-Based Cache Management Policies on the Performance of Segment Based Video Caching Proxies Impact of Frequency-Based Cache Management Policies on the Performance of Segment Based Video Caching Proxies Anna Satsiou and Michael Paterakis Laboratory of Information and Computer Networks Department

More information

3. Quality of Service

3. Quality of Service 3. Quality of Service Usage Applications Learning & Teaching Design User Interfaces Services Content Process ing Security... Documents Synchronization Group Communi cations Systems Databases Programming

More information

Networking interview questions

Networking interview questions Networking interview questions What is LAN? LAN is a computer network that spans a relatively small area. Most LANs are confined to a single building or group of buildings. However, one LAN can be connected

More information