GridCast: Improving Peer Sharing for P2P VoD


BIN CHENG, Huazhong University of Science and Technology
LEX STEIN, Microsoft Research Asia
HAI JIN and XIAOFEI LIAO, Huazhong University of Science and Technology
ZHENG ZHANG, Microsoft Research Asia

Video-on-Demand (VoD) is a compelling application, but costly. VoD is costly due to the load it places on video source servers. Many have proposed using peer-to-peer (P2P) techniques to shift load from servers to peers. Yet nobody has implemented and deployed a system to openly and systematically evaluate how these techniques work. This article describes the design, implementation, and evaluation of GridCast, a real deployed P2P VoD system. GridCast has been live on CERNET since May of 2006. It provides seek, pause, and play operations, and employs peer sharing to improve system scalability. In peak months, GridCast has served videos to 23,000 unique users. From the first deployment, we have gathered information to understand the system and evaluate how to further improve peer sharing through caching and replication.

We first show that GridCast with single video caching (SVC) can decrease load on source servers by an average of 22% from a client-server architecture. We analyze the net effect on system resources and determine that peer upload is largely idle. This leads us to change the caching algorithm to cache multiple videos (MVC). MVC decreases source load by an average of 51% over the client-server architecture. The improvement is greater as user load increases. This bodes well for peer-assistance at larger scales. A detailed analysis of MVC shows that departure misses become a major issue in a P2P VoD system with caching optimization. Motivated by this observation, we examine how to use replication to eliminate departure misses and further reduce server load. A framework for lazy replication is presented and evaluated in this article. In this framework, two predictors are plugged in to create the working replication algorithm. With these two simple predictors, lazy replication can decrease server load by 15% from MVC with only a minor increase in network traffic.

Categories and Subject Descriptors: C.2.4 [Computer-Communication Networks]: Distributed Systems; C.4 [Performance of Systems]: Measurement Techniques

General Terms: Design, Measurement, Performance

This work is supported by grants from the National Natural Science Foundation of China (NSFC), the Research Fund for the Doctoral Program of Higher Education, the Wuhan Chengguang Plan, and Microsoft Research Asia.

Authors' addresses: B. Cheng, H. Jin, and X. Liao, Services Computing Technology and System Lab, Huazhong University of Science and Technology, Wuhan, China; email: {showersky,hjin,xfliao}@hust.edu.cn; L. Stein and Z. Zhang, Microsoft Research Asia, Beijing, China; email: {castein,zzhang}@microsoft.com.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or direct commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers, to redistribute to lists, or to use any component of this work in other works requires prior specific permission and/or a fee. Permissions may be requested from Publications Dept., ACM, Inc., 2 Penn Plaza, Suite 701, New York, NY 10121-0701 USA, fax +1 (212) 869-0481, or permissions@acm.org.

(c) 2008 ACM 1551-6857/2008/10-ART26 $5.00

Additional Key Words and Phrases: Video-on-demand, peer-to-peer, caching, replication

ACM Reference Format: Cheng, B., Stein, L., Jin, H., Liao, X., and Zhang, Z. 2008. GridCast: Improving peer sharing for P2P VoD. ACM Trans. Multimedia Comput. Comm. Appl. 4, 4, Article 26 (October 2008), 31 pages.

1. INTRODUCTION

Video-on-Demand (VoD) is increasingly popular with Internet users. VoD is compelling because it provides users with video stream control, such as pause and random seek. However, VoD is costly because of the load it places on video servers. Peer-to-peer (P2P) techniques can reduce server load by sharing between peers. These techniques have been used successfully for file downloading [Tian and Dai 2007] and live streaming [Chu et al. 2000; Liao et al. 2006]. How to use P2P for VoD is not yet well understood.

VoD differs from other Internet media applications in several important ways. First, in VoD a user can begin a session at any time and seek to any position during playback. In live streaming, a stream begins at the same time for everyone and users cannot seek forward and backward arbitrarily. Second, VoD has strict real-time constraints while file downloading does not. For VoD, the next chunk is more important than a later one, while for file downloading any new chunk is useful. Together, user-control operations and real-time constraints make VoD more challenging than live streaming or file downloading.

A number of studies on P2P VoD have been done in recent years, such as RINDY [Cheng et al. 2007], P2Cast [Guo et al. 2003], and P2VoD [Tai et al. 2004]. Much of the recent work has focused on optimizing the overlay topology for a single video, with evaluation through simulation. Yet nobody has implemented and deployed a system to openly and systematically evaluate how these techniques work. In addition, a VoD system often provides a large number of videos with a heavy-tailed popularity distribution. With these in mind, we implemented and deployed a live P2P VoD system, GridCast [Cheng et al. 2008]. GridCast has been deployed on China's CERNET(1) since May of 2006. CERNET is a general-purpose Internet network and is the main fixed network access for Chinese university and college campuses. In China, the vast majority of students live in a dormitory while attending college and grad school. Students use CERNET for many applications: web, email, chat, BBS access, and news. In December 2006, about 23,000 users watched video on GridCast.

To evaluate the system, we traced its operation at a fine level of detail, recording system events and latency observations. The events include all user joins and leaves, all user control operations, and all chunk requests and transfers. To the best of our knowledge, our instrumentation gives us system history information at unprecedented detail in studies of Internet VoD systems. This detailed instrumentation enables us to perform a deep analysis of P2P VoD systems.

In this article, we measure GridCast and explore how to improve peer sharing by using caching and replication. With caching, a peer saves retrieved chunks, either in memory or on disk, to share with others. Replication is another way to increase sharing opportunities between peers. Using replication, a peer replicates chunks to other peers in a proactive way; that means a peer can push some chunks to others before it leaves. We examine how much benefit can be achieved by caching and replication through log analysis and trace-driven simulation.

(1) CERNET is the China Education and Research Network. As of January 2006, CERNET connects about 1,500 universities and colleges and about 20M hosts.
This article makes the following major contributions:

- The first detailed design description of a deployed peer-assisted VoD system.

- A presentation of the insights and lessons learned from over 16 months of live deployment; for example, the findings that bigger caches are not always better because of load imbalance among peers, and that a small group of NAT peers can have a big impact on total system behavior.
- A demonstration of the basic benefit of peer sharing in reducing load on video source servers. The article further evaluates two caching algorithms, single video caching and multiple video caching, shows that multiple video caching can improve both scalability and user experience, and then evaluates the limitations of this approach.
- The observation that, with multiple video caching, peer departures become responsible for a larger fraction of source server load. They are the major problem to be solved.
- The introduction and evaluation of a new replication framework designed to reduce the effect of peer departures. The framework is parameterized by two predictors: a peer departure predictor and a chunk request predictor.

The rest of this article is organized as follows. Section 2 presents the basic design of GridCast. Section 3 describes the network environment and data collection mechanisms. Section 4 explains the model we use to evaluate the system. Section 5 evaluates the performance of single video caching. Section 6 explains the details of the multivideo caching algorithm and evaluates its continuity and scalability. Motivated by the observations of multivideo caching, we explore how to use replication to further improve peer sharing through simulation in Section 7. Section 8 outlines related work, and Section 9 concludes.

2. BASIC DESIGN OF GRIDCAST

The basic idea behind GridCast is to enable peers to share with each other and offload the server. There are three general questions that need to be addressed in GridCast. First, how to organize online peers for sharing? Second, how to schedule requests for real-time playback? Third, how to use peer resources to maximize peer sharing and minimize server load? With these questions in mind, this section describes the basic design of GridCast and explains how it works.

2.1 Architecture Overview

With a hybrid architecture, the GridCast system comprises a tracker server (tracker), one or more video source servers (sources), peers, and a Web portal. Figure 1 illustrates these components and their interactions. This section describes the functions of these principal components in detail.

Source servers (sources). The sources provide a baseline on content availability by storing a persistent, complete copy of every video. Videos are partitioned across the sources. In the future, the videos may be replicated across multiple geographically disparate sources to decrease both latency and network core traffic. Such a content-distribution network would be orthogonal to the peer-assistance mechanisms, and this approach is not prohibited by the GridCast design.

In GridCast, video files are segmented on time rather than space. In a file downloading system such as BitTorrent [Cohen 2003], files are segmented on space. BitTorrent does not provide any coherent way for users to interact with files during downloading. As long as downloading takes some noticeable amount of time, a VoD system must overlap user interaction and downloading. User seeks are based on time. Videos are partitioned into chunks of uniform time to make the file addressable on time.
Chunks have a fixed playing time of 1s, and one chunk will typically include tens of Real-time Transport Protocol (RTP) [Schulzrinne et al. 2003] packets. Since both codecs and contents vary, so do chunk sizes. For example, RealVideo 10 typically streams at around 500 Kbps, so a chunk of RealVideo 10 will be about 65 KB.
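To make the time-based addressing concrete, here is a minimal sketch (illustrative names and numbers, not GridCast's code) of how a seek position maps to a chunk index and how a chunk's byte size follows from the encoding bitrate:

```python
# A sketch of time-based chunk addressing: a seek target maps directly to
# a chunk index, and chunk byte size follows from the encoding bitrate.
CHUNK_SECONDS = 1   # fixed playing time per chunk

def chunk_index(seek_seconds: float) -> int:
    """Map a user seek position (in seconds) to a chunk index."""
    return int(seek_seconds // CHUNK_SECONDS)

def approx_chunk_bytes(bitrate_kbps: int) -> int:
    """Rough chunk size for a given encoding bitrate."""
    return bitrate_kbps * 1000 // 8 * CHUNK_SECONDS

print(chunk_index(93.4))         # -> 93: seeks address the file by time
print(approx_chunk_bytes(520))   # -> 65000 bytes, i.e. ~65 KB per 1s chunk
```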

Fig. 1. Basic GridCast architecture. This diagram illustrates the components of GridCast and their typical interactions. Thin arrows represent metadata or control, and thick arrows represent video data (chunk) transfers.

In arrow 1, the user on Peer A contacts the Web portal to browse the catalog and select a video file. The portal returns a video file ID to the peer with arrow 2. To form connections with others for sharing, a peer contacts the tracker (details of peer management in Section 2.3), sending the tracker a description of its state. The tracker uses this information to construct a list of candidate peers, returned in arrow 4. Chunks are fetched from peers or sources. A fetch from source Y is shown with arrows 7 and 8, and a fetch from peer E with arrows 5 and 6.

Tracker server (tracker). The tracker is a well-known rendezvous for joining peers. The tracker maintains a membership list of all joined peers. This information is used to facilitate data sharing between peers (peer-assistance) and need not be perfect for the system to function correctly. Existing peers refresh their playhead information every 30s, and synchronize with the tracker every five minutes or asynchronously on a user seek trigger.

Peers. Peers fetch chunks from sources or peers and cache them in local memory and disk. A peer has three components: the peer client, the media server, and the media player. The client fetches chunks and places them in a linked list shared with the media server. Periodically, the media server pulls a chunk off the list and feeds it to the media player. The player decodes the chunk data and presents it to the viewer. The media server and player communicate over a single local socket, using the Real Time Streaming Protocol (RTSP) [Schulzrinne et al. 1998] for metadata and RTP for data. RTSP controls a media player with a set of VCR operations, such as pause, forward, and rewind.

Web portal. The web portal presents a catalog of available movies to users. With a web browser, the user goes to the portal, browses the catalog of videos, then picks one to watch.

2.2 System Operations

Users join and leave the system. When in the system, a user can select a movie, play, pause, seek, and stop. This subsection describes how GridCast implements these events and operations.

Join. To join the system, the peer notifies the tracker of its existence. Thereafter, the peer sends a keep-alive UDP message to the tracker once per minute. For logging the timestamps of events, the peer sets a local clock to the tracker's clock value.

Leave (or crash). When a user leaves the system, the peer notifies the tracker and its neighbors, then closes all its TCP connections. If a peer quietly disappears or crashes, its neighbors will eventually receive a broken TCP connection signal. On receiving the signal, they remove the peer from their state.

The tracker cannot use this mechanism for detecting failure because peers communicate with the tracker over UDP. Instead, the tracker concludes that a peer has failed if it has not received any keep-alive messages from that peer within five minutes.

Start. Users select a video to start a session. The peer sends the video file identifier to the tracker and receives a list of candidates for sharing. The peer needs the video's Session Description Protocol (SDP) [Handley et al. 2006] metadata to initialize the local player. It can retrieve this from another peer or a source. If the tracker returns no candidates, the peer immediately contacts the source for the SDP. Otherwise, the peer randomly picks one of the candidates and requests the SDP with a 2s timeout. If that fails, then it tries another candidate, finally requesting from a source after another timeout. Once the peer has the SDP, it feeds that to the local player and launches the chunk scheduler. After retrieving the first 10s of data, the peer signals the media player to start playback.

Play. To play continuously, the peer performs two tasks: fetching from the network (the scheduler) and feeding (the local media server) to the media player. Every 10s, the scheduler reevaluates the candidate peers based on content proximity, selects appropriate peers as its data providers, and sends chunk requests based on their current chunk maps and measured capacities. The details of peer management and chunk fetch scheduling are described in Sections 2.3 and 2.4. For smooth playback, the local media server sends the fetched chunks to the player at the speed of one chunk per second.

Pause. On a pause, the scheduler continues to fetch data from other peers and provide data to other peers, but the media server stops sending chunks into the player.

Seek. On a seek, the peer moves the playhead to the target position, synchronizes with the tracker for a new candidate list, then activates the scheduler to satisfy the new data requirements. When the 10s of data following the new playhead have been fetched, the peer signals the player to resume playback.

Stop. On a stop, the scheduler stops operating, but the peer continues to cache and serve the current video. The peer provides data to other peers even though it is not consuming data, acting like a temporary source server. The peer only ceases caching and serving if the user switches videos or explicitly leaves the system.

2.3 Peer Management

GridCast organizes all of the peers watching the same video into an overlay network. The overlay has two purposes: first, to find peers that have potential for chunk sharing; second, to spread metadata so peers can make scheduling decisions locally with rich and timely information. There are three key issues for peer management. First, how to define metrics that meaningfully estimate the potential for sharing between peers (Section 2.3.1). Second, given these metrics, how to use them to organize peering relationships (Section 2.3.2). Third, how to distribute the metadata used to compute the metrics (Section 2.3.3).

2.3.1 Content Proximity. Peers are organized in the overlay network based on their potential for future sharing. Content proximity metrics estimate this potential between peers. GridCast uses two such metrics: playhead and playcache. Section 2.3.2 describes how peers form relationships of varying degrees of metadata sharing to facilitate overlay flexibility.
Playhead is for peers that share little metadata, and playcache for those that share more.

\[
\mathrm{playhead}(B \mid A) = \left| \mathrm{offset}_B - \mathrm{offset}_A \right| \tag{1}
\]

\[
\mathrm{playcache}(B \mid A) = \frac{\sum_{i=\min}^{\max} \mathrm{chunkmap}(B)[i]}{\max - \min + 1} \tag{2}
\]
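A direct transcription of Equations (1) and (2) into code; the function names are illustrative, and chunkmaps are assumed here to be dicts of chunk id to availability flag:

```python
# Sketch of the two content-proximity metrics from Equations (1) and (2).
def playhead_proximity(offset_a: int, offset_b: int) -> int:
    """Eq. (1): absolute playhead distance; smaller means closer."""
    return abs(offset_b - offset_a)

def playcache_proximity(chunkmap_b: dict, win_min: int, win_max: int) -> float:
    """Eq. (2): fraction of A's prefetch window [min, max] cached by B."""
    held = sum(1 for i in range(win_min, win_max + 1) if chunkmap_b.get(i))
    return held / (win_max - win_min + 1)

# Example: B caches 6 of the 10 chunks in A's prefetch window.
b_map = {i: True for i in range(100, 106)}
print(playcache_proximity(b_map, 100, 109))   # -> 0.6
```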

Table I. Peer Relationships in Overlay. This table lists the three relationships that exist in the GridCast overlay. Partner is the closest relationship and member the most distant, with P ⊆ N ⊆ M.

              Member (M)                Neighbor (N)                Partner (P)
Metadata      not shared                shared                      shared
Chunks        not shared                not shared                  shared
Promoted by   tracker sync or gossip    playhead proximity          playcache proximity
Demoted by    out-of-date metadata      zero playcache proximity    no sharing in the past minute
Purpose       pool to select new        gossip                      chunk sharing and gossip
              neighbors
Max value     100                       50                          25

Playhead proximity roughly estimates the potential for sharing using only the current locations of the two peers' playheads. It takes the absolute value of the difference, as shown in Equation (1). Two peers are estimated to have better sharing potential if their playheads are closer. This is a coarse-grained metric for peers that share little metadata.

Playcache proximity benefits from more metadata than just the playhead position. It directly compares the contents of two caches. The value is equal to the number of chunks that are in the current prefetch window and are available in the chunkmap of the other peer. Consider two peers A and B. For A, the playcache proximity of B is shown in Equation (2). The min and max are, respectively, the begin and end of the prefetch window of A. The availability of chunk i in the chunkmap of B is shown with chunkmap(B)[i]. The more chunks B caches within the prefetch window of A, the better the sharing potential. In this way, playcache measures how many chunks can be provided by the other peer.

2.3.2 Peer Selection. Each peer organizes peers it knows from the tracker or gossip messages into three lists: members, neighbors, and partners. From members to partners, the sets are increasingly exclusive. All partners are neighbors and all neighbors are members.

A member is a peer with a known IP address and playhead. The member list is the most inclusive. It is updated when the peer synchronizes with the tracker or receives gossip messages from other peers. A neighbor is a member that has been promoted based on playhead proximity. Neighbors share gossip messages over persistent TCP connections. A partner is a neighbor that has been promoted based on playcache proximity. Chunks are only shared between partners. Partners share data and neighbors share metadata. To limit the overhead of peer management, each peer constrains the number of its members, neighbors, and partners: a maximum of 100 members, 50 neighbors, and 25 partners.

User operations can quickly change the potential for sharing between peers. The local scheduler needs fresh information to find chunks. To keep the partner list relevant for chunk sharing, every 10s the peer recalculates the content proximities of its members, neighbors, and partners, then promotes or demotes based on this calculation. Playhead proximity promotes members to neighbors, and playcache proximity promotes neighbors to partners. Peers can also be demoted from sets. If a set reaches or exceeds its size limit, peers will be demoted; the demotion uses the proximity metrics to select which peers to drop. In addition to reaching size limits, peers are demoted for liveness reasons. A partner is demoted when it has not shared within the past minute. A neighbor is demoted when it has zero playcache proximity. A member is demoted if the timestamp of its most recent playhead position is older than 10 minutes. These relationships are summarized in Table I.

2.3.3 Gossip. The overlay structure represents sharing opportunities.
A user can watch any video at any time and seek freely within that video. These actions will change sharing opportunities.

The overlay structure must change as the sharing opportunities do. Peers use metadata to decide how to change their overlay relationships and discover more members. For scalability, GridCast combines gossip and centralized approaches for distributing metadata.

There are two ways for a peer to get fresh metadata. The first way is to synchronize with the tracker for more candidate peers. A peer does this every five minutes or whenever its user seeks. To keep the metadata on the tracker up-to-date, peers send it their playhead every 30s. The second way to get fresh metadata is gossip. Every 10s, a peer sends its cachemap and playhead to all its directly connected neighbors. Each neighbor updates its neighbor list with the new cachemap and playhead, then forwards that update to a randomly selected neighbor. A message will propagate until its time-to-live (TTL) reaches zero. The initial TTL of gossip is five. With gossip, a peer decreases load on the tracker, but the metadata may be less fresh than if sharing were coordinated centrally. Missed opportunities for sharing will increase the load on the source servers. Therefore, using gossip to reduce load on the tracker may increase load on the source servers. We do not explore this question further.

2.4 Request Scheduling

The scheduler and the local media server cooperate to provide video playback. The scheduler fetches chunks from peers or sources, and then the media server feeds those chunks into the media player. The media server's only task is to feed one chunk per second into the media player. This operation fails only if the chunk is not available locally. If the number of failures exceeds 10, the media server skips the chunk and moves to the next one.

The scheduler is more complicated and is activated periodically. The current scheduling period is 10 seconds. In each scheduling period, the scheduler makes two important decisions: how many outstanding requests to issue, and which partners should receive those requests. The scheduler maintains a capacity value for each partner. A partner's capacity value represents how many requests the peer will send to it in one scheduling period. A peer initializes the capacity of each partner based on the number of partners N and the bitrate of the current video. It reevaluates partner capacities with feedback from each scheduling period. When multiple partners can serve the same chunk, the scheduler selects the partner with maximum remaining capacity to serve first. Using the chunk maps and capacities of partners, the scheduler can determine how many chunks can be fetched, and from where each chunk can be fetched, in a period.

Every 10s, the scheduler wakes up to fetch chunks into the buffer. The scheduler first reevaluates the capacities of its partners using the feedback of the last period. After that, the scheduler checks the outstanding chunk requests within the prefetch window. If a request has timed out or failed due to the departure of a partner, it is rescheduled for another partner. Then the scheduler schedules the unrequested chunks in the prefetch window to partners in sequence, consulting the chunkmaps and estimated capacities. Finally, the next 10s of chunks from the currently playing position are fetched from the source server if there are no suitable partners. As chunks arrive from partners, the prefetch window keeps moving. Figure 2 describes this scheduling algorithm in greater detail.
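As a concrete illustration of the policy just described (pick the partner with maximum remaining capacity, fall back to the source only for the next ~10s of chunks), here is a self-contained sketch. The Partner class, field names, and function signature are illustrative assumptions, not GridCast's actual code:

```python
# A condensed sketch of one scheduling period's chunk-assignment step.
from dataclasses import dataclass

@dataclass
class Partner:
    name: str
    chunks: set      # chunk ids this partner caches (from its chunkmap)
    capacity: int    # requests we may still send it this period

def schedule_period(window, partners, source_fetch, play_pos, horizon=10):
    """Assign each unrequested chunk in the prefetch window.

    window: chunk ids still needed, in playback order.
    source_fetch: fallback used for imminently needed, partner-less chunks.
    """
    assigned = {}
    for chunk in window:
        # Partners caching this chunk, with capacity left this period.
        candidates = [p for p in partners if chunk in p.chunks and p.capacity > 0]
        if candidates:
            # Rule from the text: partner with max remaining capacity first.
            best = max(candidates, key=lambda p: p.capacity)
            best.capacity -= 1
            assigned[chunk] = best.name
        elif chunk < play_pos + horizon:
            # No suitable partner: fetch the next ~10s of chunks from a source.
            source_fetch(chunk)
            assigned[chunk] = "source"
    return assigned

# Example: two partners with overlapping chunkmaps.
a = Partner("A", {1, 2, 3}, capacity=2)
b = Partner("B", {2, 3, 4}, capacity=3)
print(schedule_period([1, 2, 3, 4, 5], [a, b], lambda c: None, play_pos=1))
# -> {1: 'A', 2: 'B', 3: 'B', 4: 'B', 5: 'source'}
```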
2.5 Caching

Chunk caching stores retrieved chunks locally, either in memory or on disk. Caching has two benefits. First, playing out of the local cache eliminates uncertain network operations, improving continuity. Second, cooperative caching can improve both scalability and continuity by shifting request load from sources to peers. In this way, by helping others, peers can help themselves.

An early prototype cached only the N most recently viewed chunks. To increase the opportunity for sharing and improve seek latency, this was changed to cache all the fetched chunks of the current video.

Fig. 2. Peer chunk scheduling algorithm. Before the scheduler starts, the peer retrieves the metadata of the video to be played from partners or sources. This includes the SDP and chunk size table. When requested chunk i arrives from partner j, the current capacity of partner j is increased: C_cur(j) = C_cur(j) + S_i.

When the user switches to a new video, all the previous chunks are evicted from the local cache.

3. DEPLOYMENT

To understand how well these design choices work, we built and deployed GridCast, logging events and operations to evaluate and understand performance. Detailed logging instrumentation enables deep analysis. This section describes the deployment environment and then summarizes data collection.

3.1 Environment

GridCast has been deployed and operational on China's CERNET since May 2006. CERNET is a general-purpose Internet network and the sole fixed network access for students on Chinese university and college campuses. In China, student housing is subsidized and substantially cheaper than equivalent off-campus housing.

For this reason and convenience, the vast majority of students live in a dormitory while attending college and grad school. Students use CERNET for many applications: Web, email, chat, and news. The topology of CERNET is shown in Figure 3, taken from the 2006 CERNET annual report [CERNET 2006].

Fig. 3. Topology of CERNET, from the 2006 CERNET annual report. This shows the structure of CERNET. CERNET is one of China's seven major backbone networks. There are four types of links, shown in the key at the bottom left. Large trunks of >2.5Gbps connect the major national hubs of Beijing, Wuhan, and Nanjing. Smaller links of 155Mbps and 2Mbps connect regional networks. Five external links connect the CERNET from Beijing to other countries and Hong Kong. Within China, CERNET connects about 1,500 institutions and 20 million hosts. Our GridCast servers are located in Wuhan, the most populous city of Hubei province, with approximately 10M residents.

Though the vast majority of users are in dormitories with direct connections, a small fraction of sessions originate from outside CERNET. All the major national ISPs peer with CERNET, but these peering connections are constrained. Approximately 1% of the GridCast user sessions originate from these ISPs. Their connections are generally inadequate for video, and they do not have a large enough community to form independent sharing pools. There is only so much a system can do for users that lack adequate connectivity for the content bitrate.

Our GridCast deployment has two video source servers, hosted on different physical machines. Source A is more powerful and hosts more video files than source B. The tracker and web portal share a third machine. These three machines are the shared, centralized base. Table II summarizes the properties of these servers. All servers are located on the same lab subnet at the campus of Huazhong University of Science and Technology (HUST), a university of approximately 56,000 resident students located in Wuhan City, Hubei Province.

Table II. Properties of the Shared Server Base

               Source Server A       Source Server B       Tracker/Portal
OS             RedHat Linux AS3.0    Windows Server 2003   Windows Server 2003
CPU            Pentium4 3.0GHz       Celeron 2.0GHz        Opteron 1.6GHz
RAM            2GB                   512MB                 2GB
Disk           800GB                 200GB                 40GB
Network        100 Mbps              100 Mbps              100 Mbps
Videos (SVC)                                               N/A
Videos (MVC)                                               N/A

Table III. Videos on GridCast, by Category, Sept. 2007

Type                        Count    Pct.
TV - Comedy and Drama
TV - Variety Shows
TV - Games and Sports
TOTAL TV                             53.5%
Movies - Action
Movies - Comedy
Movies - Science Fiction
Movies - Horror
Movies - Love Stories
TOTAL Movies                         46.5%

As of September 2007, GridCast hosts about 1,800 video files. The videos are movies and television shows classified into nine categories, ranging from action movies to variety shows. Table III shows the distribution of videos across categories. The time length of the video files ranges from five minutes to just over two hours, with an average length of 48 minutes. Movies make up 46.5% of the video files and TV shows the other 53.5%, but the distribution of video lengths is not bimodal. Of all files, 38% are between 40 and 50 minutes long and 22% are between 50 minutes and an hour long. Most of the TV shows are Chinese comedies that are around 48 minutes long. The average movie video file has a similar length. Though the system carries two main types of content, the file size distribution is unimodal on 40 to 60 minutes, with 60% of the videos in this range.

GridCast videos are encoded at a bitrate higher than the videos available at major VoD publishers such as MSN Video, YouTube, Yahoo Video, or CNN. GridCast videos are encoded from 400 Kbps to 800 Kbps, with a few movies exceeding 800 Kbps and an average bitrate of 610 Kbps. During the same period of December 2006, the majority of requests to MSN Video were for encodings of 320 Kbps [Huang et al. 2007].

3.2 Data Collection

The trace records system events and latency observations at a fine level of detail. The events include all user joins and leaves, all VCR operations, and all chunk requests and transfers. The observations include measurements of all jitter, seek latencies, and startup latencies. This instrumentation yields system information at a resolution that is of unprecedented detail in studies of live Internet VoD systems. Each peer appends trace records to a local log. Every 30s, a peer clears out its log, packages all records in a message, and sends the message to the tracker.

The evaluation of this article uses two trace periods. The evaluation of single video caching uses a system trace taken from December 5, 2006, to January 5, 2007. This is called the SVC trace. The evaluation of multiple video caching uses a system trace taken from August 3, 2007, to August 30, 2007.

This is called the MVC trace. Table IV summarizes and compares statistics of the two traces. The MVC trace has lower usage values because it was taken during the summer, when most students are away from campus.

Table IV. Statistics During Deployment. Dates are in MM/DD/YY format. SVC denotes single video caching and MVC denotes multiple video caching. SVC is evaluated in Section 5; Section 6 evaluates MVC.

Statistic                         SVC Trace            MVC Trace
Time length                       32 days              28 days
Time period                       12/05/06-01/05/07    08/03/07-08/30/07
Number of unique users            23,000               14,000
Max number of active users
Number of videos                  1,800                2,000
Number of video views             140,000              84,000
Number of user sessions           97,000               62,000
Total data served from sources    22,000 GB            14,200 GB
Total data served from peers      5,200 GB             7,900 GB
Playing time                      46,000 hours         35,000 hours
Users behind a NAT                22.8%                23.7%

4. SYSTEM MODEL

A simple conceptual model helps to explain a system designer's perspective on the main objectives and challenges of a peer-assisted VoD system. Figure 4 illustrates this model. The system takes user behavior as input, employs peer and source resources as state, and then outputs quality and scalability. Across the range of expected input loads, the best design uses the available resources to produce the best outputs. Multiple outputs present an evaluation issue, discussed in this section.

Fig. 4. System model of peer-assisted VoD. User behavior is the input; peer cache storage and upload bandwidth are the state; the outputs are quality of streaming, measured by continuity, and scalability, measured by chunk cost.

The quality of the output is measured by the average playback continuity across streams. Delays include startup latencies, playback jitters, and seek latencies. The continuity metric is defined as the total delay time divided by the number of played chunks. The units are seconds per chunk played.

Lower continuity values are better, representing greater playback continuity, with fewer delays for each played chunk.

The source servers are the only centralized, shared component that is affected by total system scale. Therefore, the scalability of a design is the inverse of the source load at a given number of peers. Like quality, source load is measured per played chunk. The chunk cost is defined as the total number of chunks fetched from sources for each played chunk. The units are source fetches per played chunk, and the value represents the average load on the sources for each played chunk. Like continuity, lower values are better.

Chunk cost (and therefore source load) is affected by several factors: the number of active files, the average encoding bitrate of active files, the number of active users, and the amount of sharing between peers. The first three factors can vary under different designs. For example, the number of active files can be constrained, content can be encoded at lower bitrates, and user access can be limited. These techniques all affect service in some way or other: the first reduces choice, the second reduces quality (without improvements in codecs), and the third reduces availability. Unlike the first three, greater sharing can reduce load without affecting service or quality.

The amount of sharing is not an independent, external factor. The system can use several approaches to increase sharing between peers. Possible approaches include caching all played chunks for the current file, caching the chunks of previously played files, replicating chunks, and even increasing user session times. Sharing can be increased through design to decrease the chunk cost.

An evaluation of the effects of peer-assistance needs an estimate of what the client-server cost would be. There are several complexities to obtaining an estimate. First, most of the large-scale, deployed client-server VoD systems (for example, YouTube and MSN Video) employ a content delivery network (CDN) to improve latency and alleviate load on the network core. Second, prefetching may increase the chunk cost. Third, caching may decrease the chunk cost. The downward effect of caching on the chunk cost depends on the local hit rate; that, in turn, depends on the workload and caching algorithm. Prefetching will increase the chunk cost, particularly if the client aggressively prefetches many chunks that are never viewed. These three effects are difficult to estimate.

Instead of estimating these three effects, we simplify. CDNs are not free, so fetches to a CDN should be counted as a cost. The net effect of caching and prefetching depends on the workload and the internal algorithms; modeling these would introduce complexity and possibly error. Rather, we develop the simple concept of the centralized limit. It is defined as a chunk cost of 1.0 and represents the cost per played chunk of a client that lacks both client-side caching and prefetching. Given a workload and source server resources, the centralized limit estimates the number of users that could be supported by a client-server architecture. In this article, we will compare the effects of single and multiple video caching to the centralized limit.

Scalability (chunk cost) and quality (continuity) are two independent design objectives. If a system decreases cost but lowers quality, some may prefer it while others may not. However, if a system both decreases cost and improves quality, it is rationally dominant.
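For reference, the two output metrics defined in this section can be written compactly:

```latex
% The two evaluation metrics of Section 4, restated from the definitions above.
\[
\text{continuity} = \frac{\text{total delay time (startup + jitter + seek)}}{\text{chunks played}}
\qquad
\text{chunk cost} = \frac{\text{chunks fetched from sources}}{\text{chunks played}}
\]
% The centralized limit is a chunk cost of 1.0: every played chunk comes
% from a source, with no client-side caching or prefetching.
```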
Any evaluation of a VoD system must consider both scalability and streaming quality (all the outputs), because a brute-force design change can improve scalability simply by damaging continuity. In this article, we evaluate all the outputs, with the goal of a design that reaches rationally dominant pairs.

5. SINGLE VIDEO CACHING

In the initial deployment of GridCast, each peer caches only the current video for sharing. When the user switches to a new video, all the previous chunks are evicted from the local cache. We call this policy single video caching (SVC). This section examines how well GridCast with SVC works and investigates how many peer resources remain for further improvement.

Fig. 5. Chunk cost vs. concurrency.

5.1 Evaluation of SVC

The ability of peer-assistance to decrease load on the source server depends on how much sharing exists between peers. The following subsections examine this from two angles. First, for a single video, how does peer sharing change as the number of concurrent viewers varies? Second, for the whole system, serving many videos of varying popularity, how much does the overall source server load decrease? A detailed analysis is also presented through comparisons between SVC and other models: the client-server model and a proposed ideal model.

5.1.1 Sharing vs. Concurrency. With SVC, peer sharing takes place only if multiple users are watching the same video. The concurrency is the number of users in the system watching the same video at the same time. If a video's concurrency is high, then peer-assistance can substitute peer upload bandwidth for server upload bandwidth and thereby reduce load on the centralized components.

We compare GridCast with the client-server and ideal models in terms of the chunk cost defined in Section 4. The client-server model assumes that every peer always fetches data from the source server; its chunk cost is always 1.0. In the ideal model, each peer has sufficient upload bandwidth for sharing and never seeks. Only the peer with the farthest playhead fetches data from the source server. The chunk cost of the ideal model is 1/concurrency.

Figure 5(a) examines how GridCast with SVC takes advantage of existing concurrency. GridCast obtains significant improvement over the client-server model for a concurrency of more than 1. As concurrency increases, the improvement grows. When concurrency increases from 2 to 6, GridCast decreases chunk cost by 70%, from 0.56 to 0.17. This means only 17% of chunks are fetched from the source at a concurrency of 6. These results demonstrate the potential of peer sharing in VoD.

GridCast has real constraints lacking in the ideal model, including finite bandwidth, content misses caused by seeks, and connection problems caused by NAT barriers. These constraints increase the chunk cost of GridCast over the ideal model. But Figure 5(a) shows that GridCast is close to the ideal across concurrencies and has marginally lower chunk cost than the ideal model at some concurrencies. For example, the chunk cost of GridCast is 0.12 for a concurrency of 7, lower than the 0.14 of the ideal model. This result comes from the caching and aggressive prefetching described in Section 2.4. In GridCast, a peer will continue prefetching from partners so long as they can provide new chunks within the prefetch window. As chunks arrive, the prefetch window grows, leading to more requests.

This continues until reaching a stall, a chunk available only on a source, or the end of the video. Caching and aggressive prefetching decrease the gap between GridCast and the ideal model and can even push it lower.

For deeper understanding, Figure 5(b) presents a breakdown of chunk fetches across concurrencies. Observe that some plays are of chunks received from peers that have since departed or switched to a different video. Without prefetching and caching, these chunks would be fetched from a source. Chunk fetches caused by insufficient bandwidth and connection issues (i.e., NAT barriers) are relatively stable at roughly 10% of plays as concurrency increases. At the lower concurrencies, chunks fetched from the sources are more likely to be global misses, meaning that no peer caches the content. There are three reasons for a global miss: the chunk is new, all peers caching it have departed, or all peers that once cached it have switched videos, evicting it. At a concurrency of 2, chunk fetches caused by global misses are 44% of played chunks and 79% of the chunks fetched from sources. By caching more data on peers, the availability of content increases to further decrease these misses and source server load.

5.1.2 Overall Scalability. The number of users served by GridCast can be compared to an estimate of the number that would be supported by a client-server architecture under the same load. The estimate is calculated using the centralized limit.

Fig. 6. Scalability of GridCast and client-server. This figure compares the scalability of GridCast and the centralized limit over an average of all days in the SVC trace.

Figure 6 shows the average scalability of GridCast across hours of the day for the SVC trace. The increase in the number of supported users fluctuates from 0 to 55% across the day. GridCast decreases source server load by an average of 22% and improves scalability by an average of 28% over client-server. Figure 5 shows that GridCast achieves a 75% increase in scalability when the concurrency reaches 2. Therefore, the current improvement is not as high as we might hope. This is because 80% of viewing sessions happen at a concurrency of 1, as Figure 9 shows. For a given user scale, caching more data on peers is necessary to further improve overall scalability.

5.2 Analysis of Peer Resources

Caching uses two kinds of peer resources: disk to store chunks and upload to share chunks. This section investigates whether adequate disk and upload exist for GridCast to cache more data. We lack information on peers' network hardware, so we measure the peak realized upload and download across 10s periods of the trace and take those as a peer's hard limit. This is conservative because many peers are short-lived and may not have the opportunity to realize their peaks.
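A sketch of this measurement under the stated assumptions (10s windows, busiest window taken as the hard limit); the (timestamp, bytes) record format is an assumption, not the actual trace schema:

```python
# Estimate a peer's peak bandwidth by bucketing transferred bytes into
# 10s windows and taking the busiest window.
from collections import defaultdict

WINDOW_S = 10

def peak_mbps(transfers):
    """transfers: iterable of (timestamp_seconds, nbytes) for one peer."""
    buckets = defaultdict(int)
    for ts, nbytes in transfers:
        buckets[int(ts) // WINDOW_S] += nbytes
    if not buckets:
        return 0.0   # idle peer: no samples, so no peak estimate
    return max(buckets.values()) * 8 / (WINDOW_S * 1_000_000)

# Example: 3 MB moved within one 10s window -> 2.4 Mbps realized peak.
print(peak_mbps([(5, 1_500_000), (9, 1_500_000), (42, 200_000)]))
```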

Fig. 7. Used and unused resources with single video caching.

Figure 7(a) shows the distribution of observed peak bandwidths across users. Across all the active peers, the average peak download is 2.65Mbps. This is because most users are on CERNET with good connectivity. The average peak upload is less than the average peak download, at 2.25Mbps. This is unsurprising: peers upload less than they download, and more samples of a distribution can only give a higher peak.

Figure 7(b) shows the unused bandwidth capacity distribution. A peer's bandwidth capacity is its session length multiplied by its peak bandwidth. This assumes that the peaks are sustainable. In this figure, some idle peers are shown as having 100% unused bandwidth even though we lack an estimate for their peak capacities. While this may be reasonable to claim, these idle peers cannot contribute because they never request anything and therefore will never cache anything. Looking at the other peers, few use more than 10% of their upload. This suggests that there is excess upload available for use by caching.

Unlike for bandwidth, we lack samples to estimate peers' disk capacities. Instead, we look at how well reasonably sized caches can capture the total requested data. Figure 7(c) shows these results. For example, a 2GB cache captures 70% of the requests over the month. To summarize, reasonably sized peer caches can cover a good fraction of the requests. A cache of 500MB covers 45% of total data and a cache of 5GB covers 90%. These numbers suggest that we should be able to get a reasonable hit rate for a cache size that is a small fraction of the size of an average consumer disk.
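One plausible way to produce a curve like Figure 7(c) is to replay a peer's chunk requests through an LRU cache of a given size and report the fraction of requests the cache could have served. This is a sketch; the 65 KB chunk size and trace format are assumptions:

```python
# Replay a chunk-request trace through an LRU cache and measure coverage.
from collections import OrderedDict

def lru_coverage(requests, cache_bytes, chunk_bytes=65_000):
    """requests: sequence of chunk ids in request order."""
    if not requests:
        return 0.0
    cache, hits = OrderedDict(), 0
    for cid in requests:
        if cid in cache:
            hits += 1
            cache.move_to_end(cid)            # refresh LRU position
        else:
            cache[cid] = True
            if len(cache) * chunk_bytes > cache_bytes:
                cache.popitem(last=False)     # evict least recently used
    return hits / len(requests)

# A rewatch pattern: the second pass hits only if the cache holds the
# whole working set.
trace = list(range(1000)) + list(range(1000))
print(lru_coverage(trace, cache_bytes=100 * 65_000))   # 0.0: working set too big
print(lru_coverage(trace, cache_bytes=1000 * 65_000))  # 0.5: second pass all hits
```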

Fig. 8. Simulation of multiple video caching.

Our observations have shown that substantial peer resources are unused with single video caching. Through trace-driven simulation, we investigate how much benefit we can obtain if each peer caches all recently watched videos. In simulation, we ignore the existence of NATs and suppose that every peer has global knowledge of cache contents, an infinite cache size, and sufficient upload bandwidth to serve all requested chunks. Using this model, Figure 8 shows the change of server load over time. Because caches start cold, the maximum server load decreases day by day. After one week, the maximum server load has decreased by 75% compared to SVC. This result suggests that multiple video caching can improve scalability. Following on these results, the next section looks at how multiple video caching can improve the system.

6. MULTIPLE VIDEO CACHING

In May 2007, we released a software update to deploy multiple video caching (MVC). MVC is motivated by the observations of Section 5.2 that peer upload and storage capacity are both underutilized. MVC uses peer local storage to cache content and peer upload to serve that content, thereby shifting load from sources to peers. This section describes the implementation of MVC, evaluates its scalability and continuity, and compares it to SVC and the centralized limit (defined in Section 4).

With MVC, peers cache all their recently viewed chunks, rather than just the chunks from their current video. Chunks are evicted when the cache size reaches a bound, not when the user switches videos. When a peer is online, all cached chunks are available for download, constrained by practical factors such as the peer's maximum upload bandwidth. Multiple video caching increases local peer cache contents to increase sharing. By caching chunks from multiple files, MVC increases the number of chunk replicas. It does this in a passive way. The fetch algorithms do not change from SVC to MVC. Fetches from a peer are only issued for the purposes of satisfying the viewing needs of that peer's user. The extent to which caching makes efficient use of global resources is one of the questions that our experiments aim to resolve.

NATs become more of an issue in MVC. GridCast does not have a mechanism for explicitly helping peers behind NATs. NATs are problematic because they block incoming connections; to establish a connection, the peer behind the NAT must initiate. In MVC, peers continue to fetch chunks as they did in SVC, only connecting and fetching for the purposes of satisfying their local viewer. The tracker assigns a member to a peer based on the member's playhead position, with no consideration as to whether it is behind a NAT.

In SVC, if a peer j is assigned a NAT peer i, the two must be viewing the same video, so it is likely that i will also be assigned j and a connection will form. In MVC, sharing can be more asymmetric, and the tracker may assign j a NAT peer i where i is viewing a different video but at some point in the past viewed the video j is now watching. NAT peer i is much less likely to contact j than if the two are watching the same video.

6.1 Implementation of MVC

The MVC implementation changes how peers communicate with the tracker, but there is no real change to how peers interact with one another. The design of MVC is simple and purposefully makes few changes to the protocol. When a peer joins, it sends the tracker a list of all the videos for which it caches one or more chunks. When a peer starts a video, it uses the same protocol as SVC. However, a peer may now serve chunks for videos other than its current video. The protocol for gossiping chunk maps does not change. As in SVC, peers only share chunk map metadata with partners and only gossip about the video on which they formed their partnership. In MVC, a peer can be caching chunks of other videos; consequently, peers gossip about a subset of their chunk map in MVC.

The size of the disk cache is 1GB, and it is halved if disk space becomes constrained. Newly arrived chunks are written to the disk cache once 300 chunks have accumulated in memory. If the cache becomes full, chunks are evicted by LRU. Chunks are organized on disk with an index file. If an index file is corrupted or lost, all cached chunks are lost.

6.2 Evaluation of MVC

MVC creates sharing opportunities by caching more content on peers. This section investigates the impact of MVC on scalability and continuity, compares the measured results from the deployment against the predicted results from simulation, then outlines lessons learned.

6.2.1 Dataset Properties. To evaluate MVC, we examine a trace from August 3, 2007, to August 30, 2007. In the wild, experiments cannot be controlled as they can in the lab, so there are similarities and differences between the SVC and MVC traces. Table IV summarizes the MVC and SVC traces. The most notable change is the decrease in users: the average number of unique daily users decreased by 26%, from 3,500 to 2,600. The MVC trace was taken during the summer holiday, when fewer students were on campus. As discussed earlier, access to GridCast on CERNET from other backbone networks is inadequate for video. Most of the changes in the trace properties come from the holiday drop in load.

The number of videos and their popularity distribution changed from SVC to MVC. The number of videos increased from 1,800 to 2,000. We cannot freeze content for the purpose of controlled experimentation; if we did, we would not have users. The basic heavy-tail shape of the video popularity distribution persists in MVC, but with attenuation: the popular videos became relatively more popular. Between the SVC and MVC trace periods, a new "What's Hot?" list was added to the web video catalog. This is a popular feature, but we hypothesize that it changed the video popularity distribution by leading some users to investigate popular viewing choices rather than search for videos that satisfy their particular tastes. Figure 9 supports this hypothesis. It shows that higher concurrency views increased relative to lower concurrency views.
From SVC to MVC, the number of videos increased and the popular videos became relatively more popular. Scale is the other major difference between the two traces. For comparison between SVC and MVC, the traces are broken down into user-scale ranges.
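Returning to the cache mechanics of Section 6.1 (a 1GB disk cache with LRU eviction, chunks flushed from memory once 300 accumulate), a minimal sketch of such a two-tier cache might look like the following; all names and the flush details are illustrative assumptions, not the deployed code:

```python
# Two-tier MVC cache sketch: memory buffer spills to an LRU disk cache.
from collections import OrderedDict

MEM_CHUNKS = 300
DISK_BYTES = 1 << 30   # 1 GB; the deployment halves this if disk runs short

class MvcCache:
    def __init__(self):
        self.memory = {}              # (video_id, chunk_id) -> chunk bytes
        self.disk = OrderedDict()     # LRU index: (video_id, chunk_id) -> size
        self.disk_used = 0

    def add(self, video_id, chunk_id, data):
        """Buffer a newly fetched chunk; spill to disk at the threshold."""
        self.memory[(video_id, chunk_id)] = data
        if len(self.memory) >= MEM_CHUNKS:
            self._flush_to_disk()

    def _flush_to_disk(self):
        for key, data in self.memory.items():
            self.disk[key] = len(data)      # index entry; payload goes to disk
            self.disk_used += len(data)
        self.memory.clear()
        while self.disk_used > DISK_BYTES:  # evict least recently used chunks
            _, size = self.disk.popitem(last=False)
            self.disk_used -= size

    def serve(self, video_id, chunk_id):
        """Serving a cached chunk refreshes its LRU position."""
        key = (video_id, chunk_id)
        if key in self.disk:
            self.disk.move_to_end(key)
            return True
        return key in self.memory
```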


More information

Google File System. Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung Google fall DIP Heerak lim, Donghun Koo

Google File System. Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung Google fall DIP Heerak lim, Donghun Koo Google File System Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung Google 2017 fall DIP Heerak lim, Donghun Koo 1 Agenda Introduction Design overview Systems interactions Master operation Fault tolerance

More information

Performance Characterization of a Commercial Video Streaming Service. Mojgan Ghasemi, Akamai Technologies - Princeton University

Performance Characterization of a Commercial Video Streaming Service. Mojgan Ghasemi, Akamai Technologies - Princeton University Performance Characterization of a Commercial Video Streaming Service Mojgan Ghasemi, Akamai Technologies - Princeton University MGhasemi,PKanuparthy,AMansy,TBenson,andJRexford ACM IMC 2016 1 2 First study

More information

RINDY: A Ring Based Overlay Network for Peer-to- Peer On-Demand Streaming *

RINDY: A Ring Based Overlay Network for Peer-to- Peer On-Demand Streaming * RINDY: A Ring Based Overlay Network for Peer-to- Peer On-Demand Streaming * Bin Cheng, Hai Jin, Xiaofei Liao Cluster and Grid Computing Lab Huazhong University of Science and Technology, Wuhan, 430074,

More information

The Scalability of Swarming Peer-to-Peer Content Delivery

The Scalability of Swarming Peer-to-Peer Content Delivery The Scalability of Swarming Peer-to-Peer Content Delivery Daniel Zappala Brigham Young University zappala@cs.byu.edu with Daniel Stutzbach Reza Rejaie University of Oregon Page 1 Motivation Small web sites

More information

A Scalable Framework for Content Replication in Multicast-Based Content Distribution Networks

A Scalable Framework for Content Replication in Multicast-Based Content Distribution Networks A Scalable Framework for Content Replication in Multicast-Based Content Distribution Networks Yannis Matalas 1, Nikolaos D. Dragios 2, and George T. Karetsos 2 1 Digital Media & Internet Technologies Department,

More information

It s Not the Cost, It s the Quality! Ion Stoica Conviva Networks and UC Berkeley

It s Not the Cost, It s the Quality! Ion Stoica Conviva Networks and UC Berkeley It s Not the Cost, It s the Quality! Ion Stoica Conviva Networks and UC Berkeley 1 A Brief History! Fall, 2006: Started Conviva with Hui Zhang (CMU)! Initial goal: use p2p technologies to reduce distribution

More information

The Guide to Best Practices in PREMIUM ONLINE VIDEO STREAMING

The Guide to Best Practices in PREMIUM ONLINE VIDEO STREAMING AKAMAI.COM The Guide to Best Practices in PREMIUM ONLINE VIDEO STREAMING PART 3: STEPS FOR ENSURING CDN PERFORMANCE MEETS AUDIENCE EXPECTATIONS FOR OTT STREAMING In this third installment of Best Practices

More information

Interactive Branched Video Streaming and Cloud Assisted Content Delivery

Interactive Branched Video Streaming and Cloud Assisted Content Delivery Interactive Branched Video Streaming and Cloud Assisted Content Delivery Niklas Carlsson Linköping University, Sweden @ Sigmetrics TPC workshop, Feb. 2016 The work here was in collaboration... Including

More information

UNIVERSITY OF OSLO Department of informatics. Investigating the limitations of video stream scheduling in the Internet. Master thesis.

UNIVERSITY OF OSLO Department of informatics. Investigating the limitations of video stream scheduling in the Internet. Master thesis. UNIVERSITY OF OSLO Department of informatics Investigating the limitations of video stream scheduling in the Internet Master thesis Espen Jacobsen May, 2009 Investigating the limitations of video stream

More information

Week-12 (Multimedia Networking)

Week-12 (Multimedia Networking) Computer Networks and Applications COMP 3331/COMP 9331 Week-12 (Multimedia Networking) 1 Multimedia: audio analog audio signal sampled at constant rate telephone: 8,000 samples/sec CD music: 44,100 samples/sec

More information

A Data Storage Mechanism for P2P VoD based on Multi-Channel Overlay*

A Data Storage Mechanism for P2P VoD based on Multi-Channel Overlay* A Data Storage Mechanism for P2P VoD based on Multi-Channel Overlay* Xiaofei Liao, Hao Wang, Song Wu, Hai Jin Services Computing Technology and System Lab Cluster and Grid Computing Lab School of Computer

More information

Peer-to-Peer Streaming Systems. Behzad Akbari

Peer-to-Peer Streaming Systems. Behzad Akbari Peer-to-Peer Streaming Systems Behzad Akbari 1 Outline Introduction Scaleable Streaming Approaches Application Layer Multicast Content Distribution Networks Peer-to-Peer Streaming Metrics Current Issues

More information

On Minimizing Packet Loss Rate and Delay for Mesh-based P2P Streaming Services

On Minimizing Packet Loss Rate and Delay for Mesh-based P2P Streaming Services On Minimizing Packet Loss Rate and Delay for Mesh-based P2P Streaming Services Zhiyong Liu, CATR Prof. Zhili Sun, UniS Dr. Dan He, UniS Denian Shi, CATR Agenda Introduction Background Problem Statement

More information

HSM: A Hybrid Streaming Mechanism for Delay-tolerant Multimedia Applications Annanda Th. Rath 1 ), Saraswathi Krithivasan 2 ), Sridhar Iyer 3 )

HSM: A Hybrid Streaming Mechanism for Delay-tolerant Multimedia Applications Annanda Th. Rath 1 ), Saraswathi Krithivasan 2 ), Sridhar Iyer 3 ) HSM: A Hybrid Streaming Mechanism for Delay-tolerant Multimedia Applications Annanda Th. Rath 1 ), Saraswathi Krithivasan 2 ), Sridhar Iyer 3 ) Abstract Traditionally, Content Delivery Networks (CDNs)

More information

WHITE PAPER: BEST PRACTICES. Sizing and Scalability Recommendations for Symantec Endpoint Protection. Symantec Enterprise Security Solutions Group

WHITE PAPER: BEST PRACTICES. Sizing and Scalability Recommendations for Symantec Endpoint Protection. Symantec Enterprise Security Solutions Group WHITE PAPER: BEST PRACTICES Sizing and Scalability Recommendations for Symantec Rev 2.2 Symantec Enterprise Security Solutions Group White Paper: Symantec Best Practices Contents Introduction... 4 The

More information

The Google File System

The Google File System The Google File System Sanjay Ghemawat, Howard Gobioff and Shun Tak Leung Google* Shivesh Kumar Sharma fl4164@wayne.edu Fall 2015 004395771 Overview Google file system is a scalable distributed file system

More information

IMPROVING LIVE PERFORMANCE IN HTTP ADAPTIVE STREAMING SYSTEMS

IMPROVING LIVE PERFORMANCE IN HTTP ADAPTIVE STREAMING SYSTEMS IMPROVING LIVE PERFORMANCE IN HTTP ADAPTIVE STREAMING SYSTEMS Kevin Streeter Adobe Systems, USA ABSTRACT While HTTP adaptive streaming (HAS) technology has been very successful, it also generally introduces

More information

On the Feasibility of Prefetching and Caching for Online TV Services: A Measurement Study on

On the Feasibility of Prefetching and Caching for Online TV Services: A Measurement Study on See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/220850337 On the Feasibility of Prefetching and Caching for Online TV Services: A Measurement

More information

Improving VoD System Efficiency with Multicast and Caching

Improving VoD System Efficiency with Multicast and Caching Improving VoD System Efficiency with Multicast and Caching Jack Yiu-bun Lee Department of Information Engineering The Chinese University of Hong Kong Contents 1. Introduction 2. Previous Works 3. UVoD

More information

The Google File System

The Google File System October 13, 2010 Based on: S. Ghemawat, H. Gobioff, and S.-T. Leung: The Google file system, in Proceedings ACM SOSP 2003, Lake George, NY, USA, October 2003. 1 Assumptions Interface Architecture Single

More information

Overview Computer Networking Lecture 16: Delivering Content: Peer to Peer and CDNs Peter Steenkiste

Overview Computer Networking Lecture 16: Delivering Content: Peer to Peer and CDNs Peter Steenkiste Overview 5-44 5-44 Computer Networking 5-64 Lecture 6: Delivering Content: Peer to Peer and CDNs Peter Steenkiste Web Consistent hashing Peer-to-peer Motivation Architectures Discussion CDN Video Fall

More information

Open Connect Overview

Open Connect Overview Open Connect Overview What is Netflix Open Connect? Open Connect is the name of the global network that is responsible for delivering Netflix TV shows and movies to our members world wide. This type of

More information

The Google File System

The Google File System The Google File System Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung December 2003 ACM symposium on Operating systems principles Publisher: ACM Nov. 26, 2008 OUTLINE INTRODUCTION DESIGN OVERVIEW

More information

Multimedia: video ... frame i+1

Multimedia: video ... frame i+1 Multimedia: video video: sequence of images displayed at constant rate e.g. 24 images/sec digital image: array of pixels each pixel represented by bits coding: use redundancy within and between images

More information

PeerApp Case Study. November University of California, Santa Barbara, Boosts Internet Video Quality and Reduces Bandwidth Costs

PeerApp Case Study. November University of California, Santa Barbara, Boosts Internet Video Quality and Reduces Bandwidth Costs PeerApp Case Study University of California, Santa Barbara, Boosts Internet Video Quality and Reduces Bandwidth Costs November 2010 Copyright 2010-2011 PeerApp Ltd. All rights reserved 1 Executive Summary

More information

Enhancing Downloading Time By Using Content Distribution Algorithm

Enhancing Downloading Time By Using Content Distribution Algorithm RESEARCH ARTICLE OPEN ACCESS Enhancing Downloading Time By Using Content Distribution Algorithm VILSA V S Department of Computer Science and Technology TKM Institute of Technology, Kollam, Kerala Mailid-vilsavijay@gmail.com

More information

Cache Management for TelcoCDNs. Daphné Tuncer Department of Electronic & Electrical Engineering University College London (UK)

Cache Management for TelcoCDNs. Daphné Tuncer Department of Electronic & Electrical Engineering University College London (UK) Cache Management for TelcoCDNs Daphné Tuncer Department of Electronic & Electrical Engineering University College London (UK) d.tuncer@ee.ucl.ac.uk 06/01/2017 Agenda 1. Internet traffic: trends and evolution

More information

Confused, Timid, and Unstable: Picking a Video Streaming Rate is Hard

Confused, Timid, and Unstable: Picking a Video Streaming Rate is Hard Confused, Timid, and Unstable: Picking a Video Streaming Rate is Hard Five students from Stanford Published in 2012 ACM s Internet Measurement Conference (IMC) 23 citations Ahmad Tahir 1/26 o Problem o

More information

MISB EG Motion Imagery Standards Board Engineering Guideline. 24 April Delivery of Low Bandwidth Motion Imagery. 1 Scope.

MISB EG Motion Imagery Standards Board Engineering Guideline. 24 April Delivery of Low Bandwidth Motion Imagery. 1 Scope. Motion Imagery Standards Board Engineering Guideline Delivery of Low Bandwidth Motion Imagery MISB EG 0803 24 April 2008 1 Scope This Motion Imagery Standards Board (MISB) Engineering Guideline (EG) provides

More information

CONTENTS. System Requirements FAQ Webcast Functionality Webcast Functionality FAQ Appendix Page 2

CONTENTS. System Requirements FAQ Webcast Functionality Webcast Functionality FAQ Appendix Page 2 VIOCAST FAQ CONTENTS System Requirements FAQ... 3 Webcast Functionality... 6 Webcast Functionality FAQ... 7 Appendix... 8 Page 2 SYSTEM REQUIREMENTS FAQ 1) What kind of Internet connection do I need to

More information

The Google File System

The Google File System The Google File System Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung Google SOSP 03, October 19 22, 2003, New York, USA Hyeon-Gyu Lee, and Yeong-Jae Woo Memory & Storage Architecture Lab. School

More information

Networking Applications

Networking Applications Networking Dr. Ayman A. Abdel-Hamid College of Computing and Information Technology Arab Academy for Science & Technology and Maritime Transport Multimedia Multimedia 1 Outline Audio and Video Services

More information

!!!!!! Portfolio Summary!! for more information July, C o n c e r t T e c h n o l o g y

!!!!!! Portfolio Summary!! for more information  July, C o n c e r t T e c h n o l o g y Portfolio Summary July, 2014 for more information www.concerttechnology.com bizdev@concerttechnology.com C o n c e r t T e c h n o l o g y Overview The screenplay project covers emerging trends in social

More information

Computer Science 461 Midterm Exam March 14, :00-10:50am

Computer Science 461 Midterm Exam March 14, :00-10:50am NAME: Login name: Computer Science 461 Midterm Exam March 14, 2012 10:00-10:50am This test has seven (7) questions, each worth ten points. Put your name on every page, and write out and sign the Honor

More information

Variable Bitrate Stream in Set top Box device

Variable Bitrate Stream in Set top Box device Variable Bitrate Stream in Set top Box device Preeti Chourasia Student M.Tech (CS) United Institute of Technology And Research Greater Noida (UP) Priyank Chourasia MCA (MITS Gwalior) ABSTRACT Video processing

More information

CS 514: Transport Protocols for Datacenters

CS 514: Transport Protocols for Datacenters Department of Computer Science Cornell University Outline Motivation 1 Motivation 2 3 Commodity Datacenters Blade-servers, Fast Interconnects Different Apps: Google -> Search Amazon -> Etailing Computational

More information

OpenCache. A Platform for Efficient Video Delivery. Matthew Broadbent. 1 st Year PhD Student

OpenCache. A Platform for Efficient Video Delivery. Matthew Broadbent. 1 st Year PhD Student OpenCache A Platform for Efficient Video Delivery Matthew Broadbent 1 st Year PhD Student Motivation Consumption of video content on the Internet is constantly expanding Video-on-demand is an ever greater

More information

Condusiv s V-locity Server Boosts Performance of SQL Server 2012 by 55%

Condusiv s V-locity Server Boosts Performance of SQL Server 2012 by 55% openbench Labs Executive Briefing: May 20, 2013 Condusiv s V-locity Server Boosts Performance of SQL Server 2012 by 55% Optimizing I/O for Increased Throughput and Reduced Latency on Physical Servers 01

More information

UNIT I (Two Marks Questions & Answers)

UNIT I (Two Marks Questions & Answers) UNIT I (Two Marks Questions & Answers) Discuss the different ways how instruction set architecture can be classified? Stack Architecture,Accumulator Architecture, Register-Memory Architecture,Register-

More information

Google File System (GFS) and Hadoop Distributed File System (HDFS)

Google File System (GFS) and Hadoop Distributed File System (HDFS) Google File System (GFS) and Hadoop Distributed File System (HDFS) 1 Hadoop: Architectural Design Principles Linear scalability More nodes can do more work within the same time Linear on data size, linear

More information

Encouraging bandwidth efficiency for peer-to-peer applications

Encouraging bandwidth efficiency for peer-to-peer applications Encouraging bandwidth efficiency for peer-to-peer applications Henning Schulzrinne Dept. of Computer Science Columbia University New York, NY P2Pi Workshop Overview Video bandwidth consumption Cost of

More information

Mohammad Hossein Manshaei 1393

Mohammad Hossein Manshaei 1393 Mohammad Hossein Manshaei manshaei@gmail.com 1393 Voice and Video over IP Slides derived from those available on the Web site of the book Computer Networking, by Kurose and Ross, PEARSON 2 multimedia applications:

More information

Towards Health of Replication in Large-Scale P2P-VoD Systems

Towards Health of Replication in Large-Scale P2P-VoD Systems Towards Health of Replication in Large-Scale P2P-VoD Systems Haitao Li, Xu Ke Tsinghua National Laboratory for Information Science and Technology Dept. of Computer Science and Technology Tsinghua University,

More information

Varnish Streaming Server

Varnish Streaming Server Varnish Streaming Server Delivering reliable, high-performance streaming, particularly of live, over-the-top (OTT) and video on demand (VoD) media, is at the heart of the challenge companies face. HTTP

More information

Cloud Transcoder: Bridging the Format and Resolution Gap between Internet Videos and Mobile Devices

Cloud Transcoder: Bridging the Format and Resolution Gap between Internet Videos and Mobile Devices Cloud Transcoder: Bridging the Format and Resolution Gap between Internet Videos and Mobile Devices Zhenhua Li, Peking University Yan Huang, Gang Liu, Fuchen Wang, Tencent Research Zhi-Li Zhang, University

More information

Copyright 2009 by Scholastic Inc. All rights reserved. Published by Scholastic Inc. PDF0090 (PDF)

Copyright 2009 by Scholastic Inc. All rights reserved. Published by Scholastic Inc. PDF0090 (PDF) Enterprise Edition Version 1.9 System Requirements and Technology Overview The Scholastic Achievement Manager (SAM) is the learning management system and technology platform for all Scholastic Enterprise

More information

Anatomy of a P2P Content Distribution System with Network Coding

Anatomy of a P2P Content Distribution System with Network Coding Anatomy of a P2P Content Distribution System with Network Coding Christos Gkantsidis, John Miller, and Pablo Rodriguez Microsoft Research, Cambridge Anatomy of a P2P Content Distribution System with Network

More information

Internet Networking recitation #13 HLS HTTP Live Streaming

Internet Networking recitation #13 HLS HTTP Live Streaming recitation #13 HLS HTTP Live Streaming Winter Semester 2013, Dept. of Computer Science, Technion 1 2 What is Streaming? Streaming media is multimedia that is constantly received by and presented to the

More information

Scaling Internet TV Content Delivery ALEX GUTARIN DIRECTOR OF ENGINEERING, NETFLIX

Scaling Internet TV Content Delivery ALEX GUTARIN DIRECTOR OF ENGINEERING, NETFLIX Scaling Internet TV Content Delivery ALEX GUTARIN DIRECTOR OF ENGINEERING, NETFLIX Inventing Internet TV Available in more than 190 countries 104+ million subscribers Lots of Streaming == Lots of Traffic

More information

Chapter 6 Memory 11/3/2015. Chapter 6 Objectives. 6.2 Types of Memory. 6.1 Introduction

Chapter 6 Memory 11/3/2015. Chapter 6 Objectives. 6.2 Types of Memory. 6.1 Introduction Chapter 6 Objectives Chapter 6 Memory Master the concepts of hierarchical memory organization. Understand how each level of memory contributes to system performance, and how the performance is measured.

More information

Distributed Video Systems Chapter 3 Storage Technologies

Distributed Video Systems Chapter 3 Storage Technologies Distributed Video Systems Chapter 3 Storage Technologies Jack Yiu-bun Lee Department of Information Engineering The Chinese University of Hong Kong Contents 3.1 Introduction 3.2 Magnetic Disks 3.3 Video

More information

Experimental Study of Skype. Skype Peer-to-Peer VoIP System

Experimental Study of Skype. Skype Peer-to-Peer VoIP System An Experimental Study of the Skype Peer-to-Peer VoIP System Saikat Guha (Cornell) Neil Daswani (Google) Ravi Jain (Google) IPTPS 2006 About Skype Voice over IP (VoIP) 50 million users Valued at $2.6 billion

More information

Performance of relational database management

Performance of relational database management Building a 3-D DRAM Architecture for Optimum Cost/Performance By Gene Bowles and Duke Lambert As systems increase in performance and power, magnetic disk storage speeds have lagged behind. But using solidstate

More information

Video AI Alerts An Artificial Intelligence-Based Approach to Anomaly Detection and Root Cause Analysis for OTT Video Publishers

Video AI Alerts An Artificial Intelligence-Based Approach to Anomaly Detection and Root Cause Analysis for OTT Video Publishers Video AI Alerts An Artificial Intelligence-Based Approach to Anomaly Detection and Root Cause Analysis for OTT Video Publishers Live and on-demand programming delivered by over-the-top (OTT) will soon

More information

Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System

Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System Xiaojun Hei, Chao Liang, Jian Liang, Yong Liu and Keith W. Ross Department of Computer and Information Science Department of Electrical

More information

A Comparison of File. D. Roselli, J. R. Lorch, T. E. Anderson Proc USENIX Annual Technical Conference

A Comparison of File. D. Roselli, J. R. Lorch, T. E. Anderson Proc USENIX Annual Technical Conference A Comparison of File System Workloads D. Roselli, J. R. Lorch, T. E. Anderson Proc. 2000 USENIX Annual Technical Conference File System Performance Integral component of overall system performance Optimised

More information

Today s Papers. Array Reliability. RAID Basics (Two optional papers) EECS 262a Advanced Topics in Computer Systems Lecture 3

Today s Papers. Array Reliability. RAID Basics (Two optional papers) EECS 262a Advanced Topics in Computer Systems Lecture 3 EECS 262a Advanced Topics in Computer Systems Lecture 3 Filesystems (Con t) September 10 th, 2012 John Kubiatowicz and Anthony D. Joseph Electrical Engineering and Computer Sciences University of California,

More information

Reducing Disk Latency through Replication

Reducing Disk Latency through Replication Gordon B. Bell Morris Marden Abstract Today s disks are inexpensive and have a large amount of capacity. As a result, most disks have a significant amount of excess capacity. At the same time, the performance

More information

QoS Featured Wireless Virtualization based on Hardware

QoS Featured Wireless Virtualization based on Hardware QoS Featured Wireless Virtualization based on 802.11 Hardware Cong Wang and Michael Zink Department of Electrical and Computer Engineering University of Massachusetts, Amherst, MA 01003 {cwang, zink} @ecs.umass.edu

More information

arxiv: v3 [cs.ni] 3 May 2017

arxiv: v3 [cs.ni] 3 May 2017 Modeling Request Patterns in VoD Services with Recommendation Systems Samarth Gupta and Sharayu Moharir arxiv:1609.02391v3 [cs.ni] 3 May 2017 Department of Electrical Engineering, Indian Institute of Technology

More information

DVS-100P Configuration Guide

DVS-100P Configuration Guide DVS-100P Configuration Guide Contents Web UI Overview... 2 Creating a live channel... 2 Applying changes... 4 Live channel list overview... 4 Creating a VOD channel... 5 Stats... 6 Creating and managing

More information

Adaptive Video Acceleration. White Paper. 1 P a g e

Adaptive Video Acceleration. White Paper. 1 P a g e Adaptive Video Acceleration White Paper 1 P a g e Version 1.0 Veronique Phan Dir. Technical Sales July 16 th 2014 2 P a g e 1. Preface Giraffic is the enabler of Next Generation Internet TV broadcast technology

More information

SONAS Best Practices and options for CIFS Scalability

SONAS Best Practices and options for CIFS Scalability COMMON INTERNET FILE SYSTEM (CIFS) FILE SERVING...2 MAXIMUM NUMBER OF ACTIVE CONCURRENT CIFS CONNECTIONS...2 SONAS SYSTEM CONFIGURATION...4 SONAS Best Practices and options for CIFS Scalability A guide

More information

Migration Based Page Caching Algorithm for a Hybrid Main Memory of DRAM and PRAM

Migration Based Page Caching Algorithm for a Hybrid Main Memory of DRAM and PRAM Migration Based Page Caching Algorithm for a Hybrid Main Memory of DRAM and PRAM Hyunchul Seok Daejeon, Korea hcseok@core.kaist.ac.kr Youngwoo Park Daejeon, Korea ywpark@core.kaist.ac.kr Kyu Ho Park Deajeon,

More information

CSE 4/60373: Multimedia Systems

CSE 4/60373: Multimedia Systems CSE 4/60373: Multimedia Systems Outline for today 32: Y.-F. Chen, Y. Huang, R. Jana, H. Jiang, M. Rabinovich, J. Rahe, B. Wei, and Z. Xiao. Towards Capacity and Profit Optimization of Video-on-Demand Services

More information

HP AutoRAID (Lecture 5, cs262a)

HP AutoRAID (Lecture 5, cs262a) HP AutoRAID (Lecture 5, cs262a) Ion Stoica, UC Berkeley September 13, 2016 (based on presentation from John Kubiatowicz, UC Berkeley) Array Reliability Reliability of N disks = Reliability of 1 Disk N

More information

HTRC Data API Performance Study

HTRC Data API Performance Study HTRC Data API Performance Study Yiming Sun, Beth Plale, Jiaan Zeng Amazon Indiana University Bloomington {plale, jiaazeng}@cs.indiana.edu Abstract HathiTrust Research Center (HTRC) allows users to access

More information

A Simulation: Improving Throughput and Reducing PCI Bus Traffic by. Caching Server Requests using a Network Processor with Memory

A Simulation: Improving Throughput and Reducing PCI Bus Traffic by. Caching Server Requests using a Network Processor with Memory Shawn Koch Mark Doughty ELEC 525 4/23/02 A Simulation: Improving Throughput and Reducing PCI Bus Traffic by Caching Server Requests using a Network Processor with Memory 1 Motivation and Concept The goal

More information

UNIVERSITY OF OSLO Department of Informatics. RBC: A Relevance Based Caching Algorithm for P2P Access Patterns. Master thesis. Kristoffer Høegh Mysen

UNIVERSITY OF OSLO Department of Informatics. RBC: A Relevance Based Caching Algorithm for P2P Access Patterns. Master thesis. Kristoffer Høegh Mysen UNIVERSITY OF OSLO Department of Informatics RBC: A Relevance Based Caching Algorithm for P2P Access Patterns Master thesis Kristoffer Høegh Mysen 9th July 2007 3 Preface This Master Thesis is written

More information

Performance and Quality-of-Service Analysis of a Live P2P Video Multicast Session on the Internet

Performance and Quality-of-Service Analysis of a Live P2P Video Multicast Session on the Internet Performance and Quality-of-Service Analysis of a Live P2P Video Multicast Session on the Internet Sachin Agarwal 1, Jatinder Pal Singh 1, Aditya Mavlankar 2, Pierpaolo Bacchichet 2, and Bernd Girod 2 1

More information

Measuring Over-the-Top Video Quality

Measuring Over-the-Top Video Quality Contents Executive Summary... 1 Overview... 2 Progressive Video Primer: The Layers... 2 Adaptive Video Primer: The Layers... 3 Measuring the Stall: A TCP Primer... 4 Conclusion... 5 Questions to Ask of

More information

What is Network Acceleration?

What is Network Acceleration? What is Network Acceleration? How do WAN Optimization, Network Acceleration, and Protocol Streamlining work, and what can they do for your network? Contents Introduction Availability Improvement Data Reduction

More information

Supra-linear Packet Processing Performance with Intel Multi-core Processors

Supra-linear Packet Processing Performance with Intel Multi-core Processors White Paper Dual-Core Intel Xeon Processor LV 2.0 GHz Communications and Networking Applications Supra-linear Packet Processing Performance with Intel Multi-core Processors 1 Executive Summary Advances

More information

Spotify Behind the Scenes

Spotify Behind the Scenes A Eulogy to P2P (?) Spotify gkreitz@spotify.com KTH, May 7 2014 What is Spotify? Lightweight on-demand streaming Large catalogue, over 20 million tracks 1 Available in 28 countries. Over 24 million active

More information