An Empirical Study of Flash Crowd Dynamics in a P2P-based Live Video Streaming System


Bo Li, Gabriel Y. Keung, Susu Xie, Fangming Liu, Ye Sun and Hao Yin
Hong Kong University of Science and Technology; Tsinghua University

(This research was supported in part by grants from RGC under contracts 615608, 616207 and 616406, by a grant from NSFC/RGC under contract N_HKUST603/07, and by a grant from HKUST under contract RPC06/07.EG27.)

Abstract: Peer-to-Peer (P2P) based live video streaming has emerged as a promising solution for Internet video streaming applications, as is partly evident from the commercial deployment of several large-scale P2P streaming systems. The key is to leverage the resources available at end users, which offers great potential to scale in the Internet. Flash crowd poses a unique challenge for live streaming systems: potentially hundreds of thousands of users may join the system during the initial few minutes of a live program. This adds considerable difficulty, in particular for a P2P-based system, in quickly ramping up to a scale that can provide reasonable streaming service to newly incoming peers. In this paper, we examine the system dynamics under flash crowd based on measurements obtained from the Coolstreaming system. We are particularly concerned with the impact of flash crowd and with user behavior during it. The results reveal a number of interesting observations: 1) the system can scale only up to a limit during flash crowd; 2) there is a strong correlation between the peer join rate and the number of failed (short) sessions, reflecting resource competition among newly joined peers; 3) user behavior during flash crowd is best captured by the number of retries and the impatient time.

Key Words: Video streaming, peer-to-peer technology, measurement, flash crowds

I. INTRODUCTION

Recently there has been significant deployment of Peer-to-Peer (P2P) technology for Internet live video streaming [1]. There are primarily two factors behind this. First, P2P technology requires minimal support from the infrastructure, so it is cost-effective and easy to deploy. Second, in such a system, each peer participating in a video program not only downloads the video content but also uploads it to other participants watching the same program. Consequently, such an approach has the potential to scale, as greater demand also generates more resources.

The Coolstreaming system was one of the earliest large-scale P2P live video streaming systems [2], built on a notion called data-driven: every peer periodically exchanges its data availability information with a set of partners and retrieves currently unavailable data from them. The system demonstrated an excellent self-scaling property in the global Internet [3][4]. Since then, several commercial systems have been deployed, of which PPLive is one of the largest [5]. However, the question remains whether a Peer-to-Peer based streaming technique can scale to millions of viewers, and perhaps more importantly, what the challenges and possible technical solutions are for such a system to scale.

We believe that one of the most critical features of a P2P system is its dynamics, and P2P live streaming is no exception. This is further challenged by a phenomenon called flash crowd, which captures the typical arrival behavior in a live broadcast: hundreds of thousands of users may join the system during the initial few minutes.
This adds considerable difficulty for a P2P live streaming system, which must accommodate such a large number of newly incoming peers with reasonable streaming quality and within a stringent time constraint. This is unique to live streaming applications and is less critical in other types of P2P applications such as file swapping, in which either the peer joining process is stretched over a significantly longer period of time or the application itself can tolerate much longer delays.

Our focus in this study is on the system dynamics during flash crowd and their impact on streaming performance. To the best of our knowledge, there has been little study of system dynamics during flash crowd, though several works have addressed some aspects of it. Sripanidkulchai et al. examined a large set of traces and showed that the available resources could potentially support large-scale streaming systems [6]. Hei et al. analyzed various measurements from the PPLive system [7]; one result evaluated flash crowd data from the annual Spring Festival Gala on Chinese New Year, presenting the dynamics of the user population. A more relevant work explored topological properties of a practical P2P streaming system (UUSee) under large flash crowds [8]. Its follow-up work concluded that the clustering of peers within each ISP and the reciprocal behavior among peers play important roles in sustaining acceptable streaming performance under flash crowds [9]. Flash crowd has also been examined in other P2P-based applications. De Veciana et al. proposed a model based on age-dependent branching processes to examine the dynamics of a BitTorrent-based file sharing system during flash crowd [10]; the analysis demonstrated the key elements in capturing the ability of such systems to handle flash crowd. Another model provided a theoretical evaluation of the scalability of a distributed randomized P2P protocol that serves objects from servers suffering a flash crowd [11].

The focus of this paper differs from all prior works in that we examine the detailed dynamics during flash crowd. We are particularly interested in user behavior and its impact on the system scale during flash crowd. We believe this can provide new insights into the overall behavior of P2P-based live streaming systems. Specifically, we leverage the Coolstreaming system we have implemented, for which we designed a logging system that allowed us to collect a large set of traces from real live streaming programs. Our major contributions in this study are: 1) we demonstrate that the system can scale only up to a limit during flash crowd; 2) we show that there is a strong correlation between the peer join rate and the number of failed (short) sessions, reflecting resource competition among newly joined peers; 3) we show that user behavior during flash crowd is best captured by the number of retries and the impatient time of peers.

The rest of this paper is organized as follows. Section II describes the basic architecture of the Coolstreaming system and discusses the system dynamics. Section III presents results from a live event broadcast based on the extensive traces we have obtained, and examines flash crowd behavior and its implications for system performance. Section IV concludes the paper with highlights on further research.

II. SYSTEM ARCHITECTURE AND DYNAMICS

The Coolstreaming system was one of the earliest systems to utilize a gossip protocol for constructing a random overlay for live streaming applications. The system demonstrated a superior scaling property in the public Internet while remaining robust against high churn. The central design is based on the data-driven notion, in which every peer node periodically exchanges its data availability information with a set of partners (i.e., gossip) to retrieve unavailable data, while also supplying available data to others. There are two basic modules in the system: 1) the membership manager, which constructs and maintains a partial view of the overlay based on peer exchange information; and 2) the stream manager, which uses a hybrid pull and push mechanism for video content retrieval. The video stream is organized into multiple sub-streams, and each peer can subscribe to sub-streams from different peers. Each sub-stream is further divided into blocks of equal size, where each block is assigned a sequence number representing its playback order. A Buffer Map (BM) represents the availability of the latest blocks of the different sub-streams in each peer's buffer. This information is exchanged periodically among partners in order to determine which sub-stream(s) to subscribe to.

The system dynamics are primarily caused by churn, i.e., the joining and departure of peers, which is by far the main challenge in the design of a P2P-based live streaming system. We next describe the system dynamics using the peer joining process, to illustrate the key effects that will help explain the results presented later. The peer departure process has a similar effect, in the sense that the children of a departed peer have to locate a set of new peers from which to obtain the live video content. The sub-stream design helps to alleviate the impact of churn, in that an affected peer might only need to relocate a single sub-stream, rather than the whole stream, to another peer.
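To make the data-driven exchange concrete, the following is a minimal Python sketch of how a peer might pick a parent for each sub-stream from the Buffer Maps advertised by its partners. The types, names, and selection rule (the furthest-ahead partner wins) are illustrative assumptions, not the actual Coolstreaming implementation.

    # Hypothetical sketch of sub-stream parent selection from Buffer Maps.
    # Not the actual Coolstreaming code; the "furthest ahead" rule is an
    # illustrative stand-in for the real subscription logic.

    NUM_SUBSTREAMS = 6  # the deployment in Section III uses six sub-streams

    class BufferMap:
        """Availability of the latest blocks of each sub-stream in a peer's buffer."""
        def __init__(self, latest_blocks):
            # latest_blocks[i] = highest block sequence number held for sub-stream i
            self.latest_blocks = list(latest_blocks)

    def choose_parents(partner_bms):
        """For every sub-stream, pick the partner whose buffer is furthest ahead.

        partner_bms: dict partner_id -> BufferMap
        returns:     dict sub-stream index -> chosen partner_id
        """
        return {s: max(partner_bms, key=lambda p: partner_bms[p].latest_blocks[s])
                for s in range(NUM_SUBSTREAMS)}

    # Example: three partners with different progress on the six sub-streams.
    bms = {"peerA": BufferMap([120, 118, 121, 119, 120, 117]),
           "peerB": BufferMap([115, 122, 116, 123, 118, 121]),
           "peerC": BufferMap([119, 119, 119, 119, 119, 119])}
    print(choose_parents(bms))  # {0: 'peerA', 1: 'peerB', 2: 'peerA', ...}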
A newly joined node first contacts a boot-strap node for an initial list of nodes with which it may establish partnerships. Through the exchange of BM information, the newly joined node obtains video availability information from a set of randomly selected nodes on the list. Specifically, the newly joined node starts to subscribe to (pull) sub-stream blocks, possibly from different peers; subsequent blocks are then pushed to it. This process, however, is complicated in a live streaming system in that a selected parent node might not satisfy the streaming requirements of the newly joined node. Two scenarios cause system dynamics during peer joining: either more capable peer nodes exist, or the selected parent node has insufficient capacity to provide sustainable streaming. This can be detected by monitoring the set of video blocks received by the newly joined node against the set of video blocks available at its partner nodes. A peer node can then dynamically switch to another peer node for content retrieval in order to satisfy its playback requirement. Viewed at the micro level, this dynamic adjustment of partnerships among peers allows the overlay topology to self-stabilize.
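The switching decision can be sketched in the same vein. The margin-based test below is a hypothetical rendering of "monitoring the set of video blocks received against the set available at partner nodes"; the threshold and names are ours, not the system's.

    # Hypothetical parent-switching rule. A peer checks, per sub-stream, how
    # far its current parent's advertised blocks stay ahead of its own
    # playback point, and re-subscribes elsewhere when the margin is too
    # small to sustain playback. Threshold and names are illustrative.

    SAFETY_MARGIN = 10  # blocks; an assumed threshold

    def monitor_and_switch(parents, partner_bms, playhead):
        """parents:     dict sub-stream index -> current parent id
        partner_bms: dict partner id -> list of latest block numbers per sub-stream
        playhead:    next block sequence number the player will consume
        Returns the parents mapping, updated in place."""
        for s, parent in parents.items():
            if partner_bms[parent][s] - playhead < SAFETY_MARGIN:
                # Current parent cannot sustain this sub-stream; switch to the
                # partner that is furthest ahead on it, if one qualifies.
                best = max(partner_bms, key=lambda p: partner_bms[p][s])
                if partner_bms[best][s] - playhead >= SAFETY_MARGIN:
                    parents[s] = best
        return parents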
III. RESULTS AND DISCUSSIONS

In this section we present and study results obtained from a recent live event broadcast on Japan Yahoo using the Coolstreaming system. A sports channel broadcast a live baseball game starting at 18:00, and we recorded traces from 00:00 to 23:59 on that day. We focus on the short-session duration distribution, user retry behavior, and system scalability during flash crowd.

A. Log and Data Collection

There is a log server in the system. Each user periodically reports its activities, including events and internal status, to the log server; users and the log server communicate over HTTP. Each video program is streamed at a bit rate of 768 Kbps, divided into six sub-streams. There are three types of reports from each peer node: 1) a QoS report records the perceived quality of service, e.g., the percentage of video data missing the playback deadline; 2) a traffic report contains information such as the amount of video content downloaded and uploaded; 3) since peers can change their partners frequently, reporting each such change individually might place an unnecessarily heavy load on the log server, so instead a single compacted partner report containing all such activities is sent periodically from each node.

A session captures a user's activity from the time the user joins the system until it leaves. In each session, a user reports four events to the log server: 1) join event, reported when the client joins the system and connects to the boot-strap node; 2) start subscription event, reported as soon as the client establishes partnerships with other clients and starts receiving (video) data from its parent node(s); 3) media player ready event, reported when the client has received sufficient data to start playing; 4) leave event, reported when the user leaves the system. Session duration is the time interval between a user joining and leaving the system.

In order to cope with the service disruptions often observed during flash crowd, the Coolstreaming system enables each peer (the client side) to re-connect (retry) to the program. Two measurements best capture the effect of flash crowd: 1) the number of short sessions, which accounts for peers that either fail to start viewing a program or whose service is disrupted during flash crowd; and 2) the number of retries by newly joined peers. Finally, we also examine the effect of flash crowd on system scalability.

B. Short sessions under flash crowd

Since we are primarily concerned with user behavior during flash crowd, we filter out all the normal sessions (i.e., users who successfully join the program) from the log data and focus instead on short sessions, with durations shorter than 120s and 240s respectively. Fig. 1 plots the number of short sessions started against time, at a granularity of 1 minute, between 17:00 and 23:00. The result shows that the number of short sessions increases significantly at around 18:00, when the flash crowd occurs with a large number of peers joining the live broadcast program. We further plot the number of short sessions started (session duration shorter than 120s) against the peer join rate in Fig. 2; the result shows a strong correlation between them.

Fig. 1. Short session distribution (session duration shorter than 120s and 240s) against time.

Fig. 2. Correlation between the number of short sessions and the join rate.

Several factors can cause a short session: a client connection fault, insufficient uploading capacity at one or more parent nodes, poor sustainable bandwidth at the beginning of the stream subscription, or an excessively long wait (timeout) to accumulate sufficient video content in the playback buffer. The fundamental reason behind these is that newly joined nodes do not yet have adequate content to share with other peers, so they can only consume the uploading capacity of existing peers in the system. This results in heavy resource competition among peers, and possibly frequent switching of parent nodes (see the discussion of system dynamics in Section II), which in turn leads to service disruptions and short sessions.
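As a rough illustration of the filtering above, the sketch below counts short sessions per minute from (join, leave) timestamp pairs and computes their Pearson correlation with the join rate, in the spirit of Fig. 1 and Fig. 2. The log schema here is simplified and hypothetical.

    # Illustrative short-session filtering (cf. Fig. 1 and Fig. 2). Sessions
    # are (join_ts, leave_ts) pairs in seconds; the real log is richer.

    from collections import Counter

    def short_sessions_per_minute(sessions, threshold_s):
        counts = Counter()
        for join_ts, leave_ts in sessions:
            if leave_ts - join_ts < threshold_s:
                counts[join_ts // 60] += 1  # bucket by the minute the session started
        return counts

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient, to mirror Fig. 2."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        vx = sum((x - mx) ** 2 for x in xs)
        vy = sum((y - my) ** 2 for y in ys)
        return cov / (vx * vy) ** 0.5

    sessions = [(0, 60), (30, 500), (65, 150), (70, 100), (130, 190)]
    short = short_sessions_per_minute(sessions, 120)  # Counter({1: 2, 0: 1, 2: 1})
    joins = Counter(j // 60 for j, _ in sessions)     # join rate per minute
    minutes = sorted(set(short) | set(joins))
    print(pearson([short[m] for m in minutes], [joins[m] for m in minutes]))  # 0.5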
We next examine user playback quality at the beginning of a subscription. If video playback continuity cannot be guaranteed, either reconnections occur or users opt to leave the system. We use a term called impatient time to capture this effect, which can be measured from sessions with inadequate video content. Specifically, we compare the total downloaded bytes of a session with the total bytes expected for playback over the session duration, and extract the sessions with insufficient downloaded bytes. We plot the resulting session duration distribution, which approximates the user impatient time distribution, in Fig. 3. The average user impatient time is between 60s and 120s. Given that some users connect more than once (see the discussion of user retry behavior in the next subsection), the total number of sessions shown exceeds the number of users concurrently in the system.

Fig. 3. Session duration (approximating user impatient time).
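The insufficient-download test can be written down directly from the figures above: at 768 Kbps, a session of duration d seconds needs roughly 96,000 x d bytes for full playback. The completeness cutoff below is an assumed parameter, not a value from the paper.

    # The expected-bytes comparison behind the impatient-time proxy. The
    # 768 Kbps rate is from Section III-A; the 0.9 completeness cutoff is
    # an assumption for illustration.

    STREAM_RATE_BPS = 768_000  # six sub-streams combined

    def is_impatient(duration_s, downloaded_bytes, completeness=0.9):
        expected = STREAM_RATE_BPS / 8 * duration_s  # bytes needed for full playback
        return downloaded_bytes < completeness * expected

    # A 90 s session needs about 8.64 MB for full playback, so one that
    # fetched only 2 MB is flagged:
    print(is_impatient(90, 2_000_000))  # True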

C. User retry behavior under flash crowd

Recall that, to cope with the service disruptions often observed during flash crowd, the Coolstreaming system enables each peer (the client side) to re-connect (retry) to the program. From the results above, we observe that there can be a large number of short sessions during flash crowd, whose peers reconnect to other peers and re-join the program (the overlay). From the perspective of the service perceived by users, the video playback of those peers can be quickly restored after a disruption. But from the system's point of view, this amplifies the join rate, an effect that needs to be carefully examined. We define the retry rate as the number of peers per unit time that opt to re-join the overlay with the same IP address and port number. Fig. 4 plots the retry rate against time at four different levels, defined as follows: "retry rate <= 2 mins" means the rate of peers trying to re-enter the system within 2 minutes of leaving; "retry rate <= 6 mins" means the rate of peers trying to re-enter within 6 minutes; and so on. We also plot the total retry rate in Fig. 4. The figure shows that users may have tried many times in order to successfully start a video session, especially during flash crowd. Fig. 5 further plots, against time, the rate of peers trying to re-enter the system within 30 minutes and beyond 30 minutes. The retry rate within 30 minutes corresponds to the joining behavior during flash crowd, while the retry rate beyond 30 minutes is related to the large number of leave events at the end of the live program. This again shows that a flash crowd has a significant impact on the initial joining phase of a P2P streaming system.

Fig. 4. The retry rates against system time, with 4 different levels of retry rates.

Fig. 5. The retry rate against system time.
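A sketch of how these retry levels might be computed from the event log: a join from an (IP, port) pair previously seen leaving counts as a retry, bucketed by the gap since that leave. The event-tuple layout and function names are assumptions, not the actual log format.

    # Illustrative retry classification (cf. Fig. 4 and Fig. 5).

    from collections import defaultdict

    WINDOWS_S = [120, 360, 1800]  # the <=2 min, <=6 min, <=30 min levels

    def classify_retries(events):
        """events: chronological (ts, kind, ip, port) tuples, kind in {'join','leave'}."""
        last_leave = {}
        counts = defaultdict(int)
        for ts, kind, ip, port in events:
            key = (ip, port)
            if kind == "leave":
                last_leave[key] = ts
            elif kind == "join" and key in last_leave:
                gap = ts - last_leave.pop(key)  # consume the leave: count once
                for w in WINDOWS_S:
                    if gap <= w:
                        counts[w] += 1
                        break
                else:
                    counts["beyond"] += 1  # >30 min: end-of-program re-joins (Fig. 5)
        return counts

    events = [(0, "join", "1.2.3.4", 5000),
              (100, "leave", "1.2.3.4", 5000),
              (160, "join", "1.2.3.4", 5000)]  # re-join 60 s after leaving
    print(dict(classify_retries(events)))      # {120: 1}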
D. System scalability under flash crowd

Finally, we examine the scalability of the Coolstreaming system during flash crowd. Recall that the logging data records a media player ready event, reporting that a user has received sufficient data to start playing the program. We use this event as an indication that the user has successfully joined the system; in other words, system scalability here measures the total number of users successfully joining the system. Fig. 6 plots the media player ready rate and the join rate against time, showing a strong correlation between the two. This is not surprising, in that an increase (or decrease) of the join rate effectively causes more (or fewer) users to join the system. More interesting results are observed in Fig. 7 and Fig. 8, which plot the cumulative media player ready rate and the cumulative join rate against time. Fig. 8 shows that the slope of the cumulative media player ready curve is nearly constant, at about 634 peers per minute after linear fitting, under a high user join rate of about 946 peers per minute. The gap between these two rates illustrates the catch-up process that users go through in order to obtain sufficient video content after the initial join event. It can further be observed from Fig. 7 that the media player ready rate picks up when the flash crowd occurs and increases steadily after that; however, the ratio between the two rates does not exceed 0.67 in Fig. 8. This implies that the P2P streaming system can accommodate a sudden surge of user arrivals (flash crowd), but only up to some maximum limit.

Fig. 6. Media player ready rate and join rate against time.

Fig. 7. Cumulative media player ready rate and join rate against time.

Fig. 8. Cumulative media player ready rate and join rate against time, with linear fitting.

Many factors can affect system scalability, such as network conditions, peer uploading capacity, access bandwidth, and the overlay construction mechanism. Perhaps the most obvious design factor related to the media player ready time is the amount of initial buffering required, which determines how long each peer must wait to accumulate sufficient video content before playing the video program. Decreasing the required initial buffering can reduce the media player ready time and thus increase the measured system scale; the trade-off is a potential negative impact on the continuity of video playback. A relevant study currently under investigation is to examine the relationship between the initial video program start-up delay and the system scale.
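For concreteness, the linear fitting in Fig. 8 amounts to a first-degree polynomial fit of the cumulative counts against time. The sketch below reproduces the reported ratio on synthetic data: only the 634 and 946 peers/minute slopes come from the measurements, the series themselves are fabricated purely to show the fit.

    # Fig. 8 style linear fit on synthetic data (illustrative only).

    import numpy as np

    minutes = np.arange(60.0)                            # one hour of flash crowd
    cum_joins = 946 * minutes                            # cumulative join count
    cum_ready = 634 * minutes + np.random.normal(0, 200, size=60)

    ready_slope, _ = np.polyfit(minutes, cum_ready, 1)   # peers per minute
    join_slope, _ = np.polyfit(minutes, cum_joins, 1)
    print(f"ready/join slope ratio: {ready_slope / join_slope:.2f}")  # ~0.67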

IV. CONCLUSION

In this paper, we have examined the system dynamics during flash crowd in a P2P-based live streaming system, using a set of measurement data obtained from a live event broadcast over the Coolstreaming system. We derive a number of interesting observations: 1) the system can scale only up to a limit during flash crowd; 2) there is a strong correlation between the peer join rate and the number of failed (short) sessions, reflecting resource competition among newly joined peers; 3) user behavior during flash crowd is best captured by the number of retries and the impatient time.

There are several avenues for further study: 1) does there exist a model that can approximate the user join process during flash crowd? 2) what is the correlation between the initial size of the system (the number of peers already in it) and the user join rate it can tolerate? Intuitively, a larger initial system can tolerate a higher join rate. 3) Commercial P2P live streaming systems widely use sets of self-deployed servers, or leverage CDN services, to help alleviate the effects of flash crowd. The Coolstreaming deployment on Japan Yahoo, for example, used 24 servers in different regions, which allowed users to join a program within seconds, while the PPLive system uses CDN services. It would be interesting to derive the relationship between the number of servers needed, and further how they should be geographically distributed, and the expected number of viewers along with their joining behavior.

REFERENCES

[1] J.-C. Liu, S. Rao, B. Li and H. Zhang, "Opportunities and Challenges of Peer-to-Peer Internet Video Broadcast," Proceedings of the IEEE, Special Issue on Recent Advances in Distributed Multimedia Communications, 96(1):11-24, January 2008 (invited).
[2] X. Zhang, J. Liu, B. Li and T.-S. P. Yum, "DONet/Coolstreaming: A Data-driven Overlay Network for Live Media Streaming," in Proc. of IEEE INFOCOM, March 2005.
[3] S. Xie, B. Li, G. Y. Keung and X. Zhang, "Coolstreaming: Design, Theory and Practice," IEEE Transactions on Multimedia, Special Issue on Content Storage and Delivery in Peer-to-Peer Networks, 9(8):1661-1671, December 2007.
[4] B. Li, S. Xie, Y. Qu, G. Y. Keung, C. Lin, J. Liu and X. Zhang, "Inside the New Coolstreaming: Principles, Measurements and Performance Implications," in Proc. of IEEE INFOCOM, April 2008.
[5] PPLive, http://www.pplive.com
[6] K. Sripanidkulchai, A. Ganjam, B. Maggs and H. Zhang, "The Feasibility of Supporting Large-Scale Live Streaming Applications with Dynamic Application End-Points," in Proc. of ACM SIGCOMM, September 2004.
[7] X. Hei, C. Liang, J. Liang, Y. Liu and K. W. Ross, "A Measurement Study of a Large-Scale P2P IPTV System," IEEE Transactions on Multimedia, Special Issue on Content Storage and Delivery in Peer-to-Peer Networks, 9(8):1672-1687, December 2007.
[8] C. Wu, B. Li and S. Zhao, "Magellan: Charting Large-Scale Peer-to-Peer Live Streaming Topologies," in Proc. of the 27th International Conference on Distributed Computing Systems (ICDCS 2007), June 2007.
[9] C. Wu, B. Li and S. Zhao, "Characterizing Peer-to-Peer Streaming Flows," IEEE Journal on Selected Areas in Communications, Special Issue on Advances in Peer-to-Peer Streaming Systems, 25(9):1612-1626, December 2007.
[10] X. Yang and G. de Veciana, "Service Capacity of Peer-to-Peer Networks," in Proc. of IEEE INFOCOM, March 2004.
[11] D. Rubenstein and S. Sahu, "Can Unstructured P2P Protocols Survive Flash Crowds?," IEEE/ACM Transactions on Networking, 13(3):501-512, June 2005.