
Video Conferencing with Content Centric Networking

Kai Zhao 1,2, Xueqing Yang 1, Xinming Ma 2
1. Information Engineering College, North China University of Water Resources and Electric Power, Zhengzhou, China
2. Agricultural College, Henan Agricultural University, Zhengzhou, China

Abstract: A caching scheme for video conferencing with Content Centric Networking (CCN) is proposed. The scheme is one of our efforts to explore application designs on top of Content Centric Networking. Unlike existing methods such as ACT or VoCCN, video conferencing with Content Centric Networking takes advantage of in-network caching while accounting for the user's delay demand. The proposed scheme accounts for the partners' different distances, and hence different transmission times, and uses the edge node as an intermediate cache for the next nearest partner node. Once the cached content has been received by the next node, the local node deletes it as soon as possible. The numerical results reveal that the scheme achieves better cache efficiency at the cost of a small additional delay.

Keywords: content centric networking; video conferencing; intermediate node caching

1. Introduction

The Internet has evolved from focusing on the communicating entities to concentrating on content distribution. Especially in recent years, with the development of streaming media such as audio and video, content has come to play the central role in the Internet. In this context, CCN [1] has emerged as a promising candidate architecture for the Future Internet, better suited to today's and tomorrow's traffic characteristics. In this architecture, caching becomes a universal functionality available at each router. These distributed in-router caches make up the in-network cache system, which can improve network performance remarkably by increasing the cache hit rate and reducing upstream traffic.

Users' real-time requirements for information retrieval are becoming stronger and stronger; this is why media streaming techniques appeared, and the growth of access network bandwidth also makes them practical. Media streaming techniques are everywhere on the Internet today: online video, IPTV and so on.

A Voice-over-CCN prototype is proposed in paper [2]; the result is functionally and performance-wise equivalent to Voice-over-IP but substantially simpler in architecture, implementation and configuration. Since CCN secures content rather than the connections it travels over, VoCCN does not require delegation of either trust or keys to proxies or other network intermediaries and is thus far more secure than VoIP. ACT, proposed in paper [3], serves as a good example of how to design applications over NDN and is also a useful tool for research collaboration as well as personal communication. ACT is a completely distributed audio conference tool using a named-data approach, demonstrating that NDN offers more flexibility in the design space and results in an intrinsically more scalable and secure solution. A design for CCN live streaming is proposed in paper [4]: it splits the video into a sequence of segments and generates a playlist file, as HTTP live streaming does. A user who wants to play the video stream downloads the small playlist file and plays the video by fetching the video segments one by one; all file delivery uses the CCN protocol. To the best of our knowledge, there is no scheme that considers video conferencing with CCN, because video conferencing is an application with a real-time demand and low shareability.
Moreover, a video conferencing model built directly on CCN needs to maintain link state persistently, and every conference partner has to fetch the content from the source, which produces a lot of resource waste and leaks privacy. In this paper, we propose a scheme for video conferencing with CCN based on coordinated caching between the edge nodes according to their distance from the source. The scheme saves bandwidth and protects the privacy of the conference at the cost of a small additional delay.

The remainder of this paper is organized as follows. In Section II, we introduce the background of the CCN system model. Section III presents the details of the video conferencing scheme with CCN. Section IV shows simulation results in terms of cache hit ratio and access delay. Section V discusses the scheme. Section VI concludes this paper with future work.

2. Reference CCN System Model

In this section we briefly introduce the architecture of CCN, with a focus on its in-network caching property. CCN is a novel network architecture from PARC. Different from current Internet practice, CCN is based on the key concept of content: it makes the content itself, rather than the location of the content, the central role. When a consumer requests a given content, the request is not addressed to a particular site in the network, so the content can be found anywhere in the network and can be stored in several locations at the same time. The communication mechanism of CCN, which is requester-driven and data-centric, uses two distinct types of packets: Interest packets and Data packets (depicted in Figure 1).

Figure 1. Interest and Data packet structure. An Interest packet carries a Content Name, a Selector (order preference, publisher filter, ...) and a Nonce; a Data packet carries a Content Name, MetaInfo (content type, freshness period, ...), the Data itself and a Signature (signature type, key locator, ...).

In Figure 2 we illustrate the communication model of CCN, as described in [1]. Users who want to obtain content send an Interest packet carrying the name of the desired content into the network. Each router has a content cache called the Content Store (CS), which keeps content that has passed through the router. Upon receiving an Interest, a router delivers the content directly from its cache if it holds a copy. Otherwise, it forwards the Interest to the next router, which is determined by a routing table called the Forwarding Information Base (FIB). The FIB stores, per content name, the next router on the path toward the corresponding content source. The Interest is forwarded along the path toward the source until it reaches either a router that has the content in its cache or the content source itself, and then the content follows the reverse path traced by the Pending Interest Table (PIT), in which each entry stores the content name the user requested and the set of interfaces the Interests came from. The PIT avoids repeated delivery of the same Interest and of the corresponding content: when multiple Interests request the same content, only the first Interest is forwarded toward the content source. When the content returns, the router checks the PIT entry and forwards the content to the interfaces listed in that entry.

Figure 2. Forwarding process in a CCN node.

In-network caching can significantly improve the efficiency of network resource utilization. The CS used to cache content is a buffer memory in the CCN router. It allows the CCN router to return content without forwarding requests to the content sources when other consumers are interested in the same content. When content matching an Interest in the PIT arrives, it is forwarded on the interested faces and is also cached in the CS according to the caching policy. These policies comprise the caching decision and the replacement policy for the content stored in the cache.
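To make the forwarding process of Figure 2 concrete, the following is a minimal Python sketch of a CCN node's Interest and Data handling (CS lookup, PIT aggregation, FIB forwarding). The class, method and face names are our own illustration; they are not taken from the paper or from any particular CCN implementation.

# Minimal, simplified sketch of the CCN forwarding process of Figure 2.
# All names are illustrative; real CCN/NDN forwarders are far more involved.

class CCNNode:
    def __init__(self, name):
        self.name = name
        self.cs = {}    # Content Store: content name -> data
        self.pit = {}   # Pending Interest Table: content name -> set of incoming faces
        self.fib = {}   # Forwarding Information Base: name prefix -> next-hop face

    def on_interest(self, content_name, in_face):
        # 1. CS lookup: if a copy is cached, answer directly.
        if content_name in self.cs:
            in_face.send_data(content_name, self.cs[content_name])
            return
        # 2. PIT lookup: if the same Interest is already pending,
        #    only record the extra incoming face (no re-forwarding).
        if content_name in self.pit:
            self.pit[content_name].add(in_face)
            return
        # 3. FIB lookup: forward toward the content source
        #    (longest-prefix match is simplified here to a one-level prefix lookup).
        next_hop = self.fib.get(self._prefix(content_name))
        if next_hop is None:
            return  # no route: drop (or NACK) the Interest
        self.pit[content_name] = {in_face}
        next_hop.send_interest(content_name)

    def on_data(self, content_name, data):
        faces = self.pit.pop(content_name, None)
        if faces is None:
            return  # unsolicited Data: drop
        self.cs[content_name] = data          # cache according to the caching policy
        for face in faces:                    # forward along the reverse path(s)
            face.send_data(content_name, data)

    @staticmethod
    def _prefix(content_name):
        return content_name.rsplit("/", 1)[0]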
3. Video Conferencing with CCN

3.1 The basic idea

In the CCN model, the user who first requests a piece of content has to get it from the source. When the Data packet returns to the user, it is cached by the nodes along the reverse path. Video conferencing, however, is typically used by large companies or government organizations and has low shareability, so this default caching yields poor cache efficiency. At the same time, a conference may persist for hours; if we maintain a link for each partner and cache the content at every node it passes, we waste resources, the cached content cannot be shared by most users, and we increase the risk of attack from anyone interested in the conference (Figure 3(a)).

Figure 3. The original caching behaviour (a) and our basic idea (b).

For a concrete user who holds video conferences regularly, the partners are usually located in fixed buildings or cities and attach to fixed nodes. Consequently, the company or government organization effectively operates over a fixed topology. If we analyze the distances between the partners' nodes and the source node, we can construct a better caching path from the node locations, as depicted in Figure 3(b).

3.2 The caching scheme for video conferencing

3.2.1 The caching model

A big company or government organization usually leases links from an ISP to connect departments in different places into a single network. Consequently, the characteristics of the leased network, such as the topology and the delays between the nodes that connect the departments, are known to the network manager. In the proposed scheme we therefore assume the delays between the nodes as prior information, as depicted in Figure 4. We focus on the network of a single administrative domain (e.g., an intra-domain autonomous system), where a set of routers with both routing and storage capability serve content requests originated by end users. The origin server stores all contents; therefore, requests for any content object can always be satisfied by the server. Note that the server is an abstraction of multiple origin servers (in practice, there are multiple origin servers hosting different contents). Now consider the simple network model for video conferencing with CCN shown in Figure 4, in which the delays between the nodes are known. We denote by T the average latency of serving requests from the source. If the partner attached to node 5 requests the content from the server, it experiences a delay of (ts1 + t31 + t35); if node 3 has already cached a replica, the request only takes time t35, so the user at node 5 is spared the additional (ts1 + t31) wait.
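The following tiny numerical sketch illustrates the delay saving described above. The delay values are hypothetical assumptions; the paper only requires that they are known in advance for the leased network of Figure 4.

# Hypothetical delays (ms) for the topology of Figure 4.
ts1 = 20.0   # server -> node 1 (assumed value)
t31 = 10.0   # node 1 -> node 3 (assumed value)
t35 = 5.0    # node 3 -> node 5 (assumed value)

delay_from_server = ts1 + t31 + t35   # partner at node 5 fetches from the origin server
delay_from_cache = t35                # partner at node 5 fetches a replica cached at node 3
saving = delay_from_server - delay_from_cache

print(f"from server: {delay_from_server} ms")   # 35.0 ms
print(f"from cache : {delay_from_cache} ms")    # 5.0 ms
print(f"saving     : {saving} ms")              # 30.0 ms, i.e. ts1 + t31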

Figure 4. The delays between the nodes.

3.2.2 Caching scheme

In the model of Figure 4, we first determine the delays tij between the nodes. To make use of universal caching for video conferencing with CCN, we design the caching scheme as follows; the pseudocode of the algorithm is given in Table 1. When the video conference starts, node 1, the node nearest the server, gets the video stream first and caches it until the next node, node 2, has fetched the content replica; node 1 then deletes the replica. Node 2 in turn caches the content and waits for the next Interest packet for that content. A content replica stays in a node only for a fixed time, which is set according to the distance between the nearest nodes connected to each department. We must note that if fetching the content from the next node would take much longer than fetching it from the source, the node gets the content from the source directly.

Table 1. Algorithm for caching content

Algorithm (Interest/Data name)
For every arriving packet, an Interest is denoted I; tij is the delay from node Vj to node Vi; T is the delay from the source to Vi.

Processing of an Interest packet:
 1. if tij <= T
 2.     forward the Interest packet to Vj;
 3. else
 4.     forward the Interest packet to the source;
 5. end if

Processing of a Data packet:
 6. if the lifetime of the content replica Ci has timed out
 7.     delete Ci;
 8. end if
 9. if there is no need to cache for the next nearby node
10.     forward the Data packet to the user;
11. else
12.     cache the Data packet and set the timeout value tij;
13.     forward the Data packet to the user;
14. end if

3.3 An example

In this section we walk through an example of video conferencing with CCN, depicted in Figure 5. We assume that there are three partners in the conference and that the delay between neighbouring nodes is the same. The maintainer of the conference acts as the source, and the other partners are called End Host 1 and End Host 2. Once the conference starts, both End Host 1 and End Host 2 request the video stream. The delays for End Host 1 and End Host 2 have already been computed, because the company network is fixed. Consequently, fetching a video streaming chunk takes three hops for End Host 2 and two hops for End Host 1. The intermediate nodes then cache each chunk for a while, and the chunk cached at node 3 is not deleted until End Host 2 has taken one hop to fetch it.

Figure 5. An example of video conferencing with CCN.
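As a complement to Table 1, the following is a minimal Python sketch of the caching rule, under our own simplifying assumptions (precomputed pairwise delays, a per-chunk timeout equal to the inter-node delay, and illustrative class and variable names). It is a sketch of the idea, not the authors' implementation.

import time

# Hedged sketch of the Table 1 caching algorithm. Delay values, node names and
# helper structure are illustrative assumptions, not taken from the paper.

class ConferenceCacheNode:
    def __init__(self, name, delay_to_source, delays_to_neighbours):
        self.name = name
        self.T = delay_to_source                  # delay from the source to this node
        self.tij = delays_to_neighbours           # neighbour name -> delay t_ij
        self.store = {}                           # chunk name -> (data, expiry time)

    def choose_interest_target(self, neighbour):
        # Lines 1-5: prefer the nearby node only if it is not slower than the source.
        if self.tij[neighbour] <= self.T:
            return neighbour
        return "source"

    def on_data(self, chunk_name, data, next_node_needs_it, neighbour):
        # Lines 6-8: purge expired replicas.
        now = time.time()
        expired = [c for c, (_, exp) in self.store.items() if exp <= now]
        for c in expired:
            del self.store[c]
        # Lines 9-14: cache only while the next nearby node still needs the chunk,
        # keeping it for roughly the inter-node delay t_ij, then deliver to the user.
        if next_node_needs_it:
            self.store[chunk_name] = (data, now + self.tij[neighbour])
        return data  # forwarded to the local user in all cases

# Example use with made-up delays (seconds):
node3 = ConferenceCacheNode("node3", delay_to_source=0.030, delays_to_neighbours={"node5": 0.005})
print(node3.choose_interest_target("node5"))     # "node5": fetch from the nearby cache
node3.on_data("/conf/video/chunk/42", b"...", next_node_needs_it=True, neighbour="node5")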

4. Numerical results

This section evaluates the scheme from three aspects: (1) download speed, (2) delay, and (3) load reduction.

4.1 Experimental setup

We developed our simulation on top of ccnSim [5], which is based on the OMNeT++ simulation platform. ccnSim, an open-source CCN simulator developed at ParisTech, provides the basic features of the CCN forwarding engine and caching components. In our simulation, we set the number of content chunks to 10,000; each chunk is requested once when there is no congestion or packet loss, and the chunks are numbered from 0 to 9999 in the order of the video streaming file. Using the simulator, we implemented the CCN forwarding engine and evaluated it on a random network topology generated with the Georgia Tech Internet Topology Model [6]. The link delays between any two neighbouring cache nodes are the same. In ccnSim, each client represents a group of users. We generated a different request sequence for each client in one simulation run and replayed the same request sequences to form a complete communication. We compared our scheme with HTTP live streaming (protocol details can be found in the IETF draft [7] and at [8]) and CCN live streaming [4]. The experiments are conducted on a server with an Intel Xeon E5620 CPU (2.4 GHz) and 24 GB of main memory; the CPU integrates a small cache (SRAM) between the main memory (DRAM) and the cores.

4.2 Evaluation of download speed

In this section we examine the download speed while increasing the number of partners from 1 to 10, and compare our scheme with HTTP live streaming and CCN live streaming in the same situation. The simulation results are shown in Figure 6. From the figure we can see that when only one partner joins the conference, the download speed is high and there is no packet loss or congestion. As the number of partners increases, our scheme performs better than HTTP live streaming or CCN live streaming. With HTTP live streaming, every request has to fetch the chunk from the server, which causes congestion and long paths. CCN live streaming exploits caches at the edge nodes and obtains a higher download speed. Our scheme exploits not only the edge node for caching but also the nearby node holding the same replica, so it achieves the best performance; the more partners there are, the higher the download speed becomes.

Figure 6. The relation between the number of partners and the download speed (kb/s).

4.3 Evaluation of the delay

We conduct the delay simulation in the same situation; the results are shown in Figure 7. HTTP live streaming has a lower delay when there are only a few partners, but its delay keeps increasing as more partners join the conference. The reason is that a request has to reach the server for every video chunk, so packets travel a long path, and congestion appears as the number of partners increases. In contrast, the delay of CCN live streaming and of our scheme does not change dramatically as the number of partners increases, because in both schemes the edge nodes cache the same replica for the partners. In particular, in our scheme a nearby node can fetch the replica from its nearest node along the predefined path, which further reduces the delay for the partners after the one that requested the video chunk first.

Figure 7. The relation between the number of partners and the delay (ms).

4.4 Evaluation of the load reduction

Let us check the load reduction of the three schemes in the same situation (depicted in Figure 8) by comparing the ratio of packets arriving at the server to total packets. Obviously, HTTP live streaming, whose Interest packets must all reach the server to get a chunk, has a ratio of 1 no matter how many partners join the conference. CCN live streaming has a much lower load than HTTP live streaming but a higher load than our scheme. In CCN live streaming, the first request for a chunk still has to travel to the server, whereas in our scheme a partner can get the chunk (whether or not it is the first request from that partner) from the nearby node that holds the wanted chunk. As a result, many requests are satisfied by logically nearby edge nodes, and the load on the server is reduced the most.

Figure 8. The relation between the number of partners and the load ratio (arrival packets/total packets).

5. Discussion

There are many caching policies based on content popularity in the CCN research literature, but they all target content with loose timing requirements and high shareability. For content with low shareability and a real-time demand, there is no existing caching scheme for video conferencing in CCN. For fixed, leased networks we can design an efficient caching scheme for video conferencing. In this paper, such a scheme is proposed based on the delays between nearby nodes, at the cost of a small additional delay. For a conference partner, a small extra delay has no noticeable influence on the video transmission, but it reduces the traffic between the leased networks and cuts the load on the leased links. Consequently, the proposed scheme can be adopted by big companies or government organizations.

6. Conclusion

In this paper, a caching scheme for video conferencing with CCN is proposed. Following the characteristics of the special leased networks of companies and organizations, a concrete caching path is designed for the transmission of video streaming chunks. The numerical results show that, compared with existing live streaming schemes, our scheme achieves better performance: higher download speed, lower delay and lower load. In future work, we will consider how to bring the scheme into real applications.

Acknowledgment

The research was supported by the Henan Province Education Department (No. 15A1211).

References

[1] Jacobson V, Smetters D K, Thornton J D, et al. Networking named content. In: Proc. ACM CoNEXT 2009, Rome, Italy, December 2009, pp. 117-124.
[2] Jacobson V, Smetters D K, Briggs N H, et al. VoCCN: voice-over content-centric networks. In: Proc. ACM Workshop on Re-Architecting the Internet (ReArch), 2009, pp. 1-6.
[3] Zhu Z, Wang S, Yang X, et al. ACT: Audio conference tool over named data networking. In: Proc. ACM SIGCOMM Workshop on Information-Centric Networking (ICN), 2011, pp. 66-73.
[4] Xu H, Chen Z, Chen R, et al. Live Streaming with Content Centric Networking. In: Proc. 3rd International Conference on Networking and Distributed Computing (ICNDC), 2012, pp. 1-5.
[5] Chiocchetti R, Rossi D, Rossini G. ccnSim: A highly scalable CCN simulator. In: Proc. IEEE International Conference on Communications (ICC), 2013, pp. 2309-2314.
[6] Zegura E W, Calvert K L, Bhattacharjee S. How to model an Internetwork. In: Proc. IEEE INFOCOM, San Francisco, CA, USA, March 1996, pp. 594-602.
[7] IETF draft, HTTP live streaming, http://tools.ietf.org/html/draft-pantos-http-live-streaming-08.
[8] Apple HTTP live streaming site, https://developer.apple.com/resources/http-streaming.