CPRI FrontHaul requirements continuing discussion with TSN


CPRI FrontHaul requirements continuing discussion with TSN Peter.AshwoodSmith@Huawei.com (presenter) Tao.Wan@Huawei.com (simulations) Nov 2014 San Antonio, Texas

Experimental Goal - ongoing To run the simplest possible simulation of a CPRI fronthaul network. Make use of the TSN concepts. Look at the before/after results for jitter/delay. Understand the pros/cons involved. Make any recommendations on required changes to TSN. Initial focus on Scheduled Traffic (802.1) with simplified assumptions, i.e. perfect frequency sync, ns-level scheduling, 1 queue per CPRI connection, and no background traffic. Ongoing work will add more and more reality (e.g. card-to-card delay/jitter, clock errors, encoding delays).

Network Simulator (NS-3) A discrete-event network simulator, targeted primarily at research and educational use. Free software, licensed under the GNU GPLv2 license, and publicly available at http://www.nsnam.org/ Clean-slate design (not based on NS-2), aiming for easy use and extension. Written entirely in C++ with Python wrappers. NS-3 models used in our simulation include, but are not limited to: core, network, Internet, UDP Echo Application, and Flow Monitor.

SIMULATED FRONTHAUL NETWORK [Topology diagram: twelve 4.9G CPRI connections, aggregated over 20G 1 km and 40G 2 km links onto 200G/400G/600G 5 km links into the C-RAN. PDU size = 1000 bytes.]

Queue models in our simulation We use two queue models, namely the DropTail queue and the Scheduled queue, to study the jitter behavior of existing Ethernet droptail queues and a first crude model of the proposed scheduled queues. The DropTail queue comes with NS-3: it is a first-in-first-out queue that drops packets at the tail. A couple of other queue models are also available in NS-3, but they were not tested since they primarily aim at improving TCP performance. Our testing uses a UDP application with perfect rates to simulate three CPRI speeds into a single DropTail queue at each egress link. The Scheduled queue in our implementation is totally local; there is no attempt to phase-synchronize (+ delay) between neighbors, just frequency locking (i.e. a jitter-reduction attempt, not a delay-reduction one).
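The DropTail behavior can be sketched with a toy model: periodic flows multiplexed into one FIFO served at the link rate, with per-flow jitter measured as max minus min delay. This is a minimal illustrative sketch, not the authors' NS-3 setup; the link speed, rates, and tie-breaking order are my assumptions.

```python
PKT_BITS = 1000 * 8                 # 1000-byte PDUs, as in the deck
LINK_GBPS = 40                      # assumed egress link speed
SERVICE_NS = PKT_BITS / LINK_GBPS   # 200 ns to serialize one packet

def fifo_jitter(flow_rates_gbps, horizon_ns=100_000):
    """Per-flow jitter (max delay - min delay, in ns) through one FIFO."""
    # Periodic arrivals; ties at the same instant are served in flow order.
    events = []
    for fid, rate in enumerate(flow_rates_gbps):
        interval = PKT_BITS / rate  # ns between packets of this flow
        t = 0.0
        while t < horizon_ns:
            events.append((t, fid))
            t += interval
    events.sort()
    free_at = 0.0                   # time the link next becomes idle
    delays = {fid: [] for fid in range(len(flow_rates_gbps))}
    for arrival, fid in events:
        start = max(arrival, free_at)
        free_at = start + SERVICE_NS
        delays[fid].append(free_at - arrival)  # queueing + service delay
    return {fid: max(d) - min(d) for fid, d in delays.items()}
```

With three flows at 2.5, 5 and 10 Gbps, `fifo_jitter([2.5, 5, 10])` gives jitters of 0, 200 and 400 ns: the fastest flow sees the most variation, and the 400 ns figure is the same order of magnitude as the DropTail jitter reported in the next slide.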

Delay & Jitter for DropTail Queue [Chart: MinDelay, MaxDelay and Jitter x 100 (ns, 0-100000) vs. Flow ID 1-36; PKT_LEN = 1000 bytes.] Stable delays and jitters for all flows. Jitter is around 400 ns. Note data rates are …, 4.9, and … Gbps.

First Simple Local Scheduling Algorithm One queue per data flow (i.e., no queue sharing among flows), and each is processed according to a per-port schedule. A schedule has a fixed cycle of constant duration, equal to the least common multiple of the packet transmission intervals of all flows, and repeats indefinitely (until a new schedule is calculated and activated). For example, with a packet length of 1000 bytes, a 5 Gbps data flow has a packet transmission interval of 1600 ns, and an 8 Gbps flow has an interval of 1000 ns; in this case, the schedule cycle is 8000 ns. Our simulation uses three data rates of 2.5 Gbps, 5 Gbps, and 10 Gbps, so the schedule cycle is 3200 ns for a packet length of 1K bytes. A schedule cycle is divided into a number of time slots of equal size; each slot is at least as long as the packet transmission time of the outgoing link (see next page for an example). Each schedule is calculated independently for each port, without any global optimization. The scheduling algorithm further assumes that the first packets from all flows arrive at time 0 (i.e., when the schedule starts).
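The cycle lengths quoted above can be checked mechanically. A small sketch (function name mine; assumes Python 3.9+ for `math.lcm`):

```python
from math import lcm

PKT_BITS = 1000 * 8  # 1000-byte packets, in bits

def interval_ns(rate_gbps):
    # bits / (Gbit/s) gives ns; the rates used here divide evenly.
    return int(PKT_BITS / rate_gbps)

# Slide example: 5 Gbps -> 1600 ns, 8 Gbps -> 1000 ns, cycle = 8000 ns.
assert lcm(interval_ns(5), interval_ns(8)) == 8000
# Simulated rates 2.5, 5 and 10 Gbps -> cycle = 3200 ns.
assert lcm(interval_ns(2.5), interval_ns(5), interval_ns(10)) == 3200
```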

A schedule example Flows of 2.5G, 5G, and 10G onto a 40G link; packet length = 1000 bytes. Packet transmission intervals of 2.5 Gbps, 5 Gbps, and 10 Gbps are 3200 ns, 1600 ns, and 800 ns respectively (= packet_length*8/data_rate). Schedule cycle = lcm(3200, 1600, 800) = 3200 ns. Packet transmission time (ptt) for the 40 Gbps link is 200 ns (= packet_length*8/link_speed). Slot size = k*ptt. For convenience, we choose k = 1.2 to make sure a slot is always long enough to transmit a packet, so the slot size = 240 ns. Note that for any k > 1, a fraction (k-1)/k of the bandwidth is lost. Total number of slots = schedule_cycle / slot_size = 3200/240 = 13. We further add an unused trailing slot of 80 ns (= 3200 mod 240), which results in further bandwidth loss. [The slide shows the resulting 13-slot schedule for this 40G link, with slots assigned to flows 1-3 and the rest idle.]

A schedule example (Cycle = 3200 ns, packet = 1000 bytes) [Diagram: flows of 2.5G (3200 ns interval), 5G (1600 ns) and 10G (800 ns) are scheduled onto 40G links A, B, C (TotalSlots = 13, slot size = 240 ns, trailing 80 ns), which are in turn multiplexed onto a 200G link X toward D / C-RAN (TotalSlots = 66, slot size = 48 ns, trailing 32 ns). The slide shows the per-slot flow assignments for both schedules.]
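The slot arithmetic for both the 40G and the 200G links can be reproduced exactly. A sketch (function name mine) using rational arithmetic so k = 1.2 stays exact:

```python
from fractions import Fraction

def slot_layout(cycle_ns, link_gbps, pkt_bytes=1000, k=Fraction(6, 5)):
    """Number of slots of size k*ptt fitting in one schedule cycle,
    plus the unused trailing remainder (all in ns)."""
    ptt = Fraction(pkt_bytes * 8, link_gbps)  # packet transmission time
    slot = k * ptt
    nslots = cycle_ns // slot                 # floor division -> int
    trail = cycle_ns - nslots * slot          # wasted trailing time
    return nslots, slot, trail

# 40 Gbps link: ptt = 200 ns, slot = 240 ns, 13 slots, 80 ns trailing.
assert slot_layout(3200, 40) == (13, 240, 80)
# 200 Gbps link: ptt = 40 ns, slot = 48 ns, 66 slots, 32 ns trailing.
assert slot_layout(3200, 200) == (66, 48, 32)
```

Using `Fraction` avoids the float rounding that 1.2 would otherwise introduce into the slot boundaries.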

Delay & Jitter for Scheduled Queue [Chart: MaxDelay, MinDelay and Jitter (ns, 0-120000) vs. Flow ID 1-36; PKT_LEN = 1000 bytes.] All jitters are reduced to ~zero (k = 1.2). Note data rates are 2.5, 5.0, and 10.0 Gbps.

Jitter Comparison [Chart: Jitter (DT) vs. Jitter (SC) in ns (0-450) by Flow ID 1-36.] The scheduled queue reduces all jitters to ~zero (the best case).

Maximum Delay Comparison [Chart: MaxDelay(SC) vs. MaxDelay(DT) in ns (30000-100000) by Flow ID 1-36.] The scheduled queue increases delay by 4347 ns in the worst case.

Jitter for Scheduled Queue [Chart: jitter (ns, 0-30) by Flow ID 1-36 for k = 1.1, 1.2, 1.3, 1.4, and 1.5; PKT_LEN = 1000 bytes.] All jitters are zero when k = 1.2 or 1.5. When k = 1.1, 1.3, or 1.4, small amounts of jitter appear.

Jitters for different packet sizes (DropTail Queue) [Chart: jitter (ns, 0-600) by Flow ID 1-36 for PKT = 250 B, 500 B, 750 B, 1000 B, and 1250 B.] Jitters increase as packets get larger.

Jitters for different packet sizes (Scheduled Queue) [Chart: jitter (ns, 0-1200) by Flow ID 1-36 for PKT = 250 B, 500 B, 750 B, 1000 B, and 1250 B.] All jitters are zero when packets are 500 bytes or 1000 bytes. For other packet sizes, some flows do have nonzero jitter.

Some Thoughts A single DropTail queue seems to incur only small amounts of jitter, and performs consistently across different packet sizes. If background traffic is pre-emptable, the worst case would be an additional 64-128 bits of jitter (about 1.5-3 ns at 40G). Scheduled queues have the potential to nearly eliminate jitter, but require more effort in developing advanced scheduling algorithms; this is especially difficult globally. More than 8 scheduled queues will be required if we are to use this method with CPRI; however, we still have to explore the gates concept. Some bandwidth will be wasted matching the input speeds to the scheduled time slots and intervals.
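The preemption worst case quoted above is simple arithmetic: a non-preemptable residual fragment of 64-128 bits takes bits/rate nanoseconds to drain. A hedged check (function name mine):

```python
def residual_jitter_ns(bits, link_gbps):
    # bits divided by Gbit/s comes out directly in nanoseconds.
    return bits / link_gbps

# 64-128 bits at 40 Gbps -> roughly 1.6-3.2 ns of added jitter,
# consistent with the slide's rough "about 1.5-3 ns" figure.
lo, hi = residual_jitter_ns(64, 40), residual_jitter_ns(128, 40)
```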

Thank You