Network Virtualisation: Reference Architecture and Ecosystem. Telefónica I+D @ Global CTO, 18.03.2014
A future-proof network architecture requires distributing data-plane-intensive functions and centralising control-plane ones
- There will be two kinds of Virtualized Network Infrastructure: local PoPs and regional Data Centres.
- The data plane must be distributed to LOCAL PoPs. Service domain: CDN, P-CSCF, Video, Security. Network domain: BRAS, PE, DPI, CG-NAT.
- The control plane can be centralised in REGIONAL DATA CENTRES. Service domain: SDP, IMS, CSFB/SRVCC, NGIN, M/SMSC. Network domain: EPC, DHCP, PCRF, GGSN, DNS, UDB.
- Both rely on HW and SW decoupling (OS + hypervisor over COTS HW), interconnected via MPLS/SDN/Optical.
Network Virtualisation is not Cloud Computing
The network differs from the computing environment in two key factors:
1. Data plane workloads (which are huge!): high and predictable performance is needed, as with current equipment.
2. The network has a shape, plus E2E interconnection: a global network view is required for management.
Both are big challenges for vanilla cloud computing, and most of the industry is offering Telcos just IT-based cloud products as network virtualisation environments.
ETSI NFV ISG has generated a reference architecture to ensure interoperability and carrier-grade performance
- Management environment: legacy OSS/BSS; Orchestrator (network scenarios); VNF Manager (network functions); Virtualised Infrastructure Manager (machines).
- Execution environment: Network Functions running over OS + hypervisor, on commodity HW and commodity switching infrastructure.
High and predictable performance is not an issue (e.g. vCPE, vCG-NAT, vBRAS) as long as you know how!
- What the defensive industry says: bare metal @ cloud levels of performance.
- What can be achieved doing things well (*): acceptable performance of 80 Gbps per COTS blade, bare metal @ vPoP, a x10 GAP over bare metal @ cloud.
(*) ETSI NFV Work Item "NFV Performance & Portability Best Practises": DGS/NFV-PER001, current version v0.0.7 (stable draft, 15/10/2013). Telefónica is rapporteur of the draft, as well as chair of the Performance and Portability Expert Group.
x86 technology evolves faster than ASICs: 80 Gbps per COTS blade on bare metal
Successive generations (Westmere 2010, Sandy Bridge 2011, Ivy Bridge 2013) have added:
- Direct PCIe connection to the processor
- Direct cache access for I/O
- Large-pages support for I/O
- Support for translations in memory R/W from the CPU (small and large pages)
- Support for translations in memory R/W from I/O (first small pages only, later large pages)
- Support for NICs in passthrough
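These hardware features only pay off if the host is configured to use them. A minimal sketch of host tuning on a Linux server, assuming an Intel NIC at the hypothetical PCI address 0000:03:00.0 and DPDK's devbind utility available; exact parameters vary per distribution and platform:

```shell
# Reserve 1 GB hugepages, isolate cores for data-plane threads and
# enable the IOMMU (kernel command line; requires reboot):
#   default_hugepagesz=1G hugepagesz=1G hugepages=8 isolcpus=2-7 intel_iommu=on

# Mount the hugepage filesystem so fast-path apps can map large pages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge

# Detach the NIC from the kernel driver and hand it to VFIO for passthrough
modprobe vfio-pci
dpdk-devbind.py --bind=vfio-pci 0000:03:00.0
```

With this in place a polling-mode driver owns the NIC directly, which is what lets a COTS blade approach the quoted 80 Gbps.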
A more detailed HW visibility is needed
- CLOUD COMPUTING VIEW: a flat pool of CPU cores, memory and I/O devices.
- NETWORK VIRTUALISATION VIEW: placement aware of the real topology (CPUs, QPI links, memories, I/O devices): minimise QPI usage, maximise cache sharing, minimise memory translations, use polling-mode drivers, assign devices fully to the process.
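The NUMA detail this view demands can be inspected with standard Linux tools. A sketch, assuming `numactl` is installed and an interface named eth0 (hypothetical name):

```shell
# Show sockets, cores and the NUMA nodes connected over QPI
lscpu | grep -i numa
numactl --hardware

# Find which NUMA node a NIC hangs off (-1 means unknown / single node)
cat /sys/class/net/eth0/device/numa_node
```

Knowing which node a NIC is attached to is the prerequisite for the mapping decisions on the following slides.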
...while server configuration does not add bottlenecks
- CLOUD COMPUTING: per machine, Apps + OS over the shared OS + hypervisor; upstream and downstream traffic traverses the hypervisor.
- NETWORK VIRTUALISATION: per machine, Network Functions + SW libs + OS; the data plane is managed directly by the Network Functions and the OS + hypervisor layer is bypassed for traffic.
If ignored, equivalent deployments would lead to completely different behaviours!
- Correct mapping allows LINE RATE.
- Random mapping is FAR FROM LINE RATE (x2.5 worse) and leads to UNPREDICTABILITY.
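"Correct mapping" here means pinning the data-plane process to cores and memory on the same NUMA node as its NIC, rather than letting the scheduler place it at random. A sketch with `numactl`, assuming the NIC sits on node 0 and `vnf_dataplane` is a hypothetical binary:

```shell
# Random mapping: the scheduler may place threads and memory on the
# far socket, so packets cross the QPI link on every access
./vnf_dataplane &

# Correct mapping: CPUs and memory forced onto node 0, local to the
# NIC, so the fast path never crosses QPI
numactl --cpunodebind=0 --membind=0 ./vnf_dataplane
```

The x2.5 gap on this slide is exactly the cost of getting that placement wrong.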
Industry's best practices are being set
Based on the previous results, a formal list of recommendations aimed at a telco datacentre has been issued, and a formal list of features to be included in the templates for orchestration has been elaborated. Both are collected in ETSI NFV Work Item "NFV Performance & Portability Best Practises": DGS/NFV-PER001, current version v0.0.7 (stable draft, 15/10/2013). Telefónica is rapporteur of the draft, as well as chair of the Performance and Portability Expert Group.
...but more work is needed on closing the gaps and getting the industry focused on providing real value
- ADD VALUE HERE: industry should focus on providing differential VNFs (Network Functions) and Network Orchestration (Orchestrator, VNF Manager, legacy OSS/BSS integration). A credible ROADMAP is needed!
- The current state of the art is good enough (if properly arranged) for the infrastructure layer: OS + hypervisor, commodity HW, commodity switching infrastructure, Virtualised Infrastructure Manager.
- Work is needed in Open Source to AVOID the proliferation of VERTICAL SOLUTIONS: the Network Virtualisation Infrastructure and its management should become a COMMODITY.
OUR NEXT STEP: Network Virtualisation Reference Lab @ Telefónica
- ECOSYSTEM (we want your logo here): VNFs and NFV Orchestrators (NFVO). ADD VALUE HERE: simplest integration; Network Orchestration on top of carrier-grade OpenStack.
- NFVI: VIM = OpenStack++, OFC++; proper HW & hypervisor configuration; carrier-grade OpenStack contributions going to upstream development.
- BASELINE TECHNOLOGIES.
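In practice, "OpenStack++" means exposing the HW detail from the earlier slides through the scheduler. A sketch using Nova flavor extra specs for CPU pinning, hugepages and NUMA confinement (these property keys landed in OpenStack releases after this deck; the flavor name vnf.small is hypothetical):

```shell
# Create a flavor whose instances get dedicated pinned vCPUs,
# backed by 1 GB hugepages and confined to a single NUMA node
openstack flavor create --vcpus 4 --ram 8192 --disk 20 vnf.small
openstack flavor set vnf.small \
  --property hw:cpu_policy=dedicated \
  --property hw:mem_page_size=1GB \
  --property hw:numa_nodes=1
```

A VNF booted with such a flavor inherits the deterministic placement that the bare-metal results on the previous slides depend on.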
To introduce these changes in our network gradually and smoothly, it is key to decide what to virtualise first
- Step 1 (2013-2014): home environment simplification (vCPE, vSTB); local Data Centre; single IP edge (vEPC, vCG-NAT, vBRAS, vPE, vGGSN).
- Step 2 (2015-2016): unified IP edge; regional Data Centre; network intelligence rationalisation; core simplification and centralisation (vDHCP, vIMS, vDPI, vDNS, vUDB, vPCRF, vSDP).
- Step 3 (2016-...): real-time network analytics; infrastructure orchestration; IP and photonic coordination; transport provisioning simplification (PCE, SDN Orchestrator); Net OS: joint orchestration of network resources.