Evolution of OpenStack Networks: 25G/50G/100G, Firewall Integration, Comprehensive Integration
Alexei Agueev, Systems Engineer
ETHERNET MIGRATION 10G/40G → 25G/50G/100G
Interface Parallelism (10G → 40G)
- Parallelism increases the effective speed of an interface
- Each interface uses multiple lanes/lasers
- Bit striping ensures maximum efficiency
- Increased failure domain
- Multiplicative CapEx cost
Standardizing on 25GbE (10G → 25G/50G)
- A faster clock rate increases the effective speed of an interface
- Each interface uses a single lane/laser
- Founding member of the 25G & 50G Ethernet Consortium
Cloud Servers & Storage Driving 25GbE and 50GbE Adoption
- PCIe Gen3 drives 25G and 50G
- Evolution of PCI Express technology (usable bandwidth per lane):
  - PCIe Gen1: 2 Gb/s per lane, x4 = 10GbE
  - PCIe Gen2: 4 Gb/s per lane, x8 = 40GbE
  - PCIe Gen3: 8 Gb/s per lane, x8 = 50GbE
- Maximize switch and server throughput and efficiency
- Minimize CapEx: fewer switch ports and cables
- Minimize OpEx: lower power and cooling
- Minimize cost per bit by utilizing the highest speed available
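The lane arithmetic above can be checked with a quick calculation. The per-lane figures below are the slide's rounded usable bandwidths, not raw PCIe signaling rates:

```python
# Quick check of the slide's PCIe-lane arithmetic. Per-lane values are
# the slide's rounded usable bandwidths in Gb/s, not raw signaling rates.
PCIE_USABLE_GBPS_PER_LANE = {"gen1": 2, "gen2": 4, "gen3": 8}

def pcie_bandwidth_gbps(gen: str, lanes: int) -> int:
    """Usable bandwidth of a PCIe slot in Gb/s."""
    return PCIE_USABLE_GBPS_PER_LANE[gen] * lanes

# A Gen3 x8 slot (64 Gb/s) comfortably feeds a 2x25G adapter (50 Gb/s),
# while a Gen2 x8 slot (32 Gb/s) could not:
assert pcie_bandwidth_gbps("gen3", 8) == 64
assert pcie_bandwidth_gbps("gen3", 8) >= 2 * 25
assert pcie_bandwidth_gbps("gen2", 8) < 2 * 25
```

This is why the dual-port 25G adapter on the next slide pairs naturally with a PCIe Gen3 x8 slot.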
Example of a 2x 25G Ethernet Adapter
Evolution of the Network Leaf
- 2011: 7050 Series (64 lanes, 1.28 Tbps, 1/10GbE)
- 2013: 7050X Series (128 lanes, 2.56 Tbps, 10/40GbE)
- 2015: 7060X Series (128 lanes, 6.4 Tbps, 25/40/100GbE)
OPENSTACK INTEGRATION MODELS
Next-Gen EOS
- OpenConfig for YANG model configs
- protobuf-based streaming for analytics and telemetry
- More application visibility
- Add containers in EOS
- More languages (Go SDK, goapi)
- Hybrid cloud integration
- New protocol scaling: 1M+ routes, 100K+ tunnels, millisecond convergence
- Architecture (diagram): SysDB at the core publishes state to and notifies agents (Mgmt, BGP, MLAG, PIM, STP, IGMP, drivers, container tracer, logs, counters); open interfaces include CLI, eAPI, SDK, OpenConfig, and XMPP; Arista hardware abstraction layer on an unmodified Linux kernel
NetDB: Network-Wide State Streaming
- Network state architecture: the same publish-subscribe architecture as SysDB, replicated network-wide
- Real-time state streaming: complete network-wide state coalesced into one central state store (NetDB or a custom back end)
- Working with network state: filtering, queries, exports
- Open APIs for collection and consumption: gRPC (protobuf), HTTP, custom (SDK, scripts), OpenConfig YANG models, RESTCONF, NETCONF
- Consumers via stream APIs: CloudVision apps, partner apps, custom apps
- Use cases:
  - Analytics: anomalies, trends, security, ...
  - Correlation: troubleshooting, understanding behaviours
  - Telemetry: real-time counters, queues, logs, events
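The publish-subscribe model described above (SysDB per switch, coalesced network-wide in NetDB) can be sketched as a toy state store. Class, path, and field names are illustrative only, not Arista APIs:

```python
from collections import defaultdict
from typing import Any, Callable

class StateDB:
    """Toy publish-subscribe state store, loosely modelled on the
    SysDB/NetDB idea: writers publish state at paths, subscribers are
    notified of every matching change. Illustrative sketch only."""

    def __init__(self) -> None:
        self.state: dict[str, Any] = {}
        self.subscribers: dict[str, list[Callable[[str, Any], None]]] = defaultdict(list)

    def subscribe(self, prefix: str, callback: Callable[[str, Any], None]) -> None:
        """Register a callback for all state changes under a path prefix."""
        self.subscribers[prefix].append(callback)

    def publish(self, path: str, value: Any) -> None:
        """Store new state and notify every subscriber whose prefix matches."""
        self.state[path] = value
        for prefix, callbacks in self.subscribers.items():
            if path.startswith(prefix):
                for cb in callbacks:
                    cb(path, value)

# A telemetry consumer watching interface counters on one switch:
events = []
db = StateDB()
db.subscribe("leaf1/interfaces/", lambda p, v: events.append((p, v)))
db.publish("leaf1/interfaces/Ethernet1/counters", {"inOctets": 1234})
db.publish("leaf1/bgp/peers/10.0.0.1", "Established")  # prefix not matched
assert events == [("leaf1/interfaces/Ethernet1/counters", {"inOctets": 1234})]
```

The same pattern scales from one agent on one switch (SysDB) to every switch in the fabric (NetDB): consumers only ever subscribe, they never poll.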
Arista OpenStack Integration: VLAN-based (ML2)
- CVX as a single point of contact
- CVX takes care of MLAG
- Dynamic VLAN creation (LLDP-based): the VLAN is created dynamically on the OpenStack compute node's link and uplink, based on the CVX LLDP table
- Flow (diagram): Neutron ML2 (Arista driver) → CVX → "Create VLAN" on the MLAG spine and L2 fabric, across racks
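Dynamic VLAN provisioning of this kind ultimately reduces to a small eAPI (JSON-RPC) call per switch. The sketch below only builds the request payload and sends nothing; the port and VLAN values are invented examples, and this is not the ML2 driver's actual code:

```python
import json

def eapi_vlan_request(vlan_id: int, ports: list[str], request_id: str = "1") -> str:
    """Build an eAPI 'runCmds' JSON-RPC payload that creates a VLAN and
    allows it on the given trunk ports (illustrative sketch; the command
    strings follow standard EOS CLI syntax)."""
    cmds = ["enable", "configure", f"vlan {vlan_id}"]
    for port in ports:
        cmds += [f"interface {port}",
                 f"switchport trunk allowed vlan add {vlan_id}"]
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": cmds, "format": "json"},
        "id": request_id,
    })

# Example: tenant network lands on VLAN 1101, on a compute link and its uplink
payload = json.loads(eapi_vlan_request(1101, ["Ethernet10", "Port-Channel5"]))
assert "vlan 1101" in payload["params"]["cmds"]
assert payload["method"] == "runCmds"
```

In the real integration, CVX decides which switches and ports need the VLAN (from its LLDP table) and pushes the change; a POST of such a payload to `https://<switch>/command-api` would apply it.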
Arista OpenStack Integration: VXLAN-based
- Transparent VLAN or Hierarchical Port Binding
- Scalable IP fabric with a Layer 3 ECMP design for increased underlay scale
- Hardware VXLAN VTEP configured on every leaf switch
- Layer 2 connectivity between racks via VXLAN across the L3 fabric
- Flow (diagram): Neutron ML2 (Arista driver) → CVX → "Create VLAN", VNI → VLAN mapping on the VTEPs across the L3 ECMP IP fabric
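The VNI → VLAN mapping on each VTEP is locally significant: a 24-bit VNI (16M values network-wide) maps onto a 12-bit VLAN (4K values per leaf). A minimal sketch of such a per-leaf mapping table, with invented names:

```python
class VtepMapper:
    """Per-leaf VNI-to-VLAN mapping: VNIs are fabric-wide (24-bit),
    VLANs are locally significant (12-bit). Illustrative sketch, not
    the CVX implementation."""

    MAX_VNI = 2**24 - 1   # 16,777,215 possible VNIs network-wide
    MAX_VLAN = 4094       # usable 802.1Q VLAN IDs per switch

    def __init__(self) -> None:
        self.vni_to_vlan: dict[int, int] = {}
        self.next_vlan = 2  # leave VLAN 1 untouched

    def map_vni(self, vni: int) -> int:
        """Return this leaf's local VLAN for a VNI, allocating on first use."""
        if not 1 <= vni <= self.MAX_VNI:
            raise ValueError(f"VNI {vni} out of 24-bit range")
        if vni not in self.vni_to_vlan:
            if self.next_vlan > self.MAX_VLAN:
                raise RuntimeError("local VLAN space exhausted on this leaf")
            self.vni_to_vlan[vni] = self.next_vlan
            self.next_vlan += 1
        return self.vni_to_vlan[vni]

leaf1, leaf2 = VtepMapper(), VtepMapper()
# The same VNI may land on different local VLANs on different leaves:
assert leaf1.map_vni(10010) == 2
leaf2.map_vni(99999)                 # leaf2 allocates its VLAN 2 first
assert leaf2.map_vni(10010) == 3     # same VNI, different local VLAN
```

This is how the fabric escapes the 4K VLAN ceiling: each leaf only spends a local VLAN on the VNIs it actually hosts.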
Arista OpenStack Integration: L2 Gateway
- Syncs the Neutron DB with the CVX DB via DB-to-DB integration
- Integration with Ironic; support for Security Groups (bare metal)
- Every ToR can be a HW VTEP and a pass-through for VXLAN at the same time
- MLAG redundancy supported seamlessly
- Flow (diagram): Neutron L2 GW service plugin / L2 GW agent → CVX → "Create Port", VLAN → VNI mapping on the VTEPs across the L3 ECMP IP fabric
Scaling OpenStack
- Multiple OpenStack clusters supported per CVX instance
- Can be combined with other network virtualization (NSX, etc.)
- VXLAN breaks out of the 4K VLAN limit: 16M VNIs mapped to locally significant VLANs
Multi-Tenant OpenStack Deployment
- Two Neutron instances (Region1, Region2), each with the Arista ML2 driver
- Each region uses its own VNI (VNI Y for Region 1, VNI X for Region 2) across the shared VTEPs and racks
Routing with OpenStack
- L2 only up until now; how do you route?
- Routing can be performed by a network node
- Allows connectivity between tenants and to external networks
- NAT support, VRF support
- Limited by software performance
- The alternative is to perform routing at the switch, with some limitations
OpenStack Integration: L3 Plugin
- The Arista L3 plugin provisions SVIs over eAPI in response to tenants creating logical routers
- Routing happens at dedicated network nodes: a pair of MLAGed physical devices
- Active-active HA via MLAG
- Performs routing for the OpenStack cluster; can be scaled out horizontally by tenant as needed
- ToRs can also be used as the routing nodes
- Flow (diagram): Neutron (ML2 Arista driver + Arista L3 plugin) → Arista L3 node in the infra/GW rack, attached to the MLAG spine and L2 fabric
MACRO-SEGMENTATION SECURITY (MSS)
Current Approaches to DC Security
- Security at the perimeter: north-south flows only
- Scaling limitations, e.g. active/standby HA pairing
- Security policy is dependent on network topology, and vice versa
- Network and security administration are co-dependent
- Limited or no security of east-west flows, especially for physical devices
- Little or no coordination between vswitch security and physical firewalling
Current approaches are ill-suited to the needs of the software-driven cloud data center.
Definitions
- Micro-segmentation: inserting services in the path of inter-VM traffic (e.g. intra-tenant)
  - Policies defined by VMware NSX for each workload
  - Enforced in the distributed vswitch based on application, tag, etc.
- Macro-Segmentation™: inserting services between workgroups (inter-tenant) in the physical network by defining inter-workgroup policies
- Arista Macro-Segmentation Security (MSS™): an extension in EOS that utilizes CloudVision to automate security service insertion in the network
  - Integration with leading next-generation firewalls
Micro-Segmentation
- VMware NSX distributed firewalling addresses security policy and tenant isolation inside the hypervisors (implemented by the VMware distributed virtual switch)
- Provides very fine-grained security policies at the VM level, in conjunction with virtual instances of next-generation firewalls for advanced security
- Utilizes the full context of the hypervisor, with visibility into end-user, application, and tenant-related information
- Challenges remain around physical devices
- Micro-segmentation is complementary to Macro-Segmentation (MSS is implemented network-wide via CloudVision and the Arista ToR switches)
Arista Macro-Segmentation Services
- Transparent insertion of firewall/service: no new tagging or encapsulation
- One point of control (the security policy manager), for both physical and virtual firewalls
- Directly maps to the security model (zones, etc.)
- No server reconfiguration
- No per-application overhead
- Covers virtual and physical firewalls, physical servers, and storage
Arista Macro-Segmentation Services
- A logical topology enables services in the network: MSS instantiates the logical network topology needed to enforce service policies
- No constraints on physical topology or device placement
- Policy comes from the service devices themselves
Arista Macro-Segmentation Services
- The Security Admin owns the security policies; no Network Admin involvement required
- The Network Admin owns the network configuration
- The PAN (Palo Alto Networks) service is enabled within CloudVision, which:
  - Learns security policies and the associated end devices
  - Logically instantiates them in the network
Arista Macro-Segmentation Services
- Dynamic
  - Insert security between any data center physical and virtual workload
  - Automatic and seamless service insertion
  - Follows host and application throughout the network
- Open
  - No proprietary frame formats
  - Works in multi-vendor network architectures
  - Open APIs
- Ecosystem
  - Works with leading security, cloud orchestration, and overlay controllers
Thank You Spring 2016