Using Network Virtualization in DevOps Environments
Yves Fauser, 22 March 2016 (Technical Product Manager, VMware NSBU)
© 2014 VMware Inc. All rights reserved.
Who is standing in front of you?
Yves Fauser, Technical Product Manager @ VMware
I'm working with VMware's network virtualization product, NSX, in VMware's Network and Security Business Unit (NSBU), focusing on networking within containers, API / automation, and OpenStack.
I'm the co-organizer of the OpenStack and Ansible Munich Meetup groups.
I've spent 3 years at VMware as a Systems Engineer & Solution Architect, 7 years as a Systems Engineer at Cisco, and before that I was a networking / OS consultant and developer.
Topics I love to discuss and work on: configuration management, automation, containers / cloud, OpenStack, networking.
Agenda
1 Very quick overview of Network Virtualization
2 Network Virtualization vs. pre-configured networks
3 Key DevOps use cases
4 Takeaways / Questions
A quick overview of Network Virtualization
The Operational Model of a VM, for the Network
A Virtual Network?
Non-Disruptive Deployment
Programmatically Provisioned
Problem: Data Center Network Security
Perimeter-centric network security has proven insufficient, and micro-segmentation with traditional firewalls is operationally infeasible: there are little or no lateral controls inside the perimeter.
Leverage the SDDC Approach for Micro-Segmentation
Hypervisor-based, in-kernel distributed firewalling
Platform-based automated provisioning and workload adds / moves / changes, driven by security policy through the cloud management platform
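The idea of micro-segmentation can be sketched as a whitelist evaluated at every hypervisor, so east-west traffic is controlled too. This is a toy model for illustration only; the tags, rule format, and function names are invented, not the NSX API:

```python
# Toy model of micro-segmentation with a distributed firewall.
# Each workload carries a tier tag; the "hypervisor" checks a whitelist
# for every flow, so lateral (east-west) traffic is filtered as well.

TAGS = {"web01": "web", "app01": "app", "db01": "db"}

# Whitelist rules: (source tier, destination tier, destination port)
RULES = {
    ("web", "app", 8080),   # web tier may reach app tier on 8080
    ("app", "db", 3306),    # app tier may reach the database on 3306
}

def dfw_allows(src_vm, dst_vm, port):
    """Return True if the distributed firewall permits the flow."""
    return (TAGS[src_vm], TAGS[dst_vm], port) in RULES

print(dfw_allows("web01", "app01", 8080))  # True: allowed tier transition
print(dfw_allows("web01", "db01", 3306))   # False: web may not reach db directly
```

The key contrast with a perimeter firewall is that the lookup happens for every flow, including traffic that never leaves the data center.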
Provides a faithful reproduction of network and security services in software: management APIs and UI, switching, routing, load balancing, connectivity to physical networks, policies / groups / tags, firewalling, VPN, data security, and activity monitoring.
VMware NSBU-supported Open Source Projects
Three major open source projects: Open vSwitch (OVS), OpenStack Networking (Neutron), and Open Virtual Network (OVN).
Involvement in other open source projects includes OpenStack Policy (Congress) and numerous other OpenStack projects, as well as Kubernetes, Docker libnetwork, Ansible, etc.
Why Network Virtualization and not pre-configured networks?
Common starting point: simple predefined VLANs
Many customers start with just a few VLANs with /23 or /22 subnets (e.g. 10.24.2.0/22).
Easy entry point for the DevOps team: just ask for a few pre-configured VLANs and deploy VMs (or containers) into them.
Routing, perimeter firewalling, and load balancing are done in the physical network, out of scope for the DevOps team.
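To put a number on how flat such a VLAN is, Python's standard-library ipaddress module can compute the size of the shared Layer 2 domain (using 10.24.0.0/22 here, since a /22 must start on a 4-network boundary):

```python
# How many hosts share one broadcast domain in a /22 VLAN?
import ipaddress

vlan = ipaddress.ip_network("10.24.0.0/22")
print(vlan.num_addresses)        # 1024 addresses in the subnet
usable = vlan.num_addresses - 2  # minus network and broadcast address
print(usable)                    # 1022 possible hosts, all mutually reachable on L2
```

Without micro-segmentation, every one of those ~1000 endpoints can talk to every other one unhindered.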
Limitations of the simple pre-defined VLANs (1/2)
Security and compliance:
Missing micro-segmentation; everybody sees everybody on the same Layer 2 VLAN.
No application tiering (Web / App / DB), unless pre-configured.
Networking:
No ability to clone VMs or vApps while retaining their IPs.
Limited mobility of workloads between DCs / pods / rack rows (whatever your L2/L3 boundary is).
Solutions to span DCs / pods / rack rows using L2 extensions are expensive (OpEx & CapEx) and introduce complexity (stability risks).
Many manual steps are needed when changes and extensions have to be made; this is what slows provisioning times down to days or weeks.
Limitations of the simple pre-defined VLANs (2/2)
Continuous delivery / testing:
Development environments do not closely resemble the staging and production environments.
The missing ability to clone workloads while retaining IPs, segments (app tiers), firewall rules, and load-balancer rules further limits the usefulness in development environments.
Not a viable solution to be promoted into staging and production anytime soon.
Operational:
This setup fortifies the silo mentality between cloud operations (the virtualization / automation team) and the networking and security teams.
The slow, manual provisioning process when changes and extensions have to be made results in finger pointing between teams.
Network Virtualization and Continuous Deployment
Pipeline: code done → build & unit test → integration test → QA / staging → production.
Time is lost because of failures at the handoff from Dev to Test and from Test to Production:
Dev / stage / prod environments have different hardware setups
Configuration differences between environments
Version and dependency differences
An SDDC with network virtualization, driven by configuration management, keeps the environments consistent across the pipeline.
Network Virtualization use cases with NSX
Developer Cloud use cases for Network Virtualization with NSX
NSX used with configuration management and custom-built automation systems:
REST API documented using RAML (and, in the future, also OpenAPI)
Python library and code samples
Ansible modules for NSX installation and logical switch operations
NSX used within a private cloud:
NSX in OpenStack, for vSphere and KVM hypervisors as well as mixed-hypervisor environments
Key component is VIO (VMware Integrated OpenStack), but also integrated with Mirantis, SUSE, Red Hat, and Canonical
NSX in vRealize Automation, VMware's own cloud management / automation stack
NSX in containers: working on Docker libnetwork and Kubernetes CNI plugins
NSX-v RAML: what's available
https://github.com/vmware/nsxraml
RAML description of the NSX-v API
Generated Postman collection
Generated HTML and Markdown documentation
Special thanks to Kevin Renskers for his work on the raml2html and raml2md generators! https://github.com/kevinrenskers
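To give a feel for what such a RAML description looks like, here is a hypothetical excerpt in the spirit of the nsxraml file; the real file at github.com/vmware/nsxraml differs in detail, and the descriptions here are illustrative:

```yaml
#%RAML 0.8
title: NSX for vSphere API (illustrative excerpt)
baseUri: https://{nsxmanager}/api
/2.0/vdn/scopes/{scopeId}/virtualwires:
  displayName: logicalSwitches
  uriParameters:
    scopeId:
      description: The transport zone (vdnScope) to create the switch in
  post:
    description: Create a new logical switch in the given transport zone
  get:
    description: List all logical switches in the given transport zone
```

Because each resource carries a displayName and typed parameters, tooling can generate documentation, Postman collections, and dynamic clients from the same file.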
NSX RAML Python Client
https://github.com/yfauser/nsxramlclient (community supported)
A dynamic client based on the NSX RAML work.
Provides Python-native access to NSX objects through native datatypes like dictionaries.
Supports CRUD operations for all resources described in the NSX RAML file, accessed through the displayName attribute of the RAML resource.

Python 2.7.8 (v2.7.8:ee879c0ffa11, Jun 29 2014, 21:07:35)
Type "copyright", "credits" or "license" for more information.
In [1]: from tests.config import *
In [2]: from nsxramlclient.client import NsxClient
In [3]: client_session = NsxClient(nsxraml_file, nsxmanager, nsx_username, nsx_password, debug=False)
In [4]: new_lswitch = client_session.create('logicalSwitches', uri_parameters={'scopeId': vdn_scope}, request_body_dict=lswitch_create_dict)
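The core idea behind such a client is that callers work with plain Python dictionaries, which the client serializes into the XML bodies the NSX-v API expects. The following is a self-contained sketch of that dict-to-XML mapping using only the standard library; it is an illustration of the concept, not nsxramlclient's actual code, and the switch attributes are example values:

```python
# Sketch: convert a nested Python dict into the XML request body shape
# used by the NSX-v API (illustration only, not the library's real code).
import xml.etree.ElementTree as ET

def dict_to_xml(tag, body):
    """Recursively convert a nested dict to an XML element tree."""
    element = ET.Element(tag)
    for key, value in body.items():
        if isinstance(value, dict):
            element.append(dict_to_xml(key, value))
        else:
            ET.SubElement(element, key).text = str(value)
    return element

lswitch_create_dict = {
    'name': 'ls-web-tier',
    'tenantId': 'devops-demo',
    'controlPlaneMode': 'UNICAST_MODE',
}
xml_body = ET.tostring(dict_to_xml('virtualWireCreateSpec', lswitch_create_dict))
print(xml_body.decode())
```

This is what makes the client feel "Python native": no hand-written XML, just dictionaries in and dictionaries out.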
Ansible NSX Modules
https://github.com/yfauser/nsxansible (community supported)
Ansible modules based on the NSX RAML and NSX RAML client work.
A set of fully idempotent Ansible modules for NSX.
Currently in prototype state; supports CRUD operations for logical switches and the installation of NSX.

$ ansible-playbook test_logicalswitch.yml

PLAY [localhost] **************************************************************

TASK: [logicalswitch Operation] ***********************************************
changed: [localhost]

PLAY RECAP ********************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0
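A playbook like test_logicalswitch.yml might look roughly as follows. This is a hypothetical sketch in the style of the nsxansible modules; the module and parameter names shown here are illustrative, so check the repository for the real ones:

```yaml
# Hypothetical playbook sketch -- module and parameter names are
# illustrative, not taken verbatim from the nsxansible repository.
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: logicalswitch Operation
      nsx_logical_switch:
        nsxmanager_spec: "{{ nsxmanager_spec }}"
        state: present          # idempotent: re-running reports no change
        transportzone: "TZ1"
        name: "ls-web-tier"
```

Idempotency is the point of wrapping the API in Ansible: running the playbook twice creates the switch once and reports "changed" only on the first run.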
Cloud Native Apps with Docker Containers
Docker benefits:
1. Faster deployment
2. Microservices
3. Portable dev, stage, prod & multi-cloud
Top CNA use cases:
1. DevOps teams building CI / CD
2. Platform as a Service
3. Containers as a Service
4. Dev / Test
NSX for Cloud Native Apps: solution overview
1. Container cluster management tools (Kubernetes specs, Docker Compose) are used to deploy and manage Cloud Native Apps.
2. NSX integrates with Docker and with Kubernetes cluster management via plugins (NSX Kubernetes plugin, NSX Docker plugin) and configures networking and security (connectivity, availability) for container hosts: VMs on KVM & vSphere, or bare-metal Linux servers.
3. Support for multiple containers / pods in a VM (vSphere and KVM).
4. NSX enables per-container network and security policy configuration.
5. NSX troubleshooting and operations tools enable per-container visibility, e.g. SPAN, IPFIX, Traceflow.
K8s NSX Plugin: current early work
Maps a container interface to a VM vNIC (VIF); DFW rules are applied to one VIF per pod on the hypervisor (ESXi & KVM).
Distributed Logical Routing (DLR) is used to route traffic between the pods on different minions; the default gateway of the pod is the IP interface of the Distributed Logical Router.
The minion's management IP stack is separated from the pod traffic and can be connected through NSX logical switches or VLAN port-groups.
We can now enforce fine-grained rules on the hypervisor DFW, even for inter-pod traffic on the same minion.
See more here: https://www.youtube.com/watch?v=841g3dukht4
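The VIF-per-pod idea can be sketched as a small lookup table: because every pod sits behind its own VIF, the hypervisor DFW can filter flows between two pods even when both run inside the same minion VM. This is a toy illustration with invented data structures, not the plugin's code:

```python
# Toy sketch: DFW rules keyed by VIF, one VIF per pod.
# Both pods below live on the same minion VM, yet their traffic
# is still filtered at the hypervisor because the VIFs differ.

VIFS = {"pod-a": "vif-1001", "pod-b": "vif-1002"}  # both on minion-1
ALLOWED = {("vif-1001", "vif-1002", 80)}           # pod-a -> pod-b on port 80

def dfw_permits(src_pod, dst_pod, port):
    """Look up the flow in the VIF-keyed rule table."""
    return (VIFS[src_pod], VIFS[dst_pod], port) in ALLOWED

print(dfw_permits("pod-a", "pod-b", 80))  # True
print(dfw_permits("pod-b", "pod-a", 22))  # False: filtered despite same minion
```

Contrast this with a Linux-bridge-only setup, where same-minion pod traffic never leaves the VM and is invisible to any external firewall.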
Questions?