OpenStack and Cumulus Linux Validated Design Guide
Deploying OpenStack with Network Switches Running Cumulus Linux


Contents

OpenStack with Cumulus Linux
  Objective
  Enabling Choice of Hardware in the Data Center
  Combined Solution Using OpenStack and Cumulus Linux
  Driving Towards Operational Efficiencies
  Intended Audience for Network Design and Build
OpenStack Network Architecture in a PoC or Small Test/Dev Environment
  Network Architecture and Design Considerations
OpenStack Network Architecture in a Cloud Data Center
  Network Architecture
  Scaling Out
Out-of-Band Management
Building an OpenStack Cloud with Cumulus Linux
  Minimum Hardware Requirements
  Network Assumptions and Numbering
  Build Steps
    Set Up Physical Network
    Basic Physical Network Configuration
    Verify Connectivity
    Set Up Physical Servers
    Configure Spine Switches
    Configure Each Pair of Leaf Switches
    Configure Host Devices
    Install and Configure OpenStack Services
      Add the Identity Service
      Add the Image Service
      Add the Compute Service
      Add the Networking Service
      Install and Configure the Compute Node
    Create Project Networks / Launch an Instance
      Create Virtual Networks
      Create the Public Provider Network
      Private Project Networks
    Creating VMs on OpenStack
      Launch an Instance on the Public Network
      Launch an Instance on the Private Network
      Launch an Instance from Horizon
Conclusion
  Summary
  References
Appendix A: Example /etc/network/interfaces Configurations
  leaf01
  leaf02
  leaf03
  leaf04
  spine01
  spine02
Appendix B: Network Setup Checklist
Appendix C: Neutron Under the Hood
  Neutron Bridges, Agents and Namespaces
  Neutron Routers (L3 Agents)
  Neutron DHCP Agent
  Compute Hosts

Version: February 3, 2016

About Cumulus Networks

Unleash the power of Open Networking with Cumulus Networks. Founded by veteran networking engineers from Cisco and VMware, Cumulus Networks makes the first Linux operating system for networking hardware and fills a critical gap in realizing the true promise of the software-defined data center. Just as Linux completely transformed the economics and innovation on the server side of the data center, Cumulus Linux is doing the same for the network. It is radically reducing the costs and complexities of operating modern data center networks for service providers and businesses of all sizes. Cumulus Networks has received venture funding from Andreessen Horowitz, Battery Ventures, Sequoia Capital, Peter Wagner and four of the original VMware founders. For more information visit cumulusnetworks.com.

© 2016 Cumulus Networks. CUMULUS, the Cumulus Logo, CUMULUS NETWORKS, and the Rocket Turtle Logo (the "Marks") are trademarks and service marks of Cumulus Networks, Inc. in the U.S. and other countries. You are not permitted to use the Marks without the prior written consent of Cumulus Networks. The registered trademark Linux is used pursuant to a sublicense from LMI, the exclusive licensee of Linus Torvalds, owner of the mark on a worldwide basis. All other marks are used under fair use or license from their respective owners. The OpenStack Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries, and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation or the OpenStack community.

OpenStack with Cumulus Linux

Objective

This Validated Design Guide presents a design and implementation approach for deploying OpenStack with network switches running Cumulus Linux. Detailed steps are included for installing and configuring both switches and servers.

Enabling Choice of Hardware in the Data Center

Cloud-oriented infrastructure designs revolutionized how server applications are delivered in the data center. They reduce CapEx by commoditizing server hardware platforms and reduce OpEx by automating and orchestrating infrastructure deployment and management. The same benefits of commodity hardware choice and automation are available to networking in the data center. With Cumulus Linux, network administrators now have a multi-platform network OS that provides freedom of choice with network switch hardware. Because Cumulus Linux is Linux, data center administrators have access to a rich ecosystem of existing Linux automation tools, and can now converge the deployment, administration, and monitoring of compute servers and network switches.

OpenStack is a cloud platform for enterprise and commercial IT environments. Widely deployed in private and public cloud applications, OpenStack offers a rich variety of components that can be combined to build a tailored cloud solution. OpenStack enables data center architects to use commodity server hardware to build infrastructure environments that deliver the agility and easy scaling promised by the cloud. The cloud allows infrastructure consumers to request and utilize capacity in seconds rather than hours or days, providing radical CapEx and OpEx savings while delivering rapid, self-service deployment of capacity for IT consumers.

Cumulus Networks believes the same design principles should hold true for networking. A network device can be configured at first boot, so an administrator can quickly replace failed equipment instead of spending valuable time and resources troubleshooting hardware. This enables new support models that drive down operational costs. Imagine managing your own set of hot spare switches, guaranteeing that a replacement will always be available, instead of paying for ongoing support for every device. This is the same model currently used by most organizations for managing large fleets of servers. Additionally, Cumulus Linux can help you achieve the same CapEx and OpEx efficiencies for your networks by enabling an open market approach for switching platforms, and by offering a radically simple automated lifecycle management framework built on the industry's best open source tools. By using bare metal servers and network switches, you can achieve cost savings that would have been impossible just a few years ago.

Combined Solution Using OpenStack and Cumulus Linux

Both Cumulus Linux and Linux/OpenStack are software solutions that run on top of bare metal hardware. Because both solutions are hardware-agnostic, customers can select their chosen platform from a wide array of suppliers who often employ highly competitive pricing models. The software defines the performance and behavior of the environment and allows the administrator to exercise version control and programmatic approaches that are already in use by DevOps teams. Refer to the Cumulus Linux Hardware Compatibility List (HCL) at cumulusnetworks.com/hcl for a list of hardware vendors and their supported model numbers, descriptions, switch silicon, and CPU type.

Driving Towards Operational Efficiencies

Figure 1. OpenStack and Cumulus Linux

OpenStack enables the building of cloud environments using commodity off-the-shelf servers combined with standard Linux virtualization, monitoring, and management technologies. Cloud users can request resources (compute VMs, storage, network) using APIs and self-service Web interfaces, and those resources are allocated and delivered without human intervention. The hardware in the cloud is thus homogeneous, and users neither know nor care where their resources are physically allocated. Operators monitor aggregate resource utilization, so management becomes a capacity planning exercise rather than a matter of worrying about individual workloads and users.

OpenStack comprises a number of components that work together to deliver a cloud. The major components are:

1. Nova, which manages compute resources for VMs.
2. Glance, which manages OS disk images.
3. Cinder, which manages VM block storage.
4. Swift, which manages unstructured data objects.
5. Keystone, which provides authentication and authorization services.
6. Horizon, a Web-based UI.
7. Neutron, which provides virtual networking and services.

Cumulus Linux complements OpenStack by delivering the same automated, self-service operational model to the network. And since the underlying operating system is the same on the OpenStack nodes and the switches, the same automation, monitoring, and management tools can be used, greatly simplifying provisioning and operations. Cumulus Linux offers powerful automation capabilities by way of technologies such as ONIE, zero touch provisioning, Ansible, Chef, Puppet, and many others. The combination of bare metal hardware with a consistent Linux platform enables you to leverage automation to deploy servers and networks together. Thus, you can use a unified set of tools to automate the installation and configuration of both switches and servers. You can use a common automation framework that uses a simple config file to install and configure an entire pod of switches and call OpenStack to install and configure the servers, all without any human intervention.

Intended Audience for Network Design and Build

The rest of this document is aimed at the data center architect or administrator interested in evaluating a Proof of Concept (PoC) or deploying a production cloud using Cumulus Linux and OpenStack. The implementer is expected to have basic knowledge of Linux commands, logging in, navigating the file system, and editing files. A basic understanding of Layer 2 networking is assumed, such as interfaces, bonds (also known as LAGs), and bridges.

If you are using this guide to help you set up your OpenStack and Cumulus Linux environment, we assume you have Cumulus Linux installed and licensed on switches from the Cumulus Linux HCL. Additional information on Cumulus Linux software, licensing, and supported hardware may be found on cumulusnetworks.com or by contacting sales@cumulusnetworks.com.

This guide references the Liberty release of OpenStack.

OpenStack Network Architecture in a PoC or Small Test/Dev Environment

Network Architecture and Design Considerations

Figure 2 shows the network design of a typical Proof of Concept (PoC) or small test/dev environment running OpenStack.

Figure 2. PoC or Test/Dev OpenStack Environment

Figure 3 below details the connectivity for the hypervisor.

Figure 3. Hypervisor Host Detail

The network architecture for an OpenStack PoC follows a simplified Top of Rack (ToR), access-tier-only design, all within Layer 2, while the single services rack provides a gateway to the rest of the network and also contains all the hypervisor hosts. The services rack contains the OpenStack controller, and can optionally contain any load balancers, firewalls, and other network services.

For optimal network performance, 10G switches are used for the ToR/access switches. The network design employs multi-chassis link aggregation (MLAG) for host path redundancy and link aggregation for network traffic optimization. The switches are paired into a single logical switch for MLAG, with a peer LACP bond link between pair members. No breakout cables are used in this design.

A single OpenStack controller instance is assumed in this design. Connectivity to external networks is assumed to be via a pair of links to routers, with a single upstream default route. These links are connected to the leaf switches in the services rack, since it contains the controller. This guide assumes the routers have been configured with VRR or some other first-hop redundancy protocol. If there is only one upstream router link, connect it to either of the leaf switches in the services rack.

The Neutron networking agents handle the creation of the bridge interface and other virtual interfaces on the compute node. The actual naming of the bridge and vnet interfaces may differ in a live deployment.

OpenStack Network Architecture in a Cloud Data Center

Network Architecture

The network design of a typical cloud data center running OpenStack is shown in Figure 4.

Figure 4. Enterprise Data Center Network OpenStack Environment

The network architecture for an OpenStack data center follows the traditional hierarchy of core, aggregation switch (also known as spine), and access switch (also known as leaf) tiers, all within Layer 2, while a single services rack provides a gateway to the rest of the network. The services rack contains the OpenStack controller and compute nodes, and can optionally contain load balancers, firewalls, and other network services.

For optimal network performance, 40G switches are used for the aggregation switches, and 10G switches are used for the access switches. The network design employs MLAG for host and network path redundancy and link aggregation for network traffic optimization. Switches are paired into logical switches for MLAG, with a peer LACP bond link between pair members. No breakout cables are used in this design.

A single OpenStack controller instance is assumed in this design. Connectivity to external networks is assumed to be via a pair of links to routers, with a single upstream default route. These links are connected to the leaf switches in the services rack, which is the one that contains the controller. This guide assumes the routers have been configured with VRR or some other first-hop router redundancy protocol. If there is only one upstream router link, connect it to either of the leaf switches in the services rack.

Scaling Out

Scaling out the architecture involves adding more hosts to the access switch pairs, and then adding more access switches in pairs as needed, as shown in Figure 5.

Figure 5. Adding Additional Switches

Once the limit for the aggregation switch pair has been reached, an additional network pod of aggregation/access switch tiers may be added, as shown in Figure 6. Each new pod has its own services rack and OpenStack controller.

Figure 6. Adding Network Pods/OpenStack Clusters

Out-of-Band Management

An important supplement to the high capacity production data network is the management network used to administer infrastructure elements, such as network switches, physical servers, and storage systems. The architecture of these networks varies considerably based on their intended use, the elements themselves, and access isolation requirements. This solution guide assumes that a single Layer 2 domain is used to administer the network switches and the management interfaces on the controller and hypervisor hosts. These operations include installing the elements, configuring them, and monitoring the running system. This network is expected to host both DHCP and HTTP servers, such as isc-dhcp and apache2, as well as provide DNS forward and reverse resolution. In general, these networks provide some means to connect to the corporate network, typically a connection through a router or jump host. Figure 7 below shows the logical and, where possible, physical connections of each element, as well as the services required to realize this deployment.

Figure 7. Out-of-Band Management

Building an OpenStack Cloud with Cumulus Linux

Minimum Hardware Requirements

For PoC, test/dev:
  3x x86 servers, each with 2x 10G NICs + 1x 1G NIC
  2x 48-port 10G switches, with 40G uplinks
Note that this design may be scaled up to 47 hypervisor nodes.

For a cloud data center:
  5x x86 servers, each with 2x 10G NICs + 1x 1G NIC
  4x 48-port 10G leaf switches, with 40G uplinks
  2x 32-port 40G spine switches
Note that this design may be scaled up to 1535 hypervisor nodes. If required, additional OpenStack clusters may be configured and connected to the core/external routers. OpenStack scalability limits will be hit before full scale is achieved.

Network Assumptions and Numbering

The network design for the full cloud deployment (6 switches, 5 servers) is shown in Figure 8 below. The PoC subset is just the first pair of leafs and no spine switches. The implementation does not assume use of IPMI, as it is intended to demonstrate as generic a network as possible.

Figure 8. Cloud Data Center Network Topology

Note that the peer bonds for MLAG support are always the last two interfaces on each switch. For spines, they are swp31 and swp32. For leafs, they are swp51 and swp52. The next-to-last two interfaces on each leaf are the uplinks to spine01 and spine02.

Also note that the same subnet is used for every MLAG peer pair. This is safe because the addresses are only used on the link between the pairs. Routing protocols will not distribute these routes because they are part of the link-local /16 subnet.

The details for the switches, hosts, and logical interfaces are as follows:

leaf01

connected to | Logical Interface | Description | Physical Interfaces
leaf02 | peerlink | peer bond utilized for MLAG traffic | swp51, swp52
leaf02 | peerlink.4094 | subinterface used for clagd communication | N/A
spine01, spine02 | uplink | for MLAG between spine01 and spine02 | swp49, swp50
external router | N/A | for accessing the outside network | swp48
multiple hosts | access ports | connect to compute hosts | swp1 through swp44
controller | compute01 | bond to controller for host-to-switch MLAG | swp1
compute01 | compute02 | bond to compute01 for host-to-switch MLAG | swp2
out-of-band management | N/A | out-of-band management interface | eth0

leaf02

connected to | Logical Interface | Description | Physical Interfaces
leaf01 | peerlink | peer bond utilized for MLAG traffic | swp51, swp52
leaf01 | peerlink.4094 | subinterface used for clagd communication | N/A
spine01, spine02 | uplink | for MLAG between spine01 and spine02 | swp49, swp50
external router | N/A | for accessing the outside network | swp48
multiple hosts | access ports | connect to hosts | swp1 through swp44
controller | compute01 | bond to controller for host-to-switch MLAG | swp1
compute01 | compute02 | bond to compute01 for host-to-switch MLAG | swp2
out-of-band management | N/A | out-of-band management interface | eth0

leaf0n

Repeat the above configurations for each additional pair of leafs, minus the external router interfaces.

spine01

connected to | Logical Interface | Description | Physical Interfaces
spine02 | peerlink | peer bond utilized for MLAG traffic | swp31, swp32
spine02 | peerlink.4094 | subinterface used for clagd communication | N/A
multiple leafs | leaf ports | connect to leaf switch pairs | swp1 through swp30
leaf01, leaf02 | downlink1 | bond to a leaf switch pair | swp1, swp2
leaf03, leaf04 | downlink2 | bond to another leaf switch pair | swp3, swp4
out-of-band management | N/A | out-of-band management interface | eth0

spine02

connected to | Logical Interface | Description | Physical Interfaces
spine01 | peerlink | peer bond utilized for MLAG traffic | swp31, swp32
spine01 | peerlink.4094 | subinterface used for clagd communication | N/A
multiple leafs | leaf ports | connect to leaf switches | swp1 through swp30
leaf01, leaf02 | downlink1 | bond to a peerlink group | swp1, swp2
leaf03, leaf04 | downlink2 | bond to another peerlink group | swp3, swp4
out-of-band management | N/A | out-of-band management interface | eth0

The manual setup process detailed below has some fixed parameters for things like VLAN ranges and IP addresses. These can be changed if you want to use different parameters, but be careful to modify the numbers in the configuration to match.

The parameters you are most likely to need to change are the external subnet and default route. Get this information from whoever configured your access to the outside world (either the Internet or the rest of the data center network).

Parameter | Default Setting
OpenStack tenant VLANs |
OpenStack tenant subnets | TENANT#.0/24
VXLAN tunnel/overlay VLAN | 101
VXLAN tunnel/overlay subnet | /24
VXLAN tunnel/overlay default route |
VXLAN tunnel/overlay IP of controller |
VXLAN tunnel/overlay IP of first compute node |
OpenStack API VLAN | 102
OpenStack API subnet | /20
OpenStack API IP of controller |
OpenStack API IP of first compute node |
Out-of-band management network | /24
clagd peer VLAN | 4094
clagd peer subnet | /30
clagd system ID (base) | 44:38:39:ff:00:01

Build Steps

Here are the detailed steps for manually installing and configuring the cloud. If you are building the simpler PoC/test/dev configuration, skip step 5 (configure spine switches), as well as any steps that reference spine01, spine02, leaf03, and leaf04. The steps are:

Physical Network and Servers
1. Set up physical network. Rack and cable all network switches. Install Cumulus Linux. Install license.
2. Basic physical network configuration. Name switches. Bring up out-of-band management ports. Bring up front panel ports.
3. Verify connectivity. Use LLDP to ensure that the topology is as expected, and that switches can communicate.
4. Set up physical servers. Install Ubuntu Server on each of the servers.

Network Topology
5. Configure spine switches. Configure the MLAG peer bond between the pair.
6. Configure each pair of leaf switches. Configure the MLAG peer bond between each pair.
7. Configure host devices. Configure the hosts' networking and connectivity.

OpenStack
8. Install and configure OpenStack services on the controller and each compute node. Install all software components and configure them.
9. Create tenant networks, using the Neutron CLI.
10. Start VMs using the OpenStack Horizon Web UI. Attach a laptop to the external network, point a Web browser at the controller's Horizon URL, and log in (user: admin, password: adminpw). Start a VM in your new OpenStack cloud. Note that you can also plug the laptop into the management network, if that is easier.

1. Set Up Physical Network

Rack all servers and switches, and wire them together according to the wiring plan. Install Cumulus Linux, install your license, and gain serial console access on each switch, as described in the Quick Start Guide of the Cumulus Linux documentation.

2. Basic Physical Network Configuration

Cumulus Linux contains a number of text editors, including nano, vi, and zile; this guide uses nano in its examples. First, edit the hostname file to change the hostname:

cumulus@cumulus$ nano /etc/hostname

Change cumulus to spine01, and save the file. Make the same change to /etc/hosts:

cumulus@cumulus$ nano /etc/hosts

Change the first occurrence of cumulus on the line containing the switch's host entry, then save the file. For example, for spine01, you would edit the line to look like:

spine01 cumulus

Reboot the switch so the new hostname takes effect:

cumulus@cumulus$ sudo reboot

Configure Interfaces on Each Switch

By default, a switch with Cumulus Linux freshly installed has no switch port interfaces defined. Define the basic characteristics of swp1 through swpN by creating stanza entries for each switch port (swp) in the /etc/network/interfaces file. Each stanza should include the following statements:

auto <switch port name>
allow-<alias> <switch port name>
iface <switch port name>

The auto keyword specifies that the interface is brought up automatically after issuing a reboot or service networking restart command. The allow- keyword is a way to group interfaces so they can be brought up or down as a group. For example, allow-hosts compute01 adds the device compute01 to the alias group hosts. Using ifup --allow=hosts brings up all of the interfaces with allow-hosts in their configuration.

On each switch, define the physical ports to be used according to the network topology described in Figure 8 and the corresponding table that follows the figure.

For the leaf switches, the basic interface configuration is the range of interfaces from swp1 to swp52. On the spine switches, the range is swp1 to swp32. For example, the configuration on leaf01 would look like:

nano /etc/network/interfaces
..
# physical interface configuration
auto swp1
allow-compute swp1
iface swp1

auto swp2
allow-compute swp2
iface swp2
..
auto swp52
iface swp52

Additional attributes such as speed and duplex can be set. Refer to the Settings section of the Configuring Switch Port Attributes chapter of the Cumulus Linux documentation for more information. Configure all leaf switches identically.

Instead of manually configuring each interface definition, you can programmatically define them using shorthand syntax that leverages Python Mako templates, as shown in the sketch below. For more information about configuring interfaces with Mako, read the Configuring /etc/network/interfaces with Mako knowledge base article.

Once all configurations have been defined in the /etc/network/interfaces file, run the ifquery command to ensure that all syntax is proper and the interfaces are created as expected:

cumulus@leaf01$ ifquery -a
auto lo
iface lo inet loopback

auto eth0
iface eth0
    address /24
    gateway

auto swp1
iface swp1
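As a rough illustration of the Mako approach mentioned above, the repetitive host-facing stanzas on a leaf could be generated with a loop like the following. This is a minimal sketch; the loop bounds and the allow-compute alias are taken from the example above, but confirm the exact template syntax against the knowledge base article before relying on it:

%for port in range(1, 45):
auto swp${port}
allow-compute swp${port}
iface swp${port}
%endfor

When the file is parsed, the loop expands into individual stanzas for swp1 through swp44, equivalent to writing each one by hand.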

Once all configurations have been defined in /etc/network/interfaces, apply the configurations to ensure they are loaded into the kernel. There are several methods for applying configuration changes, depending on when and what changes you want to apply:

Command | Action
sudo ifreload -a | Parse interfaces labelled with auto that have been added to or modified in the configuration file, and apply changes accordingly. Note: This command is disruptive to traffic only on interfaces that have been modified.
sudo service networking restart | Restart all interfaces labelled with auto as defined in the configuration file, regardless of what has or has not been recently modified. Note: This command is disruptive to all traffic on the switch, including the eth0 management network.
sudo ifup <swpX> | Parse an individual interface labelled with auto as defined in the configuration file and apply changes accordingly. Note: This command is disruptive to traffic only on interface swpX.

For example, on leaf01, to apply the new configuration to all changed interfaces labeled with auto:

sudo ifreload -a

or individually:

sudo ifup swp1
sudo ifup swp2
...
sudo ifup swp52

The above configuration in the /etc/network/interfaces file is persistent, which means the configuration applies even after you reboot the switch.

Another option to test network connectivity is to run a shell loop to bring up each front-panel interface temporarily (until the next reboot), so that LLDP traffic can flow. This lets you verify the wiring is done correctly in the next step:

cumulus@spine01$ for i in `grep '^swp' /var/lib/cumulus/porttab | cut -f1`; do sudo ip link set dev $i up; done

Repeat the above steps on each of spine02, leaf01, leaf02, leaf03, and leaf04, changing the hostname appropriately in each command or file.

3. Verify Connectivity

Back on spine01, use LLDP to verify that the cabling is correct, according to the cabling diagram:

cumulus@spine01$ sudo lldpctl | less
... snip ...
Interface: swp31, via: LLDP, RID: 4, Time: 0 day, 00:12:10
  Chassis:
    ChassisID: mac 44:38:39:00:49:0a
    SysName: spine02
    SysDescr: Cumulus Linux
    Capability: Bridge, off
    Capability: Router, on
  Port:
    PortID: ifname swp31
    PortDescr: swp
Interface: swp32, via: LLDP, RID: 4, Time: 0 day, 00:12:10
  Chassis:
    ChassisID: mac 44:38:39:00:49:0a
    SysName: spine02
    SysDescr: Cumulus Linux
    Capability: Bridge, off
    Capability: Router, on
  Port:
    PortID: ifname swp32
    PortDescr: swp

The output above shows only the last two interfaces, which you can see are correctly connected to the other spine switch, based on the SysName field being spine02 (shown in green above). Verify that the remote-side interfaces are correct per the wiring diagram, using the PortID field. Note: Type q to quit less when you are done verifying.

Repeat the lldpctl command on spine02 to verify the rest of the connectivity.

4. Set Up Physical Servers

Install the Ubuntu Server LTS release on each server, as described in Ubuntu's Installing from CD documentation. During the install, configure the two drives into a RAID1 mirror, and then configure LVM on the mirror. Create a 1G swap partition and a 50G root partition. Leave the rest of the mirror's space free for the creation of VMs. Make sure that the openssh server is installed, and configure the management network such that you have out-of-band SSH access to the servers. As part of the installation process you will create a user with sudo access. Remember the username and password you created for later.

Name the controller node (the one attached to swp1 on leaf01/leaf02) controller, and name the compute nodes compute01, compute02, and so on. Populate the hostname alias for the controller and each of the compute nodes in the /etc/hosts file. Using the name controller matches the sample configurations in the official OpenStack install guide. Edit the /etc/hosts file on the controller and each compute node, adding the following entries at the end:

controller
compute01
compute02

5. Configure Spine Switches

Enable MLAG Peering between Switches

An instance of the clagd daemon runs on each MLAG switch member to keep track of various networking information, including MAC addresses, which are needed to maintain the peer relationship. clagd communicates with its peer on the other switch across a Layer 3 interface between the two switches. This Layer 3 network should not be advertised by routing protocols, nor should the VLAN be trunked anywhere else in the network. This interface is designed to be a keep-alive reachability test and to synchronize the switch state across the directly attached peer bond.

Create the VLAN subinterface for clagd communication and assign an IP address for this subinterface. A unique 802.1Q tag is recommended to avoid mixing data traffic with the clagd control traffic.

To enable MLAG peering between switches, configure clagd on each switch by creating a peerlink subinterface in /etc/network/interfaces with a unique 802.1Q tag. Set values for the following parameters under the peerlink subinterface:

address. The local IP address/netmask of this peer switch. Cumulus Networks recommends you use a link-local address; for example X/30.
clagd-enable. Set to yes (default).
clagd-peer-ip. Set to the IP address assigned to the peer interface on the peer switch.
clagd-backup-ip. Set to an IP address on the peer switch reachable independently of the peerlink; for example, the management interface or a routed interface that does not traverse the peerlink.
clagd-sys-mac. Set to a unique MAC address you assign to both peer switches. Cumulus Networks recommends you use addresses within the Cumulus Linux reserved range of 44:38:39:FF:00:00 through 44:38:39:FF:FF:FF.

On both spine switches, edit /etc/network/interfaces and add the following sections at the bottom:

#Bond for the peerlink. MLAG control traffic and data when links are down.
auto peerlink
iface peerlink
    bond-slaves swp31 swp32
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4

On spine01, add a VLAN for the MLAG peering communications:

#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address /30
    clagd-enable yes
    clagd-peer-ip
    clagd-backup-ip /24
    clagd-sys-mac 44:38:39:ff:00:00

On spine02, add a VLAN for the MLAG peering communications. Note the change of the last octet in the address and clagd-peer-ip lines:

#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address /30
    clagd-enable yes
    clagd-peer-ip
    clagd-backup-ip /24
    clagd-sys-mac 44:38:39:ff:00:00

On both spine switches, bring up the peering interfaces. The --with-depends option tells ifup to bring up the peer bond first, since peerlink.4094 depends on it:

cumulus@spine0n:~$ sudo ifup --with-depends peerlink.4094

On spine01, verify that you can ping spine02:

cumulus@spine01$ ping -c 3
PING ( ) 56(84) bytes of data.
64 bytes from : icmp_req=1 ttl=64 time=0.716 ms
64 bytes from : icmp_req=2 ttl=64 time=0.681 ms
64 bytes from : icmp_req=3 ttl=64 time=0.588 ms
--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.588/0.661/0.716/0.061 ms

Now on both spine switches, verify that the peers are connected:

cumulus@spine01:~$ clagctl
The peer is alive
Peer Priority, ID, and Role: 44:38:39:00:49:87 secondary
Our Priority, ID, and Role: 44:38:39:00:49:06 primary
Peer Interface and IP: peerlink.4094
Backup IP: (active)
System MAC: 44:38:39:ff:00:00

The MAC addresses in the output vary depending on the MAC addresses issued to your hardware.

Now that the spines are peered, create the bonds for the connections to the leaf switches. On both spine switches, edit /etc/network/interfaces and add the following at the end:

#Bonds down to the pairs of leafs.
auto downlink1
allow-leafs downlink1
iface downlink1
    bond-slaves swp1 swp2
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    clag-id 1

auto downlink2
allow-leafs downlink2
iface downlink2
    bond-slaves swp3 swp4
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    clag-id 2

You can add more stanzas for more pairs of leaf switches as needed, modifying the sections in green above. For example, to add a third stanza, you'd use downlink3; the corresponding swp interfaces would be swp5 and swp6, with clag-id 3 (see the sketch at the end of this section).

Bridge together the MLAG peer bond and all the leaf bonds. On both switches, edit /etc/network/interfaces and add the following at the end:

#Bridge that connects our peer and downlinks to the leafs.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports peerlink downlink1 downlink2
    bridge-stp on
    bridge-vids
    mstpctl-treeprio

If you added more downlink# interfaces in the previous step, add them to the end of the bridge-ports line.

If you're familiar with the traditional Linux bridge mode, you may be surprised that we called the bridge "bridge" instead of br0. The reason is that we're using the new VLAN-aware Linux bridge mode in this example, which doesn't require multiple bridge interfaces for common configurations. It trades off some of the flexibility of the traditional mode in return for supporting very large numbers of VLANs. See the Cumulus Linux documentation for more information on the two bridging modes supported in Cumulus Linux.
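Following the pattern described above, a third downlink stanza (for a hypothetical third leaf pair, leaf05 and leaf06) would look like this; remember to also append downlink3 to the bridge-ports line of the bridge stanza:

#Bond down to a third pair of leafs (leaf05/leaf06), if present.
auto downlink3
allow-leafs downlink3
iface downlink3
    bond-slaves swp5 swp6
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    clag-id 3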

Finally, on both spine01 and spine02, bring up all the interfaces, bonds, and bridges. The --with-depends option tells ifup to bring up any down interfaces that are needed by the bridge:

sudo ifup --with-depends bridge

6. Configure Each Pair of Leaf Switches

On each leaf switch, edit /etc/network/interfaces and add the following sections at the bottom:

#Bond for the peer link. MLAG control traffic and data when links are down.
auto peerlink
iface peerlink
    bond-slaves swp51 swp52
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4

On odd-numbered leaf switches, add a VLAN for the MLAG peering communications. Note that the last octet of the clagd-sys-mac must be the same for each switch in a pair, but incremented for subsequent pairs. For example, leaf03 and leaf04 should have 03 as the last octet:

#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address /30
    clagd-enable yes
    clagd-peer-ip
    clagd-backup-ip /24
    clagd-sys-mac 44:38:39:ff:00:02

On even-numbered leaf switches, add a VLAN for the MLAG peering communications. Note the change of the last octet in the address and clagd-peer-ip lines. Also note that for subsequent pairs of switches, the last octet of clagd-sys-mac must match, as described for the odd-numbered switches:

#VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address /30
    clagd-enable yes
    clagd-peer-ip
    clagd-backup-ip /24
    clagd-sys-mac 44:38:39:ff:00:02

On each leaf switch, bring up the peering interfaces:

cumulus@leaf0n:~$ sudo ifup --with-depends peerlink.4094

On each odd-numbered leaf switch, verify that you can ping its corresponding even-numbered leaf switch:

ping -c 3
PING ( ) 56(84) bytes of data.
64 bytes from : icmp_req=1 ttl=64 time=0.716 ms
64 bytes from : icmp_req=2 ttl=64 time=0.681 ms
64 bytes from : icmp_req=3 ttl=64 time=0.588 ms
--- ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.588/0.661/0.716/0.061 ms

Now, on each leaf switch, verify that the peers are connected:

cumulus@leaf0n:~$ clagctl
The peer is alive
Peer Priority, ID, and Role: c:64:1a:00:39:5a primary
Our Priority, ID, and Role: c:64:1a:00:39:9b secondary
Peer Interface and IP: peerlink.4094
Backup IP: (active)
System MAC: 44:38:39:ff:00:02

Now that the leafs are peered, create the uplink bonds connecting the leafs to the spines. On each leaf switch, edit /etc/network/interfaces and add the following at the end:

#Bond up to the spines.
auto uplink
iface uplink
    bond-slaves swp49 swp50
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    clag-id 1000

On each leaf switch, bring up the bond up to the spines:

cumulus@leaf0n:~$ sudo ifup --with-depends uplink

On each leaf switch, verify that the link to the spine is up:

cumulus@leaf0n:~$ ip link show dev uplink
2: uplink: <BROADCAST,MULTICAST,UP,LOWER_UP> qdisc pfifo_fast state UP qlen 1000
    link/ether 44:38:39:00:49:06 brd ff:ff:ff:ff:ff:ff

The UP,LOWER_UP flags (shown in green above) mean that the bond itself is up (UP) and its slave interfaces (swp49 and swp50) are up (LOWER_UP).

On leaf01 and leaf02, and only leaf01 and leaf02, configure the interfaces going to the core/external routers. These are associated with the external VLAN (101), but are configured as access ports and are therefore untagged. Edit /etc/network/interfaces and add the following at the end:

auto swp48
iface swp48
    bridge-access 101

Create the bonds for the connections to the servers. On each leaf switch, edit /etc/network/interfaces and add the following at the end:

#Bonds down to the host.
#Only one swp, because the other swp is on the peer switch.
auto compute01
allow-hosts compute01
iface compute01
    bond-slaves swp1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 1

Repeat the above stanza for each front panel port that has a server attached. You'll need to adjust compute01, swp1, and the value for clag-id everywhere they appear (in green). For example, for swp2, change each compute01 to compute02 and swp1 to swp2, and change clag-id from 1 to 2. A sketch of the resulting stanza appears at the end of this step.

Bridge together the MLAG peer bond, the uplink bond, and all the host bonds. On each leaf switch, edit /etc/network/interfaces and add the following at the end:

#Bridge that connects our peer, uplink to the spines, and the hosts.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports uplink swp48 peerlink compute01 compute02 compute03
    bridge-stp on
    bridge-vids
    mstpctl-treeprio

If you added more host interfaces in the previous step, add them to the end of the bridge-ports line. Note that swp48 (in green above) should only be present on leaf01 and leaf02, not on subsequent leafs.

Finally, on each leaf switch, bring up all the interfaces, bonds, and bridges:

cumulus@leaf0n:~$ sudo ifup --with-depends bridge
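For reference, applying the substitutions called out above, the stanza for the second host port (swp2) would look like the following; it is also listed as compute02 in the bridge-ports line of the bridge stanza:

#Bond down to the host on swp2.
auto compute02
allow-hosts compute02
iface compute02
    bond-slaves swp2
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 2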

7. Configure Host Devices

The server connected to swp1 on leaf01 and leaf02 is the OpenStack controller. It manages all the other servers, which run VMs. ssh into it as the user you configured when installing the OS.

Configure the Uplinks

The server has two 10G interfaces; in this example they are called p1p1 and p1p2. They may be named differently on other server hardware platforms. The ifenslave package must be installed for bonding support, and the vlan package must be installed for VLAN support. To install them, run:

sudo apt-get install ifenslave vlan

For the bond to come up, the bonding driver needs to be loaded. Similarly, for VLANs, the 802.1q driver must be loaded. So that they will be loaded automatically at boot time, edit /etc/modules and add the following to the end:

bonding
8021q

Now load the modules:

sudo modprobe bonding
sudo modprobe 8021q

Edit /etc/network/interfaces to add the following at the end:

#The bond, one subinterface goes to each leaf.
auto bond0
iface bond0 inet manual
    up ip link set dev $IFACE up
    down ip link set dev $IFACE down
    bond-slaves none

#First 10G link.
auto p1p1
iface p1p1 inet manual
    bond-master bond0

#Second 10G link.
auto p1p2
iface p1p2 inet manual
    bond-master bond0

#OpenStack Networking VXLAN (tunnel/overlay) VLAN.
auto bond0.101
iface bond0.101 inet static
    address
    netmask
    gateway

#OpenStack API VLAN.
auto bond0.102
iface bond0.102 inet static
    address
    netmask

Note that Ubuntu uses ifupdown, while Cumulus Linux uses ifupdown2. The configuration format is similar, but many constructs that work on the switch will not work in Ubuntu.

Now bring up the interfaces:

sudo ifup -a

Verify that the VLAN interface is UP and LOWER_UP:

sudo ip link show bond0.102
: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 90:e2:ba:7c:28:28 brd ff:ff:ff:ff:ff:ff

The remaining servers are all compute nodes. They run VMs, as directed by the controller. Connect to each node, using ssh as the user you configured when installing the OS. In this example, that user is called cumulus.

Configure the uplinks. The server has two 10G interfaces; in this example they are called p1p1 and p1p2. They may be named differently on other server hardware platforms. The ifenslave package must be installed for bonding support, and the vlan package must be installed for VLAN support:

sudo apt-get install ifenslave vlan

For the bond to come up, the bonding driver needs to be loaded. Similarly, for VLANs, the 802.1q driver must be loaded. So that they will be loaded automatically at boot time, edit /etc/modules and add the following to the end:

bonding
8021q

Now load the modules:

sudo modprobe bonding
sudo modprobe 8021q

Edit /etc/network/interfaces and add the following at the end:

#The bond, one interface goes to each leaf.
auto bond0
iface bond0 inet manual
    up ip link set dev $IFACE up
    down ip link set dev $IFACE down
    bond-slaves none

#First 10G link.
auto p1p1
iface p1p1 inet manual
    bond-master bond0

#Second 10G link.
auto p1p2
iface p1p2 inet manual
    bond-master bond0

#OpenStack Networking VXLAN (tunnel/overlay) VLAN.
auto bond0.101
iface bond0.101 inet static
    address
    netmask
    gateway

#OpenStack API VLAN.
auto bond0.102
iface bond0.102 inet static
    address
    netmask

You'll need to increment the API VLAN's IP address (shown in green above, on bond0.102) for each compute node. You'll also need to increment the VXLAN VLAN's IP address (shown in green above, on bond0.101). The examples given above are for compute01; for compute02, use the next address on each VLAN.

Note: Ubuntu uses ifupdown, while Cumulus Linux uses ifupdown2. The configuration format is similar, but many advanced configurations that work on the switch will not work in Ubuntu.

Now bring up the interfaces:

cumulus@compute0n:~$ sudo ifup -a

Verify that the VLAN interface is UP and LOWER_UP:

cumulus@compute0n:~$ sudo ip link show bond0.102
: bond0.102@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
    link/ether 90:e2:ba:7c:28:28 brd ff:ff:ff:ff:ff:ff
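If the bond or its VLAN subinterfaces do not come up as expected, the kernel's bonding status file is a quick way to confirm that both 10G links actually joined bond0. This is standard Linux bonding behavior rather than anything specific to this guide:

cumulus@compute0n:~$ cat /proc/net/bonding/bond0

The output lists the bonding mode, the MII status of the bond, and a section for each slave interface (p1p1 and p1p2 in this example); a slave reported as down usually points to a cabling or switch-side configuration problem.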

Add a hostname alias for the controller. Edit /etc/hosts and add the following at the end:

controller

Verify that this node can talk to the controller over the API VLAN:

cumulus@compute0n:~$ ping -c 3 controller
PING controller ( ) 56(84) bytes of data.
64 bytes from controller ( ): icmp_seq=1 ttl=64 time=0.229 ms
64 bytes from controller ( ): icmp_seq=2 ttl=64 time=0.243 ms
64 bytes from controller ( ): icmp_seq=3 ttl=64 time=0.220 ms
--- controller ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.220/0.230/0.243/0.019 ms

8. Install and Configure OpenStack Services

Before you follow the OpenStack install guide sections referenced below, read the notes in this document, as they contain important additional information you'll need. In some cases this will save a lot of trouble by avoiding errors in the official documentation.

Use the official OpenStack Installation Guide for Ubuntu (Liberty Release). In the Liberty install guide, follow the instructions as written to install and configure the environment and the Identity, Image, and Compute services. Note that you'll have to use sudo when installing the packages. The following notes provide additional information related to the corresponding sections.

Add the Identity Service

Create OpenStack client environment scripts. These simplify running commands as various OpenStack users; just source the rc file any time you want to change users. To help identify which user environment is sourced, it is also useful to set the prompt in each script to indicate the user. Append this line after the other export commands in the rc files:

export PS1='\u[OS_${OS_USERNAME}]@\h:\w\$ '

Add the Image Service

Verify operation. The guide assumes your server has direct access to the Internet; however, if you need an HTTP proxy to access the Internet from your environment, you can specify the proxy prior to wget:

cumulus@controller$ http_proxy="<proxy URL>" wget <image URL>

Add the Compute Service

Install and configure the controller node. An error occurs while installing the compute service: the default configuration of the Nova package has a bug wherein the default nova.conf uses the key logdir, while the key should be log_dir. You can fix this easily using the following command:

sudo sed -i "s/\(log\)\(dir\)/\1_\2/g" /etc/nova/nova.conf

Alternately, make the following change in /etc/nova/nova.conf:

[DEFAULT]
...
#Ubuntu has a packaging issue, make this fix: logdir -> log_dir
log_dir=/var/log/nova
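As a concrete illustration, an admin client environment script along the lines of the Liberty guide might look like the following, with the prompt line from above appended at the end. ADMIN_PASS is a placeholder, and the exact set of OS_* variables should be taken from the official install guide:

export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export PS1='\u[OS_${OS_USERNAME}]@\h:\w\$ '

Source the script (for example, ". admin-openrc.sh"; the filename is arbitrary) before running openstack or neutron commands as the admin user.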

Install and Configure the Compute Node

As mentioned above, you need to correct the default nova.conf again for the log_dir directive. There is also an error in the OpenStack guide in the configuration of the RabbitMQ settings. This appears to be a bug, and the settings must be configured under the [DEFAULT] section, rather than the [oslo_messaging_rabbit] section of the ini file, as the Liberty guide instructs. Make the following changes to /etc/nova/nova.conf to correct the rabbitmq and log_dir issues:

[DEFAULT]
...
#Ubuntu has a packaging issue, make this fix: logdir -> log_dir
log_dir=/var/log/nova
...
rpc_backend = rabbit
rabbit_host = os-controller
rabbit_userid = openstack
rabbit_password = cn321

[oslo_messaging_rabbit]
#

Add the Networking Service

Working with Neutron requires some understanding of the requirements for the OpenStack deployment. Neutron is multifaceted, in that it can provide Layer 3 routing, Layer 2 switching, DHCP service, firewall services, and load balancing services, to name just a few. The OpenStack Liberty install guide provides two options for setting up networking:

1) Provider networks. This is the simpler deployment, relying on Layer 2 (bridging/switching) services and VLAN segmentation to forward virtual network traffic out to the networking infrastructure. It relies on the physical network infrastructure for Layer 3 services. It does provide the DHCP service to handle addressing of the virtual instances. This is similar to the VMware networking design.

2) Self-service networks. This option adds to the provider network option by including Layer 3 (routing) services using NAT. This also enables "self-service" networks using network segmentation methods like VLAN or VXLAN. Furthermore, this option provides the foundation for advanced services like FWaaS and LBaaS, which are not covered in this guide.

This guide uses networking option 2. Where the OpenStack guide provides links to select either networking option, select option 2. Notice the links at the bottom of the networking option sections that take you to the correct next section; use these rather than simply clicking the next arrow, which jumps back to where the guide initially provided the option links.

Install and Configure the Controller Node

Choose Configure Networking Options > Networking Option 2: Self-service Networks.

Configure the Modular Layer 2 (ML2) Plugin

In the ML2 configuration, the flat network is used for the Layer 3 routed traffic. The OpenStack guide only specifies VXLAN tenant separation, but this design uses VLANs for tenant separation. Therefore you need to add the [ml2_type_vlan] network type to allow for creating VLAN segmentation of tenants. This utilizes the same public interface, and restricts the VLANs to 201 through 299, making the public interface an 802.1Q trunk. Leave the VXLAN configuration in place, in case you want to use VXLAN tenant separation in the future.

[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]

flat_networks = public

[ml2_type_vlan]
network_vlan_ranges = public:201:299

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = True

Configure the Linux Bridge Agent

In this section you are mapping the physical host interfaces to the provider network names. In the [linux_bridge] section, for the physical interface mappings, the variable PHYSICAL_INTERFACE_NAME is bond0. Under the [vxlan] section, the OVERLAY_INTERFACE_IP_ADDRESS variable is the local IP address of the bond0.101 interface.

[linux_bridge]
physical_interface_mappings = public:bond0

[vxlan]
enable_vxlan = True
local_ip =
l2_population = True

[agent]
prevent_arp_spoofing = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True

Install and Configure the Compute Node

Choose Configure Networking Options > Networking Option 2: Self-service Networks.

Configure the Linux Bridge Agent

The compute nodes have a simpler setup where the Linux bridge agent just needs to know the logical-to-physical interface mapping. As above, you are mapping the physical host interface to the provider network name public. In the [linux_bridge] section, for the physical interface mappings, the variable PHYSICAL_INTERFACE_NAME is bond0. Under the [vxlan] section, the OVERLAY_INTERFACE_IP_ADDRESS variable is the local IP address of the bond0.101 interface.

[linux_bridge]
physical_interface_mappings = public:bond0

[vxlan]
enable_vxlan = True
local_ip =
l2_population = True

[agent]
prevent_arp_spoofing = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = True

Repeat all the steps in this section on the rest of the compute nodes, changing the hostnames and IP addresses appropriately in each command or file.

Add the Dashboard

Follow the guide to install the Horizon dashboard, then remove the openstack-dashboard-ubuntu-theme package, as it may cause rendering issues:

cumulus@controller$ sudo apt-get install apache2 memcached libapache2-mod-wsgi openstack-dashboard
cumulus@controller$ sudo apt-get remove --purge openstack-dashboard-ubuntu-theme

Installing the Horizon Web interface is optional. If installed, it is not a good idea to expose the Horizon Web interface to untrusted networks without hardening the configuration.

9. Create Project Networks / Launch an Instance

In this final section, follow the guide to set up the virtual networks, generate a key pair, and add security group rules. Below is more detail on creating the provider and private networks.

Create Virtual Networks

Public provider network. In general, these steps follow the OpenStack Liberty guide. In Neutron, the network is owned by the project or tenant. Alternately, a network may be shared by all projects using the --shared option. It is important to remember that the admin user is in the admin project.

Create the Public Provider Network

This creates the external Layer 3 network, used for routing traffic from any of the tenant subnets via the tenant routers. First use the neutron net-create command, adding the --shared option to allow any project to use this network. The --provider options reference the Neutron ML2 plugin providing the service. The physical_network is the same name specified in ml2_conf.ini. The network_type is flat, meaning the traffic is untagged out of the bond0 interface. Furthermore, since you are creating an external network for tenant routers to connect to the outside, this network is designated as such using the --router:external option.

cumulus[os_admin]@os-controller:~$ neutron net-create external \
  --shared --router:external \
  --provider:physical_network public \
  --provider:network_type flat

Next create the IP address subnet to be used here. This provides DHCP for connecting tenant routers, as well as the floating IP addresses allocated to instances. This would typically be a publicly routable subnet, though this example uses a /24:

cumulus[os_admin]@os-controller:~$ neutron subnet-create external <external CIDR>/24 \
  --name ext-net --allocation-pool start=<first address>,end=<last address> \
  --dns-nameserver <DNS server> --gateway <gateway address>
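Before moving on, it can be worth confirming that the external network and subnet were created as intended. The following commands are standard neutron CLI verification steps rather than anything mandated by this guide:

cumulus[os_admin]@os-controller:~$ neutron net-list
cumulus[os_admin]@os-controller:~$ neutron subnet-list

Both should list the external network and the ext-net subnet with the allocation pool you supplied.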

Private Project Networks

Create the private project network using VLAN segmentation. Here you need to do things a little differently to make the setup a little more deterministic. Using the neutron net-create command, the physical_network is the same name specified in ml2_conf.ini. The network_type is vlan, and the segmentation_id is the VLAN ID for the tenant. As the admin user, you can create this network on behalf of another project/tenant (the demo project in this case), so you need the tenant ID. The admin user can specify that the network is tied to a given tenant using the --tenant-id option. Once the network is created for that tenant, the resource can be configured by any member of the designated tenant.

TENANT_NAME=demo
TENANT_ID="$(openstack project show $TENANT_NAME | grep " id " | head -n1 \
  | awk -F'|' '{print $3;}' | xargs -n1 echo)"

cumulus[os_admin]@os-controller:~$ neutron net-create vmnet1 \
  --tenant-id $TENANT_ID \
  --provider:physical_network public \
  --provider:network_type vlan \
  --provider:segmentation_id 201

Why can't the demo user create their own Neutron network? This is enforced by the default administrative policy in OpenStack, which grants the admin user, or any member of the admin project, super-user rights on the cluster. Thinking about it more, if you allow any regular tenant user to do any operation, there is no point having roles and projects, and the end result would likely be chaos. Therefore, aligning with industry standards, the user/role/project policy is designed to work in a structured and orderly manner. To look at the policies for the entire OpenStack cluster, look at the file /etc/nova/policy.json.

Next, source the open.rc script for the private tenant using the demo user to follow along with the OpenStack guide. Create a subnet for the network using the neutron subnet-create command. The allocation-pool defines the DHCP address pool used on the subnet.

cumulus[os_demo]@os-controller:~$ neutron subnet-create vmnet1 <tenant CIDR>/24 \
  --name SUBNET1 --allocation-pool start=<first address>,end=<last address> \
  --dns-nameserver <DNS server> --gateway <gateway address>

Basic Layer 2 Switched Connectivity

You can stop here for this tenant, and it will simply have the common networking connectivity that is most analogous to the way VMware vSwitch connections operate. Here the instance or VM will have basic Layer 2 reachability to the network infrastructure switches. These devices can easily handle the inter-tenant routing and intra-tenant switching. However, if this instance needs to send traffic out to the Internet, it must have an address from a publicly routable subnet; otherwise it will require NAT, possibly at the enterprise edge router or firewall. If there is no NAT-enabled device at the edge of the network, then the Layer 3 agent within OpenStack Neutron can provide this functionality as north-south traffic egresses the OpenStack cluster.

Create a Router

This section explains how to create a tenant router, which connects to the provider network. It follows the OpenStack guide.

cumulus[os_demo]@os-controller:~$ neutron router-create demo-rtr
cumulus[os_demo]@os-controller:~$ neutron router-interface-add demo-rtr SUBNET1
cumulus[os_demo]@os-controller:~$ neutron router-gateway-set demo-rtr external

Now that you have a router and an external subnet, you can allocate a floating IP address to an instance that requires external network connectivity. This simply creates the source NAT IP address that the traffic from an instance uses to send traffic out on the public network, and allows traffic to return. Since you are using the Horizon Web console to launch your instance, you can create and associate the floating IP address for the instance there.
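If you prefer to stay on the command line instead of using Horizon for this step, a floating IP can be created and attached roughly as follows; INSTANCE_NAME stands in for the name of the instance you launch later, and the allocated address comes from the first command's output:

cumulus[os_demo]@os-controller:~$ neutron floatingip-create external
cumulus[os_demo]@os-controller:~$ nova floating-ip-associate INSTANCE_NAME <allocated floating IP>

The first command allocates an address from the external subnet's pool; the second attaches it to the instance so that its north-south traffic is NATed through the Neutron router.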

Since you are using the Horizon Web console to launch your instance, you can create and associate the floating IP address for the instance there.

10. Creating VMs on OpenStack

Launch an Instance on the Public Network

Since the external or public network is simply another Neutron network, you can put an instance directly on the public network, and it will get an address via DHCP from the associated address pool. This instance uses the flat, or untagged, network.

Launch an Instance on the Private Network

Typically, an instance is located on a private tenant network. This allows the Neutron network to connect easily to the network infrastructure devices while maintaining tenant separation through VLAN segmentation. The traffic is therefore sent out from the compute host on an Ethernet trunk as VLAN-tagged frames. If the instance requires connectivity to the external public network, it needs a floating IP allocated from the external address pool. This traffic transits the L3 agent and exits on the flat, or untagged, network.

Launch an Instance from Horizon

The OpenStack Web UI, Horizon, provides a Web interface with many of the typical enterprise features of a virtualization platform. Simply point a Web browser at the controller and log in (user: admin, password: adminpw).

Orchestration Service

The Heat service provides an automation infrastructure, using templates to assist in deployment. The templates provide an easy way to create most OpenStack resource types, such as instances, floating IPs, volumes, security groups, and users.
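Instances can also be launched from the command line. The following sketch assumes an image named cirros, a flavor named m1.tiny, and a key pair named mykey have already been created; those names are illustrative rather than taken from this build.

cumulus[os_demo]@os-controller:~$ nova boot test-vm1 \
  --image cirros --flavor m1.tiny --key-name mykey \
  --nic net-id=$(neutron net-show vmnet1 -f value -c id)
cumulus[os_demo]@os-controller:~$ nova list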

Conclusion

Summary

The fundamental abstraction of hardware from software and providing customers a choice through a hardware-agnostic approach is core to the philosophy of Cumulus Networks and fits very well within the software-centric, commodity-hardware-friendly design of OpenStack. Just as OpenStack users have choice in server compute and storage, they can tap the power of Open Networking and select from a broad range of switch providers running Cumulus Linux.

Choice and CapEx savings are only the beginning. OpEx savings come from agility through automation. Just as OpenStack orchestrates the cloud by enabling the automated provisioning of hosts, virtual networks, and VMs through the use of APIs and interfaces, Cumulus Linux enables network and data center architects to leverage automated provisioning tools and templates to define and provision physical networks.

References

OpenStack Documentation: Database Install Guide, Message Queue Install Guide, Keystone Install Guide, Users Install Guide, Services Install Guide, Openrc Install Guide, Keystone Verification Install Guide, Glance Install Guide, Nova Install Guide, Neutron Network Install Guide

Cumulus Linux Documentation: Quick Start Guide, Understanding Network Interfaces, MLAG, LACP Bypass, Authentication, Authorization, and Accounting, Zero Touch Provisioning

Cumulus Linux KB Articles: Configuring /etc/network/interfaces with Mako, Demos and Training, Installing collectd and graphite, Manually Putting All Switch Ports into a Single VLAN

Cumulus Linux Product Information: Software Pricing, Hardware Compatibility List, Cumulus Linux Downloads, Cumulus Linux Repository, Cumulus Networks GitHub Repository

Appendix A: Example /etc/network/interfaces Configurations

leaf01

cat /etc/network/interfaces

auto eth0
iface eth0
    address /24
    gateway

# physical interface configuration
auto swp1
iface swp1

auto swp2
iface swp2

auto swp3
iface swp3
..
auto swp48
iface swp48
    bridge-access

auto swp52
iface swp52

# peerlink bond for clag
# Bond for the peer link. MLAG control traffic and data when links are down.
auto peerlink
iface peerlink
    bond-slaves swp51 swp52
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4

# VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address /30
    clagd-peer-ip
    clagd-backup-ip /24
    clagd-sys-mac 44:38:39:ff:00:02

# Bond up to the spines.
auto uplink
iface uplink
    bond-slaves swp49 swp50
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    clag-id 1000

# Bonds down to the hosts. Only one swp, because the other swp is on the peer switch.
auto compute01
allow-hosts compute01
iface compute01
    bond-slaves swp1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 1

auto compute02
allow-hosts compute02
iface compute02
    bond-slaves swp2
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 2

auto controller
allow-hosts controller
iface controller
    bond-slaves swp3
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 3

# Bridge that connects our peer, uplink to the spines, and the hosts.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports uplink swp48 peerlink compute01 compute02 controller
    bridge-stp on
    bridge-vids
    mstpctl-treeprio
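After editing /etc/network/interfaces on a leaf, the changes can be applied and the MLAG peering verified with the standard Cumulus Linux tools; this is a generic sketch rather than output captured from this build.

cumulus@leaf01$ sudo ifreload -a     # apply the updated /etc/network/interfaces
cumulus@leaf01$ sudo clagctl         # confirm the peer is alive and the clag-id bonds are dual-connected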

leaf02

cat /etc/network/interfaces

auto eth0
iface eth0
    address /24
    gateway

# physical interface configuration
auto swp1
iface swp1

auto swp2
iface swp2

auto swp3
iface swp3
..
auto swp48
iface swp48
    bridge-access

auto swp52
iface swp52

# peerlink bond for clag
# Bond for the peer link. MLAG control traffic and data when links are down.
auto peerlink
iface peerlink
    bond-slaves swp51 swp52
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4

# VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address /30
    clagd-peer-ip
    clagd-backup-ip /24
    clagd-sys-mac 44:38:39:ff:00:02

# Bond up to the spines.
auto uplink
iface uplink
    bond-slaves swp49 swp50
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    clag-id 1000

# Bonds down to the hosts. Only one swp, because the other swp is on the peer switch.
auto compute01
allow-hosts compute01
iface compute01
    bond-slaves swp1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 1

auto compute02
allow-hosts compute02
iface compute02
    bond-slaves swp2
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 2

auto controller
allow-hosts controller
iface controller
    bond-slaves swp3
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 3

# Bridge that connects our peer, uplink to the spines, and the hosts.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports uplink swp48 peerlink compute01 compute02 controller
    bridge-stp on
    bridge-vids
    mstpctl-treeprio

leaf03

cat /etc/network/interfaces

auto eth0
iface eth0
    address /24
    gateway

# physical interface configuration
auto swp1
iface swp1

auto swp2
iface swp2

auto swp3
iface swp3
..
auto swp52
iface swp52

# peerlink bond for clag
# Bond for the peer link. MLAG control traffic and data when links are down.
auto peerlink
iface peerlink
    bond-slaves swp51 swp52
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4

# VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address /30
    clagd-peer-ip
    clagd-backup-ip /24
    clagd-sys-mac 44:38:39:ff:00:03

# Bond up to the spines.
auto uplink
iface uplink
    bond-slaves swp49 swp50
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    clag-id 1000

# Bonds down to the hosts. Only one swp, because the other swp is on the peer switch.
auto compute03
allow-hosts compute03
iface compute03
    bond-slaves swp1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 3

auto compute04
allow-hosts compute04
iface compute04
    bond-slaves swp2
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 4

# Bridge that connects our peer, uplink to the spines, and the hosts.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports uplink swp48 peerlink compute03 compute04
    bridge-stp on
    bridge-vids
    mstpctl-treeprio

leaf04

cat /etc/network/interfaces

auto eth0
iface eth0
    address /24
    gateway

# physical interface configuration
auto swp1
iface swp1

auto swp2
iface swp2

auto swp3
iface swp3
..
auto swp52
iface swp52

# peerlink bond for clag
# Bond for the peer link. MLAG control traffic and data when links are down.
auto peerlink
iface peerlink
    bond-slaves swp51 swp52
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4

# VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address /30
    clagd-peer-ip
    clagd-backup-ip /24
    clagd-sys-mac 44:38:39:ff:00:03

# Bond up to the spines.
auto uplink
iface uplink
    bond-slaves swp49 swp50
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    clag-id 1000

# Bonds down to the hosts. Only one swp, because the other swp is on the peer switch.
auto compute03
allow-hosts compute03
iface compute03
    bond-slaves swp1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 3

auto compute04
allow-hosts compute04
iface compute04
    bond-slaves swp2
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    bond-lacp-bypass-allow 1
    mstpctl-portadminedge yes
    mstpctl-bpduguard yes
    clag-id 4

# Bridge that connects our peer, uplink to the spines, and the hosts.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports uplink swp48 peerlink compute03 compute04
    bridge-stp on
    bridge-vids
    mstpctl-treeprio

spine01

sudo vi /etc/network/interfaces

auto eth0
iface eth0
    address /24
    gateway

# physical interface configuration
auto swp1
iface swp1

auto swp2
iface swp2

auto swp3
iface swp3
...
auto swp32
iface swp32

# peerlink bond for clag
auto peerlink
iface peerlink
    bond-slaves swp31 swp32
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4

# VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address /30
    clagd-enable yes
    clagd-peer-ip
    clagd-backup-ip /24
    clagd-sys-mac 44:38:39:ff:00:

# leaf01-leaf02 downlink
auto downlink1
allow-leafs downlink1
iface downlink1
    bond-slaves swp1 swp2
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    clag-id 1

# leaf03-leaf04 downlink
auto downlink2
allow-leafs downlink2
iface downlink2
    bond-slaves swp3 swp4
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    clag-id 2

# Bridge that connects our peer and downlinks to the leafs.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports peerlink downlink1 downlink2
    bridge-stp on
    bridge-vids
    mstpctl-treeprio

spine02

sudo vi /etc/network/interfaces

auto eth0
iface eth0
    address /24
    gateway

# physical interface configuration
auto swp1
iface swp1

auto swp2
iface swp2

auto swp3
iface swp3
...
auto swp32
iface swp32

# peerlink bond for clag
auto peerlink
iface peerlink
    bond-slaves swp31 swp32
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4

# VLAN for the MLAG control traffic.
auto peerlink.4094
iface peerlink.4094
    address /30
    clagd-enable yes
    clagd-peer-ip
    clagd-backup-ip /24
    clagd-sys-mac 44:38:39:ff:00:

# leaf01-leaf02 downlink
auto downlink1
allow-leafs downlink1
iface downlink1
    bond-slaves swp1 swp2
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    clag-id 1

# leaf03-leaf04 downlink
auto downlink2
allow-leafs downlink2
iface downlink2
    bond-slaves swp3 swp4
    bond-use-carrier 1
    bond-min-links 1
    bond-xmit-hash-policy layer3+4
    clag-id 2

# Bridge that connects our peer and downlinks to the leafs.
auto bridge
iface bridge
    bridge-vlan-aware yes
    bridge-ports peerlink downlink1 downlink2
    bridge-stp on
    bridge-vids
    mstpctl-treeprio
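With the downlinks and peer link up, bridge membership and spanning tree state on a spine can be spot-checked as follows; these are generic Cumulus Linux commands, with output omitted.

cumulus@spine01$ brctl show                       # confirm peerlink, downlink1, and downlink2 are bridge members
cumulus@spine01$ sudo mstpctl showbridge bridge   # confirm the bridge priority (mstpctl-treeprio) and STP state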

Appendix B: Network Setup Checklist

1. Set up the physical network.
   Select network switches: refer to the HCL and hardware guides on the Cumulus Networks website, and to the KB article Suggested Transceivers and Cables.
   Plan cabling: generally, higher-numbered ports on a switch are reserved for uplinks, so assign downlinks or host ports to the lower end (swp1, swp2), reserve higher-numbered ports for network uplinks, and reserve the highest ports for MLAG peer links. Connect all console ports.
   Install Cumulus Linux: obtain the latest version of Cumulus Linux and the license key, which is separate from the Cumulus Linux OS distribution. To minimize variables and aid in troubleshooting, use identical versions across switches: the same version X.Y.Z, packages, and patch levels. See the Quick Start Guide in the Cumulus Linux documentation.

2. Basic physical network configuration.
   Reserve management space: reserve a pool of IP addresses and define hostnames and DNS. RFC 1918 addresses should be used where possible. Note: we used RFC 6598 in our automation explicitly to avoid overlapping with any existing RFC 1918 deployments.
   Edit configuration files: apply standards and conventions to promote similar configurations. For example, place stanzas in the same order in configuration files across switches and specify the child interfaces before the parent interfaces (so a bond member appears earlier in the file than the bond itself). This allows for standardization, easier maintenance and troubleshooting, and simpler automation and templating. Consider naming conventions for consistency, readability, and manageability; doing so helps facilitate automation. For example, call your leaf switches leaf01 and leaf02 rather than leaf1 and leaf2, use all lowercase for names, and avoid characters that are not DNS-compatible.
   Define switch ports (swp) in /etc/network/interfaces on each switch: define child interfaces before using them in parent interfaces (create the member interfaces of a bond before defining the bond interface itself), and instantiate swp interfaces so they can be managed with the ifup and ifdown commands.

   Set speed and duplex: these settings are dependent on your network.

3. Verify connectivity.
   Use LLDP (Link Layer Discovery Protocol): LLDP is useful to debug or verify cabling between directly attached switches. By default, Cumulus Linux listens for and advertises LLDP packets on all configured layer 3 routed or layer 2 access ports. LLDP is supported on tagged interfaces or those configured as an 802.1q subinterface. The lldpctl command displays a dump of the connected interfaces.

4. Set up physical servers.
   Install Ubuntu.

5. Configure spine switches.
   Create the peer link bond between the pair of switches: assign an IP address for the clagd peer link. Consider using a link-local address (RFC 3927, 169.254.0.0/16) to avoid advertising it, or an RFC 1918 private address. Use a very high-numbered VLAN if possible to separate the peer communication traffic from the typical VLANs handling data traffic; valid VLAN tags end at 4094.
   Enable MLAG: set up MLAG in switch pairs. There's no particular order necessary for connecting pairs.
   Assign clagd-sys-mac: assign a unique clagd-sys-mac value per pair. This value is used for the spanning tree calculation, so assigning unique values prevents overlapping MAC addresses. Use the range reserved for Cumulus Networks: 44:38:39:FF:00:00 through 44:38:39:FF:FF:FF.
   Assign priority: define primary and secondary switches in an MLAG switch pair, if desired. Otherwise, by default the switches will elect a primary switch on their own. Set the priority if you want to explicitly control which switch is designated primary.

6. Configure each pair of leaf switches.
   Repeat the steps used for configuring the spine switches; the steps for leaf switches are similar. Connect to the core routers.

7. Configure the OpenStack controller.
   Install all components and configure them.

8. Configure each compute node.
   Enable IP forwarding, configure the uplinks, and load the required kernel modules (a sketch of these host-side steps follows this checklist).

9. Create tenant networks.
   Create networks and VLANs; create subnets and IP address ranges.

10. Start VMs using the OpenStack Horizon Web UI.
   Log into the admin Web UI. Note that there is no Network tab.
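A minimal sketch of the host-side preparation referenced in step 8, assuming Ubuntu compute nodes and the bonded, VLAN-tagged uplink used in this guide; the hostname and exact file locations are illustrative.

cumulus@compute01$ sudo sh -c 'echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf'
cumulus@compute01$ sudo sysctl -p
cumulus@compute01$ sudo sh -c 'printf "bonding\n8021q\n" >> /etc/modules'    # load on every boot
cumulus@compute01$ sudo modprobe -a bonding 8021q                            # load now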

Appendix C: Neutron Under the Hood

This guide explained how to add the external public network, subnets, and user networks. What does this entire setup look like on the bare metal? Let's take a look. To understand the current state of the system used in the output below, there are:

External network (shared), with DHCP, using the flat or untagged network
Vmnet1 network (demo), with DHCP and router, VLAN 201
Vmnet2 network (admin), with DHCP, VLAN 202
Vmnet3 network (demo2), with DHCP and router, VLAN 203

Neutron Bridges

Starting with the linuxbridge agent: when a Neutron network is created, the agent creates a traditional Linux bridge on the controller and the compute nodes. This is easily seen with the command brctl show:

root[os_admin]@os-controller:~$ brctl show
bridge name       bridge id      STP enabled   interfaces
br-mgmt           f1b5           no            bond0.100
br-vxlan          f1b5           no            bond0.101
brq4330ef9a-4b    f1b5           no            bond0.202
                                               tapf6d53e53-df
brqdcdd11f        eb1e6e86a71    no            bond0.201
                                               tap2a00771a-31
                                               tap33e5cd5d-f4
brqe7f132e        c51080         no            bond0
                                               tap4a42f26c-af
                                               tapdcd25fa2-e1
                                               tapf5b970ca-83
brqef742ab3-e     f1b5           no            bond0.203
                                               tap2507fb35-0d
                                               tapd16c94c3-fe

Each of the interfaces connected to a bridge is either an Ethernet subinterface or a virtual Ethernet link (veth). The Ethernet subinterface handles the internal tenant traffic between the compute host Neutron bridges and the controller. The virtual Ethernet connections link the Neutron bridge to the service agents running in namespaces.

Agents and Namespaces

Remember that the controller is handling the DHCP agent and L3 agent functions, which are contained within network namespaces. There are four DHCP services created, and two routers (L3 agents). This looks correct for the current configuration of the OpenStack cluster.

root[os_admin]@os-controller:~$ ip netns list
qrouter-f9eff951-24e a21a-1b
qdhcp-ef742ab3-e fb6-aba9e9487c95
qrouter-eb65e2d0-2b67-4c a857cb33c
qdhcp-dcdd11f f-b100-fe2d e
qdhcp-4330ef9a-4b10-4ce0-9d09-53f f2
qdhcp-e7f132e b46-b48f45db708c
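To confirm which VLAN a given Neutron bridge maps to on the host, the 802.1q subinterface attached to it can be inspected directly; the interface name below is taken from the brctl output above, and the grep simply shortens the output.

root[os_admin]@os-controller:~$ ip -d link show bond0.201 | grep vlan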

Neutron Routers (L3 Agents)

Executing the command ip addr show inside the network namespace shows us the router associated with vmnet1. Since there are a few instances running in this tenant with floating IPs allocated, notice the multiple addresses on the external network interface in the external subnet: the .105 address is the external address of the router itself, and .107 and .108 are the two floating IPs. On the private side is the default gateway as specified for the tenant subnet.

root[os_admin]@os-controller:~$ ip netns exec qrouter-eb65e2d0-2b67-4c a857cb33c ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet /8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: qr-33e5cd5d-f4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:cf:85:b5 brd ff:ff:ff:ff:ff:ff
    inet /24 brd scope global qr-33e5cd5d-f4
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fecf:85b5/64 scope link
       valid_lft forever preferred_lft forever
3: qg-dcd25fa2-e1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:25:14:8e brd ff:ff:ff:ff:ff:ff
    inet /24 brd scope global qg-dcd25fa2-e1
       valid_lft forever preferred_lft forever
    inet /32 brd scope global qg-dcd25fa2-e1
       valid_lft forever preferred_lft forever
    inet /32 brd scope global qg-dcd25fa2-e1
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe25:148e/64 scope link
       valid_lft forever preferred_lft forever

Neutron DHCP Agent

Looking at the namespaces for the DHCP agents of the external and vmnet1 Neutron networks, there is nothing really surprising here: each is essentially a host attached to the Neutron bridge answering DHCP requests.

root[os_admin]@os-controller:~$ ip netns exec qdhcp-e7f132e b46-b48f45db708c ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet /8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ns-4a42f26c-af: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:4c:11:31 brd ff:ff:ff:ff:ff:ff

    inet /24 brd scope global ns-4a42f26c-af
       valid_lft forever preferred_lft forever
    inet /16 brd scope global ns-4a42f26c-af
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe4c:1131/64 scope link
       valid_lft forever preferred_lft forever

root[os_admin]@os-controller:~$ ip netns exec qdhcp-dcdd11f f-b100-fe2d e ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet /8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ns-2a00771a-31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether fa:16:3e:93:9a:f3 brd ff:ff:ff:ff:ff:ff
    inet /24 brd scope global ns-2a00771a-31
       valid_lft forever preferred_lft forever
    inet /16 brd scope global ns-2a00771a-31
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe93:9af3/64 scope link
       valid_lft forever preferred_lft forever

Controller diagram showing the Neutron bridges and namespaces.
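The source NAT and floating IP translations handled by the L3 agent can also be inspected from inside the router namespace; this is a generic iptables query (output omitted), with the namespace name taken from the router shown above.

root[os_admin]@os-controller:~$ ip netns exec qrouter-eb65e2d0-2b67-4c a857cb33c iptables -t nat -S | grep DNAT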

Compute Hosts

The compute hosts are much simpler. When the OpenStack controller launches an instance, it verifies that the required resources are available. In the case of Neutron networking resources, it creates the bridges on a compute host only once an instance launched there requires them, so you will not see all Neutron bridges on all compute nodes. The same brctl show command is used to display the bridges:

brctl show
bridge name       bridge id      STP enabled   interfaces
br-mgmt           e2ba5cb5a5     no            bond0.100
br-vxlan          e2ba5cb5a5     no            bond0.101
brq4330ef9a-4b    e2ba5cb5a5     no            bond0.202
                                               tapea4adda7-03
brqdcdd11f        e2ba5cb5a5     no            bond0.201
                                               tap835481fa-c0
                                               tapbb8f03ee-80
virbr                            yes

Here you can see there are only two Neutron bridges. Each bridge has one subinterface and one or more tap interfaces. Again, the Ethernet subinterface carries the internal tenant traffic; the tap interfaces are where an instance connects to the Neutron bridge.

Compute1 diagram showing the Neutron bridges and instance connections.
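To map a tap interface in the output above back to the instance that owns it, libvirt can be queried directly on the compute host; the domain name below is illustrative, and virsh list shows the actual names in use.

root@compute01:~$ virsh list --all
root@compute01:~$ virsh domiflist instance-00000001   # lists each vNIC with its tap interface, source bridge, and MAC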
