DockerCon 2017 Networking Workshop


DockerCon 2017 Networking Workshop. Mark Church, Technical Account Manager @ Docker; Lorenzo Fontana, Docker Captain; Nico Kabar, Solutions Architect @ Docker

Agenda: 1. Container Network Model 2. Docker Networking Fundamentals 3. Bridge Driver 4. Overlay Driver 5. MACVLAN Driver 6. Network Services (Internal Load Balancing, Service Discovery) 7. Ingress Load Balancing 8. Docker Network Troubleshooting

Logistics: Logging in to lab environment. 2 Docker instances: node0, node1. Username: ubuntu. Password: D0ckerCon2017. Slack Channel: Survey

The Container Network Model (CNM)

Networking is hard! It is distributed in nature, with many discrete components that are managed and configured differently, and services that need to be deployed uniformly across all of these discrete components.

Enter containers: 100s or 1000s of containers per host, containers that exist for minutes or months, and microservices distributed across many more hosts (far more east-west traffic). This makes it worse.

Docker Networking Design Philosophy: Put Users First. Plugin API Design. Developers and Operations. Batteries included but removable.

Docker Networking Goals: Pluggable, Flexibility, Decentralized, Highly-Available. Docker Native UX and API, User Friendly, Out-of-the-Box Support with Docker Datacenter. Distributed, Scalability + Performance, Cross-platform.

Container Network Model (CNM) Sandbox Endpoint Network 9

Containers and the CNM Sandbox Endpoint Network Container Container C1 Container C2 Container C3 Network A Network B 10

What is Libnetwork? Libnetwork is Docker's native implementation of the CNM.

What is Libnetwork? Docker's native implementation of the CNM. A library containing everything needed to create and manage container networks. Pluggable model (native and remote/3rd party drivers). Provides built-in service discovery and load balancing. Provides a consistent versioned API. Multi-platform, written in Go, open source.

Libnetwork and Drivers Libnetwork has a pluggable driver interface Drivers are used to implement different networking technologies Built-in drivers are called local drivers, and include: bridge, host, overlay, MACVLAN 3rd party drivers are called remote drivers, and include: Calico, Contiv, Kuryr, Weave Libnetwork also supports pluggable IPAM drivers 13

Show Registered Drivers: $ docker info Containers: 0 Running: 0 Paused: 0 Stopped: 0 Images: 2 <snip> Plugins: Volume: local Network: null bridge host overlay...

Libnetwork Architecture Libnetwork (CNM) Load Balancing Service Discovery Network Control Plane Native Network Driver Native IPAM Driver Remote Network Driver Remote IPAM Driver Docker Engine 15

Networks and Containers: Defer to Driver. docker network create -d <driver> ... docker run --network <network>. The Engine hands both operations to Libnetwork, which defers to the selected driver.
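As a concrete sketch of the two commands on this slide (the network and image names here are illustrative, and a running Docker Engine is assumed):

```shell
# Create a network with an explicit driver, then attach a container to it
docker network create -d bridge mybridge
docker run -d --name web --network mybridge mynginx-image
docker network inspect mybridge   # shows the driver and the container's endpoint
```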

Detailed Overview: Summary. The CNM is an open-source container networking specification contributed to the community by Docker, Inc. The CNM defines sandboxes, endpoints, and networks. Libnetwork is Docker's implementation of the CNM. Libnetwork is extensible via pluggable drivers. Drivers allow Libnetwork to support many network technologies. Libnetwork is cross-platform and open-source. The CNM and Libnetwork simplify container networking and improve application portability.

Docker Networking Fundamentals

Libnetwork (CNM) milestones across Docker releases 1.7 through 17.03: Multihost Networking, Plugins, IPAM, Network UX/API, Aliases, DNS Round Robin LB, HRM, Host-Mode, Service Discovery, Distributed DNS, Secure out-of-the-box, Distributed KV store, Load balancing, Swarm integration, Built-in routing mesh, Native multi-host overlay.

Docker Networking on Linux The Linux kernel has extensive networking capabilities (TCP/IP stack, VXLAN, DNS ) Docker networking utilizes many Linux kernel networking features (network namespaces, bridges, iptables, veth pairs ) Linux bridges: L2 virtual switches implemented in the kernel Network namespaces: Used for isolating container network stacks veth pairs: Connect containers to container networks iptables: Used for port mapping, load balancing, network isolation 20
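To see what the engine automates, the same kernel building blocks can be wired together by hand. This is a root-only sketch with illustrative names (Docker's real interface and bridge names differ):

```shell
# Create a bridge, a network namespace, and a veth pair linking them
ip link add br-demo type bridge
ip netns add cdemo
ip link add veth-host type veth peer name veth-cntnr
ip link set veth-host master br-demo     # one end plugs into the bridge
ip link set veth-cntnr netns cdemo       # other end moves into the "container"
ip netns exec cdemo ip addr add 172.18.0.2/16 dev veth-cntnr
ip netns exec cdemo ip link set veth-cntnr up
ip link set veth-host up
ip link set br-demo up
```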

Docker Networking is Linux (and Windows) Networking Host User Space Kernel Docker Engine Linux Bridge VXLAN net namespaces IPVS iptables veth TCP/IP Devices eth0 eth1

Docker Networking on Linux and Windows. Linux / Windows equivalents: Network Namespace / Network Compartment; Linux Bridge / VSwitch; Virtual Ethernet Devices (veths) / Virtual NICs and Switch Ports; iptables / Firewall & VFP Rules.

Docker Windows Networking

Container Network Model (CNM) Sandbox Endpoint Network 24

Linux Networking with Containers Docker host Namespaces are used extensively for container isolation Host network namespace is the default namespace Additional network namespaces are created to isolate containers from each other Host Network Namespace Cntnr1 Cntnr2 Cntnr3 eth0 eth1 25

Host Mode Data Flow Docker host 1 Docker host 2 eth0 eth0 172.31.1.5 192.168.1.25 26

Demo: Docker Networking Fundamentals 27

Lab Section 1

Bridge Driver

What is Docker Bridge Networking? Single-host networking! Simple to configure and troubleshoot Useful for basic test and dev Docker host Cntnr1 Cntnr2 Cntnr1 Bridge 30

What is Docker Bridge Networking? Each container is placed in its own network namespace The bridge driver creates a bridge (virtual switch) on a single Docker host All containers on this bridge can communicate The bridge is a private network restricted to a single Docker host Docker host Cntnr1 Cntnr2 Cntnr1 Bridge 31

What is Docker Bridge Networking? Docker host 1 Docker host 2 Docker host 3 CntnrA CntnrB CntnrC CntnrD CntnrE CntnrF Bridge Bridge Bridge 1 Bridge 2 Containers on different bridge networks cannot communicate 32

Bridge Networking in a Bit More Detail The bridge created by the bridge driver for the pre-built bridge network is called docker0 Each container is connected to a bridge network via a veth pair which connects between network namespaces Provides single-host networking External access requires port mapping Docker host Cntnr1 Cntnr2 Cntnr1 veth veth veth Bridge eth0 33

Docker Bridge Networking and Port Mapping Docker host 1 Cntnr1 Host port Container port 10.0.0.8 :80 $ docker run -p 8080:80... Bridge 172.14.3.55 :8080 L2/L3 physical network 34
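Behind a `-p` mapping, the engine programs a DNAT rule in the nat table. One way to look at it on the host (requires root and a running daemon; the container IP shown is illustrative):

```shell
docker run -d -p 8080:80 --name web nginx
iptables -t nat -S DOCKER
# expect a rule shaped like:
# -A DOCKER ! -i docker0 -p tcp -m tcp --dport 8080 -j DNAT --to-destination 172.17.0.2:80
```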

Bridge Mode Data Flow Docker host 1 eth0 veth eth0 192.168.2.17 192.168.1.25 35

Demo BRIDGE 36

Lab Section 2

Overlay Driver

What is Docker Overlay Networking? The overlay driver enables simple and secure multi-host networking Docker host 1 Docker host 2 Docker host 3 CntnrA CntnrB CntnrC CntnrD CntnrE CntnrF Overlay Network All containers on the overlay network can communicate! 39

Building an Overlay Network (High level) Docker host 1 Docker host 2 10.0.0.3 10.0.0.4 Overlay 10.0.0.0/24 172.31.1.5 192.168.1.25 40

Docker Overlay Networks and VXLAN The overlay driver uses VXLAN technology to build the network A VXLAN tunnel is created through the underlay network(s) At each end of the tunnel is a VXLAN tunnel end point (VTEP) The VTEP performs encapsulation and de-encapsulation The VTEP exists in the Docker Host s network namespace Docker host 1 VTEP VXLAN Tunnel Docker host 2 VTEP 172.31.1.5 192.168.1.25 Layer 3 transport (underlay networks) 41
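The encapsulation the VTEP performs is not free: each tunneled frame carries extra headers, which is why overlay interfaces default to an MTU 50 bytes below the underlay's. A quick check of the arithmetic:

```shell
# Per-frame VXLAN overhead: outer IPv4 (20) + outer UDP (8) + VXLAN header (8),
# plus the inner Ethernet header (14) that must also fit inside the underlay MTU
overhead=$((20 + 8 + 8 + 14))
underlay_mtu=1500
echo "overlay MTU: $((underlay_mtu - overhead))"   # overlay MTU: 1450
```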

Building an Overlay Network (more detailed) Docker host 1 Docker host 2 veth veth C1: 10.0.0.3 C2: 10.0.0.4 Br0 Br0 Network namespace VTEP :4789/udp VXLAN Tunnel VTEP :4789/udp Network namespace 172.31.1.5 192.168.1.25 42 Layer 3 transport (underlay networks)

Overlay Networking Ports Docker host 1 Docker host 2 10.0.0.3 10.0.0.4 Management Plane (TCP 2377) - Cluster control Data Plane (UDP 4789) - Application traffic (VXLAN) Control Plane (TCP/UDP 7946) - Network control 172.31.1.5 192.168.1.25 43
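The three planes above give a short firewall checklist. A sketch that derives the open-port list (the echoed format is illustrative output, not any particular firewall's syntax):

```shell
# Ports every Swarm node must accept from its peers:
# 2377/tcp (management), 7946/tcp+udp (control), 4789/udp (VXLAN data)
for p in 2377/tcp 7946/tcp 7946/udp 4789/udp; do
  port=${p%/*}
  proto=${p#*/}
  echo "open ${proto} port ${port}"
done
```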

Overlay Network Encryption with IPSec Docker host 1 Docker host 2 veth veth C1: 10.0.0.3 C2: 10.0.0.4 Br0 Br0 Network namespace VTEP :4789/udp VXLAN Tunnel VTEP :4789/udp Network namespace 172.31.1.5 192.168.1.25 IPsec Tunnel 44 Layer 3 transport (underlay networks)
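Data-plane encryption is opt-in at network-creation time (requires a Swarm manager; the network and service names here are illustrative):

```shell
# --opt encrypted sets up IPsec tunnels between the VTEPs for this network
docker network create -d overlay --opt encrypted secure-net
docker service create --name web --network secure-net nginx
```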

Overlay Networking Under the Hood: Virtual Extensible LAN (VXLAN) is the data transport (RFC 7348). It creates a new L2 network over an L3 transport network, using point-to-multipoint tunnels. The VXLAN Network ID (VNID) is used to map frames to their overlay network. Uses proxy ARP. Invisible to the container. A docker_gwbridge virtual switch on each host provides the default route. Leverages the distributed KV store created by Swarm. The control plane is encrypted by default; the data plane can be encrypted if desired.
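The VNID is a 24-bit field, which is where VXLAN's scale advantage over 802.1Q's 12-bit VLAN ID comes from:

```shell
echo "VXLAN segments: $((1 << 24))"   # 16777216 (~16 million)
echo "802.1Q VLANs:   $((1 << 12))"   # 4096
```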

Demo OVERLAY 46

MACVLAN Driver

What is MACVLAN? A way to attach containers to existing networks and VLANs Ideal for apps that are not ready to be fully containerized Uses the well known MACVLAN Linux network type Docker host 1 Cntnr1 Cntnr2 10.0.0.8 10.0.0.9 V 10.0.0.68 P 10.0.0.25 eth0: 10.0.0.40 L2/L3 physical underlay (10.0.0.0/24) 48

What is MACVLAN? A way to connect containers to virtual and physical machines on existing networks and VLANs. The parent interface has to be connected to the physical underlay. Sub-interfaces are used to trunk 802.1Q VLANs. Each container gets its own MAC and IP on the underlay network. Each container is visible on the physical underlay network. Gives containers direct access to the underlay network without port mapping and without a Linux bridge. Requires promiscuous mode.

What is MACVLAN? Docker host 1 Docker host 2 Docker host 3 Cntnr1 10.0.0.18 Cntnr2 10.0.0.19 Cntnr3 10.0.0.10 Cntnr4 10.0.0.11 Cntnr5 10.0.0.91 Cntnr6 10.0.0.92 V P 10.0.0.68 10.0.0.25 eth0: 10.0.0.30 eth0: 10.0.0.40 eth0: 10.0.0.50 L2/L3 physical underlay (10.0.0.0/24) Promiscuous mode 50

What is MACVLAN? Cntnr2 Cntnr2 Cntnr3 Cntnr4 Cntnr5 Cntnr6 10.0.0.18 10.0.0.19 10.0.0.10 10.0.0.11 10.0.0.91 10.0.0.92 V P 10.0.0.68 10.0.0.25 L2/L3 physical underlay (10.0.0.0/24) 51

MACVLAN and Sub-interfaces MACVLAN uses sub-interfaces to process 802.1Q VLAN tags. In this example, two sub-interfaces are used to enable two separate VLANs Yellow lines represent VLAN 10 Blue lines represent VLAN 20 Docker host Cntnr2 Cntnr2 eth0: 10.0.0.8 eth0: 10.0.20.3 MACVLAN 10 MACVLAN 20 eth0.10 eth0.20 eth0 802.1q trunk L2/L3 physical underlay VLAN 10 10.0.10.1/24 VLAN 20 10.0.20.1/24 52
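The two-VLAN picture above maps onto two MACVLAN networks, each parented on a tagged sub-interface. A sketch assuming a host NIC named eth0 on an 802.1Q trunk (Docker creates the sub-interface if it does not already exist):

```shell
# VLAN 10
docker network create -d macvlan \
  --subnet=10.0.10.0/24 --gateway=10.0.10.1 \
  -o parent=eth0.10 macvlan10
# VLAN 20
docker network create -d macvlan \
  --subnet=10.0.20.0/24 --gateway=10.0.20.1 \
  -o parent=eth0.20 macvlan20
docker run -d --network macvlan10 nginx   # gets a MAC and IP directly on VLAN 10
```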

MACVLAN Summary: Allows containers to be plumbed into existing VLANs. Ideal for integrating containers with existing networks and apps. High performance (no NAT or Linux bridge). Every container gets its own MAC and routable IP on the physical underlay. Uses sub-interfaces for 802.1Q VLAN tagging. Requires promiscuous mode!

Demo MACVLAN 54

Use Cases Summary: The bridge driver provides simple single-host networking; beyond basic test and dev, a more specific driver such as overlay or MACVLAN is recommended. The overlay driver provides native out-of-the-box multi-host networking. The MACVLAN driver allows containers to participate directly in existing networks and VLANs, but requires promiscuous mode. Docker networking will continue to evolve and add more drivers and networking use-cases.

Docker Network Services SERVICE REGISTRATION, SERVICE DISCOVERY, AND LOAD BALANCING 56

What is Service Discovery? The ability to discover services within a Swarm Every service registers its name with the Swarm Clients can lookup service names Every task registers its name with the Swarm Service discovery uses the DNS resolver embedded inside each container and the DNS server inside of each Docker Engine 57

Service Discovery in a Bit More Detail Docker host 1 Docker host 2 task1.myservice task2.myservice task3.myservice mynet network (overlay, MACVLAN, user-defined bridge) task1.myservice 10.0.1.19 task2.myservice 10.0.1.20 task3.myservice 10.0.1.21 myservice 10.0.1.18 Swarm DNS (service discovery) 58

Service Discovery in a Bit More Detail Docker host 1 Docker host 2 task1.myservice task2.myservice task3.myservice task1.yourservice DNS resolver 127.0.0.11 DNS resolver 127.0.0.11 DNS resolver 127.0.0.11 DNS resolver 127.0.0.11 Engine DNS server Engine DNS server mynet network (overlay, MACVLAN, user-defined bridge) task1.myservice 10.0.1.19 task2.myservice 10.0.1.20 task3.myservice 10.0.1.21 myservice 10.0.1.18 yournet network 59 task1.yourservice 192.168.56.51 yourservice 192.168.56.50 Swarm DNS (service discovery)

Service Virtual IP (VIP) Load Balancing Every service gets a VIP when it s created This stays with the service for its entire life Lookups against the VIP get load-balanced across all healthy tasks in the service Behind the scenes it uses Linux kernel IPVS to perform transport layer load balancing docker inspect <service> (shows the service VIP) Service VIP Load balance group NAME HEALTHY IP Myservice 10.0.1.18 task1.myservice Y 10.0.1.19 task2.myservice Y 10.0.1.20 task3.myservice Y 10.0.1.21 task4.myservice Y 10.0.1.22 task5.myservice Y 10.0.1.23 60
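The round-robin spreading that IPVS performs behind the VIP can be sketched in plain shell, using task IPs from the table above (real IPVS supports other schedulers besides round robin):

```shell
# Spread six requests round-robin over the healthy task IPs
set -- 10.0.1.19 10.0.1.20 10.0.1.21
count=$#
req=0
while [ "$req" -lt 6 ]; do
  idx=$(( req % count + 1 ))
  eval "backend=\${$idx}"      # pick the idx-th positional parameter
  echo "request $((req + 1)) -> $backend"
  req=$((req + 1))
done
```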

Service Discovery Details: 1. Service and task registration is automatic and dynamic. 2. Name-to-IP mappings are stored in the Swarm KV store. 3. Container DNS and Docker Engine DNS are used to resolve names. 4. Resolution is network-scoped. Every container runs a local DNS resolver (127.0.0.11:53). Every Docker Engine runs a DNS service.

Q & A

Demo SERVICE DISCOVERY 63

Load Balancing External Requests ROUTING MESH 64

What is the Routing Mesh? Native load balancing of requests coming from an external source Services get published on a single port across the entire Swarm Incoming traffic to the published port can be handled by all Swarm nodes A special overlay network called Ingress is used to forward the requests to a task in the service Traffic is internally load balanced as per normal service VIP load balancing 65

Routing Mesh Example 1. Three Docker hosts 2. New service with 2 tasks 3. Connected to the mynet overlay network 4. Service published on port 8080 swarm-wide 5. External LB sends request to Docker host 3 on port 8080 6. Routing mesh forwards the request to a healthy task using the ingress network Docker host 1 Docker host 2 Docker host 3 task1.myservice task2.myservice IPVS IPVS IPVS mynet overlay network Ingress network 8080 8080 8080 66 LB
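The numbered steps above boil down to one service creation and one request to any node (requires a Swarm; `<any-swarm-node>` is a placeholder for a real node address):

```shell
# Publish port 8080 swarm-wide; IPVS on every node forwards to a healthy task
docker network create -d overlay mynet
docker service create --name myservice --replicas 2 \
  --network mynet -p 8080:80 nginx
# A request to ANY node on 8080 reaches a task via the ingress network,
# even if that node runs no task of the service
curl http://<any-swarm-node>:8080
```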


Host Mode vs Routing Mesh Docker host 1 Docker host 2 Docker host 3 Docker host 1 Docker host 2 Docker host 3 task1.myservice task2.myservice task3.myservice task1.myservice task2.myservice bridge bridge bridge 8080 8080 8080 IPVS IPVS IPVS mynet overlay network Ingress network 8080 8080 8080 LB LB 68

Demo ROUTING MESH 69

HTTP Routing Mesh (HRM) with Docker Datacenter APPLICATION LAYER LOAD BALANCING (L7) 70

What is the HTTP Routing Mesh (HRM)? Native application layer (L7) load balancing of requests coming from an external source Load balances traffic based on hostnames from HTTP headers Allows multiple services to be accessed via the same published port Requires Docker Enterprise Edition Builds on top of transport layer routing mesh 71

Enabling and Using the HTTP Routing Mesh: docker service create -p 8080 \ --network ucp-hrm \ --label com.docker.ucp.mesh.http=8080=http://foo.example.com ... 1. Enable the HTTP routing mesh in UCP: a) creates the ucp-hrm network, b) creates the ucp-hrm service and exposes it on a port (80 by default). 2. Create the new service: a) add it to the ucp-hrm network, b) assign a label specifying the hostname (links the service to http://foo.example.com).

HTTP Routing Mesh (HRM) Flow: 1. Enable HRM in UCP and assign a port. 2. Create a service (publish a port, attach it to the ucp-hrm network, and add the com.docker.ucp.mesh.http label). 3. HTTP traffic comes in on the HRM port. 4. It gets routed to the ucp-hrm service. 5. The HTTP header is inspected for the host value. 6. The host value is matched with the com.docker.ucp.mesh.http label for a service. 7. The request is passed to the VIP of the service with the matching label.

HTTP Routing Mesh Example Docker host 1 Docker host 2 Docker host 3 user-svc.1 com.docker.ucp.mesh.http= 8080=http://foo.example.com user-svc.2 com.docker.ucp.mesh.http= 8080=http://foo.example.com ucp-hrm http://foo.example.com Service: user-svc VIP: 10.0.1.4 ucp-hrm.1 :80 ucp-hrm.2 :80 ucp-hrm.3 :80 ucp-hrm overlay network Ingress network 74 ucp-hrm:80 foo.example.com:80 LB

Demo HRM 75

Q & A

Docker Network Troubleshooting

Common Network Issues: Blocked ports (ports must be open for the network management, control, and data planes). iptables issues (iptables is used extensively by Docker networking and must not be turned off; list rules with $ iptables -S and $ iptables -S -t nat). Network state information that is stale or not being propagated (destroy the networks and create them again with the same name). General connectivity problems.

Required Ports

General Connectivity Issues Network always gets blamed first :( Eliminate or prove connectivity first, connectivity can be broken at service discovery or network level Service Discovery Test service name resolution or container name resolution drill <service name> (returns the service VIP DNS record) drill tasks.<service name> (returns all task DNS records) Network Layer Test reachability using VIP or container IP task1$ nc -l 5000, task2$ nc <service ip> 5000 ping <container ip> 80

Netshoot Tool: Has most of the tools you need in a container to troubleshoot common networking problems: iperf, tcpdump, netstat, iftop, drill, netcat-openbsd, iproute2, util-linux (nsenter), bridge-utils, iputils, curl, ipvsadm, ethtool. Two uses: Connect it to a specific network namespace (such as a container's) to view the network from that container's perspective. Connect it to a Docker network to test connectivity on that network.

Netshoot Tool Connect to a container namespace docker run -it --net container:<container_name> nicolaka/netshoot Connect to a network docker run -it --net host nicolaka/netshoot Once inside the netshoot container, you can use any of the network troubleshooting tools that come with it 82

Network Troubleshooting Tools: Capture all traffic to/from port 9999 on eth0 on a myservice container: docker run -it --net container:myservice.1.0qlf1kaka0cq38gojf7wcatoa nicolaka/netshoot tcpdump -i eth0 port 9999 -c 1 -Xvv. See all network connections to a specific task in myservice: docker run -it --net container:myservice.1.0qlf1kaka0cq38gojf7wcatoa nicolaka/netshoot netstat -taupn

Network Troubleshooting Tools Test DNS service discovery from one service to another docker run -it --net container:myservice.1.bil2mo8inj3r9nyrss1g15qav nicolaka/netshoot drill yourservice Show host routing table from inside the netshoot container docker run -it --net host nicolaka/netshoot ip route show 84

Lab Section 3

THANK YOU