
Create a Hybrid Kubernetes Linux/Windows Cluster in 7 Easy Steps

Azure Container Service (ACS) makes it really easy to provision a Kubernetes cluster in Azure. Today, we'll walk through the steps to set up a hybrid Kubernetes cluster with two agent pools: one for Linux and one for Windows. We'll also install an ingress controller and set it up with free and automatic SSL certificate management using Let's Encrypt. We should be able to do this in a few steps and under 20 minutes.

We'll then test out our cluster by deploying a hybrid application consisting of an ASP.NET application in a Windows container and a Redis instance in a Linux container. Here's a simplified view of what we'll be deploying:

Note: Currently (December 3, 2017), the new managed Kubernetes service on Azure (AKS) does not yet support Windows agents.

0. Set up the Cloud Shell environment

The Cloud Shell in the Azure portal has all the tools we need preinstalled. However, if we've never set up an SSH key, we'll have to do that first. The SSH key is required in case we need to log on to Linux machines in the cluster.

To check if an SSH key already exists, start the Cloud Shell and check whether there are files named id_rsa and id_rsa.pub in the ~/.ssh directory. If they don't exist, we can generate them by running this command and taking the defaults (do not enter a password):

$ ssh-keygen
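That check-then-generate step can be sketched as a small shell function — the function name is ours, and the ssh-keygen flags are standard OpenSSH options (non-interactive, empty passphrase):

```shell
# ensure_ssh_key DIR: generate an RSA key pair in DIR only if one
# is not already present there. -N "" sets an empty passphrase and
# -q keeps ssh-keygen quiet, so the call is fully non-interactive.
ensure_ssh_key() {
  dir="$1"
  if [ -f "$dir/id_rsa" ] && [ -f "$dir/id_rsa.pub" ]; then
    echo "existing key found in $dir"
  else
    mkdir -p "$dir"
    chmod 700 "$dir"
    ssh-keygen -q -t rsa -b 2048 -f "$dir/id_rsa" -N ""
    echo "new key generated in $dir"
  fi
}
```

In the Cloud Shell you would call it as `ensure_ssh_key "$HOME/.ssh"`; running plain `ssh-keygen` with the defaults, as above, does the same thing.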

Now we're ready to get started!

1. Create a service principal

ACS needs a service principal in order to manage the cluster's resources on our behalf. It's best to create a dedicated service principal for this purpose. In the Cloud Shell, run this command to create the service principal:

$ az ad sp create-for-rbac --skip-assignment -n <service-principal-name>

The service principal does not require any role assignments; the ACS resource provider will assign its roles when the cluster is created. After the service principal is created, the CLI outputs some values. Note the AppId and Password.

2. Create the cluster

ACS currently supports multi-agent-pool clusters in a few preview regions. I've created an ARM template that simplifies creating the cluster. Click the following button or run the template in the Azure CLI. In general, the defaults should work fine. For the blank spaces, we can use these values:

Resource Name - Enter a unique name.
Location - There are other regions, but West US 2 should be available in most accounts.
SSH RSA Public Key - Get this value by running cat ~/.ssh/id_rsa.pub in the Cloud Shell.
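For reference, deploying the template from the CLI instead of the portal button would look roughly like the sketch below. It is a dry run that only prints the commands, since they need a real subscription: the resource group name is hypothetical and the template URI is a placeholder for the template linked above.

```shell
# Dry-run sketch of deploying the cluster ARM template with the
# (2017-era) Azure CLI. Nothing is executed; the commands are printed.
RESOURCE_GROUP="hybrid-k8s"              # hypothetical resource group name
TEMPLATE_URI="<url-of-the-arm-template>" # placeholder for the linked template

deploy_cmd() {
  printf 'az group create -n %s -l westus2\n' "$RESOURCE_GROUP"
  printf 'az group deployment create -g %s --template-uri %s\n' \
    "$RESOURCE_GROUP" "$TEMPLATE_URI"
}

deploy_cmd
```

Dropping the `deploy_cmd` wrapper and running the two `az` commands directly, with the placeholders filled in, performs the actual deployment.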

Service Principal Client ID - AppId from the service principal, see Step 1.
Service Principal Client Secret - Password from the service principal, see Step 1.

Click Next and then Deploy. It will take a few minutes for the cluster to be created.

3. Download the KUBECONFIG

Before we can connect to our new cluster, we need to download its configuration. In the Cloud Shell:

$ az acs kubernetes get-credentials -n {acs-name} -g {resource-group-name}

This will connect to the master using the SSH key and download the Kubernetes configuration file to its default location at ~/.kube/config.

4. Update Helm/Tiller

We'll be installing a couple of Helm charts. To get ready for this, we need to make sure Tiller is installed and up-to-date. Tiller should already be installed in our cluster, but we should make sure it's a recent version. And because we have both Windows and Linux nodes in our cluster, we should set its node selector to Linux. Run this command (Helm is already installed in the Cloud Shell):

$ helm init --node-selectors "beta.kubernetes.io/os"="linux" --upgrade

5. Install the nginx ingress controller

An ingress controller makes it easy to expose services to the outside world without setting up an additional load balancer for each new service. It provides a centralized service for routing incoming HTTP requests, based on host name, to the corresponding services inside the cluster.

We can install the nginx ingress controller using Helm. Again, we use node selectors to ensure it is placed on Linux nodes:

$ helm install --name nginx-ingress \
    --set controller.nodeSelector."beta\.kubernetes\.io\/os"=linux \
    --set defaultBackend.nodeSelector."beta\.kubernetes\.io\/os"=linux \
    stable/nginx-ingress

6. Install Kube-Lego (Let's Encrypt)

Let's Encrypt is a certificate authority that provides an automated way to obtain free SSL certificates. It's extremely easy to integrate with the nginx ingress controller using a project called Kube-Lego. We can also install this using Helm. We need to provide an email address:

$ helm install --name kube-lego \
    --set config.LEGO_EMAIL=<your-email-address> \
    --set config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory \
    --set nodeSelector."beta\.kubernetes\.io\/os"=linux \
    stable/kube-lego

7. Add a wildcard DNS entry

To finish our setup, we need to add a wildcard DNS entry that points to the IP address of our ingress controller. With the wildcard entry in place, we can easily add new services without adding any more DNS entries. And with Kube-Lego installed, we automatically get SSL certs too!

For instance, we can set up a wildcard DNS entry for *.k8s.anthonychu.com. When we create a new service, we simply specify its hostname in the form {servicename}.k8s.anthonychu.com in its ingress resource, and the nginx ingress controller will know how to route traffic to it.

Before we set up the DNS entry, we first need to get the ingress controller's external IP address by running:

$ kubectl get svc
NAME                                          TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)                      AGE
kubernetes                                    ClusterIP      10.0.0.1       <none>           443/TCP                      1h
nginx-ingress-nginx-ingress-controller        LoadBalancer   10.0.46.60     52.175.244.102   80:32658/TCP,443:30830/TCP   3m
nginx-ingress-nginx-ingress-default-backend   ClusterIP      10.0.128.186   <none>           80/TCP                       3m

The process for setting up the wildcard DNS depends on your DNS service. I currently use CloudFlare, and I added a record like this:

Azure has a similar process.

At this point, we have a 3-node Kubernetes cluster (Linux master, Linux agent, Windows agent). We also have an ingress controller and SSL certificate management set up.

Deploy a hybrid Linux/Windows app

It's time to deploy an application. We'll be running Redis in a Linux container and an ASP.NET Web Forms application in a Windows container. The Web Forms app will be externally exposed via the ingress controller, and it will use Redis to store data.

Redis

To deploy Redis, use this manifest. Simply run this command in the Cloud Shell:

$ kubectl create -f https://raw.githubusercontent.com/anthonychu/acs-k8s-multi-agent-pool-demo/master/redis.yaml

This will create a deployment for a single pod running Redis. It'll also create an internal service for it named redis.

ASP.NET application

Before we can deploy the ASP.NET application, we need to make a slight modification to its manifest to specify the hostname that will be used to access it externally. First, download the manifest in the Cloud Shell:

$ curl -LO https://raw.githubusercontent.com/anthonychu/acs-k8s-multi-agent-pool-demo/master/aspnet-webforms-redis-sample.yaml

Now we can use vim to edit the file. There are two values named HOSTNAME. Replace them with a hostname that matches the wildcard DNS entry we set up earlier. We'll use counter.k8s.anthonychu.com for this example.

After we save the file, we can run this command:

$ kubectl create -f aspnet-webforms-redis-sample.yaml

This will create a deployment for a simple ASP.NET app, plus a service and an ingress resource for it. Here's the ingress resource in the manifest:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: aspnet-redis-service
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - counter.k8s.anthonychu.com
    secretName: aspnet-redis-service-tls
  rules:
  - host: counter.k8s.anthonychu.com
    http:
      paths:
      - path: /
        backend:
          serviceName: aspnet-redis-service
          servicePort: 80

How this works

The kubernetes.io/ingress.class: "nginx" annotation and the host on the ingress resource instruct the nginx ingress controller to route traffic with the specified host name to the service.

The kubernetes.io/tls-acme: "true" annotation on the ingress resource instructs Kube-Lego to obtain and manage SSL certs for the ingress' host name using Let's Encrypt.

The REDIS_HOST environment variable in the application's container is set to redis.default.svc.cluster.local. This fully qualified DNS name resolves to the redis service inside the cluster.

The Windows container image is pretty big, so it might take a few minutes to pull it down the first time. Run kubectl get pods to check on its status. Once all the pods are ready, we can hit the site by going to the domain we specified:

If we take a look at the response headers, we'll see that the request is coming through nginx.
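That fully qualified name follows Kubernetes' standard pattern for service DNS, <service>.<namespace>.svc.cluster.local. A tiny helper can illustrate how the name is assembled (the helper name is ours, for illustration only):

```shell
# cluster_dns SERVICE [NAMESPACE]: build the fully qualified in-cluster
# DNS name for a Kubernetes service. The namespace defaults to "default",
# which is where our redis service lives.
cluster_dns() {
  echo "$1.${2:-default}.svc.cluster.local"
}

# Example: cluster_dns redis  ->  redis.default.svc.cluster.local
```

Because both pods are in the default namespace, the shorter names redis.default and even plain redis would also resolve; the fully qualified form is simply unambiguous.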

And we have a valid certificate issued by Let's Encrypt.

Scale the application

We can very quickly scale out the application:

$ kubectl scale --replicas=3 deployment aspnet-redis

It should take under a minute for the new pods to spin up and start receiving requests from the ingress. As we refresh the page, we'll see the counter increase and the machine name change as requests are routed to different pods in the cluster.

CODE

https://github.com/sunilake/acs-k8s-multi-agent-pool-demo.git
https://github.com/sunilake/aspnet-webforms-redis-sample.git