Docker Enterprise Edition 2.0 Platform Public Beta Install and Exercises Guide


Welcome to the Docker EE 2.0 Public Beta! Below you will find installation instructions as well as a series of exercises to explore the new features in this release:

- This beta consists of Universal Control Plane 3.0.0-beta3 (application and cluster management), Docker Trusted Registry 2.5.0-beta3 (image management), and EE Engine 17.06.2-beta3.
- If you signed up for the EE beta at https://beta.docker.com, you should have received an email containing instructions for obtaining your license key via the Docker Store. We are rolling out these emails in daily batches, so it may take a few days to receive yours.
- The guide is split into UCP and DTR exercises. Feel free to do both or either set, but keep in mind that to use DTR you need to have UCP and the Docker EE engines installed.
- If you have any questions or feedback, please use the forums (https://forums.docker.com/c/docker-data-center/eebeta).

Universal Control Plane Exercises

Exercise 1: Install Engine and UCP

Purpose: Install Docker EE Engine and Universal Control Plane in a Highly Available (HA) configuration, and add additional worker nodes to the cluster.

1. Read the UCP Release Notes for information on known issues. In particular, Beta3 has been tested on RHEL 7.3, 7.4, and Ubuntu 16.04, and there are confirmed incompatibilities with SLES 12 and Ubuntu 14.04. Note that the recommended graph driver for RHEL 7.3 is devicemapper and for RHEL 7.4 it is overlay2.
2. Install Engine 17.06.2-beta3 on at least 3 Linux nodes. For the workshop, use the EE test builds.
3. Install UCP with HA, i.e. multiple (an odd number of) controllers. See the Universal Control Plane Install Documentation. The newest UCP version is 3.0.0-beta3.
4. There are no extra install steps to set up Kubernetes.

Engine Installation

To install the engine beta on Ubuntu, run (notice the test-17.06 channel):

add-apt-repository \
   "deb [arch=amd64] https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu \
   $(lsb_release -cs) \
   test-17.06"

Add the GPG key:

curl -fsSL https://storebits.docker.com/ee/ubuntu/<subscription-id>/ubuntu/gpg | sudo apt-key add -

Then update and install the package:

apt-get update
apt-get install docker-ee

To install on RHEL, follow the standard instructions to configure the repo with your beta subscription URL. To install the beta Docker engine package, run this command:

sudo yum install --enablerepo=docker-ee-test-17.06 docker-ee

To install the beta Docker engine on Windows Server, follow the standard instructions, but run this command to install the beta engine package:

Install-Package Docker -ProviderName DockerProvider -RequiredVersion preview

UCP Installation

Detailed UCP install instructions are available in the UCP beta documentation. Install with the docker/ucp:3.0.0-beta3 image:

# On one manager node (or one you intend to make a manager)
docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp:3.0.0-beta3 install \
  --host-address <node-ip-address> \
  --interactive
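To add the remaining worker nodes to the cluster (Exercise 1 calls for at least 3 Linux nodes), you can use the standard swarm join flow; this is a minimal sketch, and the same join command is also displayed in the UCP UI when you add a node:

# On a manager node: print the join command (including the token) for workers
docker swarm join-token worker

# On each additional node running the EE engine, run the command printed above,
# substituting the token and a manager IP address:
docker swarm join --token <token> <manager-ip>:2377

Newly joined nodes should appear on the UCP Nodes screen; `docker node ls` on a manager lists them as well.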

Exercise 2: Try out the new EE CLI

Purpose: EE CLI releases are now available for download within the UCP UI. This lets you get a CLI build that matches the version of the Docker engine that UCP is running on. See the Beta Documentation for installing the EE CLI, kubectl, and the client bundle.

Download and Test EE CLI

1. Download the EE CLI. This can be found at the bottom of the UCP dashboard. Follow the instructions in the card to download the appropriate CLI for your client platform.
2. Once you've installed the Docker CLI, download a client bundle, which allows you to connect your terminal to UCP with the correct certs. The client bundle download can be found on your User Profile page.
3. Once you've downloaded the client bundle and expanded the zip file, in a Linux/Mac terminal use source env.sh to authenticate with it (more instructions in the docs).
4. You can test whether it is working by running `docker version` with the EE CLI. You should see UCP's version (3.0.0) rather than any local Docker engine. Feel free to run other docker CLI commands if you'd like.
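A minimal sketch of steps 2-4, assuming the bundle was saved as ucp-bundle-admin.zip (the file name varies with your username):

# Unpack the client bundle and load its certs and environment into this shell
unzip ucp-bundle-admin.zip -d ucp-bundle
cd ucp-bundle
source env.sh

# The Server section should now report UCP's version (3.0.0) rather than the local engine
docker version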

Download and Test Kube CLI

1. Download the Kube CLI (kubectl).
2. Re-run the client bundle script.
3. Run some Kubernetes commands, e.g. `kubectl version` and `kubectl get all`.

Exercise 3: Deploy a Kubernetes App

Purpose: Deploy a Kubernetes application via YAML files. This can be accomplished from the UCP UI and with the kubectl CLI using a standard UCP client bundle.

The Kubernetes project has several good example apps, and Weave also maintains the Sock Shop app. Select one and deploy it using either the kube CLI or UCP UI Create Object. Kube app components can either be deployed piecemeal or with one big .yml file. Here's an all-in-one .yml file for the guestbook app. If you deploy the Sock Shop app, note that it must be deployed in the `sock-shop` Kubernetes namespace.

You can find where to access deployed apps by finding the relevant load balancer.

Scaling

To scale an app in Kubernetes, you typically update the deployment controller:
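For example, a minimal sketch with kubectl (the deployment name front-end is a placeholder taken from the Sock Shop app; use whichever deployment your example app created, and add --namespace if needed):

# Scale by setting the replica count directly...
kubectl scale deployment front-end --replicas=3

# ...or edit the replicas field in the deployment spec and save
kubectl edit deployment front-end

# Watch the pod count change
kubectl get pods -w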

Check that updating the spec caused the number of pods to change.

Namespaces

Familiarize yourself with namespaces in the UCP UI. You can use "Set context for all namespaces" to see objects in all the namespaces you have access to. Use `kubectl config set-context` to change context in the CLI.

Exercise 4: Deploy a Compose Stack on Kubernetes

Purpose: Deploy a docker-compose based app with Kubernetes. This can currently only be done through the UCP UI (CLI support is in development).

Docker EE and UCP (and CE desktop) support deploying a Docker Compose file on Kubernetes. The result is a true Kubernetes app. Note that you must use Compose file version 3.3 in order to deploy on Kubernetes; older versions will not work correctly (a minimal sketch of a 3.3 file appears below).

Go to Shared Resources -> Stacks -> Create and select Kubernetes Workloads mode. Try deploying your favorite compose-based app, or try the Words example app from the beta Docker documentation. Once deployment is complete, inspect the Kubernetes objects created to deploy the app.
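For reference, a minimal sketch of a version 3.3 Compose file (the service name, image, and ports are placeholders; any valid 3.3 stack should work):

version: "3.3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    deploy:
      replicas: 2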

Exercise 5: Access Control with Kubernetes

Purpose: UCP enforces access control on Kubernetes using the same grants system as with Swarm. In this case, you apply a grant to a subject (user/team/org), a role (set of permissions), and a Kubernetes namespace (grouping of K8s resources).

Create a new Kubernetes namespace. More information on K8s namespaces is here: https://kubernetes.io/docs/tasks/administer-cluster/namespaces/

Click on Kubernetes > Namespaces, then click Create in the upper-right corner. The YAML file should look like this:

apiVersion: v1
kind: Namespace
metadata:
  name: your-namespace-name-here

As an admin, create a Kube resource inside of the namespace you created. Go to Kubernetes > Create. Be sure to set the namespace to the namespace you just created! Here's an example using an NGINX webserver:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

This should create a simple NGINX pod running on the cluster.

Add a user

Click on User Management > Users > Create User and add a new user. Make sure they are not an admin.

Create a grant giving the user access to the namespace

Go to User Management > Manage Grants. Subject: the user you created; Role: View Only; Object: the namespace you created. (Optional): You can create a custom role using various Kubernetes API permissions instead of the View Only role.

Log in as that user, for example using a Chrome incognito window or Chrome user profiles. Confirm that the user can perform the actions the role allows, but gets access denied for non-permitted operations. In the above example, when you log in you should be able to see the NGINX app you just created, but you shouldn't be able to create new resources in that namespace or edit the existing resources.
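One way to double-check the grant from the CLI, assuming the restricted user has downloaded and sourced their own client bundle (the namespace name is the placeholder used above):

# Allowed by the View Only role
kubectl get pods --namespace your-namespace-name-here

# Expected to be denied: View Only should not allow creating resources
kubectl run nginx-denied --image=nginx --namespace your-namespace-name-here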

Exercise 6: Upgraded Swarm Layer 7 Routing

Purpose: This version of UCP upgrades the HRM Swarm Layer 7 routing feature with an architecture based on the Interlock project, in order to provide additional stability, performance, and features. See the documentation on Interlock for the architecture, features, and examples.

Enable HRM on the cluster: go to Admin Settings > Routing Mesh, click Enable Routing Mesh, and set the HTTP/HTTPS ports as appropriate for your environment.

Create an application service and add two Interlock labels:
- The hostname you want to route to: com.docker.lb.hosts=<hostname>
- The port HRM should listen to for the upstream: com.docker.lb.port=<port>

Use DNS to point the hostname.domain to a specific node in the cluster. Confirm that each hostname does in fact route to the correct application service. You can either point the hostname directly at the correct node, or set up an external load balancer to route these requests to the correct node.

Interlock 2.0 adds a number of other features, including SSL termination, sticky sessions, and WebSockets support. See the docs link above for more info.
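A minimal sketch of creating such a service from the CLI (the hostname demo.example.com, service name, network, and image are placeholders; com.docker.lb.network tells Interlock which network to use if the service is attached to several):

# Create an overlay network that both the service and the Interlock proxy can reach
docker network create -d overlay demo-net

# Create the service with the two routing labels described above
docker service create \
  --name demo-web \
  --network demo-net \
  --label com.docker.lb.hosts=demo.example.com \
  --label com.docker.lb.network=demo-net \
  --label com.docker.lb.port=80 \
  nginx:latest

With DNS (or a local hosts-file entry) pointing demo.example.com at a cluster node, a request such as curl -H "Host: demo.example.com" http://<node-ip>:<http-port>/ should return the NGINX welcome page, where <http-port> is whatever you configured when enabling the routing mesh.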

Exercise 7: RBAC for Nodes with Kubernetes

Purpose: With EE Advanced, UCP has the ability to isolate groups of users to specific nodes in the cluster. This is done by placing nodes in different collections; in Swarm, you create Scheduler grants to those collections.

To use the RBAC for Nodes feature in Kubernetes, since nodes are not part of namespaces, you instead use UCP to link a group of nodes from a collection to a namespace. By default a namespace is linked to the /Shared collection, where all worker nodes are located. You can change this to whichever group of nodes you want when using EE Advanced.

Go to the Nodes screen. On every worker node, edit the node's access label (com.docker.ucp.access.label) and change the path to a collection other than /Shared.

Go to Kubernetes > Namespaces and create a new namespace (or multiple new namespaces). The YAML file should look something like this:

apiVersion: v1
kind: Namespace
metadata:
  name: your-namespace-name-here

On the Namespaces page, click the "..." on the right side of your namespace and click Link Nodes in Collection. Select a collection which has the group of nodes you want this namespace to have access to. UCP tells you which nodes are in that collection.

Create a grant whose subject is a user/team/org, whose role is Full Control, and whose collection is the namespace you created. Log in as a user from the grant you created, deploy a Kubernetes workload, and confirm it can only be scheduled onto the nodes that user has access to.
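A minimal sketch of the verification step, assuming the restricted user's client bundle is sourced and the linked namespace is named restricted-ns (placeholder names):

# Deploy a small workload into the linked namespace
kubectl run nginx-test --image=nginx --namespace restricted-ns

# The NODE column should only ever show nodes from the linked collection
kubectl get pods --namespace restricted-ns -o wide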

Docker Trusted Registry Exercises

To use DTR you will need to have previously installed the Docker EE Engines and UCP on the cluster. For this you can refer to the installation instructions at the beginning of this document.

Exercise 1: Install DTR

Purpose: Install DTR in an HA configuration on your UCP cluster using the UI, and connect it to your storage backend of choice. Refer to the Docker Trusted Registry Installation Documentation. After installing the first DTR replica, install additional DTR replicas.

# Pull the latest version of DTR
$ docker pull docker/dtr:2.5.0-beta3

# Install DTR
$ docker run -it --rm \
  docker/dtr:2.5.0-beta3 install \
  --ucp-node <ucp-node-name> \
  --ucp-insecure-tls

NOTE: If you are going to do Exercise 3, you will need to install two DTR instances on two separate UCP clusters. Exercise 6 requires a single DTR instance running in HA mode.

Exercise 2a: Create Repositories and Images

Purpose: Set up the team and repository structures that will allow you to manage your images. Refer to the Repositories and Images Documentation.

Create a new organization, create a new team within the organization, and create public and private repos in the organization namespace. Some repos you can use from Docker Hub:
- pdevine/whale-test
- pdevine/alpine
- ubuntu (large!)
- hello-world
- kitematic/hello-world-nginx
- kitematic/minecraft
- mysql
- microsoft/dotnet (Windows)

Use access control to set repo permissions for your team (e.g. Read-Only, Read-Write).

Exercise 2b: Create a Repository on Push

Purpose: Use the new "Create repository on push" feature. Refer to the DTR Configuration Documentation.

As the admin user, select Settings in the DTR UI and enable the "Create repository on push" setting. Pull one of the images from step 2a, retag it for your DTR against a repository which has not yet been created, then push the image and ensure that the repository was created automatically.
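A minimal sketch of the retag-and-push flow, assuming your DTR is reachable at dtr.example.com and the target repository engineering/whale-test does not yet exist (all names are placeholders):

docker pull pdevine/whale-test
docker login dtr.example.com

# Retag against a repository that has not been created yet, then push it;
# with "Create repository on push" enabled, DTR creates the repository for you
docker tag pdevine/whale-test dtr.example.com/engineering/whale-test:latest
docker push dtr.example.com/engineering/whale-test:latest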

Exercise 3: Push Mirroring to Another DTR

Purpose: Move images automatically between different DTRs through the UI. Refer to the Mirror Images to Another Repository Documentation.

Navigate to one of the repositories that you created in Exercise 2. Click on the MIRRORS tab and then click on New mirroring policy. Set the registry type to Docker Trusted Registry and fill in the Registry URL of your second DTR, as well as your username and password (NOTE: you can create a token-based login in Exercise 7 and use it in the password field here).

Fill in the Namespace and Name fields with the repository that you want to use for pushing your images. If you're not using public certs for the two DTRs, click on Show advanced settings and either fill in the CA of the other DTR (you can get it with the command curl -k https://<path to other dtr>/ca), or click on Skip TLS. Press Connect to test your settings.

(Optional) Add a trigger which will only mirror an image if the trigger criteria are met. If you have scanning enabled, you can use something like Critical Vulnerabilities equals 0.

(Optional) Set a tag template so that the tag's name will be changed when it's pushed to the remote repository.

Click Save & Apply to create your new mirroring policy. From the CLI, push an image to the repository with the mirror that you have just created.

Exercise 4: Push Mirroring to Docker Hub

Purpose: Back up images automatically to Docker Hub.

Navigate to one of the repositories that you created in Exercise 2. Click on the MIRRORS tab and then click on New mirroring policy. Set the registry type to Docker Hub and fill in your Docker Hub username and password. Fill in the Namespace and Name fields with the repository that you want to use for backing up your images.

Press Connect to test your settings.

(Optional) Add a trigger which will only mirror an image if the trigger criteria are met. If you have scanning enabled, you can use something like Critical Vulnerabilities equals 0.

(Optional) Set a tag template so that the tag's name will be changed when it's pushed to the remote repository.

Click Save & Apply to create your new mirroring policy. From the CLI, push an image to the repository with the mirror that you have just created.

Exercise 5: Pull Mirroring (with Polling)

Purpose: Move images automatically between different DTRs through the API. Refer to the Mirror Images from Another Registry Documentation.

Click on </> API and scroll down to POST /api/v0/repositories/{namespace}/{reponame}/pollmirroringpolicies. Click Try it out. Fill in the namespace and reponame fields with the repository on this DTR that you wish to pull images to. Fill in the body with the settings:

{ } "enabled": true, "password": <remote password>, "remotehost": <URL for the remote host>, "remoterepository": <remote namespace/repostory>, "skiptlsverification": true, "username": <remote username> NOTE: The remotehost field should be the full URL for the remote host, including https://. Click on Execute and ensure that you received a 200 response to make sure that everything worked correctly. Tag and push an image to the remote DTR, and then wait several minutes to check that the image was mirrored correctly. Exercise 6: Recovery from Loss of Quorum Purpose: Bring an HA cluster back to life after it has lost quorum between each of the DTR replicas. NOTE: this procedure is for emergencies only. If your cluster still has quorum and you have lost a node, you should follow the normal procedure for recovering from a node outage (i.e. remove the replica and join a new one to replace it). Refer to Repair a DTR Cluster Documentation Install DTR and join at least one additional node to the cluster. You can follow the directions linked from Exercise 1. Fail at least half of the cluster (this is easiest to do by failing one node in a two node cluster) so that your DTR instance loses quorum. Run (add --ucp-insecure-tls if you have not set up TLS correctly on your UCP/DTR nodes): docker run -it --rm docker/dtr:2.5.0-beta3 emergency-repair Select the replica ID that you want to use to recover your DTR cluster. Any of the additional DTR containers which are part of the cluster will be destroyed and cleaned up. Wait for the process to complete, and ensure that your DTR is still running. Use the join command to create new nodes for the cluster to restore HA capability.

Exercise 7: Create a Token-Based Login (advanced)

Purpose: Create a token-based login for this DTR so that you don't have to use your login credentials. This is especially useful with LDAP, where you don't want to pass your real login credentials around in multiple places. Refer to the Manage Access Tokens Documentation.

Click on your username in the upper right-hand corner and then click on ACCESS TOKENS. Click on New access token and fill in the Description with something like "DTR Mirroring". Copy the new access token and store it somewhere where you won't lose it; it only gets displayed once. You can use the access token in the password field in Exercise 3.

Exercise 8: Online GC

Purpose: Test out the new garbage collection system, which no longer requires putting DTR into read-only mode while garbage collection is running.

Delete some images from any of your existing repositories. Click on System and then the GARBAGE COLLECTION tab.

Click on Upgrade and then confirm that you want to turn on Online GC. Depending on how many repositories and images you have, it may take a few moments. Select Until done, select a cron schedule, and then click Save & Start. Observe that DTR is not put into read-only mode.

(Optional) Look at the storage backend and the new layout for images on disk. You may want to change storage backends to see how this works in comparison to older versions of DTR.

Exercise 9: Override a Vulnerability (advanced)

Purpose: Use the Vulnerability Override feature to hide a CVE which has been found in an image.

Configure image scanning inside of your DTR. Push and scan an image which has vulnerabilities. Sign in as the admin user and navigate to the Repositories > (repo) > IMAGES > View details screen.

Click on a component you wish to hide and then select hide next to the CVE you wish to mark as hidden. Note that the vulnerability totals should reflect the hidden vulnerability.

Exercise 10: Connect DTR to a Remote UCP with Docker Content Trust (advanced)

Purpose: Connect a single DTR to one or more UCP clusters while enabling Docker Content Trust to block unsigned images. Refer to the Integrate with Multiple Registries Documentation.

For this exercise you will need two UCP 3.0.0 clusters. It also helps if you have a client system with docker-ce 17.12 with the new Docker Content Trust commands.

Install DTR onto one of the clusters. After installation, navigate to the Settings screen and fill in the Domain & proxies > LOAD BALANCER/PUBLIC ADDRESS setting with the correct host name for this system (if you are using a load balancer here, it's the external address of the load balancer). This setting is necessary to get Docker Content Trust to work correctly.

Pull the CA cert of the DTR to your client machine using the following command (note that you cannot use --insecure-registry to make this work):

curl -k https://<hostname>/ca > dtr.crt

Copy the dtr.crt file into ONE of these places:

/etc/docker/certs.d/<dtr hostname>/<dtr hostname>.crt

/usr/local/share/ca-certificates/dtr.crt (Ubuntu)

If you copied the cert into the Docker certs.d directory, restart your Docker daemon. If you are using Ubuntu and used the ca-certificates directory, run sudo update-ca-certificates to pull in the cert.

Create a repository on DTR and attempt to sign an image and push it into that repo using the commands:

docker trust sign <dtr>/<namespace>/<repo>:<tag>
docker push <dtr>/<namespace>/<repo>:<tag>

You will probably be asked for several passwords during the signing command. Make certain you record what those passwords are, as you will need them later. If the push command succeeded, look inside of DTR to ensure that the image is signed correctly.

Now that trust is working inside of DTR, we will register this DTR with the other UCP cluster. Using the same CA cert that you retrieved earlier, create a JSON file which looks like:

{
  "hostaddress": "<dtr hostname>",
  "cabundle": "-----BEGIN CERTIFICATE-----\n<contents of cert>\n-----END CERTIFICATE-----"
}

The literal newlines in the cert contents should be replaced with \n escapes inside the cabundle value. This can be a little tricky, so you may have to try a few times. Save it as dtr-bundle.json.

Log in to UCP via the API and obtain a Bearer token using the command (you can also do this through the Swagger docs):

curl -k -X POST "https://<ucp hostname>/id/login" \
  -H "accept: application/json" \
  -H "content-type: application/json" \
  -d "{\"username\":\"<username>\", \"password\":\"<password>\"}"

Register your DTR with this UCP using the command:

curl -k -X POST "https://<ucp hostname>/api/config/trustedregistry_" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <session token>" \
  -d @dtr-bundle.json
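If you prefer to script these two calls, here is a hedged sketch that assumes jq is installed and that the login response carries the token in an auth_token field (inspect the actual response if yours differs):

# Log in and capture the session token from the JSON response
AUTHTOKEN=$(curl -sk -X POST "https://<ucp hostname>/id/login" \
  -H "content-type: application/json" \
  -d '{"username":"<username>","password":"<password>"}' | jq -r .auth_token)

# Register the DTR using the captured token
curl -k -X POST "https://<ucp hostname>/api/config/trustedregistry_" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $AUTHTOKEN" \
  -d @dtr-bundle.json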

You will know this has succeeded if it hasn't thrown back any kind of error message. If you haven't formatted the dtr-bundle.json file correctly, you may have to massage it a bit until it works correctly.

Download the client bundle from UCP by clicking on My Profile > Client Bundles > New Client Bundle > Generate Client Bundle, which should generate a new client bundle and download it to your system. Unzip the client bundle and load it into the docker trust command with:

unzip <bundle name>.zip
docker trust key load key.pem

We'll now re-sign the image we signed before with the new key that we just downloaded. The signer name is an alias for the key which will be shown in the future when using any of the docker trust commands.

docker trust signer add --key cert.pub <signer name> <dtr hostname>/<namespace>/<repo>
docker trust sign <dtr hostname>/<namespace>/<repo>:<tag>
docker push <dtr hostname>/<namespace>/<repo>:<tag>

As the UCP admin user, go back into UCP, to the Admin Settings > Docker Content Trust screen, and click Run Only Signed Images. Pull in the signed tag that you pushed (you may want to do this as a non-admin user, but you will need to create a Role and a Grant for that user which includes the Image Load permission). You can do this from Images > Pull Image by entering the name of the tag which you want to pull into your UCP cluster. Put in your username / password (or the token from Exercise 7 in the password field).

Pull the image and make sure that it was pulled correctly. You may also want to try pulling a non-signed image to confirm that the feature is working correctly (the unsigned image should be blocked).