Bridging the OT-IT Gap for Machine Learning with Ignition and AWS Greengrass


Machine Learning with Ignition and AWS Greengrass

Bridging the OT-IT Gap for Machine Learning

Simply Connect Ignition & AWS Greengrass for Machine Learning!

Bridging the OT-IT gap is now easier using the tools offered by the Ignition platform with the Cirrus Link MQTT Transmission module and AWS's Greengrass product offerings. Ignition, from Inductive Automation, is the industry-leading SCADA/HMI platform; its MQTT Transmission module securely sends data using MQTT. AWS Greengrass is a predictive-modeling environment that runs on edge hardware next to the plant, which significantly reduces the cost of continuously sending vast amounts of data to the cloud to execute machine learning algorithms. Together, these tools let clients quickly take advantage of machine learning and predictive maintenance to drive operational efficiencies and increase profits.

The diagram above shows how easy it is to use the Ignition platform and the MQTT Transmission module to connect OT data to IT machine learning. Simple configuration connects it to existing Ignition systems, new Ignition systems, and any existing HMI/SCADA platform that provides an OPC-UA/DA connection. The data from PLCs, RTUs, sensors, or any other IoT device is securely available for machine learning with no code required, vastly reducing time and effort. The data is published over MQTT to the AWS Greengrass Core in an MQTT Sparkplug JSON format that includes tag-description metadata. Using MQTT, the OT data is 100% self-discoverable, including all metadata such as scaling factors, engineering units, and proper tag names, saving thousands of hours of manually configuring tags in multiple systems. The data also includes a timestamp of when each value was measured, enabling accurate time-series analysis.

As the diagram above shows, connectivity is accomplished using either Ignition or Ignition Edge to the Greengrass Core. One key factor is that the Greengrass Core can run on the same edge hardware as Ignition or on a separate edge device, providing local algorithms and models for real-time analytics. This minimizes data traffic going to the cloud, reducing cost while improving system response times.

AWS Greengrass is software that lets you run local compute, messaging, data caching, and sync capabilities for connected devices in a secure way. With AWS Greengrass, connected devices can run AWS Lambda functions, keep device data in sync, and communicate with other devices securely, even when not connected to the Internet. Now, with the Greengrass Machine Learning (ML) Inference capability, you can also easily perform ML inference locally on connected devices.

Machine learning works by using powerful algorithms to discover patterns in data and construct complex mathematical models from those patterns. Once the model is built, you perform inference by applying new data to the trained model to make predictions for your application. Building and training ML models requires massive computing resources, so it is a natural fit for the cloud. Inference, however, takes far less compute power and is typically done in real time as new data arrives, so getting inference results with very low latency is important to ensuring your applications can respond quickly to local events. AWS Greengrass ML Inference gives you the best of both worlds: you use ML models that are built and trained in the cloud, and you deploy and run ML inference locally on connected devices.
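As a rough illustration, a self-describing tag payload of the kind described above might look like the following sketch. The field and topic names here are illustrative only; they show the idea of metadata travelling with the value, not the exact Sparkplug B schema.

```python
import json
import time

def build_tag_payload(name, value, units, scale=1.0):
    """Build an illustrative Sparkplug-style JSON payload for one tag.

    Field names are a sketch of the self-discovery idea (metadata
    published alongside the value), not the exact Sparkplug B schema.
    """
    return json.dumps({
        "timestamp": int(time.time() * 1000),  # ms since epoch, when measured
        "metrics": [{
            "name": name,              # proper tag name from Ignition
            "value": value * scale,    # value with scaling factor applied
            "properties": {
                "engUnit": units,      # engineering units metadata
                "scaleFactor": scale,
            },
        }],
    })

payload = build_tag_payload("Plant/Line1/Pump3/Pressure", 42.7, "kPa")
print(json.loads(payload)["metrics"][0]["properties"]["engUnit"])  # kPa
```

Because the units, scaling, and tag name arrive with every birth of the tag, the consuming side needs no manual tag configuration.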
For example, you can build a predictive model in the cloud for the diamond heads of boring equipment, then run it underground, where there is no cloud connectivity, to predict the wear and usage of the diamond heads.

Using Amazon SageMaker with Greengrass

Amazon SageMaker is a fully managed, end-to-end machine learning service that enables data scientists, developers, and machine learning experts to quickly build, train, and host machine learning models at scale. This drastically accelerates your machine learning efforts and allows you to add machine learning to your production applications quickly. The diagram below shows the end-to-end architecture of the solution when using Amazon SageMaker.
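The wear-prediction example can be sketched as a toy local inference step. The linear model and its coefficients below are invented for illustration; in practice the parameters would come from a model trained in the cloud and shipped to the edge device.

```python
# Toy local inference: predict remaining life of a boring head from
# operating hours and average cutting load. The coefficients stand in
# for model artifacts trained in the cloud; the numbers are illustrative.
WEAR_PER_HOUR = 0.8      # wear units per operating hour (assumed)
WEAR_PER_LOAD = 0.05     # extra wear per unit of load per hour (assumed)
WEAR_LIMIT = 1000.0      # head is considered worn out at this value (assumed)

def predict_remaining_hours(hours_run, avg_load):
    """Estimate operating hours left before the wear limit is reached."""
    wear_rate = WEAR_PER_HOUR + WEAR_PER_LOAD * avg_load
    wear_so_far = wear_rate * hours_run
    return max(0.0, (WEAR_LIMIT - wear_so_far) / wear_rate)

print(round(predict_remaining_hours(hours_run=500, avg_load=4.0), 1))  # 500.0
```

Because the function needs no network access, it keeps producing predictions underground, exactly the disconnected scenario described above.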

There are three main components of Amazon SageMaker:

Authoring: Zero-setup, hosted Jupyter notebook IDEs for data exploration, cleaning, and preprocessing. You can run these on general instance types or GPU-powered instances.

Model Training: A distributed model building, training, and validation service. You can use built-in common supervised and unsupervised learning algorithms and frameworks, or create your own training with Docker containers. Training can scale to tens of instances to support faster model building. Training data is read from S3, and model artifacts are put into S3. The model artifacts are the data-dependent model parameters, not the code that allows you to make inferences from your model. This separation of concerns makes it easy to deploy Amazon SageMaker-trained models to other platforms, such as IoT devices.

Model Hosting: A model-hosting service with HTTPS endpoints for invoking your models to get real-time inferences. These endpoints can scale to support traffic and allow you to A/B-test multiple models simultaneously. Again, you can construct these endpoints using the built-in SDK or provide your own configurations with Docker images.

Each of these components can be used in isolation, which makes it easy to adopt Amazon SageMaker to fill gaps in your existing pipelines. That said, some powerful capabilities are unlocked when you use the service end to end. The diagram below shows the modeling process with SageMaker.
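The separation between model artifacts (data-dependent parameters) and inference code can be shown with a minimal sketch. A local file stands in for S3 here, and the parameter names and format are assumptions for illustration:

```python
import json
import os
import tempfile

# A trained model's artifacts are just data (learned parameters), stored
# separately from the inference code. A temp file stands in for S3; the
# parameter layout is illustrative.
artifacts = {"weights": [0.4, 1.1], "bias": -0.2}

path = os.path.join(tempfile.gettempdir(), "model-artifacts.json")
with open(path, "w") as f:
    json.dump(artifacts, f)          # "upload" artifacts (stand-in for S3)

def predict(features, params):
    """Inference code: independent of any particular set of artifacts."""
    return sum(w * x for w, x in zip(params["weights"], features)) + params["bias"]

with open(path) as f:
    params = json.load(f)            # "download" artifacts on the edge device

print(round(predict([1.0, 2.0], params), 2))  # 2.4
```

Because the inference code never changes when the parameters are retrained, shipping an updated model to an IoT device is just a matter of replacing the artifact file.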

Benefits

Bridges the OT-IT Gap: With the tools Ignition and MQTT Transmission offer, you can quickly configure and publish data to AWS Greengrass, bridging the OT-IT gap either at a control center or at the edge. No code needs to be written to gain access to the data. The data is pushed securely rather than requested by IT, which eliminates the need to open inbound TCP/IP ports for IT and provides a secure process for traversing the DMZ between levels 3 and 4 of the ISA-95 Purdue model.

Easily Run ML Inference on Connected Devices: Performing inference locally on connected devices reduces the latency and cost of sending device data to the cloud to make a prediction. Rather than sending all data to the cloud for ML inference, inference is performed right on the devices, and data is sent to the cloud only when it requires more processing.

Flexible: For each of your connected devices, you can pick an ML model built and trained using the Amazon SageMaker service or stored in Amazon S3. AWS Greengrass ML Inference works with Apache MXNet, TensorFlow, Caffe2, and CNTK. For NVIDIA Jetson, Intel Apollo Lake devices, and Raspberry Pi, AWS Greengrass ML Inference includes a pre-built Apache MXNet package, so you don't have to build or configure the ML framework for your device from scratch.

Transfer Models to Your Connected Devices with a Few Clicks: AWS Greengrass ML Inference makes it easy to transfer a machine learning model from the cloud to your devices with just a few clicks in the Greengrass console. From the console, you select the desired machine learning model along with the AWS Lambda code, and both are then deployed and run on the connected devices.
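The "send to the cloud only when needed" pattern above can be sketched as follows. The scoring function, threshold, and topic name are illustrative assumptions, and the publish callback stands in for the real SDK call a Greengrass Lambda function would make.

```python
# Sketch of edge-side filtering: run inference locally and forward a
# reading to the cloud only when the local model flags it. The model,
# threshold, and topic are illustrative assumptions.
ANOMALY_THRESHOLD = 0.9

def local_inference(reading):
    """Stand-in for a real ML model: score a pressure reading in [0, 1]."""
    return min(1.0, reading / 100.0)

def process(reading, publish):
    """Publish to the cloud only when the score crosses the threshold."""
    score = local_inference(reading)
    if score >= ANOMALY_THRESHOLD:
        publish("factory/alerts", {"reading": reading, "score": score})
    return score

sent = []
process(95.0, lambda topic, msg: sent.append((topic, msg)))  # forwarded
process(40.0, lambda topic, msg: sent.append((topic, msg)))  # kept local
print(len(sent))  # 1
```

Only one of the two readings ever leaves the edge device, which is how local inference cuts both bandwidth cost and response latency.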
References

Amazon SageMaker: https://aws.amazon.com/sagemaker/
Blog post: "Amazon SageMaker: Accelerating Machine Learning" by Randall Hunt, 20 Nov 2017: https://aws.amazon.com/blogs/aws/sagemaker/
AWS Greengrass ML Inference: https://aws.amazon.com/greengrass/ml/

Cirrus Link Solutions, 2018