IBM Image-Analysis Node.js


Cognitive Solutions Application Development
IBM Global Business Partners
Duration: 90 minutes
Updated: Feb 14, 2018
Author: Klaus-Peter Schlotter (kps@de.ibm.com)
Version 1

Overview

The IBM Watson Developer Cloud (WDC) offers a variety of services for developing cognitive applications. Each Watson service provides a Representational State Transfer (REST) Application Programming Interface (API) for interacting with the service. Some services, such as the Speech to Text service, provide additional interfaces.

The Watson Image-Analysis application uses several services (Visual Recognition, Text to Speech, Language Translator) to show how they can be combined to achieve a desired goal. The app is created and run from a local workstation. In a first step, the application classifies an image and returns a descriptive name in English (Visual Recognition). In a second step, this name is spoken by the Text to Speech service. Next, the description is translated into German by the Language Translator service and pronounced in that language by the Text to Speech service. Finally, we deploy the Node.js application to IBM Cloud.

Objectives

- Learn how to use the Cloud Foundry command-line interface to create and manage Watson services
- Learn how to implement the Image-Analysis application using Node.js
- Learn how to orchestrate three services in a single application
- Learn how to create a Node.js application running in IBM Cloud

Prerequisites

Before you start the exercises in this guide, complete the following prerequisite tasks:

- Work through the guide "Getting Started with IBM Watson APIs & SDKs"
- Create an IBM Cloud account
- Set up a workstation with the following programs installed:
  1. Node.js
  2. npm
  3. Git (only for the optional Step 19 a)

Note: Copy and paste from this PDF document does not always produce the desired result; open the code snippets for this lab in the browser and copy from there.

Page 2 of 22

Section 1: Testing Watson APIs with a REST Client

Create services in IBM Cloud

IBM Cloud offers services, or cloud extensions, that provide additional functionality that is ready to use by your application's running code. You have two options for working with applications and services in IBM Cloud: the IBM Cloud web user interface, or the Cloud Foundry command-line interface (cf CLI). The cf CLI only works when you have completed Step 4 of the workstation setup document while creating your IBM Cloud account.

Note: The cf CLI is the option chosen here because it is the basis of automated scripts.

Step 1
Open a Terminal/Command window and enter the following commands:

cf api https://api.<region>.bluemix.net

where <region> is one of:

ng      US South
eu-gb   United Kingdom
eu-de   Germany (as of May 2017, Visual Recognition is not yet available in this region)

cf login -u <your Bluemix user account>

When prompted, enter your IBM Cloud password.

Step 2
Create the Visual Recognition service.

a) Get the information about the service:
cf marketplace -s watson_vision_combined

b) Create the free service with the name my-visual-recognition:
cf create-service watson_vision_combined free my-visual-recognition
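The region code chosen in Step 1 also determines the console URL used later in Step 4. As a small illustration, the mapping can be expressed in Node.js (the helper name is ours, not part of the lab code; the URLs follow the patterns shown in the lab text):

```javascript
// Map a Bluemix region code to its display name, Cloud Foundry API
// endpoint (Step 1), and console URL (Step 4).
function regionUrls(region) {
  const known = { 'ng': 'US South', 'eu-gb': 'United Kingdom', 'eu-de': 'Germany' };
  if (!(region in known)) {
    throw new Error(`Unknown region: ${region}`);
  }
  return {
    name: known[region],
    api: `https://api.${region}.bluemix.net`,
    console: `https://console.${region}.bluemix.net`,
  };
}

console.log(regionUrls('ng').api);       // https://api.ng.bluemix.net
console.log(regionUrls('eu-gb').console); // https://console.eu-gb.bluemix.net
```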

c) Generate the authentication credentials:
cf create-service-key my-visual-recognition credentials-1

d) Retrieve the new credentials:
cf service-key my-visual-recognition credentials-1

You should see the API key of your Visual Recognition service. You will need this value later in this exercise; copy it to a text file or leave the terminal/command window open until it is needed.

Step 3
Create the Text to Speech service.

a) Create the standard service with the name my-text-to-speech:
cf create-service text_to_speech standard my-text-to-speech

b) Generate the authentication credentials:
cf create-service-key my-text-to-speech credentials-1

c) Retrieve the new credentials:
cf service-key my-text-to-speech credentials-1

You should see the username and password of your Text to Speech service. You will need these values later in this exercise; copy them to a text file or leave the terminal/command window open until they are needed.

Step 4
Create a Language Translator service (this time in the IBM Cloud console).

a) Open a browser and navigate to https://console.<region>.bluemix.net, using the region you specified in Step 1.

b) Click the menu icon and select Services, then Dashboard. In the list of services you should see the Visual Recognition and Text to Speech services created in the previous steps.

c) Click the Create Service button.

d) Under Services, select the category Watson.

e) Click the Language Translator service and specify my-language-translator as the Service name. You can leave the Credentials name as the default; it is the same one we used when creating the services from the Cloud Foundry CLI in the previous steps.

f) Click the Create button.

g) After the service is created you will see the Manage page of the service. Click Service Credentials to display them.

h) In the list of credentials, select the action View credentials to display Credentials-1.

i) Copy the username and password of the service into a file on your workstation for later use, or go back to the IBM Cloud console and copy the credentials from your list of services whenever you need them.

Work with your services using Postman (REST client)

The Watson services in IBM Cloud have a REST (Representational State Transfer) interface and can therefore easily be tested with a REST client such as Postman. You can find the API reference in the documentation of each service.

Step 5
Install Postman. It is available as an application for macOS, Windows, and Linux from https://www.getpostman.com, and as an add-in for the Google Chrome browser (search the web for "Postman for Chrome").

Step 6
Open the Postman client.

Test Visual Recognition

Note: You may copy and paste from the code snippets file.

Step 7
Open the image URL (the Wikimedia photo of Queen Elizabeth II used in Step 8) and save a copy of the image to your local hard drive.

Step 8
In Postman, select GET as the HTTP request type and build the URL for testing the Visual Recognition service.

a) Take the url and the api_key retrieved in Step 2 d) — or copy these values from the Service credentials of your my-visual-recognition service in the IBM Cloud console — and build the URL from the following parts:

https://gateway-a.watsonplatform.net/visual-recognition/api   (base URL)
/v3/classify                                                  (the function of the service)
?api_key={api-key}                                            (your api_key, without the braces)
&url=https://upload.wikimedia.org/wikipedia/commons/thumb/5/50/queen_elizabeth_ii_march_2015.jpg/455px-queen_elizabeth_ii_march_2015.jpg
&version=2016-05-19                                           (version string)

Resulting URL string (do not forget your api_key):

https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify?api_key=YOUR_API_KEY&url=https://upload.wikimedia.org/wikipedia/commons/thumb/5/50/queen_elizabeth_ii_march_2015.jpg/455px-queen_elizabeth_ii_march_2015.jpg&version=2016-05-19

b) Depending on your browser's default language (for example, German), you may have to specify the following header variable; otherwise you may get the message "unsupported language de":

Key: Accept-Language
Value: en
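The URL assembled by hand above can also be built programmatically. A minimal Node.js sketch — the function name is hypothetical; the base URL, parameter names, and version string are taken from the lab text (encodeURIComponent is added here to guard against special characters in the image URL):

```javascript
// Build the Visual Recognition classify URL from Step 8.
function buildClassifyUrl(apiKey, imageUrl, version) {
  const base = 'https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify';
  return `${base}?api_key=${apiKey}&url=${encodeURIComponent(imageUrl)}&version=${version}`;
}

const url = buildClassifyUrl(
  'YOUR_API_KEY',
  'https://upload.wikimedia.org/wikipedia/commons/thumb/5/50/queen_elizabeth_ii_march_2015.jpg/455px-queen_elizabeth_ii_march_2015.jpg',
  '2016-05-19'
);
console.log(url);
```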

c) Press the Send button and see the result.

Step 9
Save the Postman request definition. In Postman, beside the Send button there is a Save button with a down arrow. Click the down arrow, select Save As..., and enter a name to save the request for future use.

Step 10
Test Visual Recognition face detection. In the Postman request URL, change classify to detect_faces and submit again. The following result appears:

{
  "images": [
    {
      "faces": [
        {
          "age": {
            "min": 65,
            "score": 0.670626
          },
          "face_location": {
            "height": 306,
            "left": 200,
            "top": 126,
            "width": 199
          },
          "gender": {
            "gender": "FEMALE",
            "score": 0.970688
          },
          "identity": {
            "name": "Elizabeth II",
            "score": 0.970688,
            "type_hierarchy": "/british monarchs/elizabeth ii"
          }
        }
      ],
      "resolved_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/5/50/queen_elizabeth_ii_march_2015.jpg/455px-queen_elizabeth_ii_march_2015.jpg",
      "source_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/5/50/queen_elizabeth_ii_march_2015.jpg/455px-queen_elizabeth_ii_march_2015.jpg"
    }
  ],
  "images_processed": 1
}

Step 11
Test Visual Recognition with an image from your local drive.

a) Decide whether you want to classify or detect_faces the image.

b) Change the HTTP method to POST.

c) In Postman, remove the url parameter from the request:
&url=https://upload.wikimedia.org/wikipedia/commons/thumb/5/50/queen_elizabeth_ii_march_2015.jpg/455px-queen_elizabeth_ii_march_2015.jpg

Result:
https://gateway-a.watsonplatform.net/visual-recognition/api/v3/detect_faces?api_key=18c455xxxxxxxxxxxxxxxxxxxxxxxxxxx3d0&version=2016-05-20
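The JSON above is easy to post-process in the sample application's own language. A hedged sketch that extracts the most useful fields from a detect_faces response shaped like the one shown in Step 10 (summarizeFaces is our name, not part of the lab code):

```javascript
// Extract name, gender, and minimum estimated age from each detected face.
// The response shape follows the sample JSON shown in Step 10.
function summarizeFaces(response) {
  return response.images.flatMap(image =>
    (image.faces || []).map(face => ({
      name: face.identity ? face.identity.name : null,
      gender: face.gender.gender,
      minAge: face.age.min,
    }))
  );
}

// Demo with the (abbreviated) sample response from Step 10:
const sample = {
  images: [{
    faces: [{
      age: { min: 65, score: 0.670626 },
      gender: { gender: 'FEMALE', score: 0.970688 },
      identity: { name: 'Elizabeth II', score: 0.970688 },
    }],
  }],
  images_processed: 1,
};
console.log(summarizeFaces(sample));
// → [ { name: 'Elizabeth II', gender: 'FEMALE', minAge: 65 } ]
```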

Step 12
In the Body section, select form-data and add the following parameters:

Key: size    Value: original
Key: file    Value: Choose the file you saved in Step 7

The result should be the same as in the previous step.

Optional: You can now test with other images.

Test the Language Translator service

You can find the description of the API in the API Reference.

Step 13
In Postman, specify the following parameters for the POST translate request.

a) Use the URL from your service definition (see Step 4 h) and add the method /v2/translate:
https://gateway.watsonplatform.net/language-translator/api/v2/translate

b) This service uses basic authentication. In the Authorization section, enter the user ID and password from your service definition (see Step 4 h).

c) In the Headers section, specify the following parameters:

Key: Content-Type    Value: application/json
Key: Accept          Value: application/json

d) In the Body section, type raw, specify the following JSON data:

{
  "text": "Hello, how are you!",
  "source": "en",
  "target": "de"
}

e) Press the Send button and check the result.
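Postman builds the Basic-Authentication header for you from the username and password. If you later want to issue the same request from Node.js, the pieces look roughly like this (translateRequest is a hypothetical helper; the URL, headers, and body follow Step 13):

```javascript
// Assemble the POST /v2/translate request from Step 13: the JSON body
// and the HTTP Basic-Authentication header derived from username:password.
function translateRequest(username, password, text, source, target) {
  const auth = Buffer.from(`${username}:${password}`).toString('base64');
  return {
    url: 'https://gateway.watsonplatform.net/language-translator/api/v2/translate',
    headers: {
      'Content-Type': 'application/json',
      'Accept': 'application/json',
      'Authorization': `Basic ${auth}`,
    },
    body: JSON.stringify({ text, source, target }),
  };
}

const req = translateRequest('user', 'pass', 'Hello, how are you!', 'en', 'de');
console.log(req.headers.Authorization); // Basic dXNlcjpwYXNz
console.log(req.body);
```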

Step 14
In Postman, specify the following parameters for the GET identifiable languages request.

a) Select GET as the Postman HTTP method.

b) In the URL, change translate to identifiable_languages.

c) Keep your Authorization section from the previous step.

d) For a GET request, the Body section is ignored.

e) Press the Send button and see the result: a long list of available languages.

f) Check the API Reference for more methods of the Language Translator service.
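A quick sketch of how the identifiable_languages response can be consumed in Node.js. The { languages: [{ language, name }] } shape is an assumption based on the v2 Language Translator API — verify it against the API Reference:

```javascript
// Look up a language name by its code in the identifiable_languages
// result from Step 14. Returns null when the code is not listed.
function languageName(response, code) {
  const match = response.languages.find(l => l.language === code);
  return match ? match.name : null;
}

// Demo with a hand-made two-entry sample (the real list is much longer):
const sample = { languages: [
  { language: 'en', name: 'English' },
  { language: 'de', name: 'German' },
]};
console.log(languageName(sample, 'de')); // German
```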

Test the Text to Speech service

Step 15
In Postman, specify the following parameters for the POST synthesize request.

a) Use the URL from your service definition (see Step 3 c) and add the method /v1/synthesize:
https://stream.watsonplatform.net/text-to-speech/api/v1/synthesize

b) Add the voice you want to use (see the API Reference) to the URL above:
?voice=de-DE_BirgitVoice

c) This service uses basic authentication. In the Authorization section, enter the user ID and password from your service definition (see Step 3 c).

d) In the Headers section, specify the following parameters:

Key: Content-Type    Value: application/json
Key: Accept          Value: audio/wav

e) Specify the following for the Body section in raw format:

{"text": "Hallo, wie geht es Ihnen!"}

f) Because we will receive a binary audio response, do not press the Send button itself; instead click the Send button's down arrow and choose Send and Download.

g) Save the file as response.wav to a location on your hard drive and double-click it afterwards.
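The synthesize URL with its optional voice parameter can be assembled the same way (synthesizeUrl is a hypothetical helper; the base URL and voice value come from the lab text):

```javascript
// Build the Text to Speech synthesize URL from Step 15, appending the
// optional voice query parameter when one is given.
function synthesizeUrl(voice) {
  const base = 'https://stream.watsonplatform.net/text-to-speech/api/v1/synthesize';
  return voice ? `${base}?voice=${encodeURIComponent(voice)}` : base;
}

console.log(synthesizeUrl('de-DE_BirgitVoice'));
// https://stream.watsonplatform.net/text-to-speech/api/v1/synthesize?voice=de-DE_BirgitVoice
```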

Section 2: Watson APIs in Web Applications

Running the Sample Application Locally

To see the IBM Watson services integrated in a web application, we have created a sample that you need to download or clone from GitHub to your workstation. The following screenshots are based on the Google Chrome browser and Microsoft Visual Studio Code; both tools can be downloaded for free. Other browsers and development environments/code editors work as well.

Step 16
In the browser, navigate to the following project repository on GitHub:
https://github.com/iic-dach/image-analysis

Step 17
Copy the GitHub clone URL to the clipboard, or download the ZIP.

Step 18
Open a Terminal/Command window and navigate to a folder that will contain the project (for example /home/<user>/iicworkshop on macOS).

Step 19
Do one of the following, based on Step 17:

a) In the terminal, enter git clone and paste the URL behind it:
git clone https://github.com/iic-dach/image-analysis.git

b) Unpack the downloaded ZIP file into the folder from Step 18.

Step 20
Move to the folder created in Step 19. The project consists of the following components:

a) A server component based on Node.js. The main file is app.js, with the appropriate routes defined in the routes folder.

b) A client component based on jQuery and Bootstrap, with the business logic defined in the js folder and a root index.html file.

Step 21
Start Visual Studio Code and open the project folder (for example /Users/<user>/image-analysis).

Step 22
In the terminal window, type:
npm install
This command installs all the dependencies of the project defined in package.json (the node_modules folder is created during this process).

Step 23
Copy config.sample.js to config.js.

Step 24
In the config.js file, enter the user IDs, passwords, and API key created in Steps 2 to 4.
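A sketch of what config.js might look like after Step 24. The exact property names here are an assumption — config.sample.js in the repository is the authoritative template; keep its structure and only fill in your own credentials:

```javascript
// Hypothetical config.js layout; the placeholder values must be replaced
// with the credentials gathered in Section 1.
const config = {
  visualRecognition: {                 // from Step 2 d)
    api_key: 'YOUR_VISUAL_RECOGNITION_API_KEY',
  },
  languageTranslator: {                // from Step 4 h)
    username: 'YOUR_TRANSLATOR_USERNAME',
    password: 'YOUR_TRANSLATOR_PASSWORD',
  },
  textToSpeech: {                      // from Step 3 c)
    username: 'YOUR_TTS_USERNAME',
    password: 'YOUR_TTS_PASSWORD',
  },
};

module.exports = config;
```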

Step 25
Save config.js.

Step 26
Start the server using either option a) or option b):

a) Enter the following command to start the Node.js server:
npm start
The server's console is now displayed in the terminal.

b) Start the server in debug mode in Visual Studio Code. You then see the server's debug console in Visual Studio Code.

Step 27
In the browser, navigate to http://localhost:3000. The index.html file is loaded from the Node.js server.

Step 28
To see how the process works, open the Developer Tools of the Chrome browser.

Step 29
The page shows some general information about IBM's Watson APIs. The starting point of our sample application is the button in the upper right corner. Click it.

Step 30
The button opens the file selection panel of your operating system. Navigate to <project folder>/public/images, select one of the images, then click Open.

Step 31
When the image is selected, the Watson Visual Recognition and Language Translator services are executed automatically. In the Chrome Developer Tools, click the Network tab. Here you can see the results of the calls to the Watson cloud services.

Step 32
Now click the Speaker button to hear the translated result. The response to this call contains a buffer with the content of a .wav file that is automatically played in the browser.

Step 33
The business logic of the client component starts in js/ui.js: when the image is selected, the function recognize() is called, and in its callback the translate() function is called. These functions call the actual Ajax functions defined in js/api.js, $api.recognize() and $api.translate(). These in turn call the routes defined on the Node.js server, /recognize and /translate. See their implementations in the routes folder.

Step 34
For the final step, when the Speaker button is pressed, the speak() function in js/ui.js calls $api.speak() in js/api.js, which then calls the route /speak on the Node.js server.

Step 35
After testing the app, stop the Node.js server by pressing Ctrl-C in the Terminal/Command window.
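The call chain in Steps 33 and 34 can be sketched with stand-in async functions. The stubs below are ours, not the repository code — the real implementations live in js/ui.js and js/api.js and use jQuery Ajax calls against the /recognize, /translate, and /speak routes:

```javascript
// Sketch of the recognize -> translate flow from Step 33, using an
// injected api object so the chain can be demonstrated without a server.
async function analyzeImage(api, image) {
  const classification = await api.recognize(image);  // English label
  const german = await api.translate(classification); // German translation
  return { classification, german };
}

// Demo with a stubbed $api object standing in for js/api.js:
const stubApi = {
  recognize: async () => 'apple',
  translate: async text => (text === 'apple' ? 'Apfel' : text),
};

analyzeImage(stubApi, 'some-image.jpg').then(result => {
  console.log(result); // { classification: 'apple', german: 'Apfel' }
});
```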

Section 3: Push the Local Web Application to IBM Cloud

The final step of this lab is to push the application that runs locally on your machine into your IBM Cloud account and make it publicly available.

Note: This simple example currently works on Android with previously saved images, but not with images from the camera. iOS only allows camera images, which do not work.

Step 36
The following steps assume that your terminal is connected to your IBM Cloud account as described in Step 1.

Step 37
In the Terminal window, in the application's root folder (for example /Users/<user>/image-analysis), enter the following command:

cf push xxximageanalysis

where xxx stands for a prefix you choose for your application; the resulting URL is public and must therefore be unique. This process takes some time to upload, build, and start the app. At the end you should see something like the following:

Step 38
In the browser, navigate to http://xxximageanalysis.mybluemix.net. The application should work just as it did locally.

Step 39
You can display the server console from IBM Cloud in your terminal with the following command:

cf logs xxximageanalysis
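If you prefer not to type the application name on every push in Step 37, Cloud Foundry also reads a manifest.yml from the application's root folder. A minimal sketch — the memory and instance values are assumptions, not taken from the lab:

```yaml
# Hypothetical manifest.yml; replace xxx with your own prefix, as in Step 37.
applications:
- name: xxximageanalysis
  memory: 256M
  instances: 1
```

With this file in place, a plain `cf push` (no arguments) deploys the application under the configured name.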