Installing VMR with V2PC


This chapter describes the procedure for installing VMR v1.2.1_018 or later on Red Hat OpenShift v1.5. These procedures assume you are using V2PC as the control interface for VMR. The following topics are included:

Prerequisites
Add OpenShift UPIC
Configure FQDN Support for OpenShift
Load Docker Images to Docker Registry
Deploy VMR AIC
Generate Kubernetes Secret
Enable VMR AIC
Configure cdvr Media Flow Controller

Prerequisites

Prior to installing VMR on OpenShift, the following prerequisites must be met:

OpenShift v1.5 (required) is installed in a cluster per Cisco guidelines. For additional information, contact Cisco Support to request the Readme file that contains instructions for installing a Cisco customized OpenShift Origin cluster.

An inventory file must be created for the OpenShift installation using the template that is shipped with the VMR release package. If you want to modify log rotation parameters, do so in this template file. See Configure Log Rotation.

After the OpenShift installation is complete, run the script for the appropriate version from the launcher/deployer VM:
IVP Deployer Version 201706081358 and earlier:
ansible-playbook -i <inventory-file> /root/ivp-coe/vmr/hacks.yml
IVP Deployer Version 201706210946-1.5.1 or later:
ansible-playbook -i <inventory-file> /root/ivp-coe/vmr/dp_mods.yml

Cisco Virtualized Video Processing Controller (V2PC) v3.3.0-15518 or later is installed.

Cisco Cloud Object Storage (COS) v3.18 or later (required) is installed.

It is recommended that you carefully review the Installation Checklist in Installation Prerequisites prior to beginning the deployment.

Add OpenShift UPIC

Perform the following procedure to add an OpenShift unmanaged platform instance controller (UPIC) in V2PC.
Procedure
Step 1: Log in to V2PC.
Step 2: In the navigation pane, select Platform Deployment Manager > Platform Types.
Step 3: Click Add to define a new platform type.
Step 4: In the Platform Type pane, enter the Package Name (cisco-k8s-upic) and, optionally, a description.
Step 5: Click Save.
Step 6: In the navigation pane, expand Platform Deployment Manager and select Deployed Platforms.
Step 7: In the Select Platform field, choose the platform from the drop-down list and click the Add icon to add the cisco-k8s-upic platform type. The PIC creation wizard opens.
Figure 1: Step 1 - Instance Info Panel
Step 8: On the Step 1 - Instance Info panel, specify the following information:

Name: Enter the instance name (OpenShift).
Description: Optionally, enter a description for the instance.
Region: Choose the region to which this UPIC belongs (region-0).
Step 9: Click Next.
Figure 2: Step 2 - Endpoints Panel
Step 10: On the Step 2 - Endpoints panel, enter the following information:
Endpoint URL and port: Enter the load balancer IP and port (https://<load_balancer_ip>:8443).
Docker Registry URL and port: Enter the URL and port for the Docker Registry to be used by the AIC (docker_registry:port).
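Before continuing, it can be worth confirming that the endpoint you entered is reachable from the V2PC node. The following is a minimal sketch of such a check, assuming curl is available and that the load balancer fronts the standard OpenShift/Kubernetes API health endpoint; the address is the same placeholder used above:
curl -k https://<load_balancer_ip>:8443/healthz
A healthy API endpoint typically responds with ok.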

Step 11: Click Next.
Figure 3: Step 3 - SSL Panel
Step 12: On the Step 3 - SSL panel, copy and paste the text from the following files into the appropriate text boxes. These files can be found on your OpenShift launcher VM at /root/ivp-coe/ssl/<domain_name>.
Certificate Authority: Copy and paste the content from the ca.crt file.
Client Certificate: Copy and paste the content from the admin.crt file.
Client Key: Copy and paste the content from the admin.key file.
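If you want to confirm that you are copying the right files before pasting them, you can inspect them on the launcher VM first. This is an optional check, assuming the openssl client is installed there; the path is the same placeholder used above:
openssl x509 -in /root/ivp-coe/ssl/<domain_name>/ca.crt -noout -subject -enddate
openssl x509 -in /root/ivp-coe/ssl/<domain_name>/admin.crt -noout -subject -issuer -enddate
The issuer of admin.crt should normally match the subject of ca.crt, and neither certificate should be expired.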

Step 13: Click Next.
Figure 4: Step 4 - Auth Panel
Step 14: On the Step 4 - Auth panel, click Save (no changes are needed on this panel).

Configure FQDN Support for OpenShift

This procedure is required for all OpenShift deployments with route support for VMR 1.3.1 or later. OpenShift routes allow services to be exposed via an FQDN instead of an IP address. To support this feature, some configuration changes must be completed on the DNS server that is being used for your V2PC setup. Perform the following procedure to configure the DNS server to support FQDN. The examples in this procedure are for a CentOS 7-based DNS server and use vmrdns.com as the DNS zone name. Replace instances of vmrdns.com with your DNS zone name.
Procedure
Step 1: Using SSH, log in to the external DNS server used by the V2PC master.
Step 2: Open one of the following files (depending on how you have configured your DNS):
/var/named/data/vmrdns.com.zone
/var/named/data/<fqdn>.zone

Step 3: Edit the file by adding the following:
$ORIGIN <oc-cluster-subdomain>.vmrdns.com.
$TTL 86400 ; 1 day
*.dp    A    <dp-ipfailover-vip1>
        A    <dp-ipfailover-vip2>
        ...
*.cp    A    <cp-ipfailover-vip1>
        A    <cp-ipfailover-vip2>
Note: The oc-cluster-subdomain must be the same as the cluster domain specified in the inventory file used for the OpenShift installation. The number of VIPs will vary depending on your cluster configuration; if you do not have a VIP set up, register all of the worker node IPs.
Step 4: Save the file.
Step 5: Restart the named service using the following command:
service named restart
Step 6: Verify that the zone was created using the following command:
nslookup test.dp.vmrdns.com

Load Docker Images to Docker Registry

The Docker registry provides the Docker images for installing the core VMR applications (for example, recon-agent, archive-agent, VMR dashboard, API server, manifest-agent, dash-origin, and segment recorder) as containers on the Kubernetes platform. VMR must have these images available for installation. The Docker images are not included in the V2PC repository and must be copied and set up manually using the following procedure.
Procedure
Step 1: Copy all of the contents of the following URL to the Kubernetes master node:
http://<server-ip>/vmr-releases/vmr-cisco-{release-tag}
Step 2: Log in to any Kubernetes master node that has connectivity to the <docker registry ip>.
Note: The Kubernetes master node, also called the Kubernetes controller node, is different from the V2PC Master node.
Step 3: On the Kubernetes master node, navigate to the directory where you copied the files in Step 1, and from there change to the directory for the VMR version (for example, vmr-cisco-1.0.4_002).
Step 4: Enter the following commands to load the Docker images:
cd scripts
./load_to_registry.sh <docker registry ip> <release tag>
Example:
# ./load_to_registry.sh 1.2.3.4 cisco-1.0.3_002
where <docker registry ip> is 1.2.3.4 and <release tag> is cisco-1.0.3_002.
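After the script completes, you can optionally confirm that the images reached the registry. This sketch assumes the registry exposes the standard Docker Registry v2 HTTP API on the same <docker registry ip> and port used by the AIC:
curl http://<docker registry ip>:<port>/v2/_catalog
The response should list the VMR repositories (for example, archive-agent and manifest-agent) that were just loaded.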

Deploy VMR AIC

Perform the following procedure to deploy the VMR AIC on the platform instance controller (PIC).
Important: Creating multiple VMR AIC instances on the same platform is currently not supported, even if only one instance is enabled. The other instances may corrupt the working instance on node restart or upgrade and will cause deterministic failures.

Before You Begin
Obtain and download the latest cisco-vmr AIC package from the Cisco software download page for VMR. You can also use the cisco-vmr AIC package included with the V2PC ISO. Copy the AIC package files to the V2PC master and post them to the repository. For information on using the V2P Package Manager Utility, see the V2PC Controller User Guide.

Procedure
Step 1: Log in to V2PC at https://<v2pc_master_ip_address>:8443.
Step 2: In the navigation pane, expand Application Deployment Manager and select Application Types.
Step 3: Click Create New Application and enter cisco-vmr in the Package Name field.
Step 4: Click Save. A new application named cisco-vmr should be displayed in the Application Types grid.
Step 5: In the navigation pane, select Deployed Applications.
Step 6: Confirm that the Region is defined as Region 0. If not, define the Region as follows:
a) Select Application Deployment Manager > Resources > Providers and configure it.
b) Drag Region 0 to the Application Deployment Manager pane.
c) Drag cisco-vmr to Region 0.
Figure 5: Deployed Applications Pane
Step 7: Click the Add icon and choose the application type from the drop-down list.

The Create Application Instance wizard launches.
Figure 6: Create Application Instance General Pane
Step 8: On the General pane, specify the following information:
Name: Enter a name for the new cisco-vmr application instance (for example, vmr-02 or vmr-oc).
Version: The version is automatically populated.
Description: Optionally, enter a description for the AIC.
Region: Choose a region from the drop-down list.
Platform Instance: Choose the platform instance from the drop-down list (for example, OpenShift).
Provider: Auto-populates from the Platform Instance selection.

Step 9: Click Next.
Figure 7: Create Application Instance External Services Pane
Step 10: On the External Services pane, specify the following information:
Docker Registry IP: Docker registry IP address.
Database Master IP: MemSQL master IP address; can be one or multiple IP addresses. If multiple, the user name and password for all database IP addresses should be the same. In a MemSQL cluster, the master aggregator is a specialized aggregator responsible for cluster monitoring, failover, and database creation. Each MemSQL cluster has one master aggregator node. MemSQL must be in clustering mode to specify the Database Master IP and Database Child Aggregator IP.
Database Child Aggregator IP: MemSQL child aggregator IP address; click the Add New icon to open a text box and enter the value.
Database User Name: User name to access the database.
Database Password: Password used to access the database; leave blank.
Objectstore IP: Objectstore IP address or FQDN; can be one or multiple IP addresses; click the Add New icon to open a text box and enter the value. If multiple, the user name and password for all objectstore IP addresses should be the same.

Objectstore User Name: Objectstore user name.
Objectstore Password: Objectstore password.
VSRM IP: VSRM IP address or URL.
Locality: VSRM locality information.
Site Id: Site ID for the cisco-vmr installation.
Active Storage Path: Storage path for active contents; for COS, the path must start with rio/.
Archive Storage Path: Storage path for archive contents; for COS, the path must start with rio/.
Recon Storage Path: Storage path for reconstituted contents; for COS, the path must start with rio/.
Step 11: Click Next.
Figure 8: New Application Instance VMR Services Pane
Step 12: On the VMR Services pane, specify the following information:

VMR hostname: Enter the host name for the VMR Dashboard (for example, v2pcvmr.com).
Playback hostname: Enter the host name for the packager service; this can be the same as the VMR hostname.
OpenShift Platform: Click in the text field and select Yes to indicate that this instance is installed on an OpenShift platform. Make sure the logging volume is mounted with the correct options for the Docker containers in the OpenShift inventory file (for example, --log-driver=json-file).
VMR Service FQDN: Enter <oc-cluster-name>.<external-dns-fqdn> (for example, vmr1.mosdns.com). This is the DNS zone that was used in Configure FQDN Support for OpenShift. To find this information, open the /etc/dnsmasq.conf file on the V2PC Master node and view the last line.
VMR Service IP: Enter the IP failover address for the OpenShift cluster.
VMR Haproxy VIP: Enter the VIP for HAProxy in non-OpenShift mode. This field is optional and is used only if HAProxy is required for traffic from the scheduler to VMR.
DataPlane VIP: Leave this field blank.
VMR GUI Port: Port designation for the VMR Dashboard; enter 9449.
Playback Service Port: Port designation for the VMR API; enter 9080.
Default Supported Channels: By default, this setting is configured to 1. You can specify up to the maximum number of channels this VMR installation supports. This setting uses the number of worker nodes available to determine the number of pods brought up by VMR. This field also indicates the number of archive-agent and manifest-agent pods that are spawned after the AIC is enabled. There is a (configurable) default system limit of 110 pods per node. You must deploy an adequate number of nodes to support the configured number of channels. See Channel Configuration Considerations.
Bulk Delete: Select True or False to indicate whether to enable the bulk delete feature.
BaseURL in Adaptation Set: Check the check box to indicate that the MPD generated by Dash Origin will use BaseURL in AdaptationSet.

Use PutCopy for Object Store: Check the check box to enable vault-to-vault data transfer in the object store.
Object Store Access Mode: Click in the text field and select Direct, Generic, or Proxy to indicate the mode for accessing the object store. See DASH Origin Redirect.
Object Store Virtual IP for Generic Mode: Load balancer front-end IP address for Object Storage Services.
Sensu Server IP: IP address of the external Sensu server to use for forwarding alarm and event information. If this field is left blank, the IP address of the V2PC master is used (V2PC Sensu server installation).
Sensu Server Port: Port/socket endpoint used to receive Sensu client events/alarms. Upon start, the sensu-client container must initiate a connection with the Sensu server using the IP address:port.
Sensu Server VHost: Virtual host within the Sensu server transport mechanism to be referenced by clients.
Sensu Server User Name: User name used to authenticate against the external Sensu server.
Sensu Server Password: Password credentials paired with the user name and used to authenticate against the external Sensu server.
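If you configure an external Sensu server, a quick reachability test from one of the worker nodes can save troubleshooting later. This is an optional sketch, assuming nc (netcat) is available on the node; substitute your own Sensu server IP address and transport port:
nc -zv <sensu_server_ip> <sensu_server_port>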

Step 13: Click Next.
Figure 9: Create Application Instance Log Service Pane
Step 14: On the Log Service pane, specify the log service details to be used by the FluentD container to generate the necessary configuration to contact the respective servers and forward logs.
Start: Enables or disables the log service. By default, this field is set to Yes (enabled). If there are no Log Server IPs to be configured, set the Start field to No (disabled).
Log Server Type: Select X1_ELK, ELK, or KAFKA. If you select X1_ELK or ELK, you only need to specify the Log Server IP for the ELK server. If you select KAFKA, specify the Kafka IHport and Kafka Default Topic.
Log Server IP: Where the FluentD container forwards logs; if using an X1_ELK or ELK log server, enter the IP address of the ELK server.
Kafka IHport: Enter the Zookeeper <IP>:<PORT> endpoint in the Kafka setup.
Kafka Default Topic: Enter a string value that defines the listening topic set in the Kafka setup (for example, logs).
Http proxy: HTTP proxy.
Https proxy: Secure HTTP proxy.
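If you select KAFKA as the log server type, you can check that the Zookeeper endpoint and topic you plan to enter actually exist. This is a minimal sketch, assuming a Zookeeper-based Kafka deployment with the standard Kafka command-line tools installed on the Kafka host; logs is the example topic given above:
kafka-topics.sh --zookeeper <zookeeper_ip>:<port> --list
kafka-topics.sh --zookeeper <zookeeper_ip>:<port> --describe --topic logs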

Step 15: Click Save. The new instance is displayed in the Deployed Applications pane.

Generate Kubernetes Secret

All requests from the VMR Dashboard (UI) to its underlying API are handled through HTTPS using Kubernetes secret objects. A secret object, or secret, is an object intended to hold sensitive information. Conveniently, a Kubernetes secret can store both an SSL certificate and a private key. Using secrets is also more secure than placing a certificate and private key directly in a pod definition or Docker image. Secrets allow Kubernetes to load OpenSSL certificate-key pairs on the UI and API containers, so there is no need to create a new certificate each time containers are created.

Before enabling the VMR instance, perform the following steps to generate a Kubernetes secret that will enable HTTPS requests between the UI and API.

Procedure
Step 1: If the VMR application instance controller (AIC) is already enabled, disable it.
Step 2: Obtain the certificate and private key using one of the following methods, as appropriate:
For production, you should purchase an SSL certificate from a certificate authority and save the purchased certificate and key on the V2PC Master in the /tmp/ directory as vmr.key and vmr.crt.
For testing, you may prefer to create a self-signed certificate and private key. To create a self-signed certificate, run the following command:
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout /tmp/vmr.key -out /tmp/vmr.crt -subj "/C=<Country Initials>/ST=<Full State>/L=<City>/O=<Organization>/CN=*.<organization>.com"
Where:
C is the two-letter country code (for example, US).
ST is the full state name (for example, California).
L is the full city name with spaces removed (for example, SanJose).
O is the organization (for example, Cisco).
CN is the domain name (for example, *.cisco.com).
Tip: Including a wildcard in the domain name (for example, *.cisco.com) allows the certificate to be used for multiple sites. For example, if your deployment is in San Jose, California, United States, and the domain names the organization uses are v2p.cisco.com and vmr.cisco.com, the command would appear as follows:
openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout /tmp/vmr.key -out /tmp/vmr.crt -subj "/C=US/ST=California/L=SanJose/O=Cisco/CN=*.cisco.com"
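Whichever method you use, you can verify the certificate and key in the /tmp/ directory before creating the secret. This optional check assumes the openssl client is available on the V2PC Master:
openssl x509 -in /tmp/vmr.crt -noout -subject -dates
openssl rsa -in /tmp/vmr.key -check -noout
The subject should show the CN you specified, and the key check should report that the RSA key is valid.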

Note: The X.509 certificate validity period must be set to a time frame that corresponds to system and application risks. Cisco recommends a certificate validity period of two years or less.
Step 3: Download the create_vmr_secret.sh script from the V2PC Master and run it locally. The release package contains the script at vmr-releases/vmr-cisco-<vmr_version>/scripts/create_vmr_secret.sh.
Example:
# wget http://172.22.110.186/vmr-releases/vmr-cisco-1.1.3_005/scripts/create_vmr_secret.sh
Step 4: Create the Kubernetes secret using the following commands. Be sure that the KUBECONFIG and PICINST environment variables have been set.
kubectl create namespace vmr
kubectl create -f ssl-secret.yaml --namespace=vmr
For VMR versions earlier than 1.1.3, use the command kubectl create -f ssl-secret.yaml.
Step 5: Verify that the secret was successfully created:
kubectl get secrets --namespace=vmr
For VMR versions earlier than 1.1.3, use the command kubectl get secrets.

Enable VMR AIC

Perform the following steps to enable the VMR AIC.
Procedure
Step 1: Confirm that you have generated a Kubernetes secret as described in Generate Kubernetes Secret.
Step 2: In the V2PC navigation pane, expand Application Deployment Manager and select Deployed Applications.
Figure 10: Enabling New Instance
Step 3: Select the check box for the newly created application instance and then click Enable in the upper portion of the pane.
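After you click Enable, you can optionally watch the VMR pods come up from the Kubernetes master while the installation proceeds. This is a sketch, assuming the vmr namespace created in Generate Kubernetes Secret and a kubectl context pointing at the OpenShift cluster:
kubectl get pods --namespace=vmr
To follow progress until the pods reach the Running state:
kubectl get pods --namespace=vmr --watch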

Installation of the cisco-vmr services now begins on the specified platform.
Step 4: When the installation has completed, confirm that the Operational State shows In Service.

Configure cdvr Media Flow Controller

This section provides instructions for configuring a cdvr media flow controller. This is completed after the cisco-vmr AIC is enabled and placed in the In Service state. Before adding a cdvr media flow controller (MFC) in VMR, the operator must create a corresponding cisco-vmr AIC, enable the AIC, and place it in the In Service state.

Add Media Archive

Perform the following steps to configure a media archive in V2PC:

Procedure
Step 1: In the navigation pane, expand Media Workflow Manager and select Resources > Media Sources > Media Archives.
Step 2: Click Add to add a new media archive.
Figure 11: Add Media Archive
Step 3: Complete the following fields and then click Save.
Media Archive Name: Enter a name for the media archive.
Description: Optionally, enter a description of the media archive.
Archive Time: Specify the time (Days, Hours, Minutes) after which the media archive will be put into archive storage. Default = 3 days.

Re-archive Time: Specify the time (Days, Hours, Minutes) after which the reconstituted stream can be archived again. Default = 1 day.
Archive Start Time: Time of day (in 12-hour format) to start running the archival process for any segment that has passed the Archive Time. Default = 2:00 AM.
Archival Duration: Enter the number of minutes to run the archival process before taking a break. Default = 10 minutes.
Archival Pause: Enter the number of milliseconds to pause between archival runs. Default = 5 milliseconds.

Create ATS Channel

Perform the following steps to create an ATS channel in V2PC:
Procedure
Step 1: In the navigation pane, expand Media Workflow Manager and choose Resources > Media Sources > ATS Channels.
Step 2: Click the Create ATS Channel icon to add a new channel.
Figure 12: Create ATS Channel

Step 3: Complete the following fields and then click Save.
Name: Enter a name for the ATS channel. This is a required field.
Channel ID: Enter a unique channel ID to be used internally by the system. This is a required field.
Description: Optionally, enter a description for the ATS channel.
No De-Dup: Select True or False from the drop-down list to indicate whether segments from the stream should be archived. If True is selected, the segments from that stream are not archived. If False is selected, the segments are archived. This option can only be specified at the time the channel is created; a channel set to No De-Dup = True cannot be modified at a later time.
Source Type: Select UDP or HTTP from the drop-down list.
Target Multicast Address: If UDP is selected as the source type, specify the IPv4 addresses for the source of the multicast feed.
Source IP: If UDP is selected as the source type, specify the Source IP.
ABR Profiles: Add at least one ABR profile (stream profile). Click the Add icon, specify the ATS Rate (bps), UDP Port number, and Stream Type for video and audio, and then click OK.
Step 4: On the Create ATS Channel pane, click OK to save the new ATS channel.

Create ATS Channel Lineup

Perform the following steps to create an ATS channel lineup in V2PC:

Procedure
Step 1: In the navigation pane, expand Media Workflow Manager and select Resources > Media Sources > ATS Channel Lineups.
Step 2: Click Add to add a new ATS channel lineup.
Figure 13: Create ATS Channel Lineup
Step 3: Enter a Name and, optionally, a Description for the channel lineup.
Step 4: Select the Media Archive that the channel lineup is associated with (this is the media archive created in Add Media Archive).
Step 5: Select the channel in the right Channels pane and drag it into the Selected Channels pane.
Step 6: For the selected channel, specify the Content Id and Rights Tag.
Step 7: Optionally, add Advanced Configuration information.
Step 8: Click OK.
Step 9: Repeat Step 5 through Step 8 to add all the desired channels to the new channel lineup.
Step 10: On the Channel Lineups pane, click Save.
Note: The right Channels pane displays only the channels for the currently selected media archive. To view channels for a different media archive, select the desired media archive from the list to display its channels.

Create cdvr Workflow

You must create a cdvr workflow that defines the media workflow type, add it to the channel, and then enable it before VMR is ready for recording. Prior to creating the media workflow, obtain the latest cdvr-mfc package, copy the NPM package files to the V2PC master, and post them to the repository.

You can obtain the cdvr-mfc package from the JFrog Artifactory at http://engci-maven-master.cisco.com/artifactory/webapp/#/artifacts/browse/tree/general/spvss-mdp-npm/cdvr-mfc/ or use the cdvr-mfc package that is included with the V2PC ISO.

Add Workflow Type

Perform the following procedure to add a cdvr Workflow Type to the channel:
Procedure
Step 1: In the navigation pane, expand Media Workflow Manager and select Media Workflow Types.
Step 2: Click Add to add a new workflow type.
Figure 14: Add Media Workflow Type
Step 3: Click in the Package Name field and choose the package name from the drop-down list.
Step 4: Click Save to create the media workflow type.
Step 5: In the navigation pane, expand Media Workflow Manager and select Media Workflows.
Figure 15: Media Workflow Dialog

Step 6: Choose the Media Workflow Type from the drop-down list to open the wizard.
Figure 16: Select Media Workflow Type
Step 7: On the General pane, enter a Name for the media workflow instance.
Figure 17: Create Media Workflow General Pane
Step 8: Click Next.
Figure 18: Create Media Workflow Media Source Pane

Step 9: On the Media Source pane, choose the Application Instance and the Channel Lineup from the drop-down lists.
Step 10: Click Next.
Figure 19: Create Media Workflow Capture Pane
Step 11: On the Capture pane, specify the following information:
Application Instance: Choose the media capture engine instance from the drop-down list.
Asset Life Cycle Policy: Choose the policy from the drop-down list. If there are no policies defined, click the Add icon to add a new Asset Lifecycle Policy.
Asset Redundancy Policy: Choose the policy from the drop-down list. If there are no policies defined, click the Add icon to add a new Asset Redundancy Policy.
ESAM Profile: Choose the profile from the drop-down list. If there are no profiles defined, click the Add icon to add a new ESAM profile.
Asset Download Option: Choose Enabled or Disabled from the drop-down list to indicate whether the asset can be downloaded.

Step 12: Click Next.
Figure 20: Create Media Workflow Recorder Pane
Step 13: On the Recorder pane, choose the VMR Application Instance and the Archive Configuration. If there are no archive configurations, click the Add icon to add a new archive configuration.
Step 14: Click Next.
Figure 21: Create Media Workflow Playback Pane

Step 15: On the Playback pane, choose the media playback instance from the Application Instance drop-down list and the template to be used for publishing from the Publish Templates drop-down list. If there are no templates defined, click the Add icon to add a new publishing template.
Step 16: Click Next.
Figure 22: Create Media Workflow State Cache Pane
Step 17: On the State Cache pane, choose the Resource Application Instance.
Step 18: Click Finish to complete the procedure.

Enable Media Workflow

After defining the media workflow, you must enable it before you can begin recording with VMR. Perform the following procedure to enable the media workflow instance.
Procedure
Step 1: On the Media Workflow Manager > Media Workflows pane, select the media workflow.
Step 2: Click the Enable icon to change the media workflow Operational State to In Service.
VMR is now ready for recording. You can now record the channel through VSRM or other methods.
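As a final optional check before scheduling recordings, you can confirm from the Kubernetes master that the recording components are running. This is a sketch, assuming the vmr namespace used earlier and pod names that follow the application names listed in Load Docker Images to Docker Registry:
kubectl get pods --namespace=vmr | grep -E 'archive-agent|manifest-agent|segment'
All listed pods should be in the Running state.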
