Release Change Reference, StarOS Release 21.8/Ultra Services Platform Release 6.2


Release Change Reference, StarOS Release 21.8/Ultra Services Platform Release 6.2

First Published: Last Modified:

Americas Headquarters
Cisco Systems, Inc.
170 West Tasman Drive
San Jose, CA USA
Tel: NETS (6387)
Fax:

THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT NOTICE. ALL STATEMENTS, INFORMATION, AND RECOMMENDATIONS IN THIS MANUAL ARE BELIEVED TO BE ACCURATE BUT ARE PRESENTED WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. USERS MUST TAKE FULL RESPONSIBILITY FOR THEIR APPLICATION OF ANY PRODUCTS. THE SOFTWARE LICENSE AND LIMITED WARRANTY FOR THE ACCOMPANYING PRODUCT ARE SET FORTH IN THE INFORMATION PACKET THAT SHIPPED WITH THE PRODUCT AND ARE INCORPORATED HEREIN BY THIS REFERENCE. IF YOU ARE UNABLE TO LOCATE THE SOFTWARE LICENSE OR LIMITED WARRANTY, CONTACT YOUR CISCO REPRESENTATIVE FOR A COPY. The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved. Copyright 1981, Regents of the University of California. NOTWITHSTANDING ANY OTHER WARRANTY HEREIN, ALL DOCUMENT FILES AND SOFTWARE OF THESE SUPPLIERS ARE PROVIDED "AS IS" WITH ALL FAULTS. CISCO AND THE ABOVE-NAMED SUPPLIERS DISCLAIM ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, THOSE OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THIS MANUAL, EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in the document are shown for illustrative purposes only.
Any use of actual IP addresses or phone numbers in illustrative content is unintentional and coincidental. Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this URL: Third-party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (1721R) 2018 Cisco Systems, Inc. All rights reserved.

CHAPTER 1
Release 21.8 Features and Changes Quick Reference

Release 21.8 Features and Changes, on page 1

Release 21.8 Features and Changes

Note: The release version is not provided for features or behavior changes introduced before release and N6.2.

New Features and Functionality / Behavior Changes (Applicable Product(s) / Functional Area):

- 5G NSA for MME, on page 17 (MME)
- 5G NSA for SAEGW, on page 35 (SAEGW)
- Override Control Enhancement, on page 205 (ECS)
- IMEI Validation Failure, on page 159 (ePDG)
- Non-MCDMA Cores for Crypto Processing, on page 203 (ePDG)
- Override Control Enhancement, on page 205 (ePDG)
- Cisco Ultra Traffic Optimization, on page 81 (IPSG)
- Short Message Service, on page 217 (MME)
- Dedicated Core Networks on MME, on page 105 (MME)
- Increased Subscriber Map Limits, on page 163 (MME)
- IPv6 PDN Type Restriction, on page 173 (MME)
- NAT64 Support, on page 199 (P-GW)

- Inline TCP Optimization, on page 165 (P-GW)
- Multiple IP Versions Support, on page 193 (P-GW)
- Packet Count in G-CDR, on page 209 (P-GW)
- LTE to Wi-Fi (S2bGTP) Seamless Handover, on page 179 (P-GW)
- Multiple IP Versions Support, on page 193 (SAEGW)
- LTE to Wi-Fi (S2bGTP) Seamless Handover, on page 179 (SAEGW)
- Multiple IP Versions Support, on page 193 (S-GW)
- BGP Peer Limit, on page 77 (System)
- Configuration Support for Heartbeat Value, on page 101 (System)
- Event Logging Support for VPP, on page 147 (System)
- Limiting Cores on Local File Storage, on page 177 (System)
- ICSR Switchover Configuration Support for SF Failures, on page 155 (System)
- Increased Maximum IFtask Thread Support, on page 161 (System)
- Monitor VPC-DI Network, on page 185 (System)
- SNMP IF-MIB and Entity-MIB Support for DI-Network Interface, on page 231 (System)
- API-based VNFM Upgrade Process, on page 57 (Ultra Services Platform)
- API-based AutoDeploy, AutoIT and AutoVNF Upgrade Process, on page 65 (Ultra Services Platform)
- Automatic Disabling of Unused OpenStack Services, on page 73 (Ultra Services Platform)
- Automatic Enabling of Syslogging for Ceph Services, on page 75 (Ultra Services Platform)
- ESC Event Integration with Ultra M Manager, on page 145 (Ultra Services Platform)
- UAS and UEM Login Security Enhancements, on page 241 (Ultra Services Platform)

- UEM Patch Upgrade Process, on page 243 (Ultra Services Platform)
- Ultra M Manager Integration with AutoIT, on page 253 (Ultra Services Platform)
- Ultra M Manager SNMP Fault Suppression, on page 259 (Ultra Services Platform)
- USP Software Version Updates, on page 261 (Ultra Services Platform)


CHAPTER 2
Feature Defaults Quick Reference

Feature Defaults, on page 5

Feature Defaults

The following table indicates which features are enabled or disabled by default:

- 5G Non Standalone for MME: Disabled - Configuration Required
- 5G Non Standalone for SAEGW: Disabled - Configuration Required
- API-based VNFM Upgrade Process: Disabled - Configuration Required
- API-based AutoDeploy, AutoIT and AutoVNF Upgrade Process: Disabled - Configuration Required
- Automatic Disabling of Unused OpenStack Services: Disabled
- Automatic Enabling of Syslogging for Ceph Services: Enabled - Always-on
- BGP Peer Limit: Disabled - Configuration Required
- Cisco Ultra Traffic Optimization on IPSG: Disabled - License Required
- Configuration Support for Heartbeat Value: Disabled - Configuration Required
- Dedicated Core Networks on MME: Enabled - Configuration Required
- Diameter Proxy Consolidation: Disabled - Configuration Required
- DI-Network RSS Encryption: Disabled - Configuration Required
- ESC Event Integration with Ultra M Manager: Enabled - Always-on
- Event Logging Support for VPP: Disabled - Configuration Required
- Hash-Value Support in Header Enrichment: Disabled - Configuration Required
- ICSR Switchover Configuration Support for SF Failures: Disabled - Configuration Required

- IMEI Validation Failure: Enabled - Always-on
- Increased Maximum IFtask Thread Support: Enabled - Always-on
- Increased Subscriber Map Limits: Disabled - Configuration Required
- Inline TCP Optimization: Disabled - Configuration Required
- IPv6 PDN Type Restriction: Disabled - Configuration Required
- Limiting Cores on Local File Storage: Disabled - Configuration Required
- NAT64 Support: Disabled - Configuration Required
- Monitor VPC-DI Network: Enabled - Always-on
- Multiple IP Versions Support: Disabled - Configuration Required
- Non-MCDMA Cores for Crypto Processing: Enabled - Always-on
- Override Control Enhancement: Disabled - Configuration Required
- Packet Count in G-CDR: Disabled - Configuration Required
- RHEL Version Upgrade for Ultra M: Disabled - Configuration Required
- S6B-bypass Support for eMPS Sessions: Disabled - Configuration Required
- Short Message Service (SMS) Support: Disabled - Configuration Required
- SNMP IF-MIB and Entity-MIB Support Added for DI-Network Interface: Enabled - Always-on
- Triggering Iu Release Procedure: Disabled - Configuration Required
- UAS and UEM Login Security Enhancements: Enabled - Always-on
- UEM Patch Upgrade Process: Disabled - Configuration Required
- Ultra M Manager Integration with AutoIT: Disabled - Configuration Required
- Ultra M Manager SNMP Fault Suppression: Disabled - Configuration Required
- USP Software Version Updates: Disabled - Configuration Required

CHAPTER 3
Bulk Statistics Changes Quick Reference

This chapter identifies bulk statistics added to, modified for, or deprecated from the StarOS 21.8 software release.

Important: For more information regarding bulk statistics identified in this section, see the latest version of the BulkstatStatistics_document.xls spreadsheet supplied with the release.

Bulk statistics changes for 21.8 include:

- New Bulk Statistics, on page 7
- Modified Bulk Statistics, on page 14
- Deprecated Bulk Statistics, on page 15

New Bulk Statistics

This section identifies new bulk statistics and new bulk statistic schemas introduced in release 21.8.

APN Schema

The following bulk statistics are added in the APN schema in support of the LTE to Wi-Fi (S2bGTP) Seamless Handover feature.

- apn-handoverstat-ltetos2bgtpsucc-timerexpiry: Indicates the number of LTE to S2bGTP handovers that succeeded on timer expiry.
- apn-handoverstat-ltetos2bgtpsucc-uplnkdata: Indicates the number of LTE to S2bGTP handovers that succeeded on uplink data on the S2b tunnel.

Diameter-Auth Schema

- fh-continue-retry-emps: Indicates the number of times the failure handling action continue is taken using the eMPS template.

- fh-continue-wo-retry-emps: Indicates the number of times the failure handling action continue without retry is taken using the eMPS template.
- fh-retry-and-term-emps: Indicates the number of times the failure handling action retry and terminate is taken using the eMPS template.
- fh-retry-and-term-wo-str-emps: Indicates the number of times the failure handling action retry and terminate without STR is taken using the eMPS template.
- fh-terminate-emps: Indicates the number of times the failure handling action terminate is taken using the eMPS template.
- fh-terminate-wo-str-emps: Indicates the number of times the failure handling action terminate without STR is taken using the eMPS template.

epdg Schema

The following bulk statistic is added in the epdg schema to indicate IMEI validation failure.

- sess-disconnect-invalid-imei: The total number of sessions disconnected due to an invalid IMEI received from the UE.

ICSR Schema

This section displays the new bulk statistic added for the ICSR Switchover Configuration Support for SF Failures feature.

- switchover reason: Indicates the reason for the ICSR switchover.

MME Schema

The following bulk statistics are added in the MME schema in support of the 5G Non Standalone (NSA) feature.

- attached-dcnr-subscriber: The current total number of attached subscribers that are capable of operating in DCNR.
- connected-dcnr-subscriber: The current total number of subscribers that are capable of operating in DCNR and are in connected state.
- idle-dcnr-subscriber: The current total number of subscribers that are capable of operating in DCNR and are in idle state.

- dcnr-attach-req: The total number of Attach Requests received with DCNR supported.
- dcnr-attach-acc-allowed: The total number of Attach Accepts sent with DCNR allowed.
- dcnr-attach-acc-denied: The total number of Attach Accepts sent with DCNR denied.
- dcnr-attach-rej: The total number of DCNR-requested Attaches rejected.
- dcnr-attach-comp: The total number of Attach Completes received for DCNR-supported attaches.
- dcnr-intra-tau-req: The total number of Intra-TAU Requests received with DCNR supported.
- dcnr-intra-tau-acc-allowed: The total number of Intra-TAU Accepts sent with DCNR allowed.
- dcnr-intra-tau-acc-denied: The total number of Intra-TAU Accepts sent with DCNR denied.
- dcnr-intra-tau-comp: The total number of Intra-TAU Completes received for DCNR-supported requests.
- dcnr-inter-tau-req: The total number of Inter-TAU Requests received with DCNR supported.
- dcnr-inter-tau-acc-allowed: The total number of Inter-TAU Accepts sent with DCNR allowed.
- dcnr-inter-tau-acc-denied: The total number of Inter-TAU Accepts sent with DCNR denied.
- dcnr-inter-tau-rej: The total number of DCNR-requested Inter-TAU Requests rejected.
- dcnr-inter-tau-comp: The total number of Inter-TAU Completes received for DCNR-supported requests.
- s1ap-recdata-erabmodind: The total number of S1 Application Protocol E-RAB Modification Indication messages received from all eNodeBs.
- s1ap-transdata-erabmodcfm: The total number of E-RAB Modification Confirmation messages sent by the MME to the eNodeB.
- erab-modification-indication-attempted: This proprietary counter tracks the number of bearers for which the E-RAB Modification Indication message was sent.

- erab-modification-indication-success: This proprietary counter tracks the number of bearers for which the E-RAB modification succeeded, as shown in the E-RAB Modification Confirm message.
- erab-modification-indication-failures: This proprietary counter tracks the number of bearers for which the E-RAB modification failed, as shown in the E-RAB Modification Indication Confirm message.
- emmevent-path-update-attempt: The total number of EPS Mobility Management Path Update events attempted.
- emmevent-path-update-success: The total number of EPS Mobility Management Path Update events that succeeded.
- emmevent-path-update-failure: The total number of EPS Mobility Management Path Update events that failed.
- dcnr-dns-sgw-selection-common: Indicates the number of times S-GW DNS selection procedures were performed with DNS RRs excluding NR network capability. This counter increments only when the DNS RR with +nc-nr is absent.
- dcnr-dns-sgw-selection-nr: Indicates the number of times S-GW DNS selection procedures were performed with DNS RRs including NR network capability. This counter increments only when the DNS RR with +nc-nr is present.
- dcnr-dns-sgw-selection-local: Indicates the number of times S-GW selection procedures were performed with a locally configured S-GW address, without considering the NR network capability.
- dcnr-dns-pgw-selection-common: Indicates the number of times P-GW DNS selection procedures were performed with DNS RRs excluding NR network capability. This counter increments only when the DNS RR with +nc-nr is absent.
- dcnr-dns-pgw-selection-nr: Indicates the number of times P-GW DNS selection procedures were performed with DNS RRs including NR network capability. This counter increments only when the DNS RR with +nc-nr is present.
- dcnr-dns-pgw-selection-local: Indicates the number of times P-GW selection procedures were performed with a locally configured P-GW address, without considering the NR network capability.
The following bulk statistics are new in the MME schema, added in support of the Dedicated Core Networks feature.

- mme-decor-handover-srv-area-dcn: Indicates the total number of inbound handovers from the service area where DCN is supported.
- mme-decor-handover-srv-area-non-dcn: Indicates the total number of inbound handovers from the service area where DCN is not supported.
- mme-decor-explicit-air-attach: Indicates the number of explicit AIR messages during Attach.
- mme-decor-explicit-air-in-reallocation: Indicates the number of explicit AIR messages during inbound relocation.
- mme-decor-explicit-air-tau-in-reallocation: Indicates the number of explicit AIR messages during inbound relocation using TAU.

MME Decor Schema

The MME Decor schema is new in release 21.8. The following bulk statistics for a specific DECOR profile are new in the MME Decor schema, added in support of the DECOR feature.

- mme-decor-profile-name: Indicates the name of the DECOR profile.
- mme-decor-profile-attached-subscriber: Indicates the total number of subscribers on the MME that is acting as a DCN.
- mme-decor-profile-initial-attach-req-accept: Indicates the total number of Initial Attach Requests accepted by the MME that is acting as a DCN.
- mme-decor-profile-initial-attach-req-reroute: Indicates the total number of Initial Attach Requests rerouted by the MME that is acting as a DCN.
- mme-decor-profile-initial-attach-req-reject: Indicates the total number of Initial Attach Rejects due to No Reroute Data and not handled by the MME that is acting as a DCN.
- mme-decor-profile-reroute-attach-req-accept: Indicates the total number of Rerouted Attach Requests accepted by the MME that is acting as a DCN.
- mme-decor-profile-reroute-attach-req-reject: Indicates the total number of Rerouted Attach Requests rejected by the MME that is acting as a DCN.
- mme-decor-profile-initial-tau-req-accept: Indicates the total number of Initial TAU Requests accepted by the MME that is acting as a DCN.
- mme-decor-profile-initial-tau-req-reroute: Indicates the total number of Initial TAU Requests rerouted by the MME that is acting as a DCN.
- mme-decor-profile-initial-tau-req-reject: Indicates the total number of Initial TAU Rejects due to No Reroute Data and not handled by the MME that is acting as a DCN.

- mme-decor-profile-reroute-tau-req-accept: Indicates the total number of Rerouted TAU Requests accepted by the MME that is acting as a DCN.
- mme-decor-profile-reroute-tau-req-reject: Indicates the total number of Rerouted TAU Requests rejected by the MME that is acting as a DCN.
- mme-decor-profile-ue-usage-type-src-hss: Indicates the total number of times the UE Usage Type is received from the HSS and used by the MME.
- mme-decor-profile-ue-usage-type-src-ue-ctxt: Indicates the total number of times the UE Usage Type is fetched from the local DB record and used by the MME.
- mme-decor-profile-ue-usage-type-src-peer-mme: Indicates the total number of times the UE Usage Type is received from the peer MME and used by the MME.
- mme-decor-profile-ue-usage-type-src-peer-sgsn: Indicates the total number of times the UE Usage Type is received from the peer SGSN and used by the MME.
- mme-decor-profile-ue-usage-type-src-cfg: Indicates the total number of times the UE Usage Type is fetched from the local configuration and used by the MME.
- mme-decor-profile-ue-usage-type-src-enb: Indicates the total number of times the UE Usage Type is received from the eNodeB and used by the MME.
- mme-decor-profile-sgw-sel-dns-common: Indicates the total number of times the S-GW is selected through DNS from a common pool (DNS records without UE Usage Type).
- mme-decor-profile-sgw-sel-dns-dedicated: Indicates the total number of times the S-GW is selected through DNS from a dedicated pool (DNS records with matching UE Usage Type).
- mme-decor-profile-sgw-sel-local-cfg-common: Indicates the total number of times the S-GW is selected from the local configuration without UE Usage Type.
- mme-decor-profile-pgw-sel-dns-common: Indicates the total number of times the P-GW is selected through DNS from a common pool (DNS records without UE Usage Type).
- mme-decor-profile-pgw-sel-dns-dedicated: Indicates the total number of times the P-GW is selected through DNS from a dedicated pool (DNS records with matching UE Usage Type).
- mme-decor-profile-pgw-sel-local-cfg-common: Indicates the total number of times the P-GW is selected from the local configuration without UE Usage Type.
- mme-decor-profile-mme-sel-dns-common: Indicates the total number of times the MME is selected through DNS from a common pool (DNS records without UE Usage Type).

- mme-decor-profile-mme-sel-dns-dedicated: Indicates the total number of times the MME is selected through DNS from a dedicated pool (DNS records with matching UE Usage Type).
- mme-decor-profile-mme-sel-local-cfg-common: Indicates the total number of times the MME is selected from the local configuration without UE Usage Type.
- mme-decor-profile-sgsn-sel-dns-common: Indicates the total number of times the SGSN is selected through DNS from a common pool (DNS records without UE Usage Type).
- mme-decor-profile-sgsn-sel-dns-dedicated: Indicates the total number of times the SGSN is selected through DNS from a dedicated pool (DNS records with matching UE Usage Type).
- mme-decor-profile-sgsn-sel-local-cfg-common: Indicates the total number of times the SGSN is selected from the local configuration without UE Usage Type.
- mme-decor-profile-mmegi-sel-dns: Indicates the total number of times the MMEGI is selected through DNS from a dedicated pool (DNS records with matching UE Usage Type).
- mme-decor-profile-mmegi-sel-local-cfg: Indicates the total number of times the MMEGI is selected from the local configuration.
- mme-decor-profile-mmegi-sel-fail: Indicates the total number of times MMEGI selection failed.
- mme-decor-profile-guti-reallocation-attempted: Indicates the number of GUTI Reallocation procedures attempted due to a UE-Usage-Type change from the HSS through ISDR, or after connected-mode handover when the UE-Usage-Type is not served by this MME (the NAS GUTI Reallocation Command message was sent by the MME).
- mme-decor-profile-guti-reallocation-success: Indicates the number of successful GUTI Reallocation procedures.
- mme-decor-profile-guti-reallocation-failures: Indicates the number of failed GUTI Reallocation procedures.
- mme-decor-profile-isdr-ue-usage-type-change: Indicates the number of ISDR messages received with a different UE-Usage-Type from the HSS.
- mme-decor-profile-explicit-air-attach: Indicates the number of explicit AIR messages during Attach.
- mme-decor-profile-explicit-air-in-relocation: Indicates the number of explicit AIR messages during inbound relocation.
- mme-decor-profile-explicit-air-tau-in-relocation: Indicates the number of explicit AIR messages during inbound relocation using TAU.
- mme-decor-profile-handover-srv-area-dcn: Indicates the total number of inbound handovers from a service area where DCN is supported.

- mme-decor-profile-handover-srv-area-non-dcn: Indicates the total number of inbound handovers from a service area where DCN is not supported.

P-GW Schema

The following bulk statistics are added in the P-GW schema in support of the LTE to Wi-Fi (S2bGTP) Seamless Handover feature.

- handoverstat-ltetos2bgtpsucc-timerexpiry: Handover Statistics. Indicates the number of LTE to GTP S2b successful handovers on timer expiry.
- handoverstat-ltetos2bgtpsucc-uplnkdata: Handover Statistics. Indicates the number of LTE to GTP S2b successful handovers on uplink data on the S2b tunnel.

SAEGW Schema

The following bulk statistics are added in the SAEGW schema in support of the LTE to Wi-Fi (S2bGTP) Seamless Handover feature.

- pgw-handoverstat-ltetos2bgtpsucc-timerexpiry: P-GW Handover Statistics. Indicates the number of LTE to GTP S2b successful handovers on timer expiry.
- pgw-handoverstat-ltetos2bgtpsucc-uplnkdata: P-GW Handover Statistics. Indicates the number of LTE to GTP S2b successful handovers on uplink data on the S2b tunnel.

Mon-Di-Net Schema

The following bulk statistics are added in the mon-di-net schema in support of the Monitor the VPC-DI Network feature.

- cp-loss-5minave: Indicates the average Control Plane loss in the prior 5 minutes.
- cp-loss-60minave: Indicates the average Control Plane loss in the prior 60 minutes.
- dp-loss-5minave: Indicates the average Data Plane loss in the prior 5 minutes.
- dp-loss-60minave: Indicates the average Data Plane loss in the prior 60 minutes.

Modified Bulk Statistics

This section identifies bulk statistics that have been modified in release 21.8. None in this release.
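For intuition about what the mon-di-net windowed counters report, the sketch below computes 5-minute and 60-minute averages over periodic loss samples. This is a hypothetical Python model; the class, method names, and one-minute sampling interval are invented for illustration, and StarOS computes the real counters internally.

```python
from collections import deque

class LossAverager:
    """Windowed packet-loss averages, loosely modeled on the
    mon-di-net 5-minute / 60-minute counters (illustrative only)."""

    def __init__(self, sample_interval_s=60):
        self.interval = sample_interval_s
        # Keep enough samples to cover the largest (60-minute) window.
        self.samples = deque(maxlen=3600 // sample_interval_s)

    def record(self, loss_pct):
        """Record one loss sample (percentage) per interval."""
        self.samples.append(loss_pct)

    def average(self, window_s):
        """Average over the most recent window_s seconds of samples."""
        n = max(1, window_s // self.interval)
        recent = list(self.samples)[-n:]
        return sum(recent) / len(recent) if recent else 0.0

avg = LossAverager()
for pct in [0.0, 0.2, 0.4, 0.6, 0.8]:
    avg.record(pct)
# The 5-minute window covers the last five one-minute samples.
print(round(avg.average(300), 3))  # 0.4
```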

Deprecated Bulk Statistics

This section identifies bulk statistics that are no longer supported in release 21.8. None in this release.


CHAPTER 4
5G NSA for MME

- Feature Summary and Revision History, on page 17
- Feature Description, on page 18
- How It Works, on page 21
- Configuring 5G NSA for MME, on page 26
- Monitoring and Troubleshooting, on page 29

Feature Summary and Revision History

Summary Data

- Applicable Product(s) or Functional Area: MME
- Applicable Platform(s): ASR 5000, ASR 5500, VPC-DI, VPC-SI
- Feature Default: Disabled - Configuration Required
- Related Changes in This Release: Not applicable
- Related Documentation: 5G Non Standalone Solution Guide; AAA Interface Administration and Reference; Command Line Interface Reference; MME Administration Guide; Statistics and Counters Reference

Revision History

- The 5G NSA solution is qualified on the ASR 5000 platform.
- First introduced.

Feature Description

The Cisco 5G Non Standalone (NSA) solution leverages the existing LTE radio access and core network (EPC) as an anchor for mobility management and coverage. This solution enables operators using the Cisco EPC packet core to launch 5G services sooner and to leverage existing infrastructure. NSA therefore provides a seamless option to deploy 5G services with minimal disruption to the network.

Overview

5G is the next generation of 3GPP technology, after 4G/LTE, defined for wireless mobile data communication. The 5G standards were introduced in 3GPP Release 15 to cater to the needs of 5G networks. 3GPP defines two solutions for 5G networks:

- 5G Non Standalone (NSA): The existing LTE radio access and core network (EPC) is leveraged to anchor the 5G NR using the Dual Connectivity feature. This solution enables operators to provide 5G services in a shorter time and at a lower cost. Note: The 5G NSA solution is supported in this release.
- 5G Standalone (SA): An all-new 5G packet core will be introduced with several new capabilities built inherently into it. The SA architecture comprises 5G New Radio (5G NR) and 5G Core Network (5GC). Network slicing, CUPS, virtualization, multi-Gbps support, ultra-low latency, and other such aspects will be natively built into the 5G SA packet core architecture.

Dual Connectivity

The E-UTRA-NR Dual Connectivity (EN-DC) feature supports 5G New Radio (NR) with the EPC. For a UE in EN-DC, an eNodeB acts as the Master Node (MN) and an en-gNB acts as the Secondary Node (SN). The eNodeB is connected to the EPC through the S1 interface and to the en-gNB through the X2 interface. The en-gNB can be connected to the EPC through the S1-U interface and to other en-gNBs through the X2-U interface. The following figure illustrates the E-UTRA-NR Dual Connectivity architecture.

Figure 1: EN-DC Architecture

If the UE supports dual connectivity with NR, the UE sets the DCNR bit to "dual connectivity with NR supported" in the UE network capability IE of the Attach Request/Tracking Area Update Request message. If the UE indicates support for dual connectivity with NR in the Attach Request/Tracking Area Update Request message, and the MME decides to restrict the use of dual connectivity with NR for the UE, then the MME sets the RestrictDCNR bit to "Use of dual connectivity with NR is restricted" in the EPS network feature support IE of the Attach Accept/Tracking Area Update Accept message. If the RestrictDCNR bit is set to "Use of dual connectivity with NR is restricted" in the EPS network feature support IE of the Attach Accept/Tracking Area Update Accept message, the UE indicates to the upper layers that dual connectivity with NR is restricted.

If the UE supports DCNR and DCNR is configured on the MME, and if the HSS sends ULA/IDR with "Access-Restriction" carrying "NR as Secondary RAT Not Allowed", the MME sends the "NR Restriction" bit set in the "Handover Restriction List" IE during Attach/TAU/Handover procedures. Similarly, the MME sets the RestrictDCNR bit to "Use of dual connectivity with NR is restricted" in the EPS network feature support IE of the Attach Accept/Tracking Area Update Accept message, and the UE accordingly indicates to the upper layers that dual connectivity with NR is restricted.

The "Handover Restriction List" IE is present in the "Initial Context Setup Request" message for the Attach and TAU procedure with data forwarding, in the "Handover Required" message for the S1 handover procedure, and in the "Downlink NAS Transport" message for the TAU without active flag procedure.
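The restriction logic described above can be condensed into a small decision function. This is a hypothetical sketch, not StarOS code; the function and parameter names are invented for illustration.

```python
def restrict_dcnr_bit(ue_dcnr_supported, mme_dcnr_enabled, hss_nr_restricted):
    """Return True when the MME would set RestrictDCNR to
    'Use of dual connectivity with NR is restricted' in the EPS
    network feature support IE of the Attach/TAU Accept message.
    Hypothetical helper modeling the documented behavior."""
    if not ue_dcnr_supported:
        # The UE never advertised the DCNR bit, so there is
        # nothing to restrict.
        return False
    # Restrict when DCNR is not configured on the MME, or the HSS
    # Access-Restriction carries 'NR as Secondary RAT Not Allowed'.
    return (not mme_dcnr_enabled) or hss_nr_restricted
```

For example, a DCNR-capable UE attaching to a DCNR-enabled MME whose subscription carries "NR as Secondary RAT Not Allowed" would see the RestrictDCNR bit set.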
The 5G NSA solution for MME supports the following functionality:

- Dynamic S-GW and P-GW selection for DCNR-capable UEs: When a DCNR-capable UE attempts to register in the MME, and when all DCNR validations are successful (for example, the DCNR feature is configured on the MME, the HSS does not send an access restriction for NR, and so on), the MME prefers the following service parameters received from the DNS server (in the NAPTR response) over other service parameters to select an NR-capable S-GW/P-GW:
  - x-3gpp-sgw:x-s5-gtp+nc-nr
  - x-3gpp-pgw:x-s5-gtp+nc-nr
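The preference for NR-capable NAPTR records, with fallback to other records and then to locally configured gateways, can be sketched as below. The function name and record layout are illustrative assumptions, not the actual MME resolver.

```python
def select_gateway(naptr_services, local_fallback):
    """Prefer records whose service parameter ends in '+nc-nr'
    (NR-capable), then any DNS record, then the locally configured
    gateway. Sketch of the documented preference order only."""
    nr_capable = [r for r in naptr_services if r["service"].endswith("+nc-nr")]
    candidates = nr_capable or naptr_services
    return candidates[0]["host"] if candidates else local_fallback

# Hypothetical NAPTR response for S-GW selection.
records = [
    {"service": "x-3gpp-sgw:x-s5-gtp",       "host": "sgw1.example.com"},
    {"service": "x-3gpp-sgw:x-s5-gtp+nc-nr", "host": "sgw2.example.com"},
]
print(select_gateway(records, "sgw-local"))  # sgw2.example.com
print(select_gateway([], "sgw-local"))       # sgw-local
```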

When dynamic selection of the S-GW/P-GW fails for any other reason, the MME falls back and selects the locally configured S-GW/P-GW.

- CUPS support: When a DCNR-capable UE attempts to register in the MME and all DCNR validations are successful, the MME sets the "UP Function Selection Indication Flags" IE with the DCNR flag set to 1 in the "Create Session Request" message. This supports the CUPS architecture, where the SGW-C and PGW-C select an SGW-U and PGW-U that support dual connectivity with NR. When the S-GW receives this IE over S11, it sends it over S5 to the P-GW. If the S-GW receives this IE in a non-CUPS deployment, it is ignored.
- Ultra-low latency QCIs: URLLC QCI 80 (Non-GBR resource type), and QCI 82 and QCI 83 (GBR resource type). The MME establishes default bearers with URLLC QCI 80, which is typically used by low-latency eMBB applications. The MME also establishes dedicated bearers with URLLC QCI 82 and QCI 83 (and with QCI 80 if dedicated bearers of non-GBR type are to be established), which are typically used by discrete automation services (industrial automation).
- E-RAB modification procedure.
- Handles the "DCNR bit" in the "UE network capability" IE.
- Advertises DCNR feature support by sending the "NR as Secondary RAT" feature bit in "Feature-List-ID-2" towards the HSS, provided the DCNR feature is configured on the MME and the UE advertises DCNR capability in NAS.
- Receives DCNR feature support from the HSS when the HSS sends the "NR as Secondary RAT" feature bit in "Feature-List-ID-2".
- Decodes the extended AVPs (Extended-Max-Requested-BW-UL and Extended-Max-Requested-BW-DL) received from the HSS.
- Handles ULA/IDR from the HSS with "Access-Restriction" carrying "NR as Secondary RAT Not Allowed".
- The following new IEs are supported with this feature:
  S1-AP interface:
  - Extended UE-AMBR Downlink
  - Extended UE-AMBR Uplink
  - Extended E-RAB Maximum Bit Rate Downlink
  - Extended E-RAB Maximum Bit Rate Uplink
  - Extended E-RAB Guaranteed Maximum Bit Rate Downlink
  - Extended E-RAB Guaranteed Maximum Bit Rate Uplink
  NAS interface:
  - Extended EPS quality of service
  - Extended APN aggregate maximum bit rate
- Sends the extended QoS values towards the S-GW in the legacy IEs APN-AMBR, Bearer QoS, and Flow QoS.
- Configuration of DCNR at the MME service and call control profile.

- Extension of UE-AMBR limits in the call control profile for higher throughput.
- Extension of APN-AMBR and MBR limits in the APN profile for higher throughput.
- Statistics for the DCNR feature and the E-RAB modification feature.

How It Works

Architecture

This section describes the external interfaces required to support the 5G NSA architecture.

S6a (HSS) Interface

The S6a interface supports the new AVPs "Extended-Max-Requested-BW-UL" and "Extended-Max-Requested-BW-DL" in the grouped AVP "AMBR" to handle the 5G throughput ranges. When the maximum bandwidth value for UL (or DL) traffic is higher than bits per second, the "Max-Requested-Bandwidth-UL" AVP (or DL) must be set to the upper limit and the "Extended-Max-Requested-BW-UL" AVP (or DL) must be set to the requested bandwidth value in kilobits per second.

S1AP (eNodeB) Interface

Extended UE-AMBR: The S1AP interface supports the new IEs "Extended UE Aggregate Maximum Bit Rate Downlink" and "Extended UE Aggregate Maximum Bit Rate Uplink" in the grouped IE "UE Aggregate Maximum Bit Rate", where the units are bits/second. If the Extended UE Aggregate Maximum Bit Rate Downlink/Uplink IE is included, then the UE Aggregate Maximum Bit Rate Downlink/Uplink IE must be ignored.

Extended E-RAB MBR/GBR: The S1AP interface supports the new IEs "Extended E-RAB Maximum Bit Rate Downlink/Uplink" and "Extended E-RAB Guaranteed Bit Rate Downlink/Uplink" in the "GBR QoS Information" grouped IE, where the units are bits/second.

NAS (UE) Interface

Extended APN Aggregate Maximum Bit Rate: Because the existing NAS IE "APN-AMBR" supports APN-AMBR values only up to 65.2 Gbps, a new IE "Extended APN aggregate maximum bit rate" is added in all applicable NAS messages to convey 5G throughput (beyond 65.2 Gbps) over NAS.
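The S6a rule above, where the legacy AVP is pinned to its upper limit and the extended AVP carries the value in kilobits per second, can be sketched as follows. The 4,294,967,295 bps ceiling is an assumption based on the 32-bit range of the legacy AVP (the exact figure is not stated in this text), and the function and key names are illustrative.

```python
# Assumed ceiling of the legacy 32-bit Max-Requested-Bandwidth AVP (bps).
LEGACY_MAX_BPS = 4_294_967_295

def encode_ambr(bps):
    """Split a requested bandwidth into the legacy
    Max-Requested-Bandwidth-UL/DL AVP (bits per second) and, when the
    value exceeds the legacy range, the Extended-Max-Requested-BW-UL/DL
    AVP (kilobits per second). Illustrative sketch only."""
    if bps <= LEGACY_MAX_BPS:
        return {"Max-Requested-Bandwidth": bps}
    return {
        "Max-Requested-Bandwidth": LEGACY_MAX_BPS,  # pinned to upper limit
        "Extended-Max-Requested-BW": bps // 1000,   # value in kbps
    }

# A 10 Gbps request overflows the legacy AVP and uses the extended one.
print(encode_ambr(10_000_000_000))
# {'Max-Requested-Bandwidth': 4294967295, 'Extended-Max-Requested-BW': 10000000}
```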
Extended EPS Quality of Service

As the existing NAS IE "EPS Quality of Service" supports MBR and GBR values only up to 10 Gbps, a new IE "Extended EPS Quality of Service" is added in all applicable NAS messages to convey the 5G throughput (beyond 10 Gbps) over NAS. The structure of the "Extended EPS Quality of Service" IE and the units used for encoding/decoding are defined in 3GPP TS 24.301.

Flows

This section describes the call flow procedures related to MME for 5G NSA.

Initial Registration Procedure

This section describes the Initial Registration procedure for a DCNR capable UE.

Figure 2: Initial Registration of DCNR Capable UE

The DCNR capable UE sets the "DCNR bit" in the "UE Network Capability" IE of the NAS message "Attach Request". DCNR must be enabled at the MME service or call control profile depending upon the operator requirement.

MME successfully authenticates the UE. As part of the authorization process, while sending ULR to HSS, MME advertises the DCNR support by sending the "NR as Secondary RAT" feature bit in "Feature-List-ID-2".

HSS sends ULA advertising DCNR by sending the "NR as Secondary RAT" feature bit in "Feature-List-ID-2", "Max-Requested-Bandwidth-UL" as 4294967295 bps, "Max-Requested-Bandwidth-DL" as 4294967295 bps, and the extended bandwidth values in the new AVPs "Extended-Max-Requested-BW-UL" and "Extended-Max-Requested-BW-DL". If HSS determines that the UE is not authorized for DCNR services, then HSS sends Subscription-Data with "Access-Restriction" carrying "NR as Secondary RAT Not Allowed".

MME sends the Create Session Request message with the extended APN-AMBR values in the existing AMBR IE. As the APN-AMBR values on the GTPv2 interface are encoded in kbps, the existing AMBR IE can carry the 5G NSA bit rates.

P-GW sends CCR-I to PCRF advertising DCNR by sending the "Extended-BW-NR" feature bit in "Feature-List-ID-2", "APN-Aggregate-Max-Bitrate-UL" as 4294967295 bps, "APN-Aggregate-Max-Bitrate-DL" as 4294967295 bps, and the extended bandwidth values in the new AVPs "Extended-APN-AMBR-UL" and "Extended-APN-AMBR-DL".

PCRF sends CCA-I advertising DCNR by sending the "Extended-BW-NR" feature bit in "Feature-List-ID-2", "APN-Aggregate-Max-Bitrate-UL" as 4294967295 bps, "APN-Aggregate-Max-Bitrate-DL" as 4294967295 bps, and the extended bandwidth values in the new AVPs "Extended-APN-AMBR-UL" and "Extended-APN-AMBR-DL". PCRF can offer the same extended APN-AMBR values that are requested by PCEF or modify the extended APN-AMBR values. P-GW enforces the APN-AMBR values accordingly.

P-GW honors the APN-AMBR values as offered by PCRF and sends the extended APN-AMBR values in the existing APN-AMBR IE in the Create Session Response message.

MME computes the UE-AMBR values and sends the extended UE-AMBR values in the new IEs "Extended UE Aggregate Maximum Bit Rate Downlink" and "Extended UE Aggregate Maximum Bit Rate Uplink", setting the legacy "UE AMBR Uplink" and "UE AMBR Downlink" values to the maximum allowed value 10000000000 bps (10 Gbps) in the "Initial Context Setup Request" message.

MME sends APN-AMBR values up to 65.2 Gbps in the existing APN-AMBR IE in the NAS Activate Default EPS Bearer Context Request (Attach Accept). If the APN-AMBR values are beyond 65.2 Gbps, MME sends the extended APN-AMBR values in the new IE "Extended APN Aggregate Maximum Bit Rate".

If ULA is received with "Access-Restriction" carrying "NR as Secondary RAT Not Allowed", MME sends the Initial Context Setup Request message with the "NR Restriction" bit set in the Handover Restriction List IE. MME also sets the RestrictDCNR bit to "Use of dual connectivity with NR is restricted" in the EPS network feature support IE of the Attach Accept message. UE provides the indication that dual connectivity with NR is restricted to the upper layers accordingly.

If the DCNR feature is not configured at the MME service or call control profile, then MME sets the RestrictDCNR bit to "Use of dual connectivity with NR is restricted" in the EPS network feature support IE of the Attach Accept message. UE provides the indication that dual connectivity with NR is restricted to the upper layers accordingly.

enodeb sends the Initial Context Setup Response message. If the master enodeb determines to establish the bearer on the secondary enodeb, the F-TEID of the secondary enodeb may be sent (transport layer address and TEID of the secondary enodeb). It is transparent to MME whether the bearer is established on the master enodeb or the secondary enodeb.

enodeb sends Uplink NAS Transport with the NAS message "Attach Complete - Activate Default EPS Bearer Context Accept".
MME sends the Modify Bearer Request message to S-GW with the S1-U F-TEID details as received in the Initial Context Setup Response message.

MME receives the Modify Bearer Response message from S-GW.

E-RAB Modification Procedure

When the Secondary Cell Group (SCG) bearer option is applied to support DCNR, this procedure is used to transfer bearer contexts to and from the secondary enodeb or secondary gnodeb.

Figure 3: E-RAB Modification Procedure by Master enodeb

Step 1: The master enodeb (MeNB) sends an E-RAB Modification Indication message (enodeb address(es) and TEIDs for downlink user plane for all the EPS bearers) to the MME. The master enodeb indicates whether each bearer is modified or not. The "E-RAB to be Modified List" IE contains both "E-RAB to Be Modified Item IEs" and "E-RAB not to Be Modified Item IEs". For the bearers that need to be switched to the secondary enodeb/gnodeb (SeNB), "E-RAB to Be Modified Item IEs" contains the transport layer address of the gnodeb and the TEID of the gnodeb.

Step 2: The MME sends a Modify Bearer Request message (enodeb address(es) and TEIDs for downlink user plane for all the EPS bearers) per PDN connection to the S-GW, only for the affected PDN connections.

Step 3: The S-GW returns a Modify Bearer Response message (S-GW address and TEID for uplink traffic) to the MME as a response to the Modify Bearer Request message.

Step 4: For the bearers transferred to the SeNB, the S-GW sends one or more end marker packets on the old path (to the master enodeb) immediately after switching the path.

Step 5: The MME confirms E-RAB modification with the E-RAB Modification Confirm message. The MME indicates whether each bearer was successfully modified, retained unmodified, or already released by the EPC.

Standards Compliance

Cisco's implementation of the 5G NSA feature complies with the following standards:

3GPP TS 23.003, Release 15: Numbering, addressing and identification
3GPP TS 23.401, Release 15: General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access
3GPP TS 29.272, Release 15: Evolved Packet System (EPS); Mobility Management Entity (MME) and Serving GPRS Support Node (SGSN) related interfaces based on Diameter protocol
3GPP TS 29.274, Release 15: 3GPP Evolved Packet System (EPS); Evolved General Packet Radio Service (GPRS) Tunnelling Protocol for Control plane (GTPv2-C); Stage 3
3GPP TS 29.303, Release 15: Domain Name System Procedures

Configuring 5G NSA for MME

This section describes how to configure 5G NSA to support MME. Configuring 5G NSA on MME involves:

Enabling DCNR in MME Service, on page 26
Enabling DCNR in Call Control Profile, on page 27
Configuring APN AMBR Values, on page 27
Configuring Dedicated Bearer MBR Values, on page 28
Configuring UE AMBR Values, on page 28

Enabling DCNR in MME Service

Use the following configuration to enable Dual Connectivity with New Radio (DCNR) to support 5G NSA.

configure
   context context_name
      mme-service service_name
         [ no ] dcnr
         end

NOTES:

mme-service service_name: Creates an MME service or configures an existing MME service in the current context. service_name specifies the name of the MME service as an alphanumeric string of 1 to 63 characters.

no: Disables the DCNR configuration.

The dcnr CLI command is disabled by default.

Enabling DCNR in Call Control Profile

Use the following configuration to enable Dual Connectivity with New Radio (DCNR) to support 5G Non Standalone (NSA).

configure
   call-control-profile profile_name
      [ no | remove ] dcnr
      end

NOTES:

call-control-profile profile_name: Creates an instance of a call control profile. profile_name specifies the name of the call control profile as an alphanumeric string of 1 to 64 characters.

no: Disables the DCNR configuration in the call control profile.

remove: Removes the DCNR configuration from the call control profile.

The dcnr CLI command is disabled by default.

Configuring APN AMBR Values

Use the following configuration to configure the APN aggregate maximum bit rate (AMBR) that will be stored in the Home Subscriber Server (HSS).

configure
   apn-profile apn_profile_name
      qos apn-ambr max-ul mbr_up max-dl mbr_down
      remove qos apn-ambr
      end

NOTES:

apn-profile apn_profile_name: Creates an instance of an Access Point Name (APN) profile. apn_profile_name specifies the name of the APN profile as an alphanumeric string of 1 to 64 characters.

qos: Configures the quality of service (QoS) parameters to be applied.

apn-ambr: Configures the aggregate maximum bit rate (AMBR) for the APN.

max-ul mbr_up: Defines the maximum bit rate for uplink traffic. mbr_up must be an integer from 1 to 4000000000 (4 Tbps).

max-dl mbr_down: Defines the maximum bit rate for downlink traffic. mbr_down must be an integer from 1 to 4000000000 (4 Tbps).

remove: Removes the APN AMBR changes from the configuration for this APN profile.

Configuring Dedicated Bearer MBR Values

Use the following configuration to configure the quality of service maximum bit rate (MBR) values for the dedicated bearer.

configure
   apn-profile apn_profile_name
      qos dedicated-bearer mbr max-ul mbr_up max-dl mbr_down
      remove qos dedicated-bearer
      end

NOTES:

apn-profile apn_profile_name: Creates an instance of an Access Point Name (APN) profile. apn_profile_name specifies the name of the APN profile as an alphanumeric string of 1 to 64 characters.

qos: Configures the quality of service (QoS) parameters to be applied.

dedicated-bearer mbr: Configures the maximum bit rate (MBR) for the dedicated bearer.

max-ul mbr_up: Defines the maximum bit rate for uplink traffic. mbr_up must be an integer from 1 to 4000000000 (4 Tbps).

max-dl mbr_down: Defines the maximum bit rate for downlink traffic. mbr_down must be an integer from 1 to 4000000000 (4 Tbps).

remove: Deletes the dedicated bearer MBR changes from the configuration for this APN profile.

Configuring UE AMBR Values

Use the following configuration to configure the values for the aggregate maximum bit rate stored on the UE (UE AMBR).

configure
   call-control-profile profile_name
      qos ue-ambr { max-ul mbr_up max-dl mbr_down }
      remove qos ue-ambr
      end

NOTES:

call-control-profile profile_name: Creates an instance of a call control profile. profile_name specifies the name of a call control profile entered as an alphanumeric string of 1 to 64 characters.

qos: Configures the quality of service (QoS) parameters to be applied.

ue-ambr: Configures the aggregate maximum bit rate stored on the UE (UE AMBR).

max-ul mbr_up: Defines the maximum bit rate for uplink traffic. mbr_up must be an integer from 1 to 4000000000 (4 Tbps).

max-dl mbr_down: Defines the maximum bit rate for downlink traffic. mbr_down must be an integer from 1 to 4000000000 (4 Tbps).

remove: Deletes the configuration from the call control profile.

Monitoring and Troubleshooting

This section provides information regarding show commands and bulk statistics available to monitor and troubleshoot the 5G NSA feature.

Show Commands and Outputs

show mme-service db record imsi

The output of this command includes the following fields:

ARD: Dual-Connectivity-NR-not-allowed - Displays True or False to identify whether the ARD received from HSS indicates that the DCNR feature is allowed for the given IMSI.

show mme-service name <mme_svc_name>

The output of this command includes the "DCNR" field to indicate whether the DCNR feature is enabled or disabled at the MME service.

show mme-service session full all

The output of this command includes the following fields:

UE DC-NR Information:

DC-NR capable UE - Indicates whether the UE is DCNR capable.

DC-NR operation allowed - Indicates whether the DCNR operation is allowed by MME for the DCNR capable UE.

show mme-service statistics

Dual Connectivity with NR Statistics:

Attach Procedure

Attach Request Rcvd - Indicates the number of Attach Request messages received with UE advertising DCNR support.

Attach Acc DCNR allowed - Indicates the number of Attach Accept messages sent by the MME acknowledging the DCNR support for UE (Restrict DCNR bit not set in Attach Accept).

Attach Acc DCNR denied - Indicates the number of Attach Accept messages sent by MME rejecting the DCNR support for the UE (Restrict DCNR bit set in Attach Accept).

Attach Reject Sent - Indicates the number of Attach Reject messages sent by MME whose corresponding Attach Request messages have DCNR support capability.

Attach Complete Rcvd - Indicates the number of Attach Complete messages received by MME whose corresponding Attach Request messages have DCNR support capability.

Intra MME TAU Procedure

TAU Request Rcvd - Indicates the number of TAU Request messages received for the Intra-MME TAU procedure with UE advertising DCNR support.

TAU Accept DCNR allowed - Indicates the number of TAU Accept messages sent by the MME acknowledging the DCNR support for UE (Restrict DCNR bit not set in TAU Accept) for the Intra-MME TAU procedure.

TAU Accept DCNR denied - Indicates the number of TAU Accept messages sent by the MME rejecting the DCNR support for UE (Restrict DCNR bit set in TAU Accept) for the Intra-MME TAU procedure.

TAU Complete Rcvd - Indicates the number of TAU Complete messages received by the MME whose corresponding Intra-MME TAU Requests have DCNR support capability.

Inter MME TAU Procedure

TAU Request Rcvd - Indicates the number of TAU Request messages received for the Inter-MME TAU procedure with UE advertising DCNR support.

TAU Accept DCNR allowed - Indicates the number of TAU Accept messages sent by the MME acknowledging the DCNR support for UE (Restrict DCNR bit not set in TAU Accept) for the Inter-MME TAU procedure.

TAU Accept DCNR denied - Indicates the number of TAU Accept messages sent by the MME rejecting the DCNR support for UE (Restrict DCNR bit set in TAU Accept) for the Inter-MME TAU procedure.

TAU Reject Sent - Indicates the number of TAU Reject messages sent by the MME whose corresponding Inter-MME TAU Requests have DCNR support capability.

TAU Complete Rcvd - Indicates the number of TAU Complete messages received by the MME whose corresponding Inter-MME TAU Requests have DCNR support capability.

Dual Connectivity with NR Subscribers

Attached Calls - Indicates the number of DCNR supported UEs attached with the MME.

Connected Calls - Indicates the number of DCNR supported UEs in connected mode at the MME.

Idle Calls - Indicates the number of DCNR supported UEs in idle mode at the MME.
Node Selection:

SGW DNS:

Common - Indicates the number of times S-GW DNS selection procedures are performed with DNS RR excluding the NR network capability.

NR Capable - Indicates the number of times S-GW DNS selection procedures are performed with DNS RR including the NR network capability.

SGW Local Config:

Common - Indicates the number of times S-GW selection procedures are performed with a locally configured S-GW address, without considering the NR network capability.

PGW DNS:

Common - Indicates the number of times P-GW DNS selection procedures are performed with DNS RR excluding the NR network capability.

NR Capable - Indicates the number of times P-GW DNS selection procedures are performed with DNS RR including the NR network capability.

PGW Local Config:

Common - Indicates the number of times P-GW selection procedures are performed with a locally configured P-GW address, without considering the NR network capability.

Important: When a UE is defined with "UE usage type" and "NR Capable", the S-GW/P-GW is selected via DNS in the following order:

1. MME chooses an S-GW/P-GW that supports both +ue and +nr services.
2. If step 1 fails, MME selects an S-GW/P-GW that supports +nr service only.
3. If step 2 fails, MME selects an S-GW/P-GW that supports +ue service only.
4. If step 3 fails, MME selects an S-GW/P-GW without +nr or +ue service.

Handover Statistics:

Bearer Statistics - ERAB Modification Indication

Attempted - Indicates the number of bearers for which the E-RAB Modification Indication procedure is attempted (bearer level stats).

Success - Indicates the number of bearers for which the E-RAB Modification Indication procedure has succeeded (bearer level stats).

Failures - Indicates the number of bearers for which the E-RAB Modification Indication procedure has failed (bearer level stats).
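The four-step fallback order above can be sketched as follows. This is an illustrative Python sketch, not StarOS code; candidate gateways are modelled as hypothetical (hostname, tag-set) pairs, and only the preference order comes from the note above.

```python
# Illustrative sketch (not StarOS code): S-GW/P-GW selection order for a
# DCNR capable UE with "UE usage type". Candidates are (hostname, set of
# service tags); hostnames here are hypothetical.

def select_gateway(candidates):
    """Return the first candidate matching the preference order:
    +ue and +nr, then +nr, then +ue, then any record at all."""
    rules = [
        lambda tags: "+ue" in tags and "+nr" in tags,  # step 1
        lambda tags: "+nr" in tags,                    # step 2
        lambda tags: "+ue" in tags,                    # step 3
        lambda tags: True,                             # step 4
    ]
    for rule in rules:
        for host, tags in candidates:
            if rule(tags):
                return host
    return None  # no candidate records at all
```

For example, given one +ue-only and one +nr-only record, the +nr-only record wins because step 1 finds no record carrying both tags and step 2 runs next.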

show mme-service statistics s1ap

The output of this command includes the following fields:

S1AP Statistics:

Transmitted S1AP Data:

E-RAB Modification Cfm - Indicates the number of E-RAB Modification Confirm messages sent by MME upon successful E-RAB modification procedure.

Received S1AP Data:

E-RAB Mod Ind - Indicates the number of E-RAB Modification Indication messages received from the master enodeb.

show subscribers mme-service

The output of this command includes the "DCNR Devices" field to indicate the number of DCNR devices that are attached to the MME.

Bulk Statistics

This section provides information on the bulk statistics for the 5G NSA feature on MME.

MME Schema

The following 5G NSA feature related bulk statistics are available in the MME schema.

attached-dcnr-subscriber - The current total number of attached subscribers capable of operating in DCNR.

connected-dcnr-subscriber - The current total number of subscribers capable of operating in DCNR and in connected state.

idle-dcnr-subscriber - The current total number of subscribers capable of operating in DCNR and in idle state.

dcnr-attach-req - The total number of Attach Request messages that are received with DCNR supported.

dcnr-attach-acc-allowed - The total number of Attach Accept messages that are sent with DCNR allowed.

dcnr-attach-acc-denied - The total number of Attach Accept messages that are sent with DCNR denied.

dcnr-attach-rej - The total number of DCNR requested Attach Rejected messages.

dcnr-attach-comp - The total number of Attach Complete messages that are received for DCNR supported attaches.

dcnr-intra-tau-req - The total number of Intra-TAU Request messages that are received with DCNR supported.

dcnr-intra-tau-acc-allowed - The total number of Intra-TAU Accept messages that are sent with DCNR allowed.

dcnr-intra-tau-acc-denied - The total number of Intra-TAU Accept messages that are sent with DCNR denied.

dcnr-intra-tau-comp - The total number of Intra-TAU Complete messages that are received for DCNR supported requests.

dcnr-inter-tau-req - The total number of Inter-TAU Request messages that are received with DCNR supported.

dcnr-inter-tau-acc-allowed - The total number of Inter-TAU Accept messages that are sent with DCNR allowed.

dcnr-inter-tau-acc-denied - The total number of Inter-TAU Accept messages that are sent with DCNR denied.

dcnr-inter-tau-rej - The total number of DCNR requested Inter-TAU Request messages that are rejected.

dcnr-inter-tau-comp - The total number of Inter-TAU Complete messages that are received for DCNR supported requests.

s1ap-recdata-erabmodind - The total number of S1 Application Protocol E-RAB Modification Indication messages received from all enodebs.

s1ap-transdata-erabmodcfm - The total number of E-RAB Modification Confirmation messages sent by the MME to the enodeb.

erab-modification-indication-attempted - The total number of bearers for which E-RAB Modification Indication messages were sent.

erab-modification-indication-success - The total number of bearers for which the E-RAB Modification Indication procedure succeeded.

erab-modification-indication-failures - The total number of bearers for which E-RAB Modification Indication failed, as shown in the E-RAB Modification Indication Confirm message.

emmevent-path-update-attempt - The total number of EPS Mobility Management events - Path Update attempted.

emmevent-path-update-success - The total number of EPS Mobility Management events - Path Update successes.

emmevent-path-update-failure - The total number of EPS Mobility Management events - Path Update failures.

dcnr-dns-sgw-selection-common - The total number of times S-GW DNS selection procedures were performed with DNS RR excluding NR network capability.

dcnr-dns-sgw-selection-nr - The total number of times S-GW DNS selection procedures were performed with DNS RR including NR network capability.

dcnr-dns-sgw-selection-local - The total number of times S-GW selection procedures were performed with a locally configured S-GW address, without considering the NR network capability.

dcnr-dns-pgw-selection-common - The total number of times P-GW DNS selection procedures were performed with DNS RR excluding NR network capability.

dcnr-dns-pgw-selection-nr - The total number of times P-GW DNS selection procedures were performed with DNS RR including NR network capability.

dcnr-dns-pgw-selection-local - The total number of times P-GW selection procedures were performed with a locally configured P-GW address, without considering the NR network capability.

CHAPTER 5

5G NSA for SAEGW

Feature Summary and Revision History, on page 35
Feature Description, on page 36
How It Works, on page 39
Configuring 5G NSA for SAEGW, on page 42
Monitoring and Troubleshooting, on page 46

Feature Summary and Revision History

Summary Data

Applicable Product(s) or Functional Area: P-GW, S-GW, SAEGW

Applicable Platform(s): ASR 5000, ASR 5500, VPC-DI, VPC-SI

Feature Default: Disabled - Configuration Required

Related Changes in This Release: Not applicable

Related Documentation: 5G Non Standalone Solution Guide, AAA Interface Administration and Reference, Command Line Interface Reference, P-GW Administration Guide, S-GW Administration Guide, SAEGW Administration Guide, Statistics and Counters Reference

Revision History

The 5G NSA solution is qualified on the ASR 5000 platform.

The 5G NSA solution is enhanced to support: Feature License, Dedicated bearers, Gy interface support, URLLC QCI support, Show output commands enhanced.

First introduced.

Feature Description

The Cisco 5G Non Standalone (NSA) solution leverages the existing LTE radio access and core network (EPC) as an anchor for mobility management and coverage. This solution enables operators using the Cisco EPC Packet Core to launch 5G services in a shorter time and leverage existing infrastructure. NSA thus provides a seamless option to deploy 5G services with minimal disruption to the network.

Overview

5G is the next generation of 3GPP technology, after 4G/LTE, defined for wireless mobile data communication. The 5G standards are introduced in 3GPP Release 15 to cater to the needs of 5G networks. The two solutions defined by 3GPP for 5G networks are:

5G Non Standalone (NSA): The existing LTE radio access and core network (EPC) is leveraged to anchor the 5G NR using the Dual Connectivity feature. This solution enables operators to provide 5G services in a shorter time and at lower cost.

Note: The 5G NSA solution is supported in this release.

5G Standalone (SA): An all-new 5G Packet Core will be introduced with several new capabilities built inherently into it. The SA architecture comprises 5G New Radio (5G NR) and 5G Core Network (5GC). Network Slicing, CUPS, Virtualization, Multi-Gbps support, Ultra-low latency, and other such aspects will be natively built into the 5G SA Packet Core architecture.

Dual Connectivity

The E-UTRA-NR Dual Connectivity (EN-DC) feature supports 5G New Radio (NR) with EPC. A UE is connected to an enodeb that acts as a Master Node (MN) and an en-gnb that acts as a Secondary Node (SN). The enodeb is connected to the EPC through the S1 interface and to the en-gnb through the X2 interface. The en-gnb can be connected to the EPC through the S1-U interface and to other en-gnbs through the X2-U interface. The following figure illustrates the E-UTRA-NR Dual Connectivity architecture.

Figure 4: EN-DC Architecture

If the UE supports dual connectivity with NR, then the UE must set the DCNR bit to "dual connectivity with NR supported" in the UE network capability IE of the Attach Request/Tracking Area Update Request message. If the UE indicates support for dual connectivity with NR in the Attach Request/Tracking Area Update Request message, and the MME decides to restrict the use of dual connectivity with NR for the UE, then the MME sets the RestrictDCNR bit to "Use of dual connectivity with NR is restricted" in the EPS network feature support IE of the Attach Accept/Tracking Area Update Accept message. If the RestrictDCNR bit is set to "Use of dual connectivity with NR is restricted" in the EPS network feature support IE of the Attach Accept/Tracking Area Update Accept message, the UE provides the indication that dual connectivity with NR is restricted to the upper layers.
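The RestrictDCNR decision described above can be sketched as follows. This is an illustrative Python sketch, not Cisco code; the function and parameter names are hypothetical, while the three inputs (the UE's DCNR bit, the MME-side DCNR configuration, and the HSS Access-Restriction) come from the behavior described in this section.

```python
# Illustrative sketch (not Cisco code): the MME's RestrictDCNR decision.
# Inputs model the UE's DCNR bit, the MME-side DCNR configuration, and
# the HSS subscription; all names here are hypothetical.

def restrict_dcnr(ue_sets_dcnr_bit: bool,
                  dcnr_configured: bool,
                  hss_nr_allowed: bool) -> bool:
    """True => the Attach/TAU Accept carries RestrictDCNR set to
    'Use of dual connectivity with NR is restricted'."""
    if not ue_sets_dcnr_bit:
        # The UE did not advertise DCNR; no restriction indication
        # applies.
        return False
    # Restrict when the feature is off at the MME, or when the HSS
    # subscription carries "NR as Secondary RAT Not Allowed".
    return (not dcnr_configured) or (not hss_nr_allowed)
```

Only when all three conditions line up (UE capable, MME configured, HSS allowing NR as Secondary RAT) does the UE proceed without the restriction indication.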
If the UE supports DCNR and DCNR is configured on MME, and if HSS sends ULA/IDR with "Access-Restriction" carrying "NR as Secondary RAT Not Allowed", MME sends the "NR Restriction" bit

set in the "Handover Restriction List" IE during Attach/TAU/Handover procedures. Similarly, MME sets the RestrictDCNR bit to "Use of dual connectivity with NR is restricted" in the EPS network feature support IE of the Attach Accept/Tracking Area Update Accept message. Accordingly, UE provides the indication that dual connectivity with NR is restricted to the upper layers.

The "Handover Restriction List" IE is present in the "Initial Context Setup Request" message for the Attach and TAU procedure with data forwarding procedure, in the "Handover Required" message for the S1 handover procedure, and in the "Downlink NAS Transport" message for the TAU without active flag procedure.

Important: 5G NSA requires a separate feature license from release 21.8 onwards. For more information on licenses, contact your Cisco Account representative.

5G radio offers downlink data throughput up to 20 Gbps and uplink data throughput up to 10 Gbps. Some of the interfaces in EPC are capable of handling (encoding or decoding) 5G throughput ranges. For example, NAS supports up to 65.2 Gbps (APN-AMBR) and S5/S8/S10/S3 (GTPv2 interfaces) support up to 4.2 Tbps. The Diameter interfaces S6a and Gx support only up to 4.2 Gbps throughput, S1-AP supports only up to 10 Gbps, and NAS supports up to 10 Gbps (MBR, GBR). New AVPs/IEs have been introduced in the S6a, Gx, S1-AP, and NAS interfaces to support 5G throughput rates.

Dual Connectivity with New Radio (DCNR) supports the following functionality:

Supports configuration of the DCNR feature at the P-GW service.

Supports configuration of the Extended-BW-NR feature in the IMSA service.

Advertises the DCNR feature support by sending the Extended-BW-NR feature bit in Feature-List-ID-2 towards PCRF.

Forwards the AVPs "Extended-APN-AMBR-UL" and "Extended-APN-AMBR-DL" in CCR messages when it receives APN-AMBR values greater than 4.2 Gbps from MME/S-GW.

Decodes the extended AVPs "Extended-APN-AMBR-UL" and "Extended-APN-AMBR-DL" when received from PCRF.
Sends the AVPs "Extended-Max-Requested-BW-UL", "Extended-Max-Requested-BW-DL", "Extended-GBR-UL", and "Extended-GBR-DL" when it receives MBR and GBR values greater than 4.2 Gbps from MME/S-GW.

Decodes the AVPs "Extended-Max-Requested-BW-UL", "Extended-Max-Requested-BW-DL", "Extended-GBR-UL", and "Extended-GBR-DL" when received from PCRF.

Supports dedicated bearer establishment with extended QoS.

Sends the AVPs "Extended-Max-Requested-BW-UL" and "Extended-Max-Requested-BW-DL" in Gy records.

Supports 5G requirements of ultra-low latency. 3GPP introduced URLLC QCI 80 (Non-GBR resource type), and QCI 82 and 83 (GBR resource type). P-GW establishes default bearers with URLLC QCI 80, which is typically used by low-latency eMBB applications. P-GW establishes dedicated bearers with URLLC QCI 82 and 83 (also with QCI 80 if dedicated bearers of Non-GBR type are to be established), which is typically used by discrete automation services (industrial automation).

Dynamic S-GW and P-GW selection by MME for a DCNR capable UE. When a DCNR capable UE attempts to register in MME and all DCNR validations are successful (for example, DCNR feature configuration on MME, HSS not sending access-restriction for NR, and so on), the MME sets the UP Function Selection Indication Flags IE with the DCNR flag set to 1 in the Create Session Request message.

This feature is relevant to the CUPS architecture, helping SGW-C and PGW-C select an SGW-U and PGW-U that support dual connectivity with NR. When S-GW receives this IE over S11, it sends this IE over S5 to P-GW. S-GW ignores the IE if it receives it in a non-CUPS deployment.

How It Works

Architecture

This section describes the architecture for Gx (PCRF) with respect to the DCNR feature.

Gx (PCRF)

The Gx interface introduces the new AVPs "Extended-APN-AMBR-UL" and "Extended-APN-AMBR-DL" in the grouped AVPs "QoS-Information" and "Conditional-APN-Aggregate-Max-Bitrate" to handle the 5G throughput range for default bearers. To handle the 5G throughput range for dedicated bearers, the new AVPs "Extended-GBR-UL", "Extended-GBR-DL", "Extended-Max-Requested-BW-UL", and "Extended-Max-Requested-BW-DL" have been introduced in the grouped AVP QoS-Information. When the maximum bandwidth value set for UL (or DL, respectively) traffic is higher than 4294967295 bits per second, the "Max-Requested-Bandwidth-UL" (or -DL, respectively) AVP should be present and set to its upper limit, and the "Extended-Max-Requested-BW-UL" (or -DL, respectively) AVP should be present and set to the requested bandwidth value in kilobits per second. The same principle applies to "Extended-GBR-UL/DL" and "Extended-APN-AMBR-UL/DL".

The following new AVPs have been introduced under the grouped AVP QoS-Information:

Extended-Max-Requested-BW-UL
Extended-Max-Requested-BW-DL
Extended-GBR-UL
Extended-GBR-DL
Extended-APN-AMBR-UL
Extended-APN-AMBR-DL

The following new AVPs have been introduced under the grouped AVP Conditional-APN-Aggregate-Max-Bitrate:

Extended-APN-AMBR-UL
Extended-APN-AMBR-DL

Gy (OCS)

The Gy interface introduces the new AVPs "Extended-Max-Requested-BW-UL" and "Extended-Max-Requested-BW-DL" in the grouped AVP QoS-Information to handle 5G throughput ranges for dedicated bearers. Although the 3GPP specification also mentions "Extended-GBR-UL/DL" and "Extended-APN-AMBR-UL/DL", they are not applicable to the Gy implementation.
When the maximum bandwidth value set for UL/DL traffic is higher than 4294967295 bits per second, P-GW sets the "Max-Requested-Bandwidth-UL/DL" AVP to its upper limit and sets the

"Extended-Max-Requested-BW-UL/DL" to the required bandwidth value in kilobits per second in CCR-I/CCR-U messages. The NSA feature has been extended only to the standard Gy dictionary dcca-custom13.

Limitations

This section describes the known limitations for DCNR:

The 5G NSA feature is implemented only for the Gx standard dictionary (r8-gx-standard).

The 5G NSA feature is implemented only for the Gy standard dictionary "dcca-custom13". To support the NSA feature for other Gx and Gy dictionaries, a dynamic dictionary must be built. Contact your Cisco Account representative for more details.

In this release, ICSR support for this feature is not available.

When PCRF sends the "Extended-Max-Requested-BW-UL/DL" AVP, it is expected to send "Max-Requested-Bandwidth-UL/DL" with the maximum value 4294967295 bps. When the "Extended-Max-Requested-BW-UL/DL" AVP is present, P-GW ignores the value received in the "Max-Requested-Bandwidth-UL/DL" AVP and assumes it to be 4294967295 bps.

When PCRF sends the "Extended-GBR-UL/DL" AVP, it is expected to send "Guaranteed-Bitrate-UL/DL" with the maximum value 4294967295 bps. When the "Extended-GBR-UL/DL" AVP is present, P-GW ignores the value received in the "Guaranteed-Bitrate-UL/DL" AVP and assumes it to be 4294967295 bps.

Flows

This section describes the following call flows related to the DCNR feature.

Initial Registration by DCNR Capable UE

1. DCNR capable UE sets the DCNR bit in the NAS message Attach Request in the UE Network Capability IE.

2. MME successfully authenticates the UE.

3. As part of the authorization process, while sending the ULR to the HSS, the MME advertises DCNR support by sending the "NR as Secondary RAT" feature bit in Feature-List-ID-2.

4. The HSS sends the ULA, advertising DCNR by sending the "NR as Secondary RAT" feature bit in Feature-List-ID-2, with Max-Requested-Bandwidth-UL and Max-Requested-Bandwidth-DL set to their upper limits in bps and the extended bandwidth values in the new AVPs "Extended-Max-Requested-BW-UL" and "Extended-Max-Requested-BW-DL". If the HSS determines that the UE is not authorized for DCNR services, the HSS sends Subscription-Data with Access-Restriction carrying "NR as Secondary RAT Not Allowed".

5. The MME sends the Create Session Request with the extended APN-AMBR values in the existing AMBR IE. Because APN-AMBR values on the GTPv2 interface are encoded in kbps, the existing AMBR IE can carry the 5G NSA bit rates.

6. The P-GW sends CCR-I to the PCRF, advertising DCNR by sending the Extended-BW-NR feature bit in Feature-List-ID-2. The P-GW also sends "APN-Aggregate-Max-Bitrate-UL" and "APN-Aggregate-Max-Bitrate-DL" set to their upper limits in bits per second and the extended bandwidth values in the new AVPs "Extended-APN-AMBR-UL" and "Extended-APN-AMBR-DL".

7. The PCRF sends CCA-I, advertising DCNR by sending the Extended-BW-NR feature bit in Feature-List-ID-2. The PCRF also sends "APN-Aggregate-Max-Bitrate-UL" and "APN-Aggregate-Max-Bitrate-DL" set to their upper limits in bps and the extended bandwidth values in the new AVPs "Extended-APN-AMBR-UL" and "Extended-APN-AMBR-DL". The PCRF may offer the same extended APN-AMBR values requested by the PCEF or may modify them; the P-GW enforces the APN-AMBR values accordingly.

8. The P-GW honors the APN-AMBR values as offered by the PCRF and sends the extended APN-AMBR values in the existing APN-AMBR IE in the Create Session Response.

9.
The MME computes the UE-AMBR values and sends the extended UE-AMBR values in the new IEs "Extended UE Aggregate Maximum Bit Rate Downlink" and "Extended UE Aggregate Maximum Bit Rate Uplink", while setting the legacy UE AMBR Uplink and UE AMBR Downlink values to the maximum allowed value of 10 Gbps in the Initial Context Setup Request. The MME sends APN-AMBR values up to 65.2 Gbps in the existing APN-AMBR IE in the NAS Activate Default EPS Bearer Context Request (Attach Accept). If the APN-AMBR values are beyond 65.2 Gbps, the MME sends the extended APN-AMBR values in the new IE "Extended APN Aggregate Maximum Bit Rate". If the ULA is received with Access-Restriction carrying "NR as Secondary RAT Not Allowed", the MME sends the Initial Context Setup Request with the NR Restriction bit set in the Handover Restriction List IE. The MME also sets the "RestrictDCNR" bit to "Use of dual connectivity with NR is restricted" in the EPS network feature support IE of the ATTACH ACCEPT message; accordingly, the UE indicates to the upper layers that dual connectivity with NR is restricted. If the DCNR feature is not configured at the MME-service or call-control-profile level, the MME likewise sets the "RestrictDCNR" bit to "Use of dual connectivity with NR is restricted" in the EPS network feature support IE of the ATTACH ACCEPT message, and the UE indicates to the upper layers that dual connectivity with NR is restricted.

10. The eNodeB sends the Initial Context Setup Response. If the master eNodeB decides to establish the bearer on the secondary eNodeB, the F-TEID of the secondary eNodeB (transport layer address and TEID) may be sent in this step. Whether the bearer is established on the master or secondary eNodeB is transparent to the MME.

11. The eNodeB sends Uplink NAS Transport with the NAS message Attach Complete - Activate Default EPS Bearer Context Accept.

12. The MME sends the Modify Bearer Request to the S-GW with the S1-U F-TEID details received in the Initial Context Setup Response.

13. The MME receives the Modify Bearer Response from the S-GW.

Supported Standards

The 5G Non-Standalone (NSA) feature complies with the following standards:

3GPP: General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access

3GPP: Policy and Charging Control (PCC)

3GPP Evolved Packet System (EPS); Evolved General Packet Radio Service (GPRS) Tunneling Protocol for Control plane (GTPv2-C); Stage 3

Configuring 5G NSA for SAEGW

This section describes how to configure 5G NSA to support the SAEGW. Configuring 5G NSA on the SAEGW involves:

Enabling DCNR in P-GW Service, on page 42
Configuring Bearer Duration Statistics for URLLC QCI, on page 45
Configuring EGTPC QCI Statistics for URLLC QCI, on page 45
Configuring Extended Bandwidth with New Radio, on page 43
Configuring Network Initiated Setup/Teardown for URLLC QCI, on page 44
Configuring URLLC QCI in APN Configuration, on page 44
Configuring URLLC QCI in Charging Action, on page 43
Configuring URLLC QCI in QCI QoS Mapping Table, on page 43

Enabling DCNR in P-GW Service

Use the following configuration to enable Dual Connectivity with New Radio (DCNR) to support 5G Non-Standalone (NSA).

configure
   context context_name
      pgw-service service_name
         [ no ] dcnr
         end

NOTES:

pgw-service service_name: Creates a P-GW service or configures an existing P-GW service. service_name must be an alphanumeric string of 1 through 63 characters.

no: Disables the DCNR configuration. The dcnr CLI command is disabled by default.

Configuring Extended Bandwidth with New Radio

Use the following configuration to configure extended bandwidth with new-radio in IMS Authorization Service Configuration mode.

configure
   context context_name
      ims-auth-service ims_auth_service_name
         policy-control
            diameter encode-supported-features extended-bw-newradio
            [ no ] diameter encode-supported-features
            end

NOTES:

ims-auth-service ims_auth_service_name: Creates an IMS authorization service. ims_auth_service_name must be an alphanumeric string of 1 through 63 characters.

policy-control: Configures the Diameter authorization and policy control parameters for IMS authorization.

extended-bw-newradio: Enables the extended bandwidth with New Radio feature.

no: Removes the configured supported features.

Configuring URLLC QCI in QCI QoS Mapping Table

Use the following configuration to configure a URLLC QCI in the QCI QoS mapping table.

configure
   qci-qos-mapping qci_qos_mapping
      [ no ] qci qci_val
      end

NOTES:

qci-qos-mapping qci_qos_mapping: Specifies the map name. qci_qos_mapping must be an alphanumeric string of 1 through 63 characters.

qci qci_val: Specifies the QoS Class Identifier. qci_val must be an integer from 1 to 9, or one of 65, 66, 69, 70, 80, 82, and 83.

no: Disables the QCI value.

Configuring URLLC QCI in Charging Action

Use the following configuration to configure a URLLC QCI in Charging Action Configuration mode.

configure
   active-charging service service_name
      charging-action charging_action_name
         qos-class-identifier qos_class_identifier
         no qos-class-identifier
         end

NOTES:

active-charging service service_name: Specifies the name of the active charging service. service_name must be an alphanumeric string of 1 through 15 characters.

charging-action charging_action_name: Creates a charging action. charging_action_name must be an alphanumeric string of 1 through 63 characters.

qos-class-identifier qos_class_identifier: Specifies the QoS Class Identifier. qos_class_identifier must be an integer from 1 to 9, or one of 65, 66, 69, 70, 80, 82, and 83.

no: Disables the QoS Class Identifier.

Configuring URLLC QCI in APN Configuration

Use the following configuration to configure a URLLC QCI in APN Configuration mode.

configure
   context context_name
      apn apn_name
         qos rate-limit direction { downlink | uplink } qci qci_val
         no qos rate-limit direction { downlink | uplink }
         end

NOTES:

qos rate-limit: Configures the action on a subscriber traffic flow that violates or exceeds the peak/committed data rate under the traffic shaping and policing functionality.

direction { downlink | uplink }: Specifies the direction of traffic on which this QoS configuration needs to be applied. downlink applies the specified limits and actions to the downlink; uplink applies the specified limits and actions to the uplink.

qci qci_val: Specifies the QoS Class Identifier. qci_val must be an integer from 1 to 9, or one of 80, 82, and 83.

no: Disables the QoS data rate limit configuration for the APN.

Configuring Network Initiated Setup/Teardown for URLLC QCI

Use the following configuration to configure network-initiated setup/teardown statistics/KPIs for a URLLC QCI.

configure
   transaction-rate nw-initiated-setup-teardown-events qci qci_val
   [ default | no ] transaction-rate nw-initiated-setup-teardown-events

qci
   end

NOTES:

transaction-rate nw-initiated-setup-teardown-events: Enables operators to set the QoS Class Identifier (QCI) value for use in tracking the Network Initiated Setup/Teardown Events per Second key performance indicator (KPI).

qci qci_val: Specifies the QoS Class Identifier for which these events need to be incremented. qci_val must be an integer from 1 to 9, or one of 65, 66, 69, 70, 80, 82, and 83.

no: Disables the collection of network-initiated setup/teardown events for the specified QCI value.

default: Returns the setting to its default value. The default is for network-initiated setup/teardown events to be tracked for all supported QCI values.

Configuring Bearer Duration Statistics for URLLC QCI

Use the following configuration to configure QCI-based duration statistics for a URLLC QCI.

configure
   context context_name
      apn apn_name
         [ no ] bearer-duration-stats qci qci_val
         end

NOTES:

apn apn_name: Creates or deletes an Access Point Name (APN) template and enters APN Configuration mode within the current context. apn_name specifies a name for the APN template as a case-insensitive alphanumeric string of 1 through 62 characters.

bearer-duration-stats: Enables or disables per-QCI call duration statistics for dedicated bearers.

qci qci_val: Specifies the QoS Class Identifier. qci_val must be an integer from 1 to 9, or one of 80, 82, and 83.

no: Disables per-QCI call duration statistics.

Configuring EGTPC QCI Statistics for URLLC QCI

Use the following configuration to configure QCI-based EGTPC statistics for a URLLC QCI.

configure
   context context_name
      apn apn_name
         [ no ] egtpc-qci-stats { qci80 | qci82 | qci83 }
         default egtpc-qci-stats
         end

NOTES:

apn apn_name: Creates or deletes an Access Point Name (APN) template and enters APN Configuration mode within the current context. apn_name specifies a name for the APN template as a case-insensitive alphanumeric string of 1 through 62 characters.

egtpc-qci-stats: Enables or disables an APN candidate list for the apn-expansion bulkstats schema.

qci80: Configures apn-qci-egtpc statistics for QCI 80.

qci82: Configures apn-qci-egtpc statistics for QCI 82.

qci83: Configures apn-qci-egtpc statistics for QCI 83.

no: Disables APN candidate list(s) for the apn-expansion bulkstats schema.

default: Disables the APN candidate list for the apn-expansion bulkstats schema.

Monitoring and Troubleshooting

This section provides information regarding the show commands and bulk statistics available to monitor and troubleshoot the 5G NSA feature.

Show Commands and Outputs

This section provides information on show commands and their corresponding outputs for the DCNR feature.

show pgw-service name

The output of this command includes the "DCNR" field to indicate whether the DCNR feature is enabled or disabled at the P-GW service.

show ims-authorization service name

The output of this command includes the following fields:

Diameter Policy Control: Supported Features: extended-bw-nr

show gtpu statistics

The output of this command includes the following fields:

Uplink Packets: Displays the total number of QCI 80, QCI 82, and QCI 83 uplink packets.
Uplink Bytes: Displays the total number of QCI 80, QCI 82, and QCI 83 uplink bytes.
Downlink Packets: Displays the total number of QCI 80, QCI 82, and QCI 83 downlink packets.
Downlink Bytes: Displays the total number of QCI 80, QCI 82, and QCI 83 downlink bytes.
Packets Discarded: Displays the total number of discarded QCI 80, QCI 82, and QCI 83 packets.
Bytes Discarded: Displays the total number of discarded QCI 80, QCI 82, and QCI 83 bytes.

show apn statistics all

The output of this command includes the following fields:

4G Bearers Released By Reasons:

Admin disconnect: Displays the number of dedicated bearers released due to an administrative clear from the P-GW for QCI 80, QCI 82, and QCI 83.

Bearer Active: Displays the total number of QCI 80, QCI 82, and QCI 83 active bearers.
Bearer setup: Displays the total number of QCI 80, QCI 82, and QCI 83 bearers set up.
Bearer Released: Displays the total number of QCI 80, QCI 82, and QCI 83 released bearers.
Bearer Rejected: Displays the total number of QCI 80, QCI 82, and QCI 83 rejected bearers.
Uplink Bytes forwarded: Displays the total number of QCI 80, QCI 82, and QCI 83 uplink bytes forwarded.
Uplink pkts forwarded: Displays the total number of QCI 80, QCI 82, and QCI 83 uplink packets forwarded.
Uplink Bytes dropped: Displays the total number of QCI 80, QCI 82, and QCI 83 uplink bytes dropped.
Uplink pkts dropped: Displays the total number of QCI 80, QCI 82, and QCI 83 uplink packets dropped.
Uplink Bytes dropped (MBR Excd): Displays the total number of QCI 80, QCI 82, and QCI 83 uplink bytes dropped due to the MBR being exceeded.
Uplink pkts dropped (MBR Excd): Displays the total number of QCI 80, QCI 82, and QCI 83 uplink packets dropped due to the MBR being exceeded.
Downlink Bytes forwarded: Displays the total number of QCI 80, QCI 82, and QCI 83 downlink bytes forwarded.
Downlink pkts forwarded: Displays the total number of QCI 80, QCI 82, and QCI 83 downlink packets forwarded.
Downlink Bytes dropped: Displays the total number of QCI 80, QCI 82, and QCI 83 downlink bytes dropped.
Downlink pkts dropped: Displays the total number of QCI 80, QCI 82, and QCI 83 downlink packets dropped.
Downlink Bytes dropped (MBR Excd): Displays the total number of QCI 80, QCI 82, and QCI 83 downlink bytes dropped due to the MBR being exceeded.
Downlink pkts dropped (MBR Excd): Displays the total number of QCI 80, QCI 82, and QCI 83 downlink packets dropped due to the MBR being exceeded.
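Counters such as the ones above can be combined into simple health indicators. The following is an illustrative sketch (not StarOS code) that derives a per-QCI uplink packet drop percentage; the counter names follow the APN-schema bulk statistics, and the dictionary input is assumed to come from whatever collection mechanism the operator uses.

```python
# Illustrative helper: per-QCI uplink packet drop percentage from counters
# named after the APN-schema bulk statistics (qci<n>-uplinkpkt-fwd / -drop).
def uplink_drop_pct(counters, qci):
    fwd = counters.get(f"qci{qci}-uplinkpkt-fwd", 0)
    drop = counters.get(f"qci{qci}-uplinkpkt-drop", 0)
    total = fwd + drop
    # Avoid division by zero when no traffic was seen for this QCI.
    return 0.0 if total == 0 else 100.0 * drop / total
```

For example, 900 forwarded and 100 dropped QCI 80 uplink packets yield a 10% drop rate.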
show pgw-service statistics all verbose

The output of this command includes the following fields:

Bearers By QoS characteristics:

Active: Displays the total number of active bearers for QCI 80, QCI 82, and QCI 83.

Released: Displays the total number of bearers released for QCI 80, QCI 82, and QCI 83.
Setup: Displays the total number of bearers set up for QCI 80, QCI 82, and QCI 83.

Data Statistics Per PDN-Type:

Uplink:
Packets: Displays the total number of uplink packets forwarded for QCI 80, QCI 82, and QCI 83.
Bytes: Displays the total number of uplink bytes forwarded for QCI 80, QCI 82, and QCI 83.
Dropped Packets: Displays the total number of uplink packets dropped for QCI 80, QCI 82, and QCI 83.
Dropped Bytes: Displays the total number of uplink bytes dropped for QCI 80, QCI 82, and QCI 83.

Downlink:
Packets: Displays the total number of downlink packets forwarded for QCI 80, QCI 82, and QCI 83.
Bytes: Displays the total number of downlink bytes forwarded for QCI 80, QCI 82, and QCI 83.
Dropped Packets: Displays the total number of downlink packets dropped for QCI 80, QCI 82, and QCI 83.
Dropped Bytes: Displays the total number of downlink bytes dropped for QCI 80, QCI 82, and QCI 83.

show sgw-service statistics all verbose

The output of this command includes the following fields:

Bearers By QoS characteristics:

Active: Displays the total number of active EPS bearers for QCI 80, QCI 82, and QCI 83.
Released: Displays the total number of EPS bearers released for QCI 80, QCI 82, and QCI 83.
Setup: Displays the total number of EPS bearers set up for QCI 80, QCI 82, and QCI 83.
Modified: Displays the total number of EPS bearers modified for QCI 80, QCI 82, and QCI 83.

Dedicated Bearers Released By Reason:

P-GW Initiated: Displays the total number of dedicated EPS bearers for QCI 80, QCI 82, and QCI 83 released with the reason P-GW initiated on the S-GW.
S1 Error Indication: Displays the total number of dedicated EPS bearers for QCI 80, QCI 82, and QCI 83 released with the reason S1 error indication on the S-GW.
S5 Error Indication: Displays the total number of dedicated EPS bearers for QCI 80, QCI 82, and QCI 83 released with the reason S5 error indication on the S-GW.
S4 Error Indication: Displays the total number of dedicated EPS bearers for QCI 80, QCI 82, and QCI 83 released with the reason S4 error indication on the S-GW.
S12 Error Indication: Displays the total number of dedicated EPS bearers for QCI 80, QCI 82, and QCI 83 released with the reason S12 error indication on the S-GW.

Local: Displays the total number of dedicated EPS bearers for QCI 80, QCI 82, and QCI 83 released with the reason local error indication on the S-GW.
PDN Down: Displays the total number of dedicated EPS bearers for QCI 80, QCI 82, and QCI 83 released due to PDN cleanup on the S-GW.
Path Failure S1-U: Displays the total number of dedicated EPS bearers for QCI 80, QCI 82, and QCI 83 released with the reason S1-U path failure on the S-GW.
Path Failure S5-U: Displays the total number of dedicated EPS bearers for QCI 80, QCI 82, and QCI 83 released with the reason S5-U path failure on the S-GW.
Path Failure S5: Displays the total number of dedicated EPS bearers for QCI 80, QCI 82, and QCI 83 released with the reason S5 path failure on the S-GW.
Path Failure S11: Displays the total number of dedicated bearers for QCI 80, QCI 82, and QCI 83 released due to path failure on the S11 interface.
Path Failure S4-U: Displays the total number of dedicated bearers for QCI 80, QCI 82, and QCI 83 released due to path failure on the S4-U interface.
Path Failure S12: Displays the total number of dedicated bearers for QCI 80, QCI 82, and QCI 83 released due to path failure on the S12 interface.
Inactivity Timeout: Displays the total number of dedicated bearers for QCI 80, QCI 82, and QCI 83 released due to the inactivity timeout.
Other: Displays the total number of dedicated bearers for QCI 80, QCI 82, and QCI 83 released due to other reasons.

Data Statistics Per Interface (S1-U/S11-U/S4-U/S12/S5-U/S8-U), Total Data Statistics:

Uplink:
Packets: Displays the total number of uplink data packets received by the S-GW for a bearer with QCI 80, QCI 82, and QCI 83.
Bytes: Displays the total number of uplink data bytes received by the S-GW for a bearer with QCI 80, QCI 82, and QCI 83.
Dropped Packets: Displays the total number of uplink data packets dropped by the S-GW for a bearer with QCI 80, QCI 82, and QCI 83.
Dropped Bytes: Displays the total number of uplink data bytes dropped by the S-GW for a bearer with QCI 80, QCI 82, and QCI 83.

Downlink:
Packets: Displays the total number of downlink data packets received by the S-GW for a bearer with QCI 80, QCI 82, and QCI 83.
Bytes: Displays the total number of downlink data bytes received by the S-GW for a bearer with QCI 80, QCI 82, and QCI 83.
Dropped Packets: Displays the total number of downlink data packets dropped by the S-GW for a bearer with QCI 80, QCI 82, and QCI 83.

Dropped Bytes: Displays the total number of downlink data bytes dropped by the S-GW for a bearer with QCI 80, QCI 82, and QCI 83.

show saegw-service statistics all verbose

The output of this command includes the following fields:

Bearers By QoS characteristics:

Active: Displays the total number of QCI 80, QCI 82, and QCI 83 active bearers.
Released: Displays the total number of QCI 80, QCI 82, and QCI 83 released bearers.
Setup: Displays the total number of QCI 80, QCI 82, and QCI 83 bearers set up.

Data Statistics Per PDN-Type:

Uplink:
Packets: Displays the total number of QCI 80, QCI 82, and QCI 83 uplink packets forwarded.
Bytes: Displays the total number of QCI 80, QCI 82, and QCI 83 uplink bytes forwarded.
Dropped Packets: Displays the total number of QCI 80, QCI 82, and QCI 83 uplink packets dropped.
Dropped Bytes: Displays the total number of QCI 80, QCI 82, and QCI 83 uplink bytes dropped.

Downlink:
Packets: Displays the total number of QCI 80, QCI 82, and QCI 83 downlink packets forwarded.
Bytes: Displays the total number of QCI 80, QCI 82, and QCI 83 downlink bytes forwarded.
Dropped Packets: Displays the total number of QCI 80, QCI 82, and QCI 83 downlink packets dropped.
Dropped Bytes: Displays the total number of QCI 80, QCI 82, and QCI 83 downlink bytes dropped.

Bulk Statistics

The following statistics are added in support of the 5G NSA feature.

APN Schema

The following 5G NSA bulk statistics are available in the APN schema.

qci80-actbear: The total number of QoS Class Index (QCI) 80 active bearers.
qci82-actbear: The total number of QCI 82 active bearers.
qci83-actbear: The total number of QCI 83 active bearers.
qci80-setupbear: The total number of QCI 80 bearers set up.

qci82-setupbear: The total number of QCI 82 bearers set up.
qci83-setupbear: The total number of QCI 83 bearers set up.
qci80-relbear: The total number of QCI 80 released bearers.
qci82-relbear: The total number of QCI 82 released bearers.
qci83-relbear: The total number of QCI 83 released bearers.
qci80-uplinkpkt-fwd: The total number of QCI 80 uplink packets forwarded.
qci82-uplinkpkt-fwd: The total number of QCI 82 uplink packets forwarded.
qci83-uplinkpkt-fwd: The total number of QCI 83 uplink packets forwarded.
qci80-dwlinkpkt-fwd: The total number of QCI 80 downlink packets forwarded.
qci82-dwlinkpkt-fwd: The total number of QCI 82 downlink packets forwarded.
qci83-dwlinkpkt-fwd: The total number of QCI 83 downlink packets forwarded.
qci80-uplinkbyte-fwd: The total number of QCI 80 uplink bytes forwarded.
qci82-uplinkbyte-fwd: The total number of QCI 82 uplink bytes forwarded.
qci83-uplinkbyte-fwd: The total number of QCI 83 uplink bytes forwarded.
qci80-dwlinkbyte-fwd: The total number of QCI 80 downlink bytes forwarded.
qci82-dwlinkbyte-fwd: The total number of QCI 82 downlink bytes forwarded.
qci83-dwlinkbyte-fwd: The total number of QCI 83 downlink bytes forwarded.
qci80-uplinkpkt-drop: The total number of QCI 80 uplink packets dropped.
qci82-uplinkpkt-drop: The total number of QCI 82 uplink packets dropped.
qci83-uplinkpkt-drop: The total number of QCI 83 uplink packets dropped.

qci80-dwlinkpkt-drop: The total number of QCI 80 downlink packets dropped.
qci82-dwlinkpkt-drop: The total number of QCI 82 downlink packets dropped.
qci83-dwlinkpkt-drop: The total number of QCI 83 downlink packets dropped.
qci80-uplinkbyte-drop: The total number of QCI 80 uplink bytes dropped.
qci82-uplinkbyte-drop: The total number of QCI 82 uplink bytes dropped.
qci83-uplinkbyte-drop: The total number of QCI 83 uplink bytes dropped.
qci80-dwlinkbyte-drop: The total number of QCI 80 downlink bytes dropped.
qci82-dwlinkbyte-drop: The total number of QCI 82 downlink bytes dropped.
qci83-dwlinkbyte-drop: The total number of QCI 83 downlink bytes dropped.
qci80-uplinkpkt-drop-mbrexcd: The total number of QCI 80 uplink packets dropped due to the MBR being exceeded.
qci82-uplinkpkt-drop-mbrexcd: The total number of QCI 82 uplink packets dropped due to the MBR being exceeded.
qci83-uplinkpkt-drop-mbrexcd: The total number of QCI 83 uplink packets dropped due to the MBR being exceeded.
qci80-dwlinkpkt-drop-mbrexcd: The total number of QCI 80 downlink packets dropped due to the MBR being exceeded.
qci82-dwlinkpkt-drop-mbrexcd: The total number of QCI 82 downlink packets dropped due to the MBR being exceeded.
qci83-dwlinkpkt-drop-mbrexcd: The total number of QCI 83 downlink packets dropped due to the MBR being exceeded.
qci80-uplinkbyte-drop-mbrexcd: The total number of QCI 80 uplink bytes dropped due to the MBR being exceeded.
qci82-uplinkbyte-drop-mbrexcd: The total number of QCI 82 uplink bytes dropped due to the MBR being exceeded.
qci83-uplinkbyte-drop-mbrexcd: The total number of QCI 83 uplink bytes dropped due to the MBR being exceeded.
qci80-dwlinkbyte-drop-mbrexcd: The total number of QCI 80 downlink bytes dropped due to the MBR being exceeded.

qci82-dwlinkbyte-drop-mbrexcd: The total number of QCI 82 downlink bytes dropped due to the MBR being exceeded.
qci83-dwlinkbyte-drop-mbrexcd: The total number of QCI 83 downlink bytes dropped due to the MBR being exceeded.
qci80-rejbearer: The total number of QCI 80 rejected bearers.
qci82-rejbearer: The total number of QCI 82 rejected bearers.
qci83-rejbearer: The total number of QCI 83 rejected bearers.
sessstat-bearrel-ded-admin-clear-qci80: The total number of dedicated bearers released due to an admin clear from the P-GW for QCI 80.
sessstat-bearrel-ded-admin-clear-qci82: The total number of dedicated bearers released due to an admin clear from the P-GW for QCI 82.
sessstat-bearrel-ded-admin-clear-qci83: The total number of dedicated bearers released due to an admin clear from the P-GW for QCI 83.

System Schema

The following 5G NSA bulk statistics are available in the System schema.

sess-bearerdur-5sec-qci80: The current number of bearer sessions with a duration of 5 seconds and having a QoS Class Index (QCI) of 80.
sess-bearerdur-5sec-qci82: The current number of bearer sessions with a duration of 5 seconds and having a QCI of 82.
sess-bearerdur-5sec-qci83: The current number of bearer sessions with a duration of 5 seconds and having a QCI of 83.
sess-bearerdur-10sec-qci80: The current number of bearer sessions with a duration of 10 seconds and having a QCI of 80.
sess-bearerdur-10sec-qci82: The current number of bearer sessions with a duration of 10 seconds and having a QCI of 82.
sess-bearerdur-10sec-qci83: The current number of bearer sessions with a duration of 10 seconds and having a QCI of 83.
sess-bearerdur-30sec-qci80: The current number of bearer sessions with a duration of 30 seconds and having a QCI of 80.

sess-bearerdur-30sec-qci82: The current number of bearer sessions with a duration of 30 seconds and having a QCI of 82.
sess-bearerdur-30sec-qci83: The current number of bearer sessions with a duration of 30 seconds and having a QCI of 83.
sess-bearerdur-1min-qci80: The current number of bearer sessions with a duration of 1 minute and having a QCI of 80.
sess-bearerdur-1min-qci82: The current number of bearer sessions with a duration of 1 minute and having a QCI of 82.
sess-bearerdur-1min-qci83: The current number of bearer sessions with a duration of 1 minute and having a QCI of 83.
sess-bearerdur-2min-qci80: The current number of bearer sessions with a duration of 2 minutes and having a QCI of 80.
sess-bearerdur-2min-qci82: The current number of bearer sessions with a duration of 2 minutes and having a QCI of 82.
sess-bearerdur-2min-qci83: The current number of bearer sessions with a duration of 2 minutes and having a QCI of 83.
sess-bearerdur-5min-qci80: The current number of bearer sessions with a duration of 5 minutes and having a QCI of 80.
sess-bearerdur-5min-qci82: The current number of bearer sessions with a duration of 5 minutes and having a QCI of 82.
sess-bearerdur-5min-qci83: The current number of bearer sessions with a duration of 5 minutes and having a QCI of 83.
sess-bearerdur-15min-qci80: The current number of bearer sessions with a duration of 15 minutes and having a QCI of 80.
sess-bearerdur-15min-qci82: The current number of bearer sessions with a duration of 15 minutes and having a QCI of 82.
sess-bearerdur-15min-qci83: The current number of bearer sessions with a duration of 15 minutes and having a QCI of 83.
sess-bearerdur-30min-qci80: The current number of bearer sessions with a duration of 30 minutes and having a QCI of 80.
sess-bearerdur-30min-qci82: The current number of bearer sessions with a duration of 30 minutes and having a QCI of 82.
sess-bearerdur-30min-qci83: The current number of bearer sessions with a duration of 30 minutes and having a QCI of 83.
sess-bearerdur-1hr-qci80: The current number of bearer sessions with a duration of 1 hour and having a QCI of 80.

sess-bearerdur-1hr-qci82: The current number of bearer sessions with a duration of 1 hour and having a QCI of 82.
sess-bearerdur-1hr-qci83: The current number of bearer sessions with a duration of 1 hour and having a QCI of 83.
sess-bearerdur-4hr-qci80: The current number of bearer sessions with a duration of 4 hours and having a QCI of 80.
sess-bearerdur-4hr-qci82: The current number of bearer sessions with a duration of 4 hours and having a QCI of 82.
sess-bearerdur-4hr-qci83: The current number of bearer sessions with a duration of 4 hours and having a QCI of 83.
sess-bearerdur-12hr-qci80: The current number of bearer sessions with a duration of 12 hours and having a QCI of 80.
sess-bearerdur-12hr-qci82: The current number of bearer sessions with a duration of 12 hours and having a QCI of 82.
sess-bearerdur-12hr-qci83: The current number of bearer sessions with a duration of 12 hours and having a QCI of 83.
sess-bearerdur-24hr-qci80: The current number of bearer sessions with a duration of 24 hours and having a QCI of 80.
sess-bearerdur-24hr-qci82: The current number of bearer sessions with a duration of 24 hours and having a QCI of 82.
sess-bearerdur-24hr-qci83: The current number of bearer sessions with a duration of 24 hours and having a QCI of 83.
sess-bearerdur-over24hr-qci80: The current number of bearer sessions with a duration of over 24 hours and having a QCI of 80.
sess-bearerdur-over24hr-qci82: The current number of bearer sessions with a duration of over 24 hours and having a QCI of 82.
sess-bearerdur-over24hr-qci83: The current number of bearer sessions with a duration of over 24 hours and having a QCI of 83.
sess-bearerdur-2day-qci80: The current number of bearer sessions with a duration of 2 days and having a QCI of 80.
sess-bearerdur-2day-qci82: The current number of bearer sessions with a duration of 2 days and having a QCI of 82.
sess-bearerdur-2day-qci83: The current number of bearer sessions with a duration of 2 days and having a QCI of 83.
sess-bearerdur-4day-qci80: The current number of bearer sessions with a duration of 4 days and having a QCI of 80.

sess-bearerdur-4day-qci82: The current number of bearer sessions with a duration of 4 days and having a QCI of 82.
sess-bearerdur-4day-qci83: The current number of bearer sessions with a duration of 4 days and having a QCI of 83.
sess-bearerdur-5day-qci80: The current number of bearer sessions with a duration of 5 days and having a QCI of 80.
sess-bearerdur-5day-qci82: The current number of bearer sessions with a duration of 5 days and having a QCI of 82.
sess-bearerdur-5day-qci83: The current number of bearer sessions with a duration of 5 days and having a QCI of 83.
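The System-schema counters above bucket dedicated-bearer lifetimes per QCI. The mapping from a bearer duration to a counter name can be sketched as follows; this is an illustrative sketch only, and the bucket boundaries (and the handling of durations beyond 24 hours relative to the 2day/4day/5day buckets) are assumptions, since the document lists the counter names but not their exact boundaries.

```python
# Sketch: map a bearer duration (seconds) to the System-schema counter it
# would fall under (sess-bearerdur-<bucket>-qci<n>). Bucket boundaries are
# assumptions; the schema also defines over24hr/2day/4day/5day buckets whose
# exact boundaries are not stated in this document.
BUCKETS = [(5, "5sec"), (10, "10sec"), (30, "30sec"), (60, "1min"),
           (120, "2min"), (300, "5min"), (900, "15min"), (1800, "30min"),
           (3600, "1hr"), (14400, "4hr"), (43200, "12hr"), (86400, "24hr")]

def duration_bucket(seconds, qci):
    for limit, name in BUCKETS:
        if seconds <= limit:
            return f"sess-bearerdur-{name}-qci{qci}"
    # Durations beyond 24 hours fall into the over24hr (and multi-day) buckets.
    return f"sess-bearerdur-over24hr-qci{qci}"
```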

CHAPTER 6

API-based VNFM Upgrade Process

Feature Summary and Revision History, on page 57
Feature Description, on page 58
VNFM Upgrade Workflow, on page 58
Initiating the VNFM Upgrade, on page 60
Limitations, on page 63

Feature Summary and Revision History

Summary Data

Applicable Product(s) or Functional Area: All
Applicable Platform(s): UGP
Feature Default: Disabled - Configuration required
Related Features in this Release: Not Applicable
Related Documentation: Ultra Gateway Platform System Administration Guide; Ultra M Solutions Guide; Ultra Services Platform Deployment Automation Guide

Revision History

Revision Details: First introduced.
Release: 6.2

Feature Description

Important: This feature is not fully qualified in this release and is available only for testing purposes. For more information, contact your Cisco account representative.

In releases prior to 6.2, the USP-based VNF had to be completely terminated in order to perform an upgrade of the ESC-based VNFM. With this release, the ESC-based VNFM can optionally be upgraded as part of a rolling patch upgrade process in order to preserve the operational state of the VNF and UAS deployments.

Important: The VNFM upgrade process is supported for Ultra M deployments that leverage the Hyper-Converged architecture and for stand-alone AutoVNF deployments.

VNFM Upgrade Workflow

This section describes the sequence in which the rolling patch upgrade of the VNFM occurs. Figure 5: VNFM Upgrade Process Flow, on page 59, illustrates the VNFM upgrade process for Ultra M deployments. For stand-alone AutoVNF deployments, the upgrade software image is uploaded to the onboarding server (step 1) and the upgrade command is executed from AutoVNF (step 3).

Figure 5: VNFM Upgrade Process Flow

1. Onboard the new USP ISO containing the VNFM upgrade image to the Ultra M Manager node.

2. Update the deployment network service description (NSD) to identify the new package. Package information is defined in the VNF package descriptor (vnf-packaged) as follows:

<---SNIP--->
vnf-packaged <upgrade_package_descriptor_name>
  location <package_url>
  validate-signature false
  configuration staros
    external-url /home/ubuntu/system.cfg
<---SNIP--->

The package must then be referenced in the virtual descriptor unit (VDU) pertaining to the VNFM:

<---SNIP--->
vdu esc
  vdu-type cisco-esc
  login-credential esc_login
  netconf-credential esc_netconf
  image vnf-package
  vnf-rack vnf-rack1
  vnf-package primary <upgrade_package_descriptor_name>
  vnf-package secondary <previous_package_descriptor_name>
<---SNIP--->

Important: The secondary image is used as a fallback in the event an issue is encountered during the upgrade process. If no secondary image is specified, the upgrade process stops and generates an error log.

3. The rolling upgrade request is triggered through AutoDeploy, which initiates the process with AutoVNF.

4. AutoVNF determines which VNFM VM is active and which is standby by communicating with each of the VMs over the management interface.

5. AutoVNF triggers the shutdown of the standby VNFM VM via the VIM.

6. AutoVNF waits until the VIM confirms that the standby VNFM VM has been completely terminated.

7. AutoVNF initiates the deployment of a new VNFM VM via the VIM using the upgrade image. The VNFM VM is deployed in standby mode.

8. The standby VNFM VM synchronizes data with the active VNFM VM.

9. AutoVNF waits until the VIM confirms that the new VM has been deployed and is in standby mode. If it detects an issue with the VM, AutoVNF re-initiates the VNFM VM with the previous image. If no issues are detected, AutoVNF proceeds with the upgrade process.

10. Repeat steps 4 through 7 for the VNFM VM that is currently active.

Initiating the VNFM Upgrade

VNFM upgrades are initiated through a remote procedure call (RPC) executed from the ConfD command line interface (CLI) or via a NETCONF API.

Via the CLI

To perform an upgrade using the CLI, log in to AutoDeploy (Ultra M deployments) or AutoVNF (stand-alone AutoVNF deployments) as the ConfD CLI admin user and execute the following command:

update-sw nsd-id <nsd_name> rolling { true | false } vnfd <vnfd_name> vnf-package <pkg_id>

NOTES:

<nsd_name> and <vnfd_name> are the names of the network service descriptor (NSD) file and VNF descriptor (VNFD), respectively, in which the VNF component (VNFC) for the VNFM is defined.

If the rolling false operator is used, the upgrade terminates the entire deployment.
In this scenario, the vnfd <vnfd_name> operator should not be included in the command. If it is included, a transaction ID is generated for the upgrade and the transaction fails. The AutoVNF upstart log reflects this status.

<pkg_id> is the name of the USP ISO containing the upgraded VNFM VM image. Ensure that the upgrade package is defined as a VNF package descriptor within the NSD and that it is specified as the primary package in the VNFM VDU configuration.
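For example, using the illustrative NSD, VNFD, and package descriptor names that appear in the Example RPC later in this chapter (your deployment's names will differ), a rolling VNFM upgrade could be invoked as:

```
update-sw nsd-id fremont-autovnf rolling true vnfd esc vnf-package usp_6_2t
```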

Ensure that the current (pre-upgrade) package is specified as the secondary package in the VNFM VDU configuration in order to provide rollback support in the event of errors.

Via the NETCONF API

Operation: nsd:update-sw

Namespace: xmlns:nsd="

Parameters:

Parameter Name | Required | Type | Description
nsd | M | string | NSD name.
rolling | M | boolean | Specifies whether rolling upgrade is enabled (true) or disabled (false).
vnfd | M | string | VNFD name; mandatory in the case of a rolling upgrade.
package | M | string | Package descriptor name that should be used to update the VNFD instance identified by vnfd.

NOTES:

If the rolling false operator is used, the upgrade terminates the entire deployment. In this scenario, the vnfd <vnfd_name> operator should not be included in the command. If it is included, a transaction ID is generated for the upgrade and the transaction fails. The AutoVNF upstart log reflects this status.

Ensure that the upgrade package is defined as a VNF package descriptor within the NSD and that it is specified as the primary package in the VNFM VDU configuration.

Ensure that the current (pre-upgrade) package is specified as the secondary package in the VNFM VDU configuration in order to provide rollback support in the event of errors.

Example RPC

<nc:rpc message-id="urn:uuid:bac690a2-08af-4c9f c907d6e12ba"
  <nsd xmlns="
    <nsd-id>fremont-autovnf</nsd-id>
    <vim-identity>vim1</vim-identity>
    <vnfd xmlns="
      <vnfd-id>esc</vnfd-id>
      <vnf-type>esc</vnf-type>
      <version>6.0</version>
      <configuration>
        <boot-time>1800</boot-time>
        <set-vim-instance-name>true</set-vim-instance-name>
      </configuration>
      <external-connection-point>
        <vnfc>esc</vnfc>
        <connection-point>eth0</connection-point>
      </external-connection-point>
      <high-availability>true</high-availability>
      <vnfc>
        <vnfc-id>esc</vnfc-id>
        <health-check>
          <enabled>false</enabled>
        </health-check>
        <vdu>
          <vdu-id>esc</vdu-id>
        </vdu>
        <connection-point>
          <connection-point-id>eth0</connection-point-id>
          <virtual-link>
            <service-vl>mgmt</service-vl>
          </virtual-link>
        </connection-point>
        <connection-point>
          <connection-point-id>eth1</connection-point-id>
          <virtual-link>
            <service-vl>orch</service-vl>
          </virtual-link>
        </connection-point>
      </vnfc>
    </vnfd>
  </nsd>
  <vim xmlns="
    <vim-id>vim1</vim-id>
    <api-version>v2</api-version>
    <auth-url>
    <user>vim-admin-creds</user>
    <tenant>abcxyz</tenant>
  </vim>
  <secure-token xmlns="
    <secure-id>vim-admin-creds</secure-id>
    <user>abcxyz</user>
    <password>******</password>
  </secure-token>
  <vdu xmlns="
    <vdu-id>esc</vdu-id>
    <vdu-type>cisco-esc</vdu-type>
    <flavor>
      <vcpus>2</vcpus>
      <ram>4096</ram>
      <root-disk>40</root-disk>
      <ephemeral-disk>0</ephemeral-disk>
      <swap-disk>0</swap-disk>
    </flavor>
    <login-credential>esc_login</login-credential>
    <netconf-credential>esc_netconf</netconf-credential>
    <image>
      <vnf-package>usp_throttle</vnf-package>
    </image>
    <vnf-rack>abcxyz-vnf-rack</vnf-rack>
    <vnf-package>
      <primary>usp_6_2t</primary>
      <secondary>usp_throttle</secondary>
    </vnf-package>
    <volume/>
  </vdu>
  <secure-token xmlns="
    <secure-id>esc_login</secure-id>
    <user>admin</user>
    <password>******</password>
  </secure-token>
  <secure-token xmlns="
    <secure-id>esc_netconf</secure-id>
    <user>admin</user>
    <password>******</password>
  </secure-token>
  <vnf-packaged xmlns="
    <vnf-package-id>usp_throttle</vnf-package-id>
    <location>
    <validate-signature>false</validate-signature>
    <configuration>
      <name>staros</name>
      <external-url>
    </configuration>
  </vnf-packaged>
</config>

Limitations

The following limitations exist with the VNFM upgrade feature:

This functionality is only available after upgrading to the 6.2 release.

The rolling VNFM patch upgrade process can only be used to upgrade to new releases that have a compatible database schema. As new releases become available, Cisco will provide information as to whether this functionality can be used to perform the upgrade.

For Ultra M deployments, AutoDeploy and AutoIT must be upgraded before using this functionality. Upgrading these products terminates the VNF deployment.

For stand-alone AutoVNF deployments, AutoVNF must be upgraded before using this functionality. Upgrading it terminates the VNF deployment.

Ensure that no additional operations are running while performing an upgrade/rolling upgrade. The upgrade/rolling upgrade procedure should be performed only during a maintenance window.

66 Limitations API-based VNFM Upgrade Process 64

67 CHAPTER 7 API-based AutoDeploy, AutoIT and AutoVNF Upgrade Process Feature Summary and Revision History, on page 65 Feature Description (AutoDeploy and AutoIT), on page 66 AutoDeploy and AutoIT Upgrade Workflow, on page 66 Upgrading AutoDeploy or AutoIT, on page 66 Feature Description (AutoVNF), on page 67 AutoVNF Upgrade Workflow, on page 68 Initiating the AutoVNF Upgrade, on page 69 Limitations, on page 72 Feature Summary and Revision History Summary Data Applicable Product(s) or Functional Area All Applicable Platform(s) UGP Feature Default Disabled - Configuration required Related Features in this Release Not Applicable Related Documentation Ultra Gateway Platform System Administration Guide Ultra M Solutions Guide Ultra Services Platform Deployment Automation Guide Revision History Revision Details Release First introduced

Feature Description (AutoDeploy and AutoIT)

Important: This feature is not fully qualified in this release. It is available only for testing purposes. For more information, contact your Cisco Accounts representative.

In releases prior to 6.2, the USP-based VNF had to be completely terminated in order to upgrade AutoDeploy and AutoIT. With this release, these UAS modules can optionally be upgraded as part of a rolling upgrade process that preserves the operational state of the VNF and UAS deployments. The rolling upgrade process is possible as long as AutoDeploy and AutoIT were deployed in high availability (HA) mode, which allows their CDBs to be synchronized between the active and standby instances.

Important: The AutoDeploy and AutoIT rolling upgrade processes are supported for Ultra M deployments that leverage the Hyper-Converged architecture and for stand-alone AutoVNF deployments.

AutoDeploy and AutoIT Upgrade Workflow

The rolling upgrade process for AutoDeploy and AutoIT occurs as follows:

1. Onboard the new USP ISO containing the upgrade image to the Ultra M Manager node.

2. The rolling upgrade is triggered via a script on a separate machine other than the AutoDeploy/AutoIT VM.

3. The script terminates the first AutoDeploy or AutoIT VM instance.

4. Upon successful termination of the VM, the script deploys a new VM instance. If it detects an issue with the VM, the script re-initiates the VM with the previous image. If no issues are detected, the script proceeds with the upgrade process.

5. Repeat steps 3 and 4 for the second AutoDeploy or AutoIT VM instance.
Important If AutoDeploy and AutoIT were not deployed with HA mode enabled, or if you prefer to perform an upgrade through a complete reinstall, you must first terminate the current installation using the information and instructions in the Ultra Services Platform Deployment Automation Guide. Upgrading AutoDeploy or AutoIT AutoDeploy and AutoIT upgrades are performed by executing a script manually. 1. Log on to the AutoDeploy VM as the root user. 66

2. Initiate the upgrade from another VM:

1. Execute the upgrade script:

./boot_uas.py --kvm { --autodeploy | --autoit } --upgrade-uas

2. Enter the password for the user ubuntu at the prompt.

3. Enter the path and name for the upgrade image at the prompt.

3. Upon completion of the upgrade, check the software version:

1. Log in to the ConfD CLI as the admin user:

confd_cli -u admin -C

2. View the status:

show uas

Example command output:

uas version
uas state active
uas external-connection-point
INSTANCE IP    STATE   ROLE
               alive   CONFD-MASTER
               alive   CONFD-SLAVE
NAME           LAST HEARTBEAT
AutoIT-MASTER  :24:30
USPCFMWorker   :24:30
USPCHBWorker   :24:30
USPCWorker     :24:30

Feature Description (AutoVNF)

Important: This feature is not fully qualified in this release. It is available only for testing purposes. For more information, contact your Cisco Accounts representative.

In releases prior to 6.2, the USP-based VNF had to be completely terminated in order to upgrade AutoVNF. With this release, AutoVNF can optionally be upgraded as part of a rolling upgrade process that preserves the operational state of the VNF and UAS deployments.

Important: The AutoVNF upgrade process is supported for Ultra M deployments that leverage the Hyper-Converged architecture and for stand-alone AutoVNF deployments.

AutoVNF Upgrade Workflow

This section describes the sequence in which the AutoVNF upgrade is performed. Figure 6: AutoVNF Upgrade Process Flow, on page 68 illustrates the AutoVNF upgrade process for Ultra M deployments. For stand-alone AutoVNF deployments, the upgrade software image is uploaded to the onboarding server (step 1) and the upgrade command is executed from AutoVNF (step 3).

Figure 6: AutoVNF Upgrade Process Flow

1. Onboard the new USP ISO containing the AutoVNF upgrade image to the Ultra M Manager node.

2. Update the deployment network service description (NSD) to identify the new package. Package information is defined in the VNF package descriptor (vnf-packaged) as follows:

<---SNIP--->
vnf-packaged <upgrade_package_descriptor_name>
  location <package_url>
  validate-signature false
  configuration staros
    external-url /home/ubuntu/system.cfg
<---SNIP--->

The package must then be referenced in the virtual descriptor unit (VDU) pertaining to AutoVNF:

<---SNIP--->
vdu autovnf
  vdu-type automation-service
  login-credential autovnf_login
  scm scm
  image vnf-package
  vnf-rack vnf-rack1
  vnf-package primary <upgrade_package_descriptor_name>
  vnf-package secondary <previous_package_descriptor_name>
<---SNIP--->

Important: The secondary image is used as a fallback in the event an issue is encountered during the upgrade process. If no secondary image is specified, the upgrade process stops and generates an error log.

3. The rolling upgrade request is triggered through AutoDeploy, which initiates the process with the VIM through AutoIT.

4. AutoIT determines which AutoVNF VM is active and which is standby by communicating with each of the VMs over the management interface.

5. AutoIT triggers the shutdown of the standby AutoVNF VM via the VIM.

6. AutoIT waits until the VIM confirms that the standby AutoVNF VM has been completely terminated.

7. AutoIT initiates the deployment of a new AutoVNF VM via the VIM using the upgrade image. The AutoVNF VM is deployed in standby mode.

8. The standby AutoVNF VM synchronizes data with the active AutoVNF VM.

9. AutoIT waits until the VIM confirms that the new VM has been deployed and is in standby mode. If it detects an issue with the VM, AutoIT re-initiates the AutoVNF VM with the previous image. If no issues are detected, AutoIT proceeds with the upgrade process.

10. Repeat steps 4 through 7 for the AutoVNF VM that is currently active.

Initiating the AutoVNF Upgrade

AutoVNF upgrades are initiated through a remote procedure call (RPC) executed from the ConfD command line interface (CLI) or via a NETCONF API.
Via the CLI

To perform an upgrade using the CLI, log in to AutoDeploy (Ultra M deployments) or AutoVNF (stand-alone AutoVNF deployments) as the ConfD CLI admin user and execute the following command:

update-sw nsd-id <nsd_name> rolling { true | false } vnfd <vnfd_name> vnf-package <pkg_id>

NOTES:

<nsd_name> and <vnfd_name> are the names of the network service descriptor (NSD) file and VNF descriptor (VNFD), respectively, in which the VNF component (VNFC) for AutoVNF is defined.

If the rolling false operator is used, the upgrade terminates the entire deployment. In this scenario, the vnfd <vnfd_name> operator should not be included in the command. If it is included, a transaction ID is generated for the upgrade and the transaction fails. The AutoVNF upstart log reflects this status.

<pkg_id> is the name of the USP ISO containing the upgraded AutoVNF VM image. Ensure that the upgrade package is defined as a VNF package descriptor within the NSD and that it is specified as the primary package in the AutoVNF VDU configuration.

Ensure that the current (pre-upgrade) package is specified as the secondary package in the AutoVNF VDU configuration in order to provide rollback support in the event of errors.

Via the NETCONF API

Operation: nsd:update-sw

Namespace: xmlns:nsd="

Parameters:

Parameter Name | Required | Type | Description
nsd | M | string | NSD name.
rolling | M | boolean | Specifies whether rolling upgrade is enabled (true) or disabled (false).
vnfd | M | string | VNFD name; mandatory in the case of a rolling upgrade.
package | M | string | Package descriptor name that should be used to update the VNFD instance identified by vnfd.

NOTES:

If the rolling false operator is used, the upgrade terminates the entire deployment. In this scenario, the vnfd <vnfd_name> operator should not be included in the command. If it is included, a transaction ID is generated for the upgrade and the transaction fails. The AutoVNF upstart log reflects this status.

Ensure that the upgrade package is defined as a VNF package descriptor within the NSD and that it is specified as the primary package in the AutoVNF VDU configuration.

Ensure that the current (pre-upgrade) package is specified as the secondary package in the AutoVNF VDU configuration in order to provide rollback support in the event of errors.
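As an illustrative sketch (not taken from the Cisco documentation), the update-sw payload can be assembled programmatically before being sent over NETCONF with a client such as ncclient. The namespace URI below is a placeholder, since the real nsd namespace is elided in this guide; the NSD, VNFD, and package names are also assumptions and must be replaced with your deployment's values.

```python
# Sketch: build an nsd:update-sw payload for a rolling upgrade.
# NSD_NS is a placeholder; substitute the real nsd namespace URI
# for your release before dispatching the RPC.
import xml.etree.ElementTree as ET

NSD_NS = "urn:example:nsd"  # placeholder, not the real namespace


def build_update_sw(nsd: str, vnfd: str, package: str, rolling: bool = True) -> str:
    """Serialize the update-sw operation using the parameter names
    from the table above (nsd, rolling, vnfd, package)."""
    ET.register_namespace("nsd", NSD_NS)
    rpc = ET.Element(f"{{{NSD_NS}}}update-sw")
    fields = [
        ("nsd", nsd),
        ("rolling", "true" if rolling else "false"),
        ("vnfd", vnfd),
        ("package", package),
    ]
    for tag, value in fields:
        ET.SubElement(rpc, f"{{{NSD_NS}}}{tag}").text = value
    return ET.tostring(rpc, encoding="unicode")


print(build_update_sw("fremont-autovnf", "autovnf", "usp_6_2t"))
```

The resulting XML string can then be wrapped in an `<nc:rpc>` envelope and dispatched by whatever NETCONF client your environment uses.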

Example RPC

<nc:rpc message-id="urn:uuid:bac690a2-08af-4c9f c907d6e12ba"
  <nsd xmlns="
    <nsd-id>fremont-autovnf</nsd-id>
    <vim-identity>vim1</vim-identity>
    <vnfd xmlns="
      <vnfd-id>esc</vnfd-id>
      <vnf-type>esc</vnf-type>
      <version>6.0</version>
      <configuration>
        <boot-time>1800</boot-time>
        <set-vim-instance-name>true</set-vim-instance-name>
      </configuration>
      <external-connection-point>
        <vnfc>esc</vnfc>
        <connection-point>eth0</connection-point>
      </external-connection-point>
      <high-availability>true</high-availability>
      <vnfc>
        <vnfc-id>esc</vnfc-id>
        <health-check>
          <enabled>false</enabled>
        </health-check>
        <vdu>
          <vdu-id>esc</vdu-id>
        </vdu>
        <connection-point>
          <connection-point-id>eth0</connection-point-id>
          <virtual-link>
            <service-vl>mgmt</service-vl>
          </virtual-link>
        </connection-point>
        <connection-point>
          <connection-point-id>eth1</connection-point-id>
          <virtual-link>
            <service-vl>orch</service-vl>
          </virtual-link>
        </connection-point>
      </vnfc>
    </vnfd>
  </nsd>
  <vim xmlns="
    <vim-id>vim1</vim-id>
    <api-version>v2</api-version>
    <auth-url>
    <user>vim-admin-creds</user>
    <tenant>abcxyz</tenant>
  </vim>
  <secure-token xmlns="
    <secure-id>vim-admin-creds</secure-id>
    <user>abcxyz</user>
    <password>******</password>
  </secure-token>
  <vdu xmlns="
    <vdu-id>esc</vdu-id>
    <vdu-type>cisco-esc</vdu-type>
    <flavor>
      <vcpus>2</vcpus>
      <ram>4096</ram>
      <root-disk>40</root-disk>
      <ephemeral-disk>0</ephemeral-disk>
      <swap-disk>0</swap-disk>
    </flavor>
    <login-credential>esc_login</login-credential>
    <netconf-credential>esc_netconf</netconf-credential>
    <image>
      <vnf-package>usp_throttle</vnf-package>
    </image>
    <vnf-rack>abcxyz-vnf-rack</vnf-rack>
    <vnf-package>
      <primary>usp_6_2t</primary>
      <secondary>usp_throttle</secondary>
    </vnf-package>
    <volume/>
  </vdu>
  <secure-token xmlns="
    <secure-id>esc_login</secure-id>
    <user>admin</user>
    <password>******</password>
  </secure-token>
  <secure-token xmlns="
    <secure-id>esc_netconf</secure-id>
    <user>admin</user>
    <password>******</password>
  </secure-token>
  <vnf-packaged xmlns="
    <vnf-package-id>usp_throttle</vnf-package-id>
    <location>
    <validate-signature>false</validate-signature>
    <configuration>
      <name>staros</name>
      <external-url>
    </configuration>
  </vnf-packaged>
</config>

Limitations

The following limitations exist with the API-based AutoDeploy, AutoIT and AutoVNF upgrade feature:

This functionality is only available after upgrading to the 6.2 release.

Regardless of the UAS component (AutoDeploy, AutoIT, or AutoVNF), the rolling patch upgrade process can only be used to upgrade to new releases that have a compatible database schema. As new releases become available, Cisco will provide information as to whether this functionality can be used to perform the upgrade.

For Ultra M deployments, AutoDeploy and AutoIT must be upgraded before using this functionality to upgrade AutoVNF. Upgrading these products terminates the VNF deployment.

Ensure that no additional operations are running while performing an upgrade/rolling upgrade. The upgrade/rolling upgrade procedure should be performed only during a maintenance window.

75 CHAPTER 8 Automatic Disabling of Unused OpenStack Services Feature Summary and Revision History, on page 73 Feature Changes, on page 73 Feature Summary and Revision History Summary Data Applicable Product(s) or Functional Area All Applicable Platform(s) UGP Feature Default Disabled Related Features in this Release Not Applicable Related Documentation Ultra Gateway Platform System Administration Guide Ultra M Solutions Guide Ultra Services Platform Deployment Automation Guide Revision History Revision Details Release First introduced. 6.2 Feature Changes In previous releases, the OpenStack ceilometer service was automatically started when UAS was used to deploy the VIM and VIM Orchestrator even though the system did not leverage any of its capabilities. 73

As of this release, in order to conserve system resources and improve performance, the ceilometer service and other related telemetry services, such as aodh and gnocchi, are disabled in the Ultra M setup.

Important: Fault monitoring is automatically enabled for all OpenStack services even though the aodh, ceilometer, and gnocchi OpenStack services are not supported in the Ultra M setup. You must manually exclude these services in your fault management configuration.

77 CHAPTER 9 Automatic Enabling of Syslogging for Ceph Services Feature Summary and Revision History, on page 75 Feature Changes, on page 75 Feature Summary and Revision History Summary Data Applicable Product(s) or Functional Area All Applicable Platform(s) UGP Feature Default Enabled - Always-on Related Features in this Release Not Applicable Related Documentation Ultra Gateway Platform System Administration Guide Ultra M Solutions Guide Ultra Services Platform Deployment Automation Guide Revision History Revision Details Release First introduced. 6.2 Feature Changes Previously, the Ultra M Manager automatically started syslogging for the following OpenStack services: Nova 75

Cinder

Keystone

Glance

With this release, the following Ceph OpenStack services are also automatically started through the Ultra M Manager:

Ceph monitor (on Controller nodes)

Ceph OSD (on OSD Compute nodes)

79 CHAPTER 10 BGP Peer Limit Feature Summary and Revision History, on page 77 Feature Description, on page 78 How It Works, on page 78 Configuring BGP Peer Limit, on page 78 Monitoring and Troubleshooting, on page 80 Feature Summary and Revision History Summary Data Applicable Product(s) or Functional Area All Applicable Platform(s) VPC - DI Feature Default Disabled - Configuration Required Related Changes in This Release Not Applicable Related Documentation Command Line Interface Reference Statistics and Counters Reference VPC-DI System Administration Guide Revision History Revision Details Release First introduced

Feature Description

In the Cisco Virtualized Packet Core Distributed Instance (VPC-DI)/UGP architecture, the flexibility of BGP peering is provided across packet processing cards, namely Session Function (SF) cards, including the demux SF cards. In deployment setups based on the Contrail model of SDN, each packet processing card has a vrouter within the compute node. In this model, with the current flexible BGP peering scheme, the BGP configuration needs to be implemented on each of those vrouters. This poses a challenge to service providers when there is a large number of SF cards in their network: the number of lines of configuration required poses a scaling challenge.

To overcome this challenge, the BGP Peer Limit feature restricts BGP peering to only two SF cards in the VPC-DI architecture. This feature mandates that the routing table has only two routes corresponding to the two SF cards, with a third route being a blackhole (null) route. To ensure that the new routes are longest-prefix-match routes, provisioning of host addresses only (/32 bit mask) is enforced. This drastically reduces the amount of configuration and the routing table size.

How It Works

This feature is implemented using the ip route kernel command. When configured, BGP peering is restricted to only the two SF cards with the special route. When the blackhole keyword is configured, it enables the kernel routing engine to block or drop packets going out of the node. This is not limited to any interface and defaults to a wildcard interface.

For information on configuring the BGP Peer Limit feature, see the "Configuring BGP Peer Limit" section.

Limitations

Support for this feature is limited to the context level. There is no support at the VRF level.

This feature is supported only for IPv4.

Configuring BGP Peer Limit

The following section provides the configuration commands used to enable or disable the functionality.
Configuring Packet Processing Card Routes

Use the following CLI commands to add the special (static) route to any two packet processing interfaces (SF cards) defined in the context configuration.

configure
  context context_name
    [ no ] ip route kernel ip_address/ip_address_mask_combo egress_intrfc_name cost number
    end

NOTES:

no: Deletes the added routes.

kernel: Allows a static route in the kernel routing table.

ip_address/ip_address_mask_combo: Specifies a combined IP address and subnet mask bits indicating which IP addresses the route applies to. ip_address_mask_combo must be specified using CIDR notation, where the IP address is specified using IPv4 dotted-decimal notation and the mask bits are a numeric value, which is the number of bits in the subnet mask.

egress_intrfc_name: Specifies the name of an existing egress interface as an alphanumeric string of 1 through 79 characters.

cost number: Defines the number of hops to the next gateway. The cost must be an integer from 0 through 255, where 255 is the most expensive. The default is 0.

This functionality is disabled by default.

Configuring Blackhole Route

Use the following CLI commands to block or drop packets going out of the node.

configure
  context context_name
    [ no ] ip route kernel ip_address/ip_address_mask_combo egress_intrfc_name cost number blackhole
    end

NOTES:

no: Deletes the added routes.

kernel: Allows a static route in the kernel routing table.

ip_address/ip_address_mask_combo: Specifies a combined IP address and subnet mask bits indicating which IP addresses the route applies to. ip_address_mask_combo must be specified using CIDR notation, where the IP address is specified using IPv4 dotted-decimal notation and the mask bits are a numeric value, which is the number of bits in the subnet mask.

egress_intrfc_name: Specifies the name of an existing egress interface as an alphanumeric string of 1 through 79 characters. The default is *, that is, a wildcard interface.

cost number: Defines the number of hops to the next gateway. The cost must be an integer from 0 through 255, where 255 is the most expensive. The default is 0.

blackhole: Defines the blackhole route to install in the kernel to block or drop packets.
This functionality is disabled by default.
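Because the feature enforces host addresses only (/32 bit mask) and is IPv4-only, candidate route entries can be sanity-checked before provisioning. The following Python sketch is purely illustrative (it is not a StarOS tool); the function name and sample addresses are assumptions.

```python
# Illustrative pre-check for BGP Peer Limit routes: accept only
# IPv4 /32 host addresses, mirroring the feature's constraints.
import ipaddress


def is_host_route(cidr: str) -> bool:
    """Return True only for IPv4 /32 host routes (the feature is IPv4-only)."""
    try:
        net = ipaddress.ip_network(cidr, strict=True)
    except ValueError:
        return False  # not valid CIDR notation
    return net.version == 4 and net.prefixlen == 32


print(is_host_route("192.168.10.5/32"))  # True: IPv4 host address
print(is_host_route("192.168.10.0/24"))  # False: not a host route
```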

Monitoring and Troubleshooting

This section provides information regarding the CLI commands available in support of monitoring and troubleshooting the feature.

Show Command(s) and/or Outputs

This section provides information regarding the show command and/or its output in support of this feature.

show ip route

The output of this command now includes the following new field when a static route is added to any two packet processing interfaces (SF cards):

kernel-only

83 CHAPTER 11 Cisco Ultra Traffic Optimization This chapter describes the following topics: Feature Summary and Revision History, on page 81 Overview, on page 82 How Cisco Ultra Traffic Optimization Works, on page 82 Configuring Cisco Ultra Traffic Optimization, on page 85 Multi-Policy Support for Traffic Optimization, on page 91 Monitoring and Troubleshooting, on page 96 Feature Summary and Revision History Summary Data Applicable Product(s) or Functional Area IPSG P-GW Applicable Platform(s) ASR 5500 Ultra Gateway Platform Feature Default Disabled - License Required Related Changes in This Release Not Applicable Related Documentation Command Line Interface Reference IPSG Administration Guide Revision History Revision Details Release With this release, Cisco Ultra Traffic Optimization is qualified on IPSG

Overview

In a high-bandwidth bulk data flow scenario, user experience is impacted by various wireless network conditions and policies, such as shaping, throttling, and other bottlenecks that induce congestion, especially in the RAN. This results in TCP applying its saw-tooth algorithm for congestion control, which impacts user experience, and overall system capacity is not fully utilized.

The Cisco Ultra Traffic Optimization solution provides clientless optimization of TCP and HTTP traffic. This solution is integrated with the Cisco P-GW and has the following benefits:

Increases the capacity of existing cell sites and therefore enables more traffic transmission.

Improves the Quality of Experience (QoE) of users by providing more bits per second.

Instantaneously stabilizes and maximizes per-subscriber throughput, particularly during network congestion.

How Cisco Ultra Traffic Optimization Works

The Cisco Ultra Traffic Optimization solution achieves its capabilities by:

Stabilizing the TCP session at a given or optimum target bandwidth.

Monitoring the TCP session and minimizing bursts.

Adjusting timing without discarding packets, and with limited buffering.

Stabilizing and reducing TCP performance jitter.

Increasing the number of simultaneous sessions served and reducing connection latency.

Providing adequate logging information for debugging and operability.

Architecture

StarOS has a highly optimized packet processing framework, the Cisco Ultra Traffic Optimization engine, where the user packets (downlink) are processed in the operating system's user space. The high-speed packet processing, including the various functions of the P-GW, is performed in the user space. The Cisco Ultra Traffic Optimization engine is integrated into the packet processing path of Cisco's P-GW with a well-defined Application Programming Interface (API) of StarOS.
The following graphic shows a high-level overview of P-GW packet flow with traffic optimization.

Handling of Traffic Optimization Data Record

List of Attributes and File Format

The Traffic Optimization Data Record (TODR) is generated only on the expiry of the idle-timeout of the Cisco Ultra Traffic Optimization engine. No statistics related to the session or flow from the P-GW are included in this TODR. The data records are written to a separate file for the Traffic Optimization statistics, which is available to external analytics platforms. All TODR attributes of traffic optimization are enabled by a single CLI command. The output is always comma-separated and in a rigid format.

Standard TODR

The following is the format of a Standard TODR:

instance_id,flow_type,srcip,dstip,policy_id,proto_type,dscp,flow_first_pkt_rx_time_ms,flow_last_pkt_rx_time_ms,flow_cumulative_rx_bytes

Example:

1,0, , ,0,1,0, , ,

Where:

instance_id: Instance ID.

flow_type: Standard flow (0).

srcip: Indicates the source IP address.

dstip: Indicates the destination IP address.

policy_id: Indicates the traffic optimization policy ID.

proto_type: Indicates the IP protocol being used. The IP protocols are TCP and UDP.

dscp: Indicates the DSCP code for upstream packets.

flow_first_pkt_rx_time_ms: Indicates the timestamp when the first packet was detected during traffic optimization.

flow_last_pkt_rx_time_ms: Indicates the timestamp when the last packet was detected during traffic optimization.

flow_cumulative_rx_bytes: Indicates the number of bytes transferred by this flow.

Large TODR

The following is a sample output of a Large TODR:

2,1,2606:ae00:c663:b66f:0000:0058:be03:ae01,0172:0020:0224:0059:2200:0000:0000:0033,0,0, , , , ,3900,3900,0,0,0,11,0,0,11,1,1, ,1950,4,0,0,1, ,0,2010,0,0,1, ,0,2007,0,0,1, ,0,2008,0,0,1, ,0,2005,0,0,1, ,0,2003,0,0,1, ,0,2005,0,0,1, ,0,2003,0,0,1, ,0,1848,0,0,0, ,0,2002,0,0,1, ,0,107,0,0,1, ,0,2007,0,0,0,3,1, ,0,2004,0,0,0,3

Where:

instance_id: Instance ID.

flow_type: Large flow (1).

srcip: Indicates the source IP address.

dstip: Indicates the destination IP address.

policy_name: Identifies the name of the configured traffic optimization policy.

policy_id: Indicates the traffic optimization policy ID.

proto_type: Indicates the IP protocol being used. The IP protocols are TCP and UDP.

dscp: Indicates the DSCP code for upstream packets.

flow_first_pkt_rx_time_ms: Indicates the timestamp when the first packet was detected during traffic optimization.

flow_last_pkt_rx_time_ms: Indicates the timestamp when the last packet was detected during traffic optimization.

flow_cumulative_rx_bytes: Indicates the number of bytes transferred by this flow.

large_detection_time_ms: Indicates the timestamp when the flow was detected as Large.

avg_burst_rate_kbps: Indicates the average rate, in Kbps, of all the measured bursts.

avg_eff_rate_kbps: Indicates the average effective rate in Kbps.
final_link_peak_kbps: Indicates the highest detected link peak over the life of the Large flow.

recovered_capacity_bytes: Indicates the recovered capacity, in bytes, for this Large flow.

recovered_capacity_ms: Indicates the timestamp of recovered capacity for this Large flow.

phase_count: Indicates the Large flow phase count.

min_gbr_kbps: Indicates the Minimum Guaranteed Bit Rate (GBR) in Kbps.

max_gbr_kbps: Indicates the Maximum Bit Rate (MBR) in Kbps.

phase_count_record: Indicates the number of phases present in this record.

end_of_phases: 0 (not end of phases) or 1 (end of phases).

Large flow phase attributes:

phase_type: Indicates the type of the phase.

phase_start_time_ms: Indicates the timestamp for the start time of the phase.

burst_bytes: Indicates the burst size in bytes.

burst_duration_ms: Indicates the burst duration in milliseconds.

link_peak_kbps: Indicates the peak rate for the flow during its life.

flow_control_rate_kbps: Indicates the rate at which flow control was attempted (or 0 for a non-flow-control phase).

max_num_queued_packets: Identifies the maximum number of packets queued.

policy_id: Identifies the traffic optimization policy ID.

Licensing

The Cisco Ultra Traffic Optimization solution is a licensed Cisco solution. Contact your Cisco account representative for detailed information on specific licensing requirements. For information on installing and verifying licenses, refer to the Managing License Keys section of the Software Management Operations chapter in the System Administration Guide.

Limitations and Restrictions

The values that the P-GW sends to the Cisco Ultra Traffic Optimization engine are those associated with the bearer GBR and bearer MBR. In the current implementation, only the downlink GBR and MBR are sent to the engine for traffic optimization.

The IPSG supports only certain triggers for which the information is available with the IPSG service.

Configuring Cisco Ultra Traffic Optimization

This section provides information on enabling support for the Cisco Ultra Traffic Optimization solution.
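Because the Standard TODR is plain comma-separated text in a rigid field order, it is straightforward to parse on an external analytics platform. The following sketch uses the Standard TODR field names listed above; the parsing helper and the sample values in the usage note are illustrative, not part of the product.

```python
import csv
import io
from collections import namedtuple

# Field names taken from the Standard TODR attribute list above.
StandardTODR = namedtuple("StandardTODR", [
    "instance_id", "flow_type", "srcip", "dstip", "policy_id",
    "proto_type", "dscp", "flow_first_pkt_rx_time_ms",
    "flow_last_pkt_rx_time_ms", "flow_cumulative_rx_bytes",
])

def parse_standard_todr(line):
    """Parse one comma-separated Standard TODR line (flow_type 0).
    Numeric fields are converted to int; IP addresses stay as strings."""
    row = next(csv.reader(io.StringIO(line)))
    if len(row) != len(StandardTODR._fields):
        raise ValueError("unexpected field count: %d" % len(row))
    rec = StandardTODR(*[f.strip() for f in row])
    return rec._replace(
        instance_id=int(rec.instance_id),
        flow_type=int(rec.flow_type),
        policy_id=int(rec.policy_id),
        proto_type=int(rec.proto_type),
        dscp=int(rec.dscp),
        flow_first_pkt_rx_time_ms=int(rec.flow_first_pkt_rx_time_ms),
        flow_last_pkt_rx_time_ms=int(rec.flow_last_pkt_rx_time_ms),
        flow_cumulative_rx_bytes=int(rec.flow_cumulative_rx_bytes),
    )
```

For example, parse_standard_todr("1,0,10.1.1.1,10.2.2.2,0,6,0,100,200,1048576") yields a record with flow_type 0 and flow_cumulative_rx_bytes 1048576 (all values here are made up). Large TODRs (flow_type 1) carry additional fixed attributes plus repeating per-phase groups, so they need a variable-length parser driven by phase_count_record.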

Loading Traffic Optimization

Use the following configuration in the Global Configuration Mode to load Cisco Ultra Traffic Optimization as a solution:

configure
   require active-charging traffic-optimization
   end

Important: Enabling or disabling traffic optimization can be done through the Service-scheme framework.

Important: In 21.3, and in 21.5 and later releases, the dependency on the chassis reboot is no longer valid. The Cisco Ultra Traffic Optimization engine is loaded by default. The Cisco Ultra Traffic Optimization configuration CLIs are available when the license is enabled. As such, the traffic-optimization keyword has been deprecated.

Enabling Cisco Ultra Traffic Optimization Configuration Profile

Use the following configuration in the ACS Configuration Mode to enable the Cisco Ultra Traffic Optimization profile:

configure
   active-charging service service_name
      traffic-optimization-profile
      end

NOTES:

The above CLI command enables the Traffic Optimization Profile Configuration, a new configuration mode.

Configuring the Operating Mode

Use the following CLI commands to configure the operating mode, under the Traffic Optimization Profile Configuration Mode, for the Cisco Ultra Traffic Optimization engine:

configure
   active-charging service service_name
      traffic-optimization-profile
         mode [ active | passive ]
         end

Notes:

mode: Sets the mode of operation for traffic optimization.

active: Active mode, where both traffic optimization and flow monitoring are performed on the packet.

passive: Passive mode, where no flow control is performed but the packet is monitored.

Configuring Threshold Value

Use the following CLI commands to configure the threshold value for a TCP flow to be considered for traffic optimization:

configure
   active-charging service service_name
      traffic-optimization-profile
         heavy-session detection-threshold bytes
         end

Notes:

detection-threshold bytes: Specifies the detection threshold (in bytes) beyond which a session is considered a heavy session. bytes must be an integer value of 1 or greater. For optimum traffic optimization benefits, it is recommended to set the threshold above 3 MB.

Enabling Cisco Ultra Traffic Optimization Configuration Profile Using Service-scheme Framework

The service-scheme framework is used to enable traffic optimization at the APN, rule base, QCI, and rule level. There are two main constructs for the service-scheme framework:

Subscriber-base: Helps in associating subscribers with a service-scheme based on the subs-class configuration.

subs-class: The conditions defined under subs-class help classify subscribers based on rule base, APN, or virtual APN name. The conditions can also be defined in combination; both OR and AND operators are supported when evaluating them.

Service-scheme: Helps in associating actions based on trigger conditions, which can be triggered either at call-setup time, bearer-creation time, or flow-creation time.

trigger-condition: For any trigger, the trigger-action application is based on the conditions defined under the trigger-condition.

trigger-actions: Defines the actions to be taken on the classified flow. These actions can be traffic optimization, throttle-suppress, and so on.

Session Setup Trigger

The any-match = TRUE wildcard configuration is the only supported condition for this trigger, so it applies to all the flows of the subscriber.

Following is a sample configuration:

configure
   active-charging service service_name
      trigger-action trigger_action_name
         traffic-optimization
         exit
      trigger-condition trigger_condition_name1
         any-match = TRUE
         exit
      service-scheme service_scheme_name
         trigger sess-setup
            priority priority_value trigger-condition trigger_condition_name1 trigger-action trigger_action_name
            exit
      subs-class sub_class_name
         apn = apn_name
         exit
      subscriber-base subscriber_base_name
         priority priority_value subs-class sub_class_name bind service-scheme service_scheme_name
         end

Bearer Creation Trigger

The trigger conditions related to QCI can be used for this trigger, so it applies to all the flows of specific bearers. The following is a sample configuration:

configure
   active-charging service service_name
      trigger-action trigger_action_name
         traffic-optimization
         exit
      trigger-condition trigger_condition_name1
         any-match = TRUE
         exit
      trigger-condition trigger_condition_name2
         qci = qci_value
         exit
      service-scheme service_scheme_name
         trigger bearer-creation
            priority priority_value trigger-condition trigger_condition_name2 trigger-action trigger_action_name
            exit
         exit
      subs-class sub_class_name
         apn = apn_name
         exit
      subscriber-base subscriber_base_name
         priority priority_value subs-class sub_class_name bind service-scheme service_scheme_name
         end

Flow Creation Trigger

The trigger conditions related to rule-name and QCI can be used here, so this applies to a specific flow. The following is a sample configuration:

configure
   active-charging service service_name
      trigger-action trigger_action_name
         traffic-optimization
         exit
      trigger-condition trigger_condition_name1
         any-match = TRUE
         exit
      trigger-condition trigger_condition_name2
         qci = qci_value
         exit
      trigger-condition trigger_condition_name3
         rule-name = rule_name
         exit
      service-scheme service_scheme_name
         trigger bearer-creation
            priority priority_value trigger-condition trigger_condition_name3 trigger-action trigger_action_name
            exit
         exit
      subs-class sub_class_name
         apn = apn_name
         exit
      subscriber-base subscriber_base_name
         priority priority_value subs-class sub_class_name bind service-scheme service_scheme_name
         end

Notes:

trigger_condition_name3 can have only rules, only QCI, both rule and QCI, or either rule or QCI.

The following table illustrates the different levels of traffic optimization and their corresponding Subscriber Class configuration and triggers.

Applicable to all the calls or flows:
   subs-class sc1
      any-match = TRUE
      exit
   Session setup trigger condition is any-match = TRUE.

Applicable to all calls or flows of a rulebase:
   subs-class sc1
      rulebase = prepaid
      exit
   Session setup trigger condition is any-match = TRUE.

Applicable to all calls or flows of an APN:
   subs-class sc1
      apn = cisco.com
      exit
   Session setup trigger condition is any-match = TRUE.

Applicable to all flows of a bearer:
   trigger-condition TC1
      qci = 1
      exit
   Bearer creation trigger condition is TC1.

Applicable to a particular flow:
   trigger-condition TC1
      qci = 1
      rule-name = tcp
      multi-line-or all-lines
      exit
   Flow creation trigger condition is TC1.

Important: In case of LTE to eHRPD handover, since QCI is not valid for eHRPD, it is recommended to configure rule-name as the trigger condition under the service-scheme.

Generating TODR

Use the following CLI commands under the ACS Configuration Mode to enable Traffic Optimization Data Record (TODR) generation:

configure
   active-charging service service_name
      traffic-optimization-profile
         data-record
         end

NOTES:

If previously configured, use the no data-record command to disable TODR generation.

Configuring Rulebase to Allow UDP Traffic Optimization

Important: From Release 21.8 onwards, it is recommended to enable the TCP and UDP protocols for traffic optimization by using the CLI commands mentioned in the Enabling TCP and UDP section of this chapter.

Use the following configuration in the ACS Rulebase Configuration Mode to turn traffic optimization for UDP traffic on or off.

Important: Enabling or disabling the Cisco Ultra Traffic Optimization solution is controlled by the Service-scheme framework.

configure
   active-charging service service_name

      rulebase rulebase_name
         [ no ] traffic-optimization udp
         end

NOTES:

udp: Specifies traffic optimization for UDP traffic. By default, UDP traffic optimization is disabled.

If previously configured, use the no traffic-optimization udp CLI command to disable traffic optimization for UDP traffic.

Multi-Policy Support for Traffic Optimization

The Cisco Ultra Traffic Optimization engine supports traffic optimization for multiple policies and provides traffic optimization for a desired location. It supports a maximum of 32 policies. By default, two policies are pre-configured. Operators can configure several parameters under each traffic optimization policy.

This feature includes the following functionalities:

By default, traffic optimization is enabled for TCP and UDP data for a particular subscriber, bearer, or flow that uses the Service-Schema.

Important: UDP/QUIC-based traffic optimization is supported only on port 443.

Selection of a policy depends on the priority configured. Priorities can be configured for traffic optimization policies using a Trigger Condition. The priority can be set regardless of the specific location where the traffic optimization policy is being applied. A traffic optimization policy can be overridden by another policy based on the configured priorities.

A configuration to associate a traffic optimization policy with a Trigger Action, under the Service-Schema.

A configuration to select a traffic optimization policy for a Location Trigger. Currently, only ECGI Change Detection is supported under the Local Policy Service Configuration mode.

Important: Location Change Trigger is not supported with IPSG.

Important: The Policy ID for a flow is not recovered after a Session Recovery (SR) or Inter-Chassis Session Recovery (ICSR).

Important: The Multi-Policy Support feature requires that the same Cisco Ultra Traffic Optimization license key be installed.
Contact your Cisco account representative for detailed information on specific licensing requirements.
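The priority-based policy selection described above can be sketched as follows. This is a conceptual model only: it assumes that a lower numeric priority value takes precedence and that unmatched flows fall back to the default Managed policy (as described for trigger actions that enable optimization without naming a policy). Neither assumption is confirmed by this document, and all names are illustrative.

```python
def select_policy(trigger_conditions, flow):
    """Conceptual sketch of priority-based policy selection.

    trigger_conditions: list of (priority, predicate, policy_name) tuples,
    where predicate(flow) returns True when the condition matches.
    Assumption: lower numeric priority wins; the policy attached to the
    highest-precedence matching condition overrides any lower-precedence
    match. Falls back to the default 'Managed' policy when nothing matches.
    """
    matches = [(prio, policy) for prio, pred, policy in trigger_conditions
               if pred(flow)]
    if not matches:
        return "Managed"
    # min() on (priority, policy) tuples picks the lowest priority value.
    return min(matches)[1]
```

A usage sketch: a QCI-specific condition at priority 10 overrides a catch-all at priority 20, so select_policy picks the QCI-specific policy for matching flows and the catch-all policy otherwise.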

How Multi-Policy Support Works

Policy Selection

Cisco's Ultra Traffic Optimization engine provides two default policies: Managed and Unmanaged. When the Unmanaged policy is selected, traffic optimization is not performed. When the Managed policy is selected, traffic optimization is performed using default parameters. The Managed policy is applied when traffic optimization is enabled in a Trigger Action without specifying a policy.

Session Setup Trigger

If a Trigger Action is applied only for a session setup in a Service-Schema, then the trigger action is applied only to new sessions.

Bearer Setup Trigger

If a Trigger Action is applied only for a bearer setup, changes in the trigger action are applicable to newly created bearers and their flows.

Flow Creation Trigger

Under a trigger condition corresponding to a flow create, conditions can be added based on a rule-name, local-policy-rule, or an IP protocol, in addition to the trigger condition any-match.

When traffic optimization on existing flows is disabled because of a trigger condition, the traffic optimization engine applies the default Unmanaged policy to them.

Deleting a Policy

Before deleting a policy profile, all associations to the traffic optimization policy should be removed. For more information on deleting a policy, refer to the Traffic Optimization Policy Configuration section.

Configuring Multi-Policy Support

The following sections describe the configurations required to support Multi-Policy Support.

Configuring a Traffic Optimization Profile

Use the following CLI commands to configure a Traffic Optimization Profile.
configure
   require active-charging
   active-charging service service_name
      [ no ] data-record
      [ no ] efd-flow-cleanup-interval cleanup_interval
      [ no ] stats-interval stats_interval
      [ no ] stats-options { flow-analyst [ flow-trace ] | flow-trace [ flow-analyst ] }
      end

NOTES:

require active-charging: Enables the configuration requirement for the Active Charging service.

data-record: Enables the generation of traffic optimization data records.

efd-flow-cleanup-interval: Configures the EFD flow cleanup interval. The interval value is an integer that ranges from 10 to 5000 milliseconds.

stats-interval: Configures the flow statistics collection and reporting interval in seconds. The interval value is an integer that ranges from 1 to 60 seconds.

stats-options: Configures options to collect the flow statistics.

Note: The heavy-session command is deprecated in this release.

Configuring a Traffic Optimization Policy

Use the following CLI commands to configure a Traffic Optimization Policy.

configure
   require active-charging
   active-charging service service_name
      [ no ] traffic-optimization-policy policy_name
         bandwidth-mgmt { backoff-profile [ managed | unmanaged ] [ min-effective-rate effective_rate [ min-flow-control-rate flow_rate ] | min-flow-control-rate flow_rate [ min-effective-rate effective_rate ] ] | min-effective-rate effective_rate [ backoff-profile [ managed | unmanaged ] [ min-flow-control-rate flow_rate ] | min-flow-control-rate control_rate [ backoff-profile [ managed | unmanaged ] ] ] | min-flow-control-rate [ backoff-profile [ managed | unmanaged ] [ min-effective-rate effective_rate ] | [ min-effective-rate effective_rate ] [ backoff-profile [ managed | unmanaged ] ] ] }
         [ no ] bandwidth-mgmt
         curbing-control { max-phases max_phase_value [ rate curbing_control_rate [ threshold-rate threshold_rate [ time curbing_control_duration ] ] ] | rate curbing_control_rate [ max-phases [ threshold-rate threshold_rate [ time curbing_control_duration ] ] ] | threshold-rate [ max-phases max_phase_value [ rate curbing_control_rate [ time curbing_control_duration ] ] ] | time [ max-phases max_phase_value [ rate curbing_control_rate [ threshold-rate threshold_rate ] ] ] }
         [ no ] curbing-control
         heavy-session { standard-flow-timeout [ threshold threshold_value ] | threshold threshold_value [ standard-flow-timeout timeout_value ] }
         [ no ] heavy-session
         link-profile { initial-rate initial_seed_value [ max-rate max_peak_rate_value [ peak-lock ] ] | max-rate [ initial-rate initial_seed_value [ peak-lock ] ] | peak-lock [ initial-rate initial_seed_value [ max-rate max_peak_rate_value ] ] }
         [ no ] link-profile
         session-params { tcp-ramp-up tcp_rampup_duration [ udp-ramp-up udp_rampup_duration ] | udp-ramp-up udp_rampup_duration [ tcp-ramp-up tcp_rampup_duration ] }
         [ no ] session-params
         end

NOTES:

no: Overwrites the configured traffic-optimization parameter(s) with default values. Before deleting a policy profile, all policies associated with the policy profile should be removed. If policy associations are not removed before deletion, the following error message is displayed: Failure: traffic-optimization policy in use, cannot be deleted.

bandwidth-mgmt: Configures bandwidth management parameters.

backoff-profile: Determines the overall aggressiveness of the backoff rates.

managed: Enables both traffic monitoring and traffic optimization.

unmanaged: Enables only traffic monitoring.

min-effective-rate: Configures the minimum effective shaping rate in Kbps. The shaping rate value is an integer value of 100 or greater.

min-flow-control-rate: Configures the minimum rate, in Kbps, allowed to control the flow of heavy-session flows during congestion. The control rate value is an integer value of 100 or greater.

curbing-control: Configures curbing flow control related parameters.

max-phases: Configures the number of consecutive phases in which the target shaping rate is below threshold-rate that triggers curbing flow control. The maximum phase value is an integer ranging from 2 to 10.

rate: Configures curbing flow control at a fixed rate, in Kbps, instead of a dynamic rate. The control rate value is an integer value of 0 or greater. To disable the fixed flow control rate, set the flow control rate value to 0.

threshold-rate: Configures the minimum target shaping rate, in Kbps, that triggers curbing. The threshold rate is an integer value of 100 or greater.

time: Configures the duration of a flow control phase in milliseconds. The flow control duration value is an integer value of 0 or greater. To disable flow control, set the flow control duration value to 0.

heavy-session: Configures parameters for heavy-session detection.

standard-flow-timeout: Configures the idle timeout, in milliseconds, for expiration of standard flows. The timeout value is an integer value of 100 or greater.

threshold: Configures the heavy-session detection threshold in bytes. On reaching the threshold, the flow is monitored and potentially managed. The threshold value is an integer value of 0 or greater.

link-profile: Configures link profile parameters.

initial-rate: Configures the initial seed value of the acquired peak rate, in Kbps, for a traffic session. The initial seed value is an integer value of 100 or greater.

max-rate: Configures the maximum learned peak rate allowed, in Kbps, for a traffic session. The max rate value is an integer value of 100 or greater.

peak-lock: Confirms the link peak rate available at the initial link peak rate setting.

session-params: Configures session parameters.

tcp-ramp-up: Configures the ramp-up phase duration, in milliseconds, for TCP traffic. The TCP ramp-up duration is an integer value of 0 or greater.

udp-ramp-up: Configures the ramp-up phase duration, in milliseconds, for UDP traffic. The UDP ramp-up duration is an integer value of 0 or greater.

Traffic Optimization Policy - Default Values

Bandwidth-Mgmt:
   Backoff-Profile       : Managed
   Min-Effective-Rate    : 600 (kbps)
   Min-Flow-Control-Rate : 250 (kbps)

Flow-Control:
   Time           : 0 (ms)
   Rate           : 600 (kbps)
   Max-Phases     : 2
   Threshold-Rate : 600 (kbps)

Heavy-Session:
   Threshold : (bytes)
   Timeout   : 500 (ms)

Link-Profile:
   Initial-Rate : 7000 (kbps)
   Max-Rate     : (kbps)
   Peak-Lock    : Disabled

Session-Params:
   Tcp-Ramp-Up : 5000 (ms)
   Udp-Ramp-Up : 0 (ms)

Associating a Trigger Action to a Traffic Optimization Policy

Use the following CLI commands to associate a Trigger Action with a Traffic Optimization Policy.

configure
   require active-charging
   active-charging service service_name
      trigger-action trigger_action_name
         traffic-optimization policy policy_name
         [ no ] traffic-optimization
         end

NOTES:

traffic-optimization policy: Configures a traffic optimization policy.

no: Removes the configured traffic optimization policy.

Enabling TCP and UDP

Use the following CLI commands to enable the TCP and UDP protocols for traffic optimization:

configure
   require active-charging
   active-charging service service_name
      trigger-condition trigger_condition_name
         [ no ] ip protocol = [ tcp | udp ]
         end

NOTES:

no: Deletes the related Active Charging Service configuration.

ip: Establishes an IP configuration.

protocol: Indicates the protocol being transported by the IP packet.

tcp: Indicates the TCP protocol to be transported by the IP packet.

udp: Indicates the UDP protocol to be transported by the IP packet.

Service-Scheme Configuration for Multi-Policy Support

The service-scheme framework enables traffic optimization at the APN, rule base, QCI, and rule level. With the Multi-Policy Support feature, traffic optimization in a service-scheme framework allows the operator to configure multiple policies and to configure traffic optimization based on a desired location. The service-scheme framework helps in associating actions based on trigger conditions, which can be triggered either at call-setup time, bearer-creation time, or flow-creation time.

Monitoring and Troubleshooting

This section provides information regarding commands available to monitor and troubleshoot the Cisco Ultra Traffic Optimization solution on the P-GW.

Cisco Ultra Traffic Optimization Show Commands and/or Outputs

This section provides information about the show commands and the fields/counters introduced in support of the Cisco Ultra Traffic Optimization solution.

show active-charging rulebase name <rulebase_name>

The output of this show command has been enhanced to display whether UDP traffic optimization is Enabled or Disabled.
The following fields have been introduced:

Traffic Optimization:
   UDP: Enabled/Disabled

show active-charging traffic-optimization counters

The show active-charging traffic-optimization counters sessmgr { all | instance number } CLI command is introduced, where:

counters: Displays aggregate flow counters/statistics from the Cisco Ultra Traffic Optimization engine.

Important: This CLI command is license dependent and visible only if the license is loaded.

Following are the new fields/counters:

Traffic Optimization Flows:
   Active Normal Flow Count:
   Active Large Flow Count:
   Active Managed Large Flow Count:
   Active Unmanaged Large Flow Count:
   Total Normal Flow Count:
   Total Large Flow Count:
   Total Managed Large Flow Count:
   Total Unmanaged Large Flow Count:
   Total IO Bytes:
   Total Large Flow Bytes:
   Total Recovered Capacity Bytes:
   Total Recovered Capacity ms:

On executing the above command, the following new fields are displayed for the Multi-Policy Support feature:

Important: This CLI command is license dependent and visible only if the license is loaded.

TCP Traffic Optimization Flows:
   Active Normal Flow Count:
   Active Large Flow Count:
   Active Managed Large Flow Count:
   Active Unmanaged Large Flow Count:
   Total Normal Flow Count:
   Total Large Flow Count:
   Total Managed Large Flow Count:
   Total Unmanaged Large Flow Count:
   Total IO Bytes:
   Total Large Flow Bytes:

   Total Recovered Capacity Bytes:
   Total Recovered Capacity ms:

UDP Traffic Optimization Flows:
   Active Normal Flow Count:
   Active Large Flow Count:
   Active Managed Large Flow Count:
   Active Unmanaged Large Flow Count:
   Total Normal Flow Count:
   Total Large Flow Count:
   Total Managed Large Flow Count:
   Total Unmanaged Large Flow Count:
   Total IO Bytes:
   Total Large Flow Bytes:
   Total Recovered Capacity Bytes:
   Total Recovered Capacity ms:

show active-charging traffic-optimization info

This show command has been introduced in the Exec Mode, where:

traffic-optimization: Displays all traffic optimization options.

info: Displays Cisco Ultra Traffic Optimization engine information.

The output of this CLI command displays the version, mode, and configuration values. Following are the new fields/counters:

Version:
Mode:
Configuration:
   Threshold Bytes:
   Lower Bandwidth:
   Upper Bandwidth:
   Min Session Time:
   Min Session Size:
   Data Records (TODR)

   Statistics Options
   EFD Flow Cleanup Interval
   Statistics Interval

show active-charging traffic-optimization policy

On executing the above command, the following new fields are displayed for the Multi-Policy Support feature:

Policy Name
Policy-Id
Bandwidth-Mgmt:
   Backoff-Profile
   Min-Effective-Rate
   Min-Flow-Control-Rate
Curbing-Control:
   Time
   Rate
   Max-phases
   Threshold-Rate
Heavy-Session:
   Threshold
   Standard-Flow-Timeout
Link-Profile:
   Initial-Rate
   Max-Rate
   Peak-Lock
Session-Params:
   Tcp-Ramp-Up
   Udp-Ramp-Up


CHAPTER 12

Configuration Support for Heartbeat Value

Feature Summary and Revision History, on page 101
Feature Changes, on page 102
Command Changes, on page 102
Monitoring and Troubleshooting, on page 103

Feature Summary and Revision History

Summary Data

Applicable Product(s) or Functional Area: All

Applicable Platform(s): ASR 5500, VPC-DI

Feature Default: Disabled - Configuration Required

Related Changes in This Release: Not Applicable

Related Documentation: ASR 5500 System Administration Guide, Command Line Interface Reference, VPC-DI System Administration Guide, Statistics and Counters Reference

Revision History

In this release, the default heartbeat value between the management and data processing cards can be modified to prevent the management card from incorrectly detecting and reporting the packet processing card as failed.

First introduced: Pre 21.2.

Feature Changes

In certain deployment scenarios, the management card reports the packet processing card as failed when it is unable to detect a heartbeat for about two seconds. This assumed failure is observed when the heartbeat is delayed or lost due to congestion in the internal DI network. This release addresses this issue.

Command Changes

Previous Behavior: The management card reports the packet processing card as failed due to its inability to detect the heartbeat within the default value of two seconds, thereby causing an unplanned switchover.

New Behavior: To prevent the management card from incorrectly detecting and reporting the packet processing card as failed, the default heartbeat value between the management and data processing cards can now be modified.

Customer Impact: Prevents the management card from wrongly reporting the data processing card as failed and causing an unplanned switchover.

high-availability fault-detection

The above CLI command is enhanced to include the card hb-loss value keyword, which is used to configure the heartbeat value between the management and packet processing cards. This command is configured in the Global Configuration Mode.

configure
   [ default ] high-availability fault-detection card hb-loss value
   end

NOTES:

default: Restores the heartbeat value to the default value of 2 heartbeats.

card: Specifies the packet processing card.

hb-loss value: Configures the heartbeat loss value. The default value is 2 heartbeats.

The heartbeat value between management cards is set to the default value of 2 heartbeats. This command modifies the heartbeat value only between the management and packet processing cards.

By default, this CLI is disabled.
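The detection rule described above (declare a packet processing card failed after a configurable number of consecutive missed heartbeats, default 2) can be modeled as follows. This is an illustrative sketch, not StarOS code; the class and method names are invented.

```python
class CardMonitor:
    """Sketch of heartbeat-loss failure detection: a card is declared
    failed after `hb_loss` consecutive missed heartbeats. The default of 2
    mirrors the `high-availability fault-detection card hb-loss` default;
    raising it tolerates transient DI-network congestion. Illustrative only."""

    def __init__(self, hb_loss=2):
        self.hb_loss = hb_loss   # consecutive misses that declare failure
        self.missed = 0
        self.failed = False

    def on_heartbeat(self):
        # Any received heartbeat resets the consecutive-miss counter.
        self.missed = 0

    def on_interval_elapsed(self):
        """Called once per heartbeat interval in which no heartbeat arrived.
        Returns True once the card is considered failed."""
        self.missed += 1
        if self.missed >= self.hb_loss:
            self.failed = True
        return self.failed
```

With the default of 2, a single delayed heartbeat does not trigger a switchover, but two consecutive misses do; configuring a larger hb-loss value trades slower failure detection for fewer false positives under congestion.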

Monitoring and Troubleshooting

This section provides information regarding show commands and/or their outputs in support of this feature.

show heartbeat statistics hb-loss all

This show command now includes the values for the following new fields for all packet processing cards:

Max Bounces
Total HB Miss
Total HB Card Failure
Card/Cpu Total Age/Intf/Seqno/TimeStamp AFD (oldest first)

show heartbeat statistics hb-loss card <card-number>

This show command now includes the values for the following new fields for the specified packet processing card:

Max Bounces
Total HB Miss
Total HB Card Failure
Card/Cpu Total Age/Intf/Seqno/TimeStamp AFD (oldest first)


CHAPTER 13

Dedicated Core Networks on MME

This chapter describes the Dedicated Core Networks feature in the following sections:

Feature Summary and Revision History, on page 105
Feature Description, on page 106
How It Works, on page 109
Configuring DECOR on MME, on page 117
Monitoring and Troubleshooting, on page 122

Feature Summary and Revision History

Summary Data

Applicable Product(s) or Functional Area: MME

Applicable Platform(s): ASR 5500, VPC-DI, VPC-SI

Feature Default: Enabled - Configuration Required

Related Changes in This Release: Not Applicable

Related Documentation: Command Line Interface Reference, MME Administration Guide, Statistics and Counters Reference

Revision History

In release 21.8, the DECOR feature is enhanced to support:

Association of DCNs to a specific RAT Type under call-control-profile
Association of multiple DCN profiles (to designate a dedicated or default core network) under call-control-profile
DNS selection of S-GW / P-GW / MME / S4-SGSN / MMEGI lookup for a specified UE Usage Type or DCN-ID
The DIAMETER_AUTHENTICATION_DATA_UNAVAILABLE result code on the S6a (HSS) interface

When a UE moves from a service area where DCN is not used to an area where DCN is supported, the MME does not receive the UE-Usage-Type from the peer. In this case, the MME performs an explicit AIR towards the HSS for UE-Usage-Type lookup. (Release 21.8)

The enhancements to the DECOR feature in release 21.6 are fully qualified.

The enhancements to the DECOR feature in release 21.6 are not fully qualified and are available only for testing purposes.

In release 21.6, the DECOR feature is enhanced to support:

DNS based MMEGI selection
DCN-ID IE in the Attach/TAU Accept and GUTI Reallocation Command messages towards the UE
DCN-ID IE in the INITIAL UE MESSAGE from the eNodeB
HSS initiated DCN reselection
MME initiated DCN reselection
Network sharing with the same MMEGI for different PLMNs
Network sharing with different MMEGIs for different PLMNs
Served DCNs Items IE in the S1 Setup Response and MME Configuration Update messages towards eNodeBs

First introduced.

Feature Description

The Dedicated Core Networks (DECOR) feature allows an operator to deploy one or more dedicated core networks within a PLMN, with each core network dedicated to a specific type of subscriber. The specific dedicated core network that serves a UE is selected based on subscription information and operator

configuration, without requiring the UEs to be modified. This feature aims to route and maintain UEs in their respective DCNs.

The DECOR feature can either provide specific characteristics and functions to a UE or subscriber, or isolate a UE or subscriber; for example, Machine-to-Machine (M2M) subscribers, subscribers belonging to a specific enterprise or a separate administrative domain, and so on.

Overview

Dedicated Core Networks (DCN) enable operators to deploy multiple core networks consisting of one or more MME/SGSN and, optionally, one or more S-GW/P-GW/PCRF.

If a network deploys DCN selection based on both the LAPI indication and subscription information (MME/SGSN), then DCN selection based on the subscription information provided by the MME/SGSN overrides the selection based on the Low Access Priority Indication (LAPI) by the RAN.

A new optional subscription information parameter, UE Usage Type, stored in the HSS, is used by the serving network to select the DCNs that must serve the UE. The operator can configure DCNs and their serving UE Usage Types as required. Multiple UE Usage Types can be served by the same DCN.

The HSS provides the UE Usage Type value in the subscription information of the UE to the MME/SGSN/MSC. The serving network chooses the DCN based on the operator-configured (UE Usage Type to DCN) mapping, other locally configured operator policies, and the UE related context information available at the serving network.

Note: One UE subscription can be associated with only a single UE Usage Type, which describes its characteristics and functions.

External Interfaces

The following components are enhanced to support the DECOR feature on the MME:

DNS

S-GW or P-GW Selection

MME performs S-GW or P-GW selection from DCNs serving the UE Usage Type or DCN-ID, based on the configuration in the decor profile.
The existing service parameters of the SNAPTR records are enhanced by appending the character string "+ue-<ue usage type>" or "+ue-<dcn-id>" to the "app-protocol" name, identifying the UE usage type(s) or DCN-ID for which the record applies.

For example, the S-GW service parameter "x-3gpp-sgw:x-s11+ue-1.10.20" represents an S-GW that is part of a DCN serving UE usage types or DCN-IDs 1, 10, and 20.

Similarly, the P-GW service parameter "x-3gpp-pgw:x-s5-gtp+ue-1.10.20:x-s8-gtp+ue-1.10.20" represents a P-GW that is part of a DCN serving UE usage types or DCN-IDs 1, 10, and 20.

MMEGI Retrieval

MME uses local configuration for the MMEGI corresponding to the UE Usage Type, and DNS SNAPTR procedures.

The configuration options for static (local) or DNS or both are provided under decor-profile. If both options are enabled, then DNS is given preference. When DNS lookup fails, the static (local) value is used as a fallback.

To retrieve the MMEGI identifying the DCN serving a particular UE usage type, the SNAPTR procedure uses the Application-Unique String set to the TAI FQDN. The existing service parameters are enhanced by appending the character string "+ue-<ue usage type>" or "+ue-<dcn-id>" to the "app-protocol" name, identifying the UE usage type for which the discovery and selection procedures are performed.

For example, the MME discovers the MMEGI for a particular UE usage type or DCN-ID by using the "Service Parameters" of "x-3gpp-mme:x-s10+ue-<ue usage type>" or "x-3gpp-mme:x-s10+ue-<dcn-id>". The service parameters are enhanced to identify the UE usage type(s) for which the record applies. The MMEGI is provisioned in the host name of the records, and the MMEGI is retrieved from the host name.

MME or S4-SGSN Selection

To perform MME/S4-SGSN selection from the same DCN during handovers, the existing service parameters are enhanced by appending the character string "+ue-<ue usage type>" or "+ue-<dcn-id>" to the "app-protocol" name identifying the UE usage type. If the MME fails to find a candidate list for the specific UE Usage Type, it falls back to the legacy DNS selection procedure.

For example:
For an MME to find a candidate set of target MMEs: "x-3gpp-mme:x-s10+ue-<ue usage type>" or "x-3gpp-mme:x-s10+ue-<dcn-id>"
For an MME to find a candidate set of target SGSNs: "x-3gpp-sgsn:x-s3+ue-<ue usage type>" or "x-3gpp-sgsn:x-s3+ue-<dcn-id>"

Note: I-RAT handovers between MME and Gn-SGSN are not supported in this release.

S6a (HSS) Interface

To request the UE Usage Type from the HSS, the MME sets the "Send UE Usage Type" flag in the AIR-Flags AVP in the AIR command. The AIR flag is set only if the decor s6a ue-usage-type CLI command is enabled under MME-service or Call-Control-Profile.

The HSS may include the UE-Usage-Type AVP in the AIA response command in the case of the DIAMETER_SUCCESS or DIAMETER_AUTHENTICATION_DATA_UNAVAILABLE result code. The MME stores the UE Usage Type in the UE context for both result codes.

GTPv2 (MME or S4-SGSN)

MME supports the UE Usage Type IE in Identification Response, Forward Relocation Request, and Context Response messages. If the subscribed UE Usage Type is available, it is set to the available value; otherwise, the MME encodes the length field of this IE with 0. Similarly, the MME parses and stores the UE Usage Type value when received from the peer node.
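As an illustrative sketch only, a DNS record carrying such an enhanced service parameter might look as follows. The TAI FQDN layout, MMEGI host name, and all PLMN/TAC values are assumptions following 3GPP TS 29.303 naming conventions, not values taken from this document:

```
; Hypothetical NAPTR record: MMEGI lookup for UE usage type 1 (all values illustrative)
tac-lb01.tac-hb00.tac.epc.mnc456.mcc123.3gppnetwork.org. IN NAPTR 100 100 "a" "x-3gpp-mme:x-s10+ue-1" "" mmegi8001.mme.epc.mnc456.mcc123.3gppnetwork.org.
```

Here the service field "x-3gpp-mme:x-s10+ue-1" marks the record as applying to UE usage type 1, and the MMEGI is recovered from the "mmegi8001" label of the replacement host name, as described above.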

How It Works

MME obtains the UE Usage Type and determines the MMEGI that serves the corresponding DCN. The MME then compares this MMEGI with its own MMEGI to decide whether to reroute or process further. In case of a reroute, the request message is redirected to the appropriate MME. Refer to the ATTACH/TAU Procedure, on page 111, call flow for more information.

The following deployment scenarios are supported when DECOR is enabled on the MME:

MME can be deployed where the initial request is sent by the RAN (eNodeB) when sufficient information is not available to select a specific DCN.
MME can be deployed as a part of a DCN to serve one or more UE Usage Types.
MME can be deployed as part of a Common Core Network (CCN), or Default Core Network, to serve UE Usage Types for which a specific DCN is not available.

Note: An MME can service initial RAN requests and also be a part of a DCN or a CCN. However, a particular MME service can belong to only one DCN or CCN within a PLMN domain.

The Dedicated Core Network implements the following functionalities on the MME:

NAS Message Redirection
ATTACH, TAU, and Handover Procedures
UE Usage Type support on S6a and GTPv2 interfaces
S-GW/P-GW DNS selection procedures with UE Usage Type or DCN-ID
MME/S4-SGSN selection procedures with UE Usage Type or DCN-ID during handovers
Roaming
Network Sharing
DNS based MMEGI selection with UE-Usage-Type or DCN-ID
DCN-ID Support
HSS/MME initiated DCN reselection

When a UE moves from a service area where DCN is not used to an area where DCN is supported, the MME does not receive the UE-Usage-Type from the peer. In this case, the MME performs an explicit AIR towards the HSS for UE-Usage-Type lookup.

Flows

This section describes the call flows related to the DECOR feature.

UE Assisted Dedicated Core Network Selection, on page 110
NAS Message Redirection Procedure, on page 110

ATTACH/TAU Procedure, on page 111
HSS Initiated Dedicated Core Network Reselection, on page 114

UE Assisted Dedicated Core Network Selection

The UE assisted Dedicated Core Network Selection feature reduces the need for DECOR reroutes by selecting the correct DCN using the DCN-ID sent from the UE and the DCN-ID used by the RAN.

1. The DCN-ID is assigned to the UE by the serving PLMN and is stored in the UE per PLMN-ID. Both standardized and operator-specific values for DCN-ID are acceptable. The UE uses the PLMN-specific DCN-ID whenever it is stored for the target PLMN.
2. The HPLMN may provision the UE with a single default standardized DCN-ID that is used by the UE only if the UE has no PLMN-specific DCN-ID for the target PLMN. When a UE configuration is changed with a new default standardized DCN-ID, the UE deletes all stored PLMN-specific DCN-IDs.
3. The UE provides the DCN-ID to the RAN at registration to a new location in the network, that is, in Attach, TAU, and RAU procedures.
4. The RAN selects the serving MME based on the DCN-ID provided by the UE and the configuration in the RAN. For E-UTRAN, the eNodeB is informed of the DCNs supported by the MME during setup of the S1 connection, in the S1 Setup Response.

NAS Message Redirection Procedure

The Reroute NAS Message is used to reroute a UE from one CN node to another CN node during an Attach, TAU, or RAU procedure. It is also used by the MME/SGSN or HSS initiated Dedicated Core Network Reselection procedure.

When the first MME determines the UE Usage Type, it fetches the DCN configuration serving the UE and the corresponding MMEGI (from configuration or DNS). If the MME's MMEGI is not the same as the MMEGI of the DCN, the MME moves the UE to another MME using the NAS message redirection procedure.

The following call flow illustrates the NAS Message Redirection procedure:

Figure 7: NAS Message Redirection Procedure

Step 1: The first new MME sends a Reroute NAS Message Request to the eNodeB including the UE Usage Type and MMEGI, among other parameters.

Step 2: The RAN selects a new MME based on the MMEGI. If no valid MME can be obtained from the MMEGI, it selects an MME from the CCN or forwards the request to the same first MME.

Step 3: The second new MME determines from the MMEGI field whether the incoming request is a re-routed NAS request. If the received MMEGI belongs to the second MME, the call is serviced; otherwise, the call is rejected. No further rerouting is performed. If the UE Usage Type is received by the second MME, it is used for S-GW/P-GW selection.

ATTACH/TAU Procedure

The following figure illustrates a detailed flow of the ATTACH or TAU procedure.

Figure 8: ATTACH and TAU Procedure

Step 1: In the RRC Connection Complete message transferring the NAS Request message, the UE provides the DCN-ID, if available. If the UE has a PLMN-specific DCN-ID, the UE provides this value; if no PLMN-specific DCN-ID exists, then the default standardized DCN-ID is provided, if pre-provisioned in the UE.

The RAN node selects a DCN and a serving MME/SGSN within the network of the selected core network operator based on the DCN-ID and the configuration in the RAN node. The NAS Request message is sent to the selected node. The DCN-ID is provided by the RAN to the MME/SGSN together with the NAS Request message.

Step 2: The first new MME does not receive the MMEGI from the eNodeB. The MME determines the UE Usage Type as follows:

1. It may receive the UE Usage Type from the peer MME/S4-SGSN.
2. It may determine it from the locally available UE context information.
3. It sends an AIR message to the HSS requesting the UE Usage Type by adding the "Send UE Usage Type" flag in the message. If authentication vectors are available in the database or received from the peer, the MME does not send the Immediate-Response-Preferred flag in the AIR message.
4. It may determine it from the local configuration.

Step 3: When the UE Usage Type is available, and if the MME has to send an AIR message to the HSS to fetch authentication vectors, then the Send UE Usage Type flag is not set in the AIR message.

Step 4: The first new MME determines to handle the UE:

1. When there is a configured DCN and the first new MME belongs to the MMEGI serving the DCN.
2. It continues with the call flow.
3. The MME/SGSN sends the DCN-ID, if available, for the new DCN to the UE in the NAS Accept message. The UE updates its stored DCN-ID parameter for the serving PLMN if the DCN-ID for the serving PLMN has changed.

Step 5: The first new MME determines to reject the UE:

1. When the UE Usage Type is available but without a matching DCN.
2. The NAS message is rejected with parameters (for example, the T3346 backoff timer) such that the UE does not immediately re-initiate the NAS procedure.

Step 6: The first new MME determines to reroute the UE:

1. When there is a configured DCN and the first new MME does not belong to the MMEGI.
2. The first new MME sends a Context Acknowledge message with a cause code indicating that the procedure is not successful. The old MME/SGSN continues as if the Context Request was never received.
3. The first new MME performs the NAS redirection procedure and the request may be routed by the RAN to a second new MME.
Step 7: The second new MME determines to handle the UE or reject it; the MME does not perform another re-route. The process of handling or rejecting the UE is similar to the procedure used in the case of the first new MME. The second new MME does not fetch the UE Usage Type from the HSS; it is received either from the RAN node or the old MME.

HSS Initiated Dedicated Core Network Reselection

This procedure is used by the HSS to update (add, modify, or delete) the UE Usage Type subscription parameter in the serving node. This procedure may result in a change of the serving node of the UE.

The following call flow illustrates the HSS Initiated DCN Reselection procedure.

Figure 9: HSS Initiated Dedicated Core Network Reselection Procedure

Step 1: The HSS sends an Insert Subscriber Data Request (IMSI, Subscription Data) message to the MME. The Subscription Data includes the UE Usage Type information.

Step 2: The MME updates the stored Subscription Data and acknowledges the Insert Subscriber Data Request message by returning an Insert Subscriber Data Answer (IMSI) message to the HSS. The procedure ends if the MME/SGSN continues to serve the UE.

As per this call flow, one of the following occurs:

Steps 3 through 6 occur in case the UE is already in connected mode or the UE enters connected mode by initiating data transfer.
Step 7 occurs in case the UE is in idle mode and performs a TAU/RAU procedure.

Important: Paging is not supported in this release. If the UE is in idle mode, the MME waits until the UE becomes active.

Step 3: The UE initiates NAS connection establishment either by uplink data or by sending a TAU/RAU Request.

Step 4: The MME triggers the GUTI re-allocation procedure and includes a non-broadcast TAI.

Step 5: The MME releases RAN resources and the UE is moved to idle mode. The non-broadcast TAI triggers the UE to immediately start the TAU procedure. The MME receives the TAU Request message.

Step 6: The UE performs a TAU request. The MME receives the TAU Request message.

Step 7: The MME triggers the NAS Message Redirection procedure to redirect the UE if:

the UE Usage Type for the UE has been added or modified and it is not served by the MME
the UE Usage Type has been withdrawn from the HSS subscription data and subscriptions without a UE Usage Type are not served by the MME

Note: HSS Initiated UE Usage Type withdrawal is not supported. The addition of or change in usage type is supported.

Impact to Handover Procedures

This section describes the impact during handover procedures:

In a forward relocation request, the source MME includes the UE-Usage-Type, if available.
If an S-GW needs to be relocated, the MME applies UE-Usage-Type or DCN-ID based DNS selection, similar to the Attach/TAU procedure.
MME or S4-SGSN selection during handover considers the UE-Usage-Type or DCN-ID.

The following two scenarios apply to DCNs deployed partially or heterogeneously:

Handover from a service area where DCN is not used to an area where DCN is supported. In this case, the MME does not receive the UE-Usage-Type from the peer, and the MME performs an explicit AIR towards the HSS for UE-Usage-Type lookup.
The target MME or SGSN obtains the UE-Usage-Type information from the HSS during the subsequent TAU or RAU procedure.

If the target MME/SGSN determines that the S-GW does not support the UE-Usage-Type, the target MME/SGSN must trigger S-GW relocation as part of the handover procedure. S-GW relocation is not supported in this release.

If the target MME/SGSN does not serve the UE-Usage-Type, the handover procedure must complete successfully, and the target MME initiates the GUTI re-allocation procedure with a non-broadcast TAI to change the serving DCN of the UE.

Roaming

The MME in the visited PLMN provides an operator policy that allows it to serve a UE whose home PLMN does not support DCNs. The MME also provides operator policies that support the UE Usage Type parameter received from the HPLMN HSS.

Network Sharing

MME supports DCN selection based on the selected PLMN information received from the UE.

Limitations

The DECOR feature has the following limitations:

Only one MMEGI can be configured per DCN.
DCN deployment as part of a PLMN is not supported. The ability to configure a DCN for a set of TAI/TAC is not supported.
HSS Initiated UE Usage Type withdrawal is not supported. Only a change in UE Usage Type is supported.
DCNs can be deployed partially or heterogeneously. The target MME or SGSN obtains the UE Usage Type information from the HSS during the subsequent TAU or RAU procedure.
If the target MME/SGSN determines that the S-GW does not support the UE Usage Type, the target MME/SGSN must trigger S-GW relocation as part of the handover procedures. In this release, S-GW relocation is not supported.

Standards Compliance

The DECOR feature complies with the following standards:

3GPP Release General Packet Radio Service (GPRS) enhancements for Evolved Universal Terrestrial Radio Access Network (E-UTRAN) access
3GPP Release Digital cellular telecommunications system (Phase 2+) (GSM); Universal Mobile Telecommunications System (UMTS); LTE; Diameter applications; 3GPP specific codes and identifiers

3GPP Release Universal Mobile Telecommunications System (UMTS); LTE; 3GPP Evolved Packet System (EPS); Evolved General Packet Radio Service (GPRS) Tunnelling Protocol for Control plane (GTPv2-C); Stage 3
3GPP Release Universal Mobile Telecommunications System (UMTS); LTE; Domain Name System Procedures; Stage 3

Configuring DECOR on MME

This section describes the CLI commands to configure the DECOR feature. This feature supports the following configurations:

DCN profile with UE-Usage-Type
Static MMEGI
DNS lookup for MMEGI
PLMN
DCN-ID
Relative Capacity for the served DCN
DNS service parameters using UE Usage Type or DCN-ID for S-GW / P-GW / MME / S4-SGSN selection / MMEGI lookup using DNS
Association of DCNs to a specific RAT Type under MME service
Association of multiple DCN profiles (to designate a dedicated or default core network) under MME service
Association of DCNs to a specific RAT Type under Call-Control-Profile
Association of multiple DCN profiles (to designate a dedicated or default core network) under Call-Control-Profile
Non-broadcast TAI
Request UE-Usage-Type from HSS on S6a interface
UE-Usage-Type per IMSI/IMEI range

Configuring DECOR Profile

Use the following configuration to create and configure a DECOR profile by specifying the MMEGI hosting the DCN and the associated UE usage types using that DCN.

configure
   [ no ] decor-profile profile_name [ -noconfirm ]
      dcn-id dcn_id
      dns service-param ue-usage-type
      [ no ] mmegi { mmegi_value | dns }
      plmn-id mcc mcc_id mnc mnc_id
      served-dcn [ relative-capacity capacity ]
      [ no ] ue-usage-types num_ue_usage_types
      no { dcn-id | dns service-param | plmn-id | served-dcn }
      end

NOTES:

decor-profile profile_name: Configures the DECOR feature as deployed by the operator. A DECOR profile without any UE Usage Types configuration is treated as a Common Core Network. profile_name must be an alphanumeric string of 1 through 63 characters.

Entering the decor-profile profile_name command results in the following prompt and changes to the Decor Profile Configuration mode:
[context_name]host_name(config-decor-profile-<profile_name>)#

dns service-param ue-usage-type: Configures the service parameter to select peer nodes using UE Usage Type or DCN-ID for S-GW / P-GW / MME / S4-SGSN / MMEGI lookup using DNS.
service-param: Configures the service parameter types used for DNS peer lookup.
ue-usage-type: Configures the UE Usage Type to be used for the DNS service parameter.

For UE Usage Type based DECOR configuration:

If only UE-USAGE-TYPE is configured, DNS lookup uses UE-USAGE-TYPE.
If only DCN-ID is configured, DNS lookup uses DCN-ID without the dns service-param ue-usage-type CLI command, or UE-USAGE-TYPE with the dns service-param ue-usage-type CLI command (default profile).
If both UE-USAGE-TYPE and DCN-ID are configured, DCN-ID is used without the dns service-param ue-usage-type CLI command, or UE-USAGE-TYPE with the dns service-param ue-usage-type CLI command.
If neither UE-USAGE-TYPE nor DCN-ID is configured, DNS lookup uses UE-USAGE-TYPE (default profile).

dcn-id dcn_id: Configures the DCN identifier for the specified DECOR profile. dcn_id must be an integer from 0 to

mmegi { mmegi_value | dns }: Identifies the MME Group Identifier (MMEGI) of the configured DCN. mmegi_value must be an integer from to
dns: Enables DNS for MMEGI retrieval using UE Usage Type.
The mmegi dns command will work only when the dns peer-mme command is enabled under MME-service. plmn-id mcc mcc_id mnc mnc_id: Configures the PLMN identifier for the specified DECOR profile. This supports network sharing with different MMEGIs for different PLMNs. mcc mcc_id: Configures the mobile country code (MCC) for the specified DECOR profile. mcc_id must be a 3-digit number between 000 to 999. mnc mnc_id: Configures the mobile network code (MNC) for the specified DECOR profile. mnc_id must be a 2- or 3-digit number between 00 to

served-dcn [ relative-capacity capacity ]: Configures the MME as serving the DCN, with its relative capacity. These values are sent by the MME to the eNodeB in the S1 Setup Response to indicate the DCN-IDs served by the MME and their relative capacity.
relative-capacity capacity: Sets the relative capacity of this DCN. capacity must be an integer from 0 to 255. The default relative capacity is 255.

ue-usage-types num_ue_usage_types: Specifies the number of UE Usage Types in the dedicated core network. num_ue_usage_types is an integer from 0 to 255. A maximum of 20 UE Usage Types are supported per DCN.

no: Removes the specified DECOR parameters from the Global Configuration.

The MME sends the "MME CONFIGURATION UPDATE" message to all connected eNodeBs when a new DECOR profile is created with the served-dcn relative-capacity and dcn-id CLI commands, and whenever the served-dcn relative-capacity or dcn-id configuration changes in a DECOR profile.

Associating a DECOR Profile under MME Service

Use the following configuration to associate a DECOR profile with an MME service.

configure
   context context_name
      mme-service service_name
         [ no ] associate decor-profile profile_name access-type { all | eutran | nb-iot }
         end

NOTES:

associate: Associates a DECOR profile with an MME service.
decor-profile profile_name: Specifies the DECOR profile to associate with the MME service.
access-type: Configures the type of network access E-UTRAN, NB-IoT, or both.
all: Allows all access types.
eutran: Specifies the access type as E-UTRAN.
nb-iot: Specifies the access type as NB-IoT.
no: Removes the specified DECOR profile from the configuration.

A maximum of 16 DECOR profiles can be associated with an MME service.
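Tying the two procedures together, a minimal sketch of creating a DECOR profile and attaching it to an MME service might look as follows. The profile, context, and service names, and all numeric values, are illustrative only:

```
configure
   decor-profile dcn-m2m -noconfirm
      ue-usage-types 1
      dcn-id 10
      mmegi 32769
      served-dcn relative-capacity 200
      end
configure
   context ingress
      mme-service mme1
         associate decor-profile dcn-m2m access-type eutran
         end
```

With served-dcn configured, the MME advertises the DCN-ID and its relative capacity to connected eNodeBs in the S1 Setup Response, as described above.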
Associating a DECOR Profile under Call Control Profile

Use the following configuration to associate a DECOR profile under a call control profile.

configure
   call-control-profile profile_name
      [ remove ] associate decor-profile profile_name [ access-type { all | eutran | nb-iot } ]
      end

NOTES:

associate: Associates a DECOR profile under a call control profile.
decor-profile profile_name: Specifies the DECOR profile to associate with the call control profile. profile_name must be an alphanumeric string of 1 through 63 characters.
access-type: Configures the type of network access for the DECOR profile E-UTRAN, NB-IoT, or both.
all: Allows all access types.
eutran: Specifies the access type as E-UTRAN.
nb-iot: Specifies the access type as NB-IoT.
remove: Removes the specified DECOR profile from the configuration.

A maximum of 16 DECOR profile associations can be configured for the call control profile.

Configuring UE Usage Type over S6a Interface under MME Service

Use the following configuration to advertise or request the UE Usage Type over the S6a interface.

configure
   context context_name
      mme-service service_name
         [ no ] decor s6a ue-usage-type
         end

NOTES:

decor: Specifies the DECOR configuration.
s6a: Configures the S6a interface.
ue-usage-type: Specifies that the UE Usage Type is to be requested in the Authentication-Information-Request message over the S6a interface.
no: Disables the specified configuration.

Configuring UE Usage Type over S6a Interface under Call Control Profile

Use the following configuration to request or suppress UE Usage Type requests over the S6a interface at the call control profile level.

configure
   call-control-profile profile_name
      decor s6a ue-usage-type [ suppress ]
      remove decor s6a ue-usage-type
      end

NOTES:

decor: Specifies the DECOR configuration.
s6a: Enables the DECOR S6a configuration.
ue-usage-type: Requests the UE Usage Type in the S6a Authentication-Information-Request message.
suppress: Suppresses sending the UE Usage Type request in the S6a Authentication-Information-Request message.
remove: Removes the DECOR configuration.

The configuration under the call control profile overrides the MME service configuration.

Configuring UE Usage Type under Call Control Profile

Use the following configuration to locally configure the UE Usage Type for UEs matching the call control profile criteria.

configure
   call-control-profile profile_name
      decor ue-usage-type usage_type_value
      remove decor ue-usage-type
      end

NOTES:

decor: Specifies the DECOR configuration.
ue-usage-type usage_type_value: Configures a UE Usage Type locally. usage_type_value must be an integer from 0 to 255.
remove: Removes the specified configuration.

Configuring Non-Broadcast TAI

Use the following configuration to configure a non-broadcast TAI. This configuration is added in support of HSS Initiated Dedicated Core Network Reselection. When the HSS sends an ISDR with a UE-Usage-Type value different from the one already used by the subscriber, and the MME decides to move that UE to a new DCN, the MME sends the GUTI Reallocation Command with an unchanged GUTI and a non-broadcast TAI.

configure
   context context_name
      mme-service service_name
         tai non-broadcast mcc mcc_id mnc mnc_id tac tac_id
         no tai non-broadcast
         end

NOTES:

tai non-broadcast mcc mcc_id mnc mnc_id tac tac_id: Specifies a Tracking Area Identity (TAI) that is not assigned to any area.
mcc mcc_id: Configures the mobile country code (MCC). mcc_id must be a 3-digit number between 000 to 999.

mnc mnc_id: Configures the mobile network code (MNC). mnc_id must be a 2- or 3-digit number between 00 to 999.
tac tac_id: Configures the tracking area code (TAC). tac_id must be an integer from 0 to
no: Deletes the specified configuration.

Monitoring and Troubleshooting

This section provides information on the show commands available to support DECOR on MME.

Show Commands and/or Outputs

This section provides information regarding show commands and/or their outputs in support of the DECOR feature.

show decor-profile full all

The output of this command includes the following information:

Decor Profile Name: Displays the configured decor-profile name.
UE Usage Types: Displays the configured UE usage types.
MMEGI: Displays the MMEGI value.
DNS: Indicates whether DNS is enabled or disabled.
DCN Id: Displays the configured DCN identifier. Displays "Not Defined" if not configured.
PLMN Id: Displays the configured PLMN identifier. Displays "Not Defined" if not configured.
Serving DCN: Indicates whether the MME is serving the DCN. Displays "Not Defined" if not configured.
Relative capacity: Indicates the configured relative capacity.
DNS Service Param: Displays the configured DNS service parameter.

show mme-service name <mme_svc_name>

The output of this command includes the following information:

Non-Broadcast TAI: Displays the configured values for MCC, MNC, and TAC.

show mme-service session full all

The output of this command includes the following DECOR information:

DECOR Information:
   UE Usage type
   DCN Id

125 Dedicated Core Networks on MME show mme-service statistics decor decor-profile <decor_profile_name> show mme-service statistics decor decor-profile <decor_profile_name> This show command displays the DECOR statistics for a specified DECOR profile. The DECOR profile level statistics are pegged only if a DECOR profile is configured. The output of this command includes the following information: Decor Statistics Attached Calls Initial Requests ATTACH Accepts Reroutes Rejects TAU Accepts Reroutes Rejects Rerouted Requests ATTACH Accepts Rejects TAU Accepts Rejects UE-Usage-Type Source HSS UE Context Peer MME Peer SGSN Config enb GUTI Reallocation Cmd due to UE-Usage-Type Change Attempted 123

126 show mme-service statistics decor decor-profile <decor_profile_name> Dedicated Core Networks on MME Success Failures Handover from service area DCN Non DCN Explicit AIR Attach Inbound relocation Inbound relocation using TAU procedure ISDR UE-Usage-Type Change MMEGI Selection DNS Local Failure Node Selection SGW DNS Common Dedicated SGW Local Config Common PGW DNS Common Dedicated PGW Local Config Common MME DNS Common Dedicated MME Local Config Common 124

127 Dedicated Core Networks on MME show mme-service statistics decor SGSN DNS Common Dedicated SGSN Local Config Common show mme-service statistics decor The output of this command includes the following information: Decor Statistics Attached Calls Initial Requests ATTACH Accepts Reroutes Rejects TAU Accepts Reroutes Rejects Rerouted Requests ATTACH Accepts Rejects TAU Accepts Rejects UE-Usage-Type Source HSS UE Context Peer MME 125

    Peer SGSN
    Config
    enodeb
  GUTI Reallocation Cmd due to UE-Usage-Type Change
    Attempted
    Success
    Failures
  Handover from service area
    DCN
    Non DCN
  Explicit AIR
    Attach
    Inbound relocation
    Inbound relocation using TAU procedure
  ISDR UE-Usage-Type Change
  MMEGI Selection
    DNS
    Local
    Failure
  Node Selection
    SGW DNS
      Common
      Dedicated
    SGW Local Config
      Common
    PGW DNS
      Common
      Dedicated
    PGW Local Config
      Common

    MME DNS
      Common
      Dedicated
    MME Local Config
      Common
    SGSN DNS
      Common
      Dedicated
    SGSN Local Config
      Common

show mme-service statistics

The output of this command includes the following information at an MME service level:

S1AP Statistics
  Reroute NAS Requests

Decor Statistics
  Attached Calls
  Initial Requests
    ATTACH
      Accepts
      Reroutes
      Rejects
    TAU
      Accepts
      Reroutes
      Rejects
  Rerouted Requests
    ATTACH
      Accepts
      Rejects

    TAU
      Accepts
      Rejects
  UE-Usage-Type Source
    HSS
    UE Context
    Peer MME
    Peer SGSN
    Config
    enodeb
  GUTI Reallocation Cmd due to UE-Usage-Type Change
    Attempted
    Success
    Failures
  Handover from service area
    DCN
    Non DCN
  Explicit AIR
    Attach
    Inbound relocation
    Inbound relocation using TAU procedure
  ISDR UE-Usage-Type Change
  MMEGI Selection
    DNS
    Local
    Failure
  Node Selection
    SGW DNS
      Common
      Dedicated

    SGW Local Config
      Common
    PGW DNS
      Common
      Dedicated
    PGW Local Config
      Common
    MME DNS
      Common
      Dedicated
    MME Local Config
      Common
    SGSN DNS
      Common
      Dedicated
    SGSN Local Config
      Common

show mme-service statistics recovered-values

The output of this command includes the following information:

Decor Statistics:
  Initial Requests
    ATTACH
      Accepts
      Reroutes
      Rejects
    TAU
      Accepts
      Reroutes
      Rejects

  Rerouted Requests
    ATTACH
      Accepts
      Rejects
    TAU
      Accepts
      Rejects

Bulk Statistics

The MME schema and MME Decor schema include the supported bulk statistics for the DECOR feature.

MME Schema

The following bulk statistics are added in the MME schema:

mme-decor-attached-subscriber: Indicates the number of MME sessions attached that have an associated UE usage type.

mme-decor-initial-attach-req-accept: Indicates the total number of Initial Attach Requests accepted by the MME, which functions as a DCN.

mme-decor-initial-attach-req-reroute: Indicates the total number of Initial Attach Requests which are rerouted by the MME, which functions as a DCN.

mme-decor-initial-attach-req-reject: Indicates the total number of Initial Attach Rejects due to No Reroute data and not handled by the MME, which functions as a DCN.

mme-decor-reroute-attach-req-accept: Indicates the total number of Rerouted Attach Requests which are accepted by the MME, which functions as a DCN.

mme-decor-reroute-attach-req-reject: Indicates the total number of Rerouted Attach Requests which are rejected by the MME, which functions as a DCN.

mme-decor-initial-tau-req-accept: Indicates the total number of Initial TAU Requests accepted by the MME, which functions as a DCN.

mme-decor-initial-tau-req-reroute: Indicates the total number of Initial TAU Requests which are rerouted by the MME, which functions as a DCN.

mme-decor-initial-tau-req-reject: Indicates the total number of Initial TAU Rejects due to No Reroute data and not handled by the MME, which functions as a DCN.

mme-decor-reroute-tau-req-accept: Indicates the total number of Rerouted TAU Requests which are accepted by the MME, which functions as a DCN.

mme-decor-reroute-tau-req-reject: Indicates the total number of Rerouted TAU Requests which are rejected by the MME, which functions as a DCN.

mme-decor-ue-usage-type-src-hss: Indicates the number of MME subscriber sessions where the UE usage type was obtained from the HSS/AuC.

mme-decor-ue-usage-type-src-ue-ctxt: Indicates the number of MME subscriber sessions where the UE usage type was obtained from the MME DB record.

mme-decor-ue-usage-type-src-peer-mme: Indicates the number of MME subscriber sessions where the UE usage type was obtained from the peer MME as part of handover.

mme-decor-ue-usage-type-src-peer-sgsn: Indicates the number of MME subscriber sessions where the UE usage type was obtained from the peer SGSN as part of handover.

mme-decor-ue-usage-type-src-cfg: Indicates the number of MME subscriber sessions where the UE usage type was obtained from the local configuration.

mme-decor-ue-usage-type-src-enb: Indicates the number of MME subscriber sessions where the UE usage type was obtained from the eNodeB, in the S1 message as part of reroute.

mme-decor-sgw-sel-dns-common: Indicates the number of times S-GW DNS selection procedures were performed with DNS RR excluding UE usage type. This counter increments only when the DNS RR with UE usage type is absent.

mme-decor-sgw-sel-dns-dedicated: Indicates the number of times S-GW DNS selection procedures were performed with DNS RR including UE usage type parameter(s). This counter increments only when the DNS RR with UE usage type is present.

mme-decor-sgw-sel-local-cfg-common: Indicates the number of times S-GW selection procedures were performed with a locally configured S-GW address, without considering the UE usage type.

mme-decor-pgw-sel-dns-common: Indicates the number of times P-GW DNS selection procedures were performed with DNS RR excluding UE usage type. This counter increments only when the DNS RR with UE usage type is absent.

mme-decor-pgw-sel-dns-dedicated: Indicates the number of times P-GW DNS selection procedures were performed with DNS RR including UE usage type parameter(s). This counter increments only when the DNS RR with UE usage type is present.

mme-decor-pgw-sel-local-cfg-common: Indicates the number of times P-GW selection procedures were performed with a locally configured P-GW address without considering the UE usage type.

mme-decor-mme-sel-dns-common: Indicates the number of times MME DNS selection procedures were performed with DNS RR excluding UE usage type. This counter increments only when the DNS RR with UE usage type is absent.

mme-decor-mme-sel-dns-dedicated: Indicates the number of times MME DNS selection procedures were performed with DNS RR including UE usage type parameter(s). This counter increments only when the DNS RR with UE usage type is present.

mme-decor-mme-sel-local-cfg-common: Indicates the number of times MME selection procedures were performed with a locally configured MME address without considering the UE usage type.

mme-decor-sgsn-sel-dns-common: Indicates the number of times SGSN DNS selection procedures were performed with DNS RR excluding UE usage type. This counter increments only when the DNS RR with UE usage type is absent.

mme-decor-sgsn-sel-dns-dedicated: Indicates the number of times SGSN DNS selection procedures were performed with DNS RR including UE usage type parameter(s). This counter increments only when the DNS RR with UE usage type is present.

mme-decor-handover-srv-area-dcn: Indicates the total number of inbound handovers from a service area where DCN is supported. This counter increments for every inbound handover from a DCN service area.

mme-decor-handover-srv-area-non-dcn: Indicates the total number of inbound handovers from a service area where DCN is not supported. This counter increments for every inbound handover from a non-DCN service area.

mme-decor-explicit-air-attach: Indicates the number of explicit AIR messages during Attach. This counter increments when the MME triggers an explicit AIR during Attach.

mme-decor-explicit-air-in-reallocation: Indicates the number of explicit AIR messages during inbound relocation. This counter increments when the MME triggers an explicit AIR during inbound relocation.

mme-decor-explicit-air-tau-in-reallocation: Indicates the number of explicit AIR messages during inbound relocation using TAU. This counter increments when the MME triggers an explicit AIR during inbound relocation using TAU.

mme-decor-sgsn-sel-local-cfg-common: Indicates the number of times SGSN selection procedures were performed with a locally configured SGSN address without considering the UE usage type.

s1ap-transdata-reroutenasreq: Indicates the number of S1 Reroute NAS Request messages sent by the MME.

mme-decor-mmegi-sel-dns: Indicates the total number of times MMEGI is selected through DNS from a dedicated pool (DNS records having a matching UE Usage Type).

mme-decor-mmegi-sel-local-cfg: Indicates the total number of times MMEGI is selected from the local configuration.

mme-decor-mmegi-sel-fail: Indicates the total number of times MMEGI selection failed.

mme-decor-guti-reallocation-attempted: This proprietary counter tracks the number of GUTI Reallocation procedures attempted due to UE-Usage-Type Change from HSS through ISDR, or after connected mode handover when the UE-Usage-Type is not served by the MME (NAS GUTI Reallocation Command message was sent by the MME).

mme-decor-guti-reallocation-success: Tracks the number of successful GUTI Reallocation procedures.

mme-decor-guti-reallocation-failures: Tracks the number of GUTI Reallocation procedure failures.

mme-decor-isdr-ue-usage-type-change: Tracks the number of ISDR messages received with a different UE-Usage-Type from the HSS.

recovered-mme-decor-initial-attach-req-accept: Indicates the total number of Initial Attach Requests accepted by the MME, which functions as a DCN.

recovered-mme-decor-initial-attach-req-reroute: Indicates the total number of Initial Attach Requests which are rerouted by the MME, which functions as a DCN.

recovered-mme-decor-initial-attach-req-reject: Indicates the total number of Initial Attach Rejects without the reroute data and that are not handled by the MME, which functions as a DCN.

recovered-mme-decor-reroute-attach-req-accept: Indicates the total number of Rerouted Attach Requests which are accepted by the MME, which functions as a DCN.

recovered-mme-decor-reroute-attach-req-reject: Indicates the total number of Rerouted Attach Requests which are rejected by the MME, which functions as a DCN.

recovered-mme-decor-initial-tau-req-accept: Indicates the total number of Initial TAU Requests accepted by the MME, which functions as a DCN.

recovered-mme-decor-initial-tau-req-reroute: Indicates the total number of Initial TAU Requests which are rerouted by the MME, which functions as a DCN.

recovered-mme-decor-initial-tau-req-reject: Indicates the total number of Initial TAU Rejects due to No Reroute data and not handled by the MME, which functions as a DCN.

recovered-mme-decor-reroute-tau-req-accept: Indicates the total number of Rerouted TAU Requests which are accepted by the MME, which functions as a DCN.

recovered-mme-decor-reroute-tau-req-reject: Indicates the total number of Rerouted TAU Requests which are rejected by the MME, which functions as a DCN.

MME Decor Schema

The following bulk statistics for a specific decor-profile are added in the MME Decor schema:

mme-decor-profile-name: Indicates the name of the DECOR profile.

mme-decor-profile-attached-subscriber: Indicates the total number of subscribers on the MME which is acting as a DCN.

mme-decor-profile-initial-attach-req-accept: Indicates the total number of Initial Attach Requests accepted by the MME that is acting as a DCN.

mme-decor-profile-initial-attach-req-reroute: Indicates the total number of Initial Attach Requests which are rerouted by the MME that is acting as a DCN.

mme-decor-profile-initial-attach-req-reject: Indicates the total number of Initial Attach Rejects due to No Reroute Data and not handled by the MME that is acting as a DCN.

mme-decor-profile-reroute-attach-req-accept: Indicates the total number of Rerouted Attach Requests which are accepted by the MME that is acting as a DCN.

mme-decor-profile-reroute-attach-req-reject: Indicates the total number of Rerouted Attach Requests which are rejected by the MME that is acting as a DCN.

mme-decor-profile-initial-tau-req-accept: Indicates the total number of Initial TAU Requests accepted by the MME that is acting as a DCN.

mme-decor-profile-initial-tau-req-reroute: Indicates the total number of Initial TAU Requests which are rerouted by the MME that is acting as a DCN.

mme-decor-profile-initial-tau-req-reject: Indicates the total number of Initial TAU Rejects due to No Reroute Data and not handled by the MME that is acting as a DCN.

mme-decor-profile-reroute-tau-req-accept: Indicates the total number of Rerouted TAU Requests which are accepted by the MME that is acting as a DCN.

mme-decor-profile-reroute-tau-req-reject: Indicates the total number of Rerouted TAU Requests which are rejected by the MME that is acting as a DCN.

mme-decor-profile-ue-usage-type-src-hss: Indicates the total number of times the UE Usage Type is received from the HSS and used by the MME.

mme-decor-profile-ue-usage-type-src-ue-ctxt: Indicates the total number of times the UE Usage Type is fetched from the local DB record and used by the MME.

mme-decor-profile-ue-usage-type-src-peer-mme: Indicates the total number of times the UE Usage Type is received from the peer MME and used by the MME.

mme-decor-profile-ue-usage-type-src-peer-sgsn: Indicates the total number of times the UE Usage Type is received from the peer SGSN and used by the MME.

mme-decor-profile-ue-usage-type-src-cfg: Indicates the total number of times the UE Usage Type is fetched from the local configuration and used by the MME.

mme-decor-profile-ue-usage-type-src-enb: Indicates the total number of times the UE Usage Type is received from the eNodeB and used by the MME.

mme-decor-profile-sgw-sel-dns-common: Indicates the total number of times the S-GW is selected through DNS from a common pool (DNS records without UE Usage Type).

mme-decor-profile-sgw-sel-dns-dedicated: Indicates the total number of times the S-GW is selected through DNS from a dedicated pool (DNS records with matching UE Usage Type).

mme-decor-profile-sgw-sel-local-cfg-common: Indicates the total number of times the S-GW is selected from the local configuration without UE Usage Type.

mme-decor-profile-pgw-sel-dns-common: Indicates the total number of times the P-GW is selected through DNS from a common pool (DNS records without UE Usage Type).

mme-decor-profile-pgw-sel-dns-dedicated: Indicates the total number of times the P-GW is selected through DNS from a dedicated pool (DNS records with matching UE Usage Type).

mme-decor-profile-pgw-sel-local-cfg-common: Indicates the total number of times the P-GW is selected from the local configuration without UE Usage Type.

mme-decor-profile-mme-sel-dns-common: Indicates the total number of times the MME is selected through DNS from a common pool (DNS records without UE Usage Type).

mme-decor-profile-mme-sel-dns-dedicated: Indicates the total number of times the MME is selected through DNS from a dedicated pool (DNS records with matching UE Usage Type).

mme-decor-profile-mme-sel-local-cfg-common: Indicates the total number of times the MME is selected from the local configuration without UE Usage Type.

mme-decor-profile-sgsn-sel-dns-common: Indicates the total number of times the SGSN is selected through DNS from a common pool (DNS records without UE Usage Type).

mme-decor-profile-sgsn-sel-dns-dedicated: Indicates the total number of times the SGSN is selected through DNS from a dedicated pool (DNS records with matching UE Usage Type).

mme-decor-profile-sgsn-sel-local-cfg-common: Indicates the total number of times the SGSN is selected from the local configuration without UE Usage Type.

mme-decor-profile-mmegi-sel-dns: Indicates the total number of times the MMEGI is selected through DNS from a dedicated pool (DNS records with matching UE Usage Type).

mme-decor-profile-mmegi-sel-local-cfg: Indicates the total number of times the MMEGI is selected from the local configuration.

mme-decor-profile-mmegi-sel-fail: Indicates the total number of times MMEGI selection failed.

mme-decor-profile-guti-reallocation-attempted: Indicates the number of GUTI Reallocation procedures attempted due to UE-Usage-Type Change from HSS through ISDR, or after connected mode handover when the UE-Usage-Type is not served by this MME (NAS GUTI Reallocation Command message was sent by the MME).

mme-decor-profile-guti-reallocation-success: Indicates the number of successful GUTI Reallocation procedures.

mme-decor-profile-guti-reallocation-failures: Indicates the number of failed GUTI Reallocation procedures.

mme-decor-profile-isdr-ue-usage-type-change: Indicates the number of ISDR messages received with a different UE-Usage-Type from the HSS.

mme-decor-profile-explicit-air-attach: Indicates the number of explicit AIR messages during Attach.

mme-decor-profile-explicit-air-in-relocation: Indicates the number of explicit AIR messages during inbound relocation.

mme-decor-profile-explicit-air-tau-in-relocation: Indicates the number of explicit AIR messages during inbound relocation using TAU.

mme-decor-profile-handover-srv-area-dcn: Indicates the total number of inbound handovers from a service area where DCN is supported.

mme-decor-profile-handover-srv-area-non-dcn: Indicates the total number of inbound handovers from a service area where DCN is not supported.
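The counters above are collected through the standard bulk statistics configuration. A minimal sketch follows; the schema record names (decor_mme, decor_prof), the file number, the sample interval, and the mme-decor schema keyword for the MME Decor schema are assumptions to verify against the Statistics and Counters Reference for your release:

```
configure
  bulkstats collection
  bulkstats mode
    sample-interval 15
    file 1
      mme schema decor_mme format "attached:%mme-decor-attached-subscriber%,attach-acc:%mme-decor-initial-attach-req-accept%"
      mme-decor schema decor_prof format "profile:%mme-decor-profile-name%,attached:%mme-decor-profile-attached-subscriber%"
      end
```

Each %variable% in the format string is replaced with the corresponding counter value at every sample interval.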


CHAPTER 14

Diameter Proxy Consolidation

This chapter describes the following topics:

Feature Summary and Revision History, on page 139
Feature Changes, on page 140
Command Changes, on page 141
Performance Indicator Changes, on page 141

Feature Summary and Revision History

Summary Data

Applicable Product(s) or Functional Area: All products that use Diameter Proxy
Applicable Platform(s): VPC-DI
Default Setting: Disabled - Configuration Required
Related Changes in This Release: Not Applicable
Related Documentation: Command Line Interface Reference, P-GW Administration Guide, SAE-GW Administration Guide

Revision History

First introduced.

Feature Changes

The operator can control the number of Diameter proxy connections towards a peer in a VPC-DI/SCALE setup. A Diameter proxy runs on a card, serving all session managers and AAA managers. In a VPC-DI/SCALE environment, a single Diameter proxy is spawned per SF card. As support for more SF cards is added in the VPC-DI/SCALE setup, the number of Diameter proxies increases proportionally, in turn increasing the number of connections towards the Diameter peers. To limit the number of Diameter proxies spawned, a new keyword max is added to the require diameter-proxy command in the Global Configuration mode.

The VPC-DI/SCALE setup supports the following CLI-controlled modes of operation for the Diameter proxy:

Single mode (existing mode): In this mode of operation, only one Diameter proxy operates for the entire setup, serving all SF cards.

Multiple mode (existing mode): In this mode of operation, each SF card supports only one Diameter proxy at any point of time, serving all session managers and AAA managers on that card.

Max mode: In this new mode of operation, each SF card supports a maximum of one Diameter proxy. Here, one Diameter proxy serves session managers and AAA managers on multiple SF cards. SF cards that do not have a Diameter proxy running on them are mapped to SF cards that have Diameter proxies running on them.

The Max mode of operation includes the following functionalities:

This mode operates only in a VPC-DI/SCALE environment.

The total number of Diameter proxies spawned in the system is CLI controlled. Diameter proxies are spawned with a maximum of one Diameter proxy per VM. For SF cards where a Diameter proxy is not spawned, a Diameter proxy is allocated in a round-robin process.

An increase in Diameter proxies during run-time is applied under the following conditions:

Only for new VMs that have started during run-time.
In an existing VM, the number of Diameter proxies is increased only when inactive VMs are activated during run-time. For example, in a 16-VM setup that has 12 SF VMs active (for session managers), the configuration applies only 6 Diameter proxies. Increasing the number of Diameter proxies is then possible only if the other inactive VMs (4 VMs) become active. Until then, no new Diameter proxy is spawned.

There is no reduction in the number of Diameter proxies spawned after the initial configuration, unless the system is rebooted.

The Diameter proxy functionality continues with its existing behavior. Each Diameter proxy continues to support all applications, such as Gx, Gy, Rf, S6b, and so on.

Limitations

During unplanned card migration, if there is no standby card available and a Diameter proxy is down, then all cards associated with this Diameter proxy instance can experience session loss.

Command Changes

This section describes the CLI configuration required to enable the Diameter Proxy Consolidation feature.

require diameter-proxy max

Use the following CLI to configure the maximum number of Diameter proxies to be spawned in a system:

configure
  require diameter-proxy max count
  [ no ] require diameter-proxy
  end

NOTES:

require diameter-proxy: This command enables the Diameter proxy mode.

max: Configures the maximum number of Diameter proxies to be spawned in the system. count specifies the number of Diameter proxies to be spawned in the system. The count value is an integer ranging from 1 to 48.

no: Disables the Diameter proxy mode.

In the above configuration, if the count value is specified as 1, only one Diameter proxy is spawned in the VPC-DI/SCALE environment for all SF cards. A single Diameter proxy is started on the active non-demux card. Spawning one Diameter proxy in this configuration differs from the require diameter-proxy single configuration, which spawns a Diameter proxy on a demux card. A count value of 48 is similar to the require diameter-proxy multiple configuration.

Performance Indicator Changes

This section provides information regarding show commands and/or their outputs in support of this feature.

show diameter diactrl proxy-vm-map

If the Max mode is configured and the Diameter proxy to VM mapping is available, the following new fields are displayed:

diamproxy instance: Indicates the Diameter proxy instance.
Started on VM: Indicates the VM on which the Diameter proxy instance exists.
VM served: Indicates the number of VMs served by a particular Diameter proxy instance.

If the Max mode is configured and the Diameter proxy to VM mapping is not available, the following message is displayed:

Error: no valid diameter proxy to VM mapping present in diactrl

If the Max mode is not configured, the following message is displayed:

Info: proxy-vm-map CLI is valid only for max mode configuration of diamproxy
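A minimal sketch of enabling Max mode and checking the resulting proxy-to-VM mapping; the count value 6 is an arbitrary example chosen for illustration:

```
configure
  require diameter-proxy max 6
  end
show diameter diactrl proxy-vm-map
```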

CHAPTER 15

DI-Network RSS Encryption

Feature Summary and Revision History, on page 143
Feature Changes, on page 144
Command Changes, on page 144

Feature Summary and Revision History

Summary Data

Applicable Product(s) or Functional Area: All
Applicable Platform(s): VPC-DI
Feature Default: Disabled - Configuration Required
Related Changes in This Release: Not applicable
Related Documentation: VPC-DI System Administration Guide

Revision History

Important: Revision history details are not provided for features introduced before releases 21.2 and N5.1.

The default setting for Distributed Instance Network (DI-network) RSS traffic is now disabled and can be enabled with a new CLI command. In prior releases, this functionality was automatically enabled and was not configurable. (21.8)

First introduced: Pre 21.2

Feature Changes

Previous Behavior: In releases prior to 21.8, Receive Side Scaling (RSS) was enabled by default for all traffic on the internal Distributed Instance network (DI-network) for virtualized StarOS instances.

New Behavior: In Release 21.8 and later, RSS is disabled by default and can be enabled via a new CLI command.

Command Changes

iftask di-net-encrypt-rss

This new CLI command has been added to control the enablement of RSS on encrypted traffic on the DI-network.

configure
  [ no ] iftask di-net-encrypt-rss
  end

Note: The default setting is disabled.

CHAPTER 16

ESC Event Integration with Ultra M Manager

Feature Summary and Revision History, on page 145
Feature Changes, on page 145

Feature Summary and Revision History

Summary Data

Applicable Product(s) or Functional Area: All
Applicable Platform(s): UGP
Feature Default: Enabled - Always-on
Related Features in this Release: Not Applicable
Related Documentation: Ultra Gateway Platform System Administration Guide, Ultra M Solutions Guide, Ultra Services Platform Deployment Automation Guide

Revision History

First introduced: N6.0
Though this feature was introduced in N6.0, it was not fully qualified. It is now fully qualified as of this release: 6.2

Feature Changes

Though introduced in N6.0, this feature was not fully qualified in that release. It was made available only for testing purposes.

In 6.2, this feature has been fully qualified for use in the appropriate deployment scenarios. Refer to the Ultra Services Platform Deployment Automation Guide for more information.

CHAPTER 17

Event Logging Support for VPP

This chapter describes the following topics:

Feature Summary and Revision History, on page 147
Feature Changes, on page 148
Command Changes, on page 148

Feature Summary and Revision History

Summary Data

Applicable Product(s) or Functional Area: All
Applicable Platform(s): ASR 5500, VPC-SI
Feature Default: Disabled - Configuration Required
Related Changes in This Release: Not Applicable
Related Documentation: Command Line Interface Reference, VPC-SI System Administration Guide

Revision History

Important: Revision history details are not provided for features introduced before releases 21.2 and N5.1.

First introduced.

Feature Changes

An event logging facility for Vector Packet Processing (VPP) is added in this release.

Previous Behavior: The VPP event logging facility was not supported in previous releases.

New Behavior: The VPP event logging facility is supported in this release.

Customer Impact: The VPP event logging facility improves maintainability, debugging, and ease of use.

Command Changes

logging filter (Exec Mode)

The logging filter CLI command in the Exec mode is enhanced to support the Vector Packet Processing (VPP) event logging facility.

logging filter active facility facility level severity_level [ critical-info | no-critical-info ]

Notes:

facility facility: Specifies the facility to modify the filtering of logged information. The vpp event logging facility is added in this release.

level severity_level: Specifies the level of information to be logged from the following list, which is ordered from highest to lowest:

  critical - reports critical errors
  error - reports error notifications
  warning - reports warning messages
  unusual - reports unusual errors
  info - reports informational messages
  trace - reports trace information
  debug - reports debug information

critical-info: Specifies that events with a category attribute of critical information are to be displayed. Examples of these types of events can be seen at bootup when system processes and tasks are being initiated. This is the default setting.

no-critical-info: Specifies that events with a category attribute of critical information are not to be displayed.
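As a usage sketch, enabling debug-level filtering for the vpp facility from the Exec mode (the facility and level values are taken from the lists above; debug is an arbitrary example level):

```
logging filter active facility vpp level debug
```

This affects only the current CLI session's active logging; the Global Configuration form below makes the filter part of the running configuration.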

logging filter (Global Configuration Mode)

The logging filter CLI command in the Global Configuration mode is enhanced to support the Vector Packet Processing (VPP) event logging facility.

configure
  logging filter runtime facility facility level severity_level [ critical-info | no-critical-info ]
  end

Notes:

facility facility: Specifies the facility to modify the filtering of logged information. The vpp event logging facility is added in this release.

level severity_level: Specifies the level of information to be logged from the following list, which is ordered from highest to lowest:

  critical - reports critical errors
  error - reports error notifications
  warning - reports warning messages
  unusual - reports unusual errors
  info - reports informational messages
  trace - reports trace information
  debug - reports debug information

critical-info: Specifies that events with a category attribute of critical information are to be displayed. Examples of these types of events can be seen at bootup when system processes and tasks are being initiated. This is the default setting.

no-critical-info: Specifies that events with a category attribute of critical information are not to be displayed.
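A minimal sketch of the persistent form of the same filter (error is an arbitrary example level):

```
configure
  logging filter runtime facility vpp level error critical-info
  end
```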


CHAPTER 18

Hash-Value Support in Header Enrichment

This chapter describes the following topics:

Feature Summary and Revision History, on page 151
Feature Changes, on page 152
Command Changes, on page 152

Feature Summary and Revision History

Summary Data

Applicable Product(s) or Functional Area: P-GW
Applicable Platform(s): ASR 5500, VPC-DI, VPC-SI
Default Setting: Disabled - Configuration Required
Related Changes in This Release: Not Applicable
Related Documentation: P-GW Administration Guide, Command Line Interface Reference

Revision History

Important: Revision history details are not provided for features introduced before releases 21.2 and N5.1.

Hash-Value strings are implemented as a part of the Header Enrichment feature.

First introduced: Pre 21.2

Feature Changes

Hash-Value strings are implemented as a part of the Header Enrichment feature. The P-GW is enhanced to receive and store hash values received from the PCRF for each subscriber. The stored hash value is inserted in the HTTP/WSP header, making it available for operators to handle subscriber profiles.

Some mobile advertisement platforms generate a hashed string based on a subscriber's MSISDN value. When a hashed string is sent to content providers, they identify the subscriber's profile information and, in turn, insert advertisements on the subscriber's browser based on the user profile.

To receive hash values from the PCRF over the Gx interface, a new AVP, Hash-Value, with an octet-string data type is implemented. The AVP supports a maximum length of 80 characters. The P-GW ignores the hashed string if it exceeds the maximum length.

The hash value received from the PCRF is inserted in the HTTP/WSP header only if HTTP Header Enrichment is enabled for a subscriber. The X-Header field is used to insert a hash value in the HTTP/WSP headers. The hash value can be encrypted based on the existing encryption mechanism of X-Header fields. These hash values (encrypted or unencrypted) are inserted in the HTTP/WSP header based on the x-header format configured under the Charging Action configuration.

Note: A hash value is check-pointed as a part of the subscriber's session information. It is check-pointed immediately, once received from the PCRF.

Command Changes

gx hash-value

Use the following configuration to receive the hash-value string over the Gx interface:

configure
  require active-charging
  active-charging service service_name
    xheader-format format_name
      insert xheader_field_name variable gx hash-value
      end

NOTES:

insert: This command allows you to configure the x-header fields to be inserted in HTTP/WSP GET and POST request packets.
The xheader_field_name specifies the x-header field name to be inserted in the packets. It must be an alphanumeric string of 1 through 31 characters. variable: Specifies the name of the x-header field whose value must be inserted in the packets. 152

gx: Specifies the Gx interface.
hash-value: Specifies the hash-value string received in the Hash-Value AVP.
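The enrichment behavior described above (insert the PCRF-supplied hash value as an x-header field, ignore over-length values) can be sketched in Python. This is an illustrative model only, assuming a hypothetical field name and helper function; it is not StarOS internals.

```python
# Illustrative sketch only: models how a PCRF-supplied hash value might be
# inserted as an x-header during HTTP Header Enrichment. Names are
# hypothetical, not StarOS code.

MAX_HASH_LEN = 80  # the Hash-Value AVP supports at most 80 characters

def enrich_http_request(raw_request: bytes, xheader_name: str, hash_value: str) -> bytes:
    """Insert 'xheader_name: hash_value' at the end of the HTTP header block.

    Mirrors the documented behavior: a hashed string longer than 80
    characters is ignored and the request passes through unchanged.
    """
    if len(hash_value) > MAX_HASH_LEN:
        return raw_request  # P-GW ignores over-long hashed strings
    head, sep, body = raw_request.partition(b"\r\n\r\n")
    if not sep:
        return raw_request  # not a complete header block; leave untouched
    header_line = f"{xheader_name}: {hash_value}".encode()
    return head + b"\r\n" + header_line + sep + body

req = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
enriched = enrich_http_request(req, "x-subscriber-hash", "abc123")
```

The over-length check comes first, so a rejected hash leaves the request byte-for-byte identical, matching the "P-GW ignores the hashed string" rule.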


CHAPTER 19
ICSR Switchover Configuration Support for SF Failures

Feature Summary and Revision History
Feature Description
Configuring ICSR Switchover Support for SF Failures
Monitoring and Troubleshooting

Feature Summary and Revision History

Summary Data
Applicable Product(s) or Functional Area: All
Applicable Platform(s): VPC-DI
Feature Default: Disabled - Configuration Required
Related Changes in This Release: Not applicable
Related Documentation: Command Line Interface Reference, SNMP MIB Reference Guide, VPC-DI System Administration Guide

Revision History
Important: Revision history details are not provided for features introduced before releases 21.2 and N5.1.

Revision Details / Release:
- From this release, a new configurable CLI command, monitor system card-fail, is introduced that supports Interchassis Session Recovery (ICSR) switchover for multiple SF card failures.
- First introduced. (Release: Pre 21.2)

Feature Description

In certain scenarios, multiple SF card failures are observed on the DI-network that lead to capacity loss, subscriber loss, or both. This occurs when a Standby SF card is unavailable when the second SF card fails, thereby leading to multiple SF card failures.

In this release, this limitation is addressed with the new configurable CLI command, monitor system card-fail. This command implements Interchassis Session Recovery (ICSR) switchover for multiple SF card failures.

How It Works

When the monitor system card-fail CLI command is configured, the VPN monitor checks the card failure status to assess whether it is feasible to trigger an ICSR switchover. When the VPN detects a card monitor failure, it forcefully triggers an SRP switchover with the switchover reason "Multiple card failure".

An ICSR switchover on the VPC-DI platform is triggered in the following scenarios:
- When any Active SF card fails without a Standby card available.
- During a planned SF card migration failure without a Standby card available.

Limitations

The ICSR Switchover Configuration Support for SF Failures feature has the following limitations:
- If the Standby ICSR instance has any failures and is unstable, ICSR is not triggered from the Active ICSR instance.
  Note: A single Active SF card failure that already has an available Standby SF card to take over in a Standby ICSR chassis is not treated as a failure. Session Manager recovery in the Standby chassis is not treated as a failure.
- The Standby ICSR instance must be in a good state to trigger ICSR, to avoid cyclic switchovers.
- When the Active ICSR instance checks the status of the Standby ICSR instance on monitor failure and finds that the Standby ICSR instance has other configured monitor failures (multiple SF card failures, BGP, BFD, AAA, or Diameter), the ICSR switchover is not triggered.
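The trigger conditions and limitations above can be condensed into a small decision function. This is a hypothetical sketch of the documented rules, not StarOS code; the function and parameter names are invented for illustration.

```python
# Hypothetical sketch of the documented ICSR switchover decision.
# Per the limitations above: switchover happens only if card-fail monitoring
# is enabled, an Active SF failed with no Standby SF to take over, and the
# Standby ICSR instance has no configured monitor failures of its own.

def should_trigger_switchover(monitor_card_fail_enabled: bool,
                              active_sf_failed_without_standby: bool,
                              standby_monitor_failures: list) -> bool:
    if not monitor_card_fail_enabled:
        return False  # feature is disabled by default
    if not active_sf_failed_without_standby:
        return False  # a failure covered by a Standby SF is not a failure
    if standby_monitor_failures:
        return False  # avoid cyclic switchovers to an unhealthy standby
    return True

# e.g. a BGP monitor failure on the standby chassis blocks the switchover
blocked = should_trigger_switchover(True, True, ["BGP"])
allowed = should_trigger_switchover(True, True, [])
```

The ordering of the guards mirrors the text: the standby-health check is evaluated last, only once a genuine multiple-SF-failure condition exists.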

Configuring ICSR Switchover Support for SF Failures

The following section provides information about the CLI command available to enable or disable the feature.

monitor system card-fail

This new CLI command is added to enable or disable card failure monitoring on the VPC-DI system. This command is configured in the Service Redundancy Protocol Configuration Mode.

configure
   [ no ] monitor system card-fail
   end

NOTES:
no: Disables card failure monitoring. By default, this CLI command is disabled.

Monitoring and Troubleshooting

This section provides information regarding CLI commands available in support of monitoring and troubleshooting the feature.

Show Command(s) and/or Outputs

This section provides information regarding show commands and/or their outputs in support of this feature.

show srp call-loss statistics
This show command now includes a new value, "Multiple Card Failure", for the "Switchover Reason" field to indicate multiple card failure.

Bulk Statistics

This section lists all the bulk statistics that have been added, modified, or deprecated to support this feature.

ICSR Schema
This section displays the new bulk statistic added for the ICSR Switchover Configuration Support for SF Failures feature.

switchover reason: Indicates the reason for the ICSR switchover.


CHAPTER 20
IMEI Validation Failure

This chapter describes the following topics:
Feature Summary and Revision History
Feature Changes
Performance Indicator Changes

Feature Summary and Revision History

Summary Data
Applicable Product(s) or Functional Area: epdg
Applicable Platform(s): ASR 5500, VPC-DI, VPC-SI
Feature Default: Enabled - Always-on
Related Changes in This Release: Not Applicable
Related Documentation: epdg Administration Guide, Statistics and Counters Reference

Revision History
Revision Details / Release:
- First introduced.

Feature Changes

If an invalid IMEI was received from the UE in the CFG payload of the first IKE_AUTH request, multiple SessMgr restarts were observed. In this release, graceful handling is added to avoid the SessMgr restart. A new disconnect reason and bulk statistic are added to indicate the IMEI validation failure.

Performance Indicator Changes

epdg Schema
The following new bulk statistic is added in the epdg schema:
sess-disconnect-invalid-imei: The total number of sessions disconnected due to an invalid IMEI received from the UE.

show epdg-service statistics
The Invalid IMEI field added to the output of this command indicates the total number of sessions disconnected due to an invalid IMEI received from the UE.

show session disconnect-reasons verbose
The epdg-invalid-imei(661) field added to the output of this command indicates the total number of sessions disconnected due to an invalid IMEI received from the UE.

CHAPTER 21
Increased Maximum IFtask Thread Support

Feature Summary and Revision History
Feature Changes

Feature Summary and Revision History

Summary Data
Applicable Product(s) or Functional Area: All
Applicable Platform(s): VPC-DI
Feature Default: Enabled - Always-on
Related Changes in This Release: Not applicable
Related Documentation: VPC-DI System Administration Guide

Revision History
Important: Revision history details are not provided for features introduced before releases 21.2 and N5.1.

Revision Details / Release:
- From this release, the maximum number of IFtask threads configuration supported is increased to 22 cores. (Release: 21.8)
- First introduced. (Release: Pre 21.2)

Feature Changes

When the number of DPDK Internal Forwarder (IFTask) threads configured (in /tmp/iftask.cfg) is greater than 14 cores, the IFTask drops packets or displays an error.

Previous Behavior: The maximum number of IFtask threads configuration was limited to only 14 cores.

New Behavior: From Release 21.8, the maximum number of IFtask threads configuration supported is increased to 22 cores.

CHAPTER 22
Increased Subscriber Map Limits

This chapter describes the following topics:
Feature Summary and Revision History
Feature Changes
Command Changes

Feature Summary and Revision History

Summary Data
Applicable Product(s) or Functional Area: MME
Applicable Platform(s): ASR 5500, VPC-DI, VPC-SI
Feature Default: Disabled - Configuration Required
Related Changes in This Release: Not Applicable
Related Documentation: Command Line Interface Reference, MME Administration Guide

Revision History
Important: Revision history details are not provided for features introduced before releases 21.2 and N5.1.

Revision Details / Release:
- The subscriber-map limit to configure IMSI/IMEI groups is increased from 1024 to 10000. (Release: 21.8)
- First introduced. (Release: Pre 21.2)

Feature Changes

In this release, the subscriber-map limit to configure IMSI/IMEI groups is increased from 1024 to 10000.

Previous Behavior: The subscriber-map configuration allowed precedence configuration of 1 to 1024.

New Behavior: The subscriber-map configuration allows precedence configuration of 1 to 10000.

Customer Impact: Different services can be provided to more user groups.

See the Operator Policy Selection Based on IMEI-TAC chapter in the MME Administration Guide for more information.

Command Changes

precedence

This command in the LTE Subscriber Map Configuration mode specifies the precedence level defined by the operator to resolve the selection of the operator policy when multiple variable combinations match for a particular UE. The lowest precedence number takes greater priority during selection. In this release, the limit of subscriber-map entries is increased from 1024 to 10000.

configure
   lte-policy
      subscriber-map map_name
         precedence precedence_number match-criteria imei-tac group group_name [ imsi mcc mcc mnc mnc [ msin { first start_msin_value last end_msin_value } ] ] [ operator-policy-name policy_name ]
         end

Notes:
precedence_number must be an integer from 1 to 10000, where 1 has the highest precedence.
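The lowest-number-wins precedence rule above can be illustrated with a short sketch. The function and policy names here are hypothetical, invented only to show the selection logic.

```python
# Illustrative only: operator-policy selection by lowest precedence number,
# as described above (1 is the highest precedence). Names are hypothetical.

def select_operator_policy(matches):
    """matches: list of (precedence_number, operator_policy_name) tuples
    for all subscriber-map entries whose criteria matched the UE."""
    if not matches:
        return None
    # the entry with the lowest precedence number wins the tie-break
    return min(matches, key=lambda m: m[0])[1]

chosen = select_operator_policy([
    (200, "policy-roamers"),
    (5, "policy-vip"),
    (9999, "policy-default"),
])
```

With the increased limit, up to 10000 such entries can now coexist; the selection rule itself is unchanged.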

CHAPTER 23
Inline TCP Optimization

This chapter includes the following topics:
Feature Summary and Revision History
Feature Description
How It Works
Configuring Inline TCP Optimization
Monitoring and Troubleshooting

Feature Summary and Revision History

Summary Data
Applicable Product(s) or Functional Area: P-GW
Applicable Platform(s): ASR 5500, VPC-DI, VPC-SI
Feature Default: Disabled - Configuration Required
Related Changes in This Release: Not Applicable
Related Documentation: Command Line Interface Reference, P-GW Administration Guide, Stats and Counters Reference

Revision History
Revision Details / Release:
- First introduced.

Feature Description

The P-GW supports Inline TCP Optimization as an integrated solution that enables service providers to increase the TCP flow throughput for TCP connections. This solution enables faster transmission of data for a better user experience.

The Inline TCP Optimization solution ensures accelerated TCP flows using a proprietary algorithm that provides efficient and optimal throughput at a given time. A TCP proxy has been integrated with this solution to monitor and control the TCP congestion window for optimal throughput. The Inline TCP Optimization solution also supports split TCP sessions to accommodate wireless requirements and provides feature parity with other existing inline services.

Note: Optimization only applies to the downlink data on the Gn interface.

The Inline TCP Optimization feature is license controlled. Contact your Cisco account representative for detailed information on specific licensing requirements. For information on installing and verifying licenses, refer to the Managing License Keys section of the Software Management Operations chapter in the System Administration Guide.

How It Works

The TCP Optimization feature includes the following:

TCP Connection Splicing: The TCP connections are split into two connections inside the P-GW; one connection towards Gn and the other connection towards Gi. The connections are split in a transparent manner so that the UE and the Gi servers are unaware of the connection being split. The TCP proxy ensures seamless movement of data across these two split TCP connections. TCP Optimization is deployed on the Gn interface (towards the UE) of the TCP stack. A user-space TCP stack in the P-GW is used.

Cisco library for TCP optimization:
- Provides algorithms that are designed to increase the TCP throughput.
- Interfaces with the user-space TCP stack (Gn interface), is notified of appropriate events that occur in the TCP connection, and takes actions accordingly.
- Provides well-defined APIs to integrate the Cisco library (for TCP optimization) with StarOS.

Note: TCP Acceleration is enabled during the start of the TCP flow (when the SYN packet is received). It cannot be disabled later during the flow.
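The connection-splicing idea above (one subscriber flow handled as two back-to-back TCP legs, with the proxy relaying data between them) can be modeled minimally. This is a purely conceptual sketch with invented class and method names, not the StarOS proxy.

```python
# Conceptual model of TCP connection splicing: the flow is two back-to-back
# connections (UE <-> proxy on Gn, proxy <-> server on Gi) with the proxy
# relaying bytes between them. Purely illustrative; not StarOS code.

from collections import deque

class SplicedFlow:
    def __init__(self):
        self.gn_to_gi = deque()  # uplink: UE -> server leg
        self.gi_to_gn = deque()  # downlink: server -> UE leg (the optimized side)

    def uplink(self, segment: bytes):
        # proxy terminates the Gn connection and re-sends on the Gi leg
        self.gn_to_gi.append(segment)

    def downlink(self, segment: bytes):
        # downlink data on the Gn interface is where optimization applies
        self.gi_to_gn.append(segment)

    def drain_downlink(self) -> bytes:
        out = b"".join(self.gi_to_gn)
        self.gi_to_gn.clear()
        return out

flow = SplicedFlow()
flow.downlink(b"part1-")
flow.downlink(b"part2")
payload = flow.drain_downlink()
```

The point of the model: neither endpoint sees the split; each leg is an ordinary TCP connection, and the proxy's buffer between the legs is where the congestion window can be controlled independently of the far side.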

Configuring Inline TCP Optimization

Enabling TCP Acceleration under Active Charging Service

Use the following configuration to enable TCP Acceleration:

configure
   require active-charging
   active-charging service service_name
      tcp-acceleration
      end

NOTES:
tcp-acceleration: Enables the TCP Acceleration feature under the ACS Configuration mode.

Enabling TCP Acceleration under Trigger Action

Use the following configuration to enable TCP Acceleration:

configure
   require active-charging
   active-charging service service_name
      trigger-action trigger_action_name
         tcp-acceleration profile profile_name
         end

NOTES:
tcp-acceleration: Enables the TCP Acceleration feature under the ACS Trigger Action Configuration mode.
profile: Identifies the TCP acceleration profile. The profile_name is a string ranging from 1 to 63 characters.

Configuring a TCP Acceleration Profile

Use the following configuration to configure a TCP Acceleration Profile:

configure
   require active-charging
   active-charging service service_name
      [ no ] tcp-acceleration-profile profile_name
      end

NOTES:
tcp-acceleration-profile: Configures the TCP Acceleration feature profile for inline TCP optimization.
no: Disables the TCP Acceleration profile.

Configuring TCP Acceleration Profile Parameters

Use the following commands to configure the TCP acceleration profile parameters:

configure
   require active-charging
   active-charging service service_name
      [ no ] tcp-acceleration-profile profile_name
         buffer-size [ downlink { 128KB | 256KB | 512KB | 1024KB | 1536KB | 2048KB | 2560KB | 3072KB | 3584KB | 4096KB } ] [ uplink { 128KB | 256KB | 512KB | 1024KB | 1536KB | 2048KB | 2560KB | 3072KB | 3584KB | 4096KB } ]
         default buffer-size [ downlink | uplink ]
         initial-cwnd-size window_size
         default initial-cwnd-size
         max-rtt max_rtt_value
         default max-rtt
         mss mss_value
         default mss
         end

NOTES:
default: Restores the default value of the option that follows it.
buffer-size: Configures the TCP proxy buffer size for downlink and uplink data, in kilobytes. The downlink and uplink keywords may be given in either order.
initial-cwnd-size: Configures the initial congestion window size in segments. The window_size is an integer ranging from 1 to
max-rtt: Configures the maximum RTT value in milliseconds. The max_rtt_value is an integer ranging from 1 to
mss: Configures the maximum segment size for TCP in bytes. The mss_value is an integer ranging from 496 to

Configuring Post Processing Rule Name under Trigger Condition

Use the following commands to configure the post processing rule names:

configure
   require active-charging
   active-charging service service_name
      trigger-condition trigger_condition_name
         post-processing-rule-name { = | contains | ends-with | starts-with } name
         [ no ] post-processing-rule-name name
         end

NOTES:
post-processing-rule-name: Sets the condition for a particular post processing rule. The following operators specify how the rules are matched:
=: Equals
contains: Contains
ends-with: Ends with
starts-with: Starts with
name: Specifies the name of the post processing rule.

Configuring TCP Acceleration Related EDR Attributes

Use the following commands to configure the EDR attributes:

configure
   require active-charging
   active-charging service service_name
      edr-format edr_format_name
         rule-variable tcp [ sn-tcp-accl | sn-tcp-accl-reject-reason | sn-tcp-min-rtt | sn-tcp-rtt ] priority priority_value
         end

NOTES:
rule-variable: Configures the rule variable to be reported in the EDR.
tcp: Specifies Transmission Control Protocol (TCP) related fields.
sn-tcp-accl: Specifies whether TCP Acceleration is enabled on the flow. This is either 0 or 1.
sn-tcp-accl-reject-reason: Specifies the reason for not accelerating the TCP flow.
sn-tcp-min-rtt: Specifies the minimum RTT observed for the accelerated TCP flow.
sn-tcp-rtt: Specifies the smoothed RTT for the accelerated TCP flow.
priority: Specifies the CSV position of the field (protocol rule) in the EDR. The priority must be an integer from 1 through

Monitoring and Troubleshooting

This section provides information regarding monitoring and troubleshooting the feature.

Show Command(s) and/or Outputs

This section provides information regarding show commands and/or their outputs in support of this feature.

show configuration
On executing the command, the following new fields are displayed for this feature:

tcp-acceleration
tcp-acceleration profile tap
buffer-size downlink size uplink size
initial-cwnd-size
max-rtt
mss

show tcp-acceleration-profile { [ all ] [ name profile-name ] }
On executing the command, the following new fields are displayed for this feature:
TCP Acceleration Profile Name
Initial Congestion Window
Max RTT
MSS
Buffer Size (Downlink)
Buffer Size (Uplink)
Total tcp-acceleration-profile found

show active-charging tcp-acceleration info
On executing the above command, the following new field(s) are displayed for this feature:
TCP Acceleration Library Information
Version

show active-charging tcp-acceleration statistics sessmgr all
On executing the above command, the following new field(s) are displayed for this feature:
TCP acceleration Statistics
Total Accelerated Flows
Current Accelerated Flows
Released Accelerated Flows
Rejected Accelerated Flows
Feature Not Supported
RAT Type Not Supported
Bearer Not Supported
Resource Not Available (Memory)
Others

show active-charging flows full all
On executing the above command, the following new field(s) are displayed for this feature:
TCP Acceleration

show active-charging trigger-action name trigger_action_name
On executing the above command, the following new field(s) are displayed for this feature:
TCP Acceleration

show active-charging trigger-condition name name
On executing the above command, the following new field(s) are displayed for this feature:
Post-Processing Rule-name/GOR

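The EDR rule-variables configured earlier in this chapter are emitted as CSV fields positioned by their priority values. A minimal sketch of that ordering, with hypothetical priority numbers and a hypothetical helper name:

```python
# Illustrative sketch: EDR rule-variables become CSV fields ordered by their
# configured priority (a lower priority value means an earlier CSV position).

def build_edr_record(fields):
    """fields: list of (priority, name, value) tuples; returns one CSV line."""
    ordered = sorted(fields, key=lambda f: f[0])
    return ",".join(str(value) for _, _, value in ordered)

record = build_edr_record([
    (20, "sn-tcp-rtt", 42),      # smoothed RTT for the accelerated flow
    (10, "sn-tcp-accl", 1),      # 1 = acceleration enabled on this flow
    (30, "sn-tcp-min-rtt", 35),  # minimum RTT observed
])
```

Only the relative order of the priority values matters here; the specific numbers 10, 20, 30 are assumptions for the example.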

CHAPTER 24
IPv6 PDN Type Restriction

This chapter describes the following topics:
Feature Summary and Revision History
Feature Changes
Command Changes
Performance Indicator Changes

Feature Summary and Revision History

Summary Data
Applicable Product(s) or Functional Area: MME
Applicable Platform(s): ASR 5500, VPC-DI, VPC-SI
Feature Default: Disabled - Configuration Required
Related Changes in This Release: Not Applicable
Related Documentation: Command Line Interface Reference, MME Administration Guide, Statistics and Counters Reference

Revision History
Important: Revision history details are not provided for features introduced before releases 21.2 and N5.1.

Revision Details / Release:
- Support is added to enable the MME to allow only IPv4 addresses to a PDN connection. (Release: 21.8)
- First introduced. (Release: Pre 21.2)

Feature Changes

This enhancement enables the MME to restrict IPv6 PDN connections in roaming networks.

Previous Behavior: The MME allowed the UE to include an IPv6 address if the UE had requested and subscribed for an IPv6 address.

New Behavior: The MME will not allow the UE to include an IPv6 address even if the UE has requested and subscribed for the IPv6 address. The pdn-type-override ipv4-only CLI command is added in the Call Control Profile Configuration mode. The MME ensures that the PDN will not receive any IPv6 address, either by rejecting the PDN Connectivity Request or by overriding it with an IPv4 address only.

The following table explains the behavior of the MME when the pdn-type-override ipv4-only CLI command is enabled.

UE Requested PDN Type   HSS Subscription   Behavior
IPv4                    IPv4v6             PDN is assigned an IPv4 address only.
IPv4                    IPv4               PDN is assigned an IPv4 address only.
IPv4                    IPv6               PDN Reject with cause 32 "Service option not supported".
IPv6                    IPv4v6             PDN is assigned an IPv4 address only.
IPv6                    IPv4               PDN Reject with cause "Only IPv4 is supported".
IPv6                    IPv6               PDN Reject with cause 32 "Service option not supported".
IPv4v6                  IPv4v6             PDN is assigned an IPv4 address only.
IPv4v6                  IPv4               PDN is assigned an IPv4 address only.
IPv4v6                  IPv6               PDN Reject with cause 32 "Service option not supported".
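The decision table above can be expressed as a small function. The strings and function name are illustrative paraphrases of the documented outcomes, not StarOS output.

```python
# Hypothetical implementation of the pdn-type-override ipv4-only decision
# table; the return strings paraphrase the documented behaviors.

ASSIGN_IPV4 = "PDN assigned IPv4 address only"
REJECT_32 = 'Reject, cause 32 "Service option not supported"'
REJECT_IPV4_ONLY = 'Reject, cause "Only IPv4 is supported"'

def pdn_outcome(ue_requested: str, hss_subscription: str) -> str:
    if hss_subscription == "IPv6":
        return REJECT_32          # subscription has no IPv4 to fall back on
    if ue_requested == "IPv6" and hss_subscription == "IPv4":
        return REJECT_IPV4_ONLY
    return ASSIGN_IPV4            # every remaining combination gets IPv4 only

outcome = pdn_outcome("IPv4v6", "IPv4v6")
```

Walking all nine rows of the table through this function reproduces the documented behavior: the only rejects are an IPv6-only subscription (cause 32) or an IPv6-only request against an IPv4-only subscription.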

Command Changes

pdn-type-override

In the Call Control Profile Configuration mode, the pdn-type-override CLI command is enhanced to enable the MME to allow only IPv4 addresses to a PDN connection. The ipv4-only keyword is new in this release.

configure
   call-control-profile profile_name
      [ remove ] pdn-type-override ipv4-only
      end

NOTES:
remove pdn-type-override ipv4-only: Disables the MME from allowing only IPv4 addresses to a PDN connection. Once the CLI command is removed, the MME does not restrict IPv6 addresses from being allocated to the PDN. The default behavior allows the PDN to have IPv6 addresses when the subscription allows it.

Performance Indicator Changes

show call-control-profile full all

The PDN Type IPv6 Denied field added to the output of this command displays "Configured" or "Not Configured" to indicate whether the MME is enabled to allow only IPv4 addresses to a PDN connection.


CHAPTER 25
Limiting Cores on Local File Storage

This chapter describes the following topics:
Feature Summary and Revision History
Feature Changes
Command Changes

Feature Summary and Revision History

Summary Data
Applicable Product(s) or Functional Area: All
Applicable Platform(s): ASR 5500, VPC-DI, VPC-SI
Feature Default: Disabled - Configuration Required
Related Changes in This Release: Not Applicable
Related Documentation: ASR 5500 System Administration Guide, Command Line Interface Reference, VPC-DI System Administration Guide, VPC-SI System Administration Guide

Revision History
Important: Revision history details are not provided for features introduced before releases 21.2 and N5.1.

Revision Details / Release:
- The maximum number of core files to retain on the local storage before rotation can be configured in this release. (Release: 21.8)
- First introduced. (Release: Pre 21.2)

Feature Changes

The limitation on the maximum number of full cores that can be stored on either /flash or /hd-raid is changed in this release.

Previous Behavior: The maximum number of core files was limited to 15 full cores if the destination URL was configured to local storage such as /flash or /hd-raid. The oldest core files would be rotated out to retain the latest 15 full cores. This behavior was applicable only to the VPC-DI/VPC-SI platforms.

New Behavior: The maximum number of core files to be retained before rotation is configurable using the crash enable CLI command and is no longer limited to 15 full cores. The destination URL storage path to an external server using SFTP/HTTP is configured to retain the core files generated. This behavior is applicable to the ASR 5500 and VPC-DI/VPC-SI platforms.

Command Changes

crash enable

In the Global Configuration mode, the crash enable CLI command is enhanced to configure the maximum number of core files that can be retained before rotation. The rotate keyword is new in this release.

configure
   crash enable { url crash_url [ rotate num_cores ] }
   end

Notes:
url crash_url: Specifies the location to store crash files. crash_url refers to a local or remote file.
rotate num_cores: Specifies the number of core dumps to retain on the local storage. num_cores must be an integer from 1 to 256. Default:
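The rotation behavior (keep at most `rotate` core files, deleting the oldest first) can be sketched as follows. File handling is simplified to (mtime, filename) pairs and the helper name is hypothetical; this is not StarOS code.

```python
# Illustrative sketch of core-file rotation: keep at most `rotate_limit`
# core files on local storage, dropping the oldest first.

def rotate_cores(existing, rotate_limit, new_core):
    """existing: list of (mtime, filename) pairs already on /flash or /hd-raid.
    Returns the list of retained cores after adding new_core."""
    cores = sorted(existing) + [new_core]
    while len(cores) > rotate_limit:
        cores.pop(0)  # the oldest core file is deleted first
    return cores

kept = rotate_cores([(1, "core.a"), (2, "core.b"), (3, "core.c")],
                    rotate_limit=3,
                    new_core=(4, "core.d"))
```

With `rotate 3`, adding a fourth core evicts the oldest one, matching the documented keep-the-latest-N behavior.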

CHAPTER 26
LTE to Wi-Fi (S2bGTP) Seamless Handover

This chapter describes the following topics:
Feature Summary and Revision History
Feature Description
How It Works
Configuring LTE to Wi-Fi Seamless Handover
Monitoring and Troubleshooting

Feature Summary and Revision History

Summary Data
Applicable Product(s) or Functional Area: P-GW, SAEGW
Applicable Platform(s): ASR 5500, VPC-DI, VPC-SI
Feature Default: Disabled - Configuration Required
Related Changes in This Release: Not Applicable
Related Documentation: Command Line Interface Reference, P-GW Administration Guide, SAEGW Administration Guide, Statistics and Counters Reference

Revision History
Important: Revision history details are not provided for features introduced before releases 21.2 and N5.1.

Revision Details / Release:
- With this release, support has been added for seamless handover of subscribers from LTE to Wi-Fi (S2bGTP). (Release: 21.8)
- First introduced. (Release: Pre 21.2)

Feature Description

When handover is initiated from LTE to Wi-Fi, the Delete Bearer Request (DBR) is sent over the LTE tunnel immediately when the Create Session Response (CSR) is sent on the Wi-Fi tunnel. This causes some packet loss because of the IPSec tunnel establishment delay at the epdg.

To address the issue of packet loss, an enhancement is introduced in Release 21.8 that holds both tunnels (LTE and Wi-Fi) and sends the Delete Bearer Request on the LTE tunnel only when uplink data is seen on the Wi-Fi tunnel or on expiry of the configured handover timer (when there is no uplink data), whichever is earlier. As long as the LTE tunnel is active, uplink and downlink data is exchanged on the LTE tunnel. When handover is complete, uplink and downlink data is exchanged on the Wi-Fi tunnel. This prevents packet loss.

This enhancement provides the following benefits:
- Minimum packet loss during LTE to Wi-Fi (S2bGTP) handover, making the handover seamless (that is, MAKE before BREAK).
- LTE procedures are handled gracefully over the LTE tunnel when both tunnels are established with the P-GW.
- Wi-Fi procedures are handled gracefully over the Wi-Fi tunnel when both tunnels are established with the P-GW.
- When two tunnels (LTE and Wi-Fi) are established for the same subscriber, GTP-U error indication and GTP-U path failure on the LTE or Wi-Fi tunnel (default or dedicated bearer) are handled properly during the transition period.

How It Works

The LTE to Wi-Fi (S2bGTP) Seamless Handover works as explained in the following sections.

LTE to Wi-Fi Handoff

The LTE to Wi-Fi handoff occurs as follows:

1. The P-GW delays sending the DBR to the S-GW until:

   - The CSR is sent to the epdg (default behavior).
   - Uplink data is sent on the Wi-Fi tunnel.
   - The handover timer has expired. If the timer expires, the epdg does not send the Modify Bearer Request (MBR) to notify handoff completion.

2. After the CSR for LTE to Wi-Fi handoff is received, Control Plane GTPv2 (GTP-C) messages from LTE access are not handled at the P-GW. These messages are blocked at the EGTPC.

3. The LTE tunnel carries GTP-U traffic during the transition period. The transition period is defined as the time between the CSR (for LTE to Wi-Fi handoff) being received and handover completion. An MBR for handoff completion is not expected in this scenario.

4. In case multiple outstanding CCR-Us are supported, all requests before the handoff request are dropped. This is done at IMSA.

5. During the transition period:
   - If a Modify Bearer Command (MBC) is received in Wi-Fi, it is rejected with a Service-Denied message.
   - If a Delete Bearer Command for a dedicated bearer is received in LTE, it is discarded.
   - If the PCRF sends an RAR for a policy change, it is processed after handover is complete.
   - The new tunnel (that is, Wi-Fi) does not carry any GTP-U traffic. Any GTP-U traffic that is received on the Wi-Fi tunnel during the transition period is dropped or ignored. Similarly, any downlink traffic that is received on the Wi-Fi side is sent on the older tunnel (that is, the LTE tunnel) until the DBR is sent on the Wi-Fi tunnel. This is true even when the CSR is sent on the Wi-Fi tunnel.
   - Any uplink traffic that is received on the Wi-Fi tunnel before timer expiry triggers the handover completion, and from then on all traffic is forwarded only through the Wi-Fi tunnel.
   - Any pending transactions on LTE access are discarded. For example, if a CBR or UBR is sent for LTE access and handoff is initiated before completion of the CBR or UBR transaction, the CBR or UBR is ignored at the P-GW. The PCRF is not notified about the failure.
   - If an ASR is received, the call is dropped and both tunnels go down.
   - If a session release occurs from the PCRF, the call is dropped and the CSR is sent with cause "no-resources".
   - GTP-U or GTP-C path failure over LTE leads to a call drop for LTE access while the Wi-Fi call continues.
   - GTP-U or GTP-C path failure over Wi-Fi leads to a call drop. Both tunnels are cleared.
   - If the user moves back to LTE (that is, back-to-back handoff from LTE to Wi-Fi to LTE) with HO-Ind set to 1 (after the guard timer), the handover is processed successfully and the user session is moved to LTE again.
   - If the user moves back to LTE (that is, back-to-back handoff from LTE to Wi-Fi to LTE) with HO-Ind set to 0, it leads to context replacement. The old call is cleared on Wi-Fi access with the reason "context replacement" and the call is processed like a new call over LTE.
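The delayed Delete Bearer Request logic above (send the DBR on first Wi-Fi uplink data or on handover timer expiry, whichever comes first) can be modeled as a small state machine. This is an illustrative sketch with invented names, not StarOS code.

```python
# Hypothetical model of the delayed DBR trigger during LTE -> Wi-Fi handover:
# after the CSR goes out on the S2b (Wi-Fi) tunnel, the DBR for the LTE
# bearer is sent on first uplink data over Wi-Fi or on timer expiry.

class HandoverState:
    def __init__(self, timeout_ms: int):
        self.timeout_ms = timeout_ms
        self.elapsed_ms = 0
        self.dbr_sent = False
        self.dbr_reason = None

    def _complete(self, reason: str):
        if not self.dbr_sent:      # the DBR is sent exactly once
            self.dbr_sent = True
            self.dbr_reason = reason

    def on_wifi_uplink_data(self):
        self._complete("uplink-data-on-s2b")

    def tick(self, ms: int):
        self.elapsed_ms += ms
        if self.elapsed_ms >= self.timeout_ms:
            self._complete("timer-expiry")

ho = HandoverState(timeout_ms=1000)  # recommended 1000 ms per this chapter
ho.tick(300)
ho.on_wifi_uplink_data()             # uplink arrives before the timer fires

ho2 = HandoverState(timeout_ms=500)
ho2.tick(500)                        # no uplink seen; timer expiry completes it
```

Whichever event fires first latches the completion reason, mirroring the two "Succeeded on First Uplink Data" and "Succeeded on Timer Expiry" counters described later in this chapter.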

Session Recovery and ICSR

During the transition period, the old access is considered the stable state, and a Full Checkpoint is triggered once handover from LTE to Wi-Fi (S2bGTP) is complete. This is done for both Session Recovery and ICSR.

Configuring LTE to Wi-Fi Seamless Handover

The following section provides information about the CLI commands available to enable or disable the feature.

Configuring LTE to Wi-Fi Handover Timer

Use the following CLI commands to configure the LTE to Wi-Fi handover timer:

configure
   context context_name
      apn apn_name
         lte-s2bgtp-first-uplink timeout
         { default | no } lte-s2bgtp-first-uplink
         end

NOTES:
default: Enables the LTE to Wi-Fi handover completion to occur when the Create Session Response is sent on the Wi-Fi tunnel.
no: Disables the feature, and handover completion occurs on Create Session Response.
lte-s2bgtp-first-uplink timeout: Configures the LTE to S2bGTP handover completion timeout in multiples of 100 milliseconds. The valid range is from 100 to
The recommended configuration is 1000 milliseconds. By default, the LTE to Wi-Fi handover completion happens when the Create Session Response is sent on the Wi-Fi tunnel. However, after the handover timeout is configured, the handover is delayed until timeout or until receipt of uplink data on the Wi-Fi tunnel.

Monitoring and Troubleshooting

This section provides information regarding CLI commands available in support of monitoring and troubleshooting the feature.

Show Command(s) and/or Outputs

This section provides information regarding show commands and/or their outputs in support of this feature.

show apn statistics name <name>

The output of this CLI command has been enhanced to display the following new fields for the APN:
LTE-to-S2bGTP handover Succeeded on First Uplink Data on S2b tunnel: Specifies the number of handovers due to uplink packets.

LTE-to-S2bGTP handover Succeeded on Timer Expiry: Specifies the number of handovers due to timer expiry.

NOTES:
The new fields, introduced as part of this feature, are also displayed for the following CLI commands:
show pgw-service statistics name service_name verbose
show pgw-service statistics all verbose
show saegw-service statistics all function pgw verbose

Bulk Statistics

The following statistics are included in support of this feature.

APN Schema

The following bulk statistics are added in the APN schema in support of the LTE to Wi-Fi Seamless Handover feature.

apn-handoverstat-ltetos2bgtpsucc-timerexpiry: Number of LTE to S2bGTP handovers succeeded on Timer Expiry.
apn-handoverstat-ltetos2bgtpsucc-uplnkdata: Number of LTE to S2bGTP handovers succeeded on Uplink Data on the S2b tunnel.

P-GW Schema

The following bulk statistics are added in the P-GW schema in support of the LTE to Wi-Fi Seamless Handover feature.

handoverstat-ltetos2bgtpsucc-timerexpiry: Handover Statistics - Number of LTE to GTP S2b successful handovers on Timer Expiry.
handoverstat-ltetos2bgtpsucc-uplnkdata: Handover Statistics - Number of LTE to GTP S2b successful handovers on Uplink Data on the S2b tunnel.

SAEGW Schema

The following bulk statistics are added in the SAEGW schema in support of the LTE to Wi-Fi Seamless Handover feature.

pgw-handoverstat-ltetos2bgtpsucc-timerexpiry: P-GW Handover Statistics - Number of LTE to GTP S2b successful handovers on Timer Expiry.

pgw-handoverstat-ltetos2bgtpsucc-uplnkdata
    P-GW Handover Statistics - Number of LTE to GTP S2b successful handovers on uplink data on the S2b tunnel.
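The timer-versus-first-uplink completion behavior described in this chapter can be sketched as a small decision model. This is an illustrative sketch only, not StarOS code; the function name and arguments are hypothetical.

```python
def handover_complete_event(timeout_ms, first_uplink_at_ms=None):
    """Model of LTE to S2bGTP handover completion (illustrative only).

    timeout_ms         -- configured lte-s2bgtp-first-uplink timeout in
                          milliseconds, or None when no timer is configured
                          (default: complete on Create Session Response).
    first_uplink_at_ms -- arrival time of the first uplink packet on the
                          Wi-Fi (S2b) tunnel, or None if none arrives.
    """
    if timeout_ms is None:
        # Default behavior: completion when Create Session Response is sent.
        return "create-session-response"
    if first_uplink_at_ms is not None and first_uplink_at_ms < timeout_ms:
        # Uplink data seen on the S2b tunnel before the timer fires.
        return "first-uplink-data"
    # Otherwise handover completes when the timer expires.
    return "timer-expiry"
```

The last two outcomes correspond to the two new counters shown by show apn statistics: "Succeeded on First Uplink Data on S2b tunnel" and "Succeeded on Timer Expiry".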

CHAPTER 27
Monitor VPC-DI Network

This chapter describes the following topics:

- Feature Summary and Revision History, on page 185
- Feature Description, on page 185
- How It Works, on page 186
- Configuring the Monitor VPC-DI Network Feature, on page 187

Feature Summary and Revision History

Summary Data
  Applicable Product(s) or Functional Area: All
  Applicable Platform(s): VPC-DI
  Feature Default: Enabled - Always-on
  Related Changes in This Release: Not Applicable
  Related Documentation: Command Line Interface Reference, VPC-DI System Administration Guide, Statistics and Counters Reference

Revision History
  First introduced. - Pre 21.8

Feature Description

In a DI-network, packet loss occurs when the DI-network ports are saturated or the underlying network infrastructure is unreliable. The Monitor VPC-DI Network feature enables the identification and quantification of Control Plane and Data Plane packet loss on the VPC-DI system.

VPC-DI collects and aggregates the Control Plane and Data Plane monitor data for use in CLI reports and threshold alarms. The feature also provides the ability to set the criteria for the VPC-DI to declare a card fault. Currently, a card fault is raised when a fixed number of consecutive High Availability Task (HAT) Control Plane heartbeats between the active CF and an SF card are missed. The number of consecutive misses can be configured using this feature. This feature adds a secondary Data Plane configuration parameter that can be used to effectively discriminate between DI-network packet loss and packet processing failure scenarios.

How It Works

The Control Plane and Data Plane monitors generate two fundamental DI-network traffic types on a fixed or recurring basis and track losses. The tracking data provides a view into DI-network communication loss or disruption.

Control Plane packets are typically unicast bidirectional UDP/TCP streams between cards; essentially request and response pairs between StarOS proclets.

Data Plane traffic consists of unicast IP protocol 254 packets transferred between cards. This traffic is service port ingress or egress that StarOS internally transfers to the appropriate application instance (ingress) or service port interface (egress), and it is not acknowledged (that is, there are no response packets). For example, an ingress packet arriving on an SF3 port that is serviced by a Session Manager instance on SF5 traverses the DI-network from SF3 to SF5.

All operational cards (that is, CFs and SFs with an Active or Standby operational state) transmit and receive monitor packets. The monitor traffic is fully meshed: all cards transmit monitor packets to all other cards and receive monitor packets from all other cards.

Data Plane packets are generated at a rate of 10 per second. Control Plane monitor packets are generated at a rate of 5 per second.
The packet headers for both are marked with default priority.

StarOS collects and aggregates the monitor transmit, receive, and drop data for all card connections. The show cloud monitor controlplane and show cloud monitor dataplane CLI commands display current 15-second, 5-minute, and 60-minute data. The 5-minute and 60-minute loss percentages are available as variables in the bulkstats mon-di-net schema. The 5-minute and 60-minute loss percentages are also accessible as threshold alarms/traps.

Note that low, non-zero drop percentages are normal. Because measurements involve correlation across card pairs that are not perfectly synchronized, a response can arrive in the interval adjacent to the one in which the request was generated. This is reflected as a drop in the request interval.

When seen on a sustained basis, higher drop or loss percentages may indicate DI-network configuration or operational issues, traffic overload, or VM or host issues. The cloud monitor provides the ability to see and characterize DI-network traffic loss; further investigation is typically required to identify the root cause.

Limitations

The Monitor VPC-DI Network feature has the following limitations:

- Only supported on the VPC-DI platform.
- Not license-controlled.
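As a rough sketch of how a Miss% figure in these reports relates to the transmit and receive counters for one card pair (an assumption about the arithmetic, not StarOS source):

```python
def miss_pct(xmit, recv):
    """Percentage of monitor packets sent to a peer card for which no
    matching receipt was counted in the same interval (illustrative)."""
    if xmit == 0:
        return 0.0
    return 100.0 * (xmit - recv) / xmit
```

Because a response can be counted in the adjacent interval, a handful of such stragglers shows up as a small non-zero miss percentage, which is why low values are normal. For example, 2997 receipts against 3000 transmits in a 5-minute interval yields 0.1%.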

Configuring the Monitor VPC-DI Network Feature

The following section provides information about the CLI commands available to enable or disable the feature.

Configuring Card Fault Detection

Use the following commands to configure secondary card fault detection criteria. This command is configured in Global Configuration mode.

configure
   [ default ] high-availability fault-detection card dp-outage seconds
   end

NOTES:

- default: Restores the default dp-outage value. The default value is 2 seconds.
- The dp-outage deferral is limited. If the consecutive heartbeat misses are 5 greater than the configured hb-loss parameter, card failure is declared regardless of the dp-outage configuration.
- The dp-outage parameter is restricted to Administrator access on the VPC-DI platform.
- If this CLI is not configured, the default dp-outage value is 2 seconds.

Configuring Packet Loss Threshold on Control Plane

Use the following commands to measure percentage packet loss over the corresponding time interval on the Control Plane. The threshold alarm and SNMP trap are raised for any card-to-card connection that exceeds the configured loss percentage over the indicated time period. This command is configured in Global Configuration mode.

configure
   [ default ] threshold cp-monitor-5min-loss pct [ clear pct ]
   [ default ] threshold poll cp-monitor-5min-loss interval duration
   end

configure
   [ default ] threshold cp-monitor-60min-loss pct [ clear pct ]
   [ default ] threshold poll cp-monitor-60min-loss interval duration
   end

NOTES:

- default: Clears the configured thresholds for the Control Plane.
- clear pct: Clears the configured percentage of packet loss.
- interval duration: Specifies the amount of time (in seconds) that comprises the polling interval. duration must be an integer from 60 through . The default is 300 seconds.
- This command is disabled by default.

Note: For supplemental information related to this feature, refer to the Global Configuration Mode Commands section of the Command Line Reference.

The following alarms/traps are generated when these thresholds are exceeded:

- ThreshControlPlaneMonitor5MinsLoss / ThreshClearControlPlaneMonitor5MinsLoss
- ThreshControlPlaneMonitor60MinsLoss / ThreshClearControlPlaneMonitor60MinsLoss

See the SNMP MIB Reference for more details about these alarms/traps.

Configuring Packet Loss Threshold on Data Plane

Use the following commands to measure percentage packet loss over the corresponding time interval on the Data Plane. The threshold alarm and SNMP trap are raised for any card-to-card connection that exceeds the configured loss percentage over the indicated time period. This command is configured in Global Configuration mode.

configure
   [ default ] threshold dp-monitor-5min-loss pct [ clear pct ]
   [ default ] threshold poll dp-monitor-5min-loss interval duration
   end

configure
   [ default ] threshold dp-monitor-60min-loss pct [ clear pct ]
   [ default ] threshold poll dp-monitor-60min-loss interval duration
   end

NOTES:

- default: Disables the configured thresholds for the Data Plane.
- clear pct: Clears the configured packet loss.
- interval duration: Specifies the amount of time (in seconds) that comprises the polling interval. duration must be an integer from 60 through . The default is 300 seconds.
- This command is disabled by default.

Note: For supplemental information related to this feature, refer to the Global Configuration Mode Commands section of the Command Line Reference.

The following alarms/traps are generated when these thresholds are exceeded:

- ThreshDataPlaneMonitor5MinsLoss / ThreshClearDataPlaneMonitor5MinsLoss
- ThreshDataPlaneMonitor60MinsLoss / ThreshClearDataPlaneMonitor60MinsLoss

See the SNMP MIB Reference for more details about these alarms/traps.

Monitoring and Troubleshooting

This section provides information regarding CLI commands available in support of monitoring and troubleshooting the feature.

Show Command(s) and/or Outputs

This section provides information regarding show commands and/or their outputs in support of this feature.

show cloud monitor controlplane

This new show command is introduced to display the most recent Control Plane monitor information:

show cloud monitor controlplane

Cards        15 Second Interval       5 Minute Interval        60 Minute Interval
Src  Dst     Xmit   Recv   Miss%      Xmit   Recv   Miss%      Xmit   Recv   Miss%

(Per-card-pair counter values from the sample output are not reproduced here; card pairs without a full measurement history are shown as "-incomplete".)

show cloud monitor dataplane

This new show command is introduced to display the most recent Data Plane monitor information:

show cloud monitor dataplane

Cards        15 Second Interval       5 Minute Interval        60 Minute Interval
Src  Dst     Miss   Hit    Pct        Miss   Hit    Pct        Miss   Hit    Pct

(Per-card-pair counter values from the sample output are not reproduced here; card pairs without a full measurement history are shown as "-incomplete".)

Bulk Statistics

The following statistics are included in support of this feature.

mon-di-net Schema

The following bulk statistics are added in the mon-di-net schema in support of the Monitor VPC-DI Network feature.

cp-loss-5minave
    Indicates the average Control Plane loss in the prior 5 minutes.
cp-loss-60minave
    Indicates the average Control Plane loss in the prior 60 minutes.
dp-loss-5minave
    Indicates the average Data Plane loss in the prior 5 minutes.

dp-loss-60minave
    Indicates the average Data Plane loss in the prior 60 minutes.


CHAPTER 28
Multiple IP Versions Support

This chapter describes the following topics:

- Feature Summary and Revision History, on page 193
- Feature Description, on page 194
- How it Works, on page 194
- Configuring Multiple IP Version Support, on page 196
- Monitoring and Troubleshooting, on page 197

Feature Summary and Revision History

Summary Data
  Applicable Product(s) or Functional Area: P-GW, S-GW, SAEGW
  Applicable Platform(s): ASR 5500, VPC-DI, VPC-SI
  Feature Default: Disabled - Configuration Required
  Related Changes in This Release: Not applicable
  Related Documentation: Command Line Interface Reference, P-GW Administration Guide, S-GW Administration Guide, SAEGW Administration Guide

Revision History

Important: Revision history details are not provided for features introduced before release 21.2 and N5.1.

  This feature enables P-GW, S-GW, and SAEGW nodes to support control messages received on any of the transport addresses exchanged during session setup. - 21.8
  First introduced. - Pre 21.2

Feature Description

This feature enables P-GW, S-GW, and SAEGW nodes to support control messages received on any of the transport addresses exchanged during session setup. Prior to this release, P-GW, S-GW, and SAEGW did not support BRCmd, MBCmd, and DBCmd messages on a transport other than the one used for establishing the session. A new CLI command has been introduced at the egtp-service level to control the behavior of the BRCmd, MBCmd, and DBCmd messages.

How it Works

This section describes the working of this feature. The following is a sample call flow for MBCmd. The following figure illustrates the call flow when the feature is disabled:

The following figure illustrates the call flow when the feature is enabled:

When a session is being established, the P-GW, S-GW, or SAEGW node may use the IPv6 address as transport. This transport is used for establishing the tunnel with the peer node. If both IPv4 and IPv6 addresses are exchanged in the control F-TEID, then the node handles MBCmd, BRCmd, and DBCmd messages on the IPv4 transport as well.

When a session is being established, if an IPv4 address is used as the transport for establishing the tunnel with the peer node, and both IPv4 and IPv6 addresses are exchanged in the control F-TEID, then the MBCmd, BRCmd, and DBCmd messages are also handled on the IPv6 transport.

When a session is being established, if IPv4 and IPv6 addresses are exchanged in the data F-TEID by both peers, then GTP-U data packets are handled on both IPv6 and IPv4 transports.

When a session is being established, if an IPv4 address is used as the transport but the control F-TEID does not contain an IPv4 address, then the message is rejected by the node. The nodes exhibit similar behavior for IPv6 addresses.

The following table displays the message handling behavior in different session establishment scenarios:

Table 2: Message Handling Behavior in Different Session Establishment Scenarios

Messages             Transport Used for      C-FTEID Sent During     Message Sent on
                     Session Establishment   Session Establishment   Transport
MBR/DSR              IPv6                    IPv4/IPv6               IPv4
MBC/DBC/BRC          IPv6                    IPv4/IPv6               IPv4
Change Notification  IPv6                    IPv4/IPv6               IPv4
Suspend/Resume       IPv6                    IPv4/IPv6               IPv4
MBR/DSR              IPv4                    IPv4/IPv6               IPv6
MBC/DBC/BRC          IPv4                    IPv4/IPv6               IPv6
Change Notification  IPv4                    IPv4/IPv6               IPv6
Suspend/Resume       IPv4                    IPv4/IPv6               IPv6
MBR/DSR              IPv6                    IPv6                    IPv4
MBC/DBC/BRC          IPv6                    IPv6                    IPv4
Change Notification  IPv6                    IPv6                    IPv4
Suspend/Resume       IPv6                    IPv6                    IPv4
MBR/DSR              IPv4                    IPv4                    IPv6
MBC/DBC/BRC          IPv4                    IPv4                    IPv6
Change Notification  IPv4                    IPv4                    IPv6
Suspend/Resume       IPv4                    IPv4                    IPv6

Configuring Multiple IP Version Support

This section provides information on CLI commands available in support of this feature.
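The handling rules described above can be condensed into a small decision sketch. This is an illustrative model under assumptions, not StarOS code: a command message arriving on a transport other than the session-establishment transport is handled only if that address family was exchanged in the control F-TEID and dual-IP support is enabled.

```python
def handles_command(session_transport, cfteid_families, msg_transport,
                    dual_ip_stack_support=True):
    """Whether an MBCmd/DBCmd/BRCmd message arriving on msg_transport
    is handled (illustrative sketch).

    session_transport -- "ipv4" or "ipv6", used to establish the session
    cfteid_families   -- address families exchanged in the control F-TEID
    msg_transport     -- transport on which the command message arrives
    """
    if msg_transport == session_transport:
        # Same transport as session establishment: always handled.
        return True
    # Alternate transport: requires the address family in the C-FTEID
    # and the gtpc command-messages dual-ip-stack-support feature.
    return dual_ip_stack_support and msg_transport in cfteid_families
```

For example, a session established over IPv6 whose C-FTEID carried both families handles an MBCmd on IPv4 only when the feature is enabled; if the C-FTEID carried IPv6 only, the IPv4 message is rejected.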

By default, this feature is enabled.

configure
   context context_name
      egtp-service service_name
         [ no ] gtpc command-messages dual-ip-stack-support
         end

NOTES:

- no: Disables the feature.
- command-messages: Configures MBC, DBC, or BRC messages on the S-GW and P-GW.
- dual-ip-stack-support: Enables P-GW, S-GW, and SAEGW nodes to handle command messages on both IPv4/IPv6 transports, if supported.

Monitoring and Troubleshooting

This section provides information on how to monitor and troubleshoot the Multiple IP Versions Support feature.

Show Commands and Outputs

This section provides information on show commands and their corresponding outputs for this feature.

show configuration

The following new field is added to the output of this command:

- gtpc command-messages dual-ip-stack-support: Indicates that command messages are handled on both IPv4/IPv6 transports, if supported.

show egtp-service all

The following new field is added to the output of this command:

- GTPC Command Messages Dual IP Support: Indicates that command messages are handled on both IPv4/IPv6 transports, if supported.


CHAPTER 29
NAT64 Support

This chapter describes the following topics:

- Feature Summary and Revision History, on page 199
- Feature Description, on page 199
- Configuring NAT64 Support, on page 200

Feature Summary and Revision History

Summary Data
  Applicable Product(s) or Functional Area: P-GW
  Applicable Platform(s): ASR 5500, VPC-DI, VPC-SI
  Default Setting: Disabled - Configuration Required
  Related Changes in This Release: Not Applicable
  Related Documentation: Command Line Interface Reference, P-GW Administration Guide

Revision History
  First introduced. - 21.8

Feature Description

NAT64 is implemented to reuse an existing IPv4 configuration to process DPI for IPv6 addresses.

All IPv4-based IP addresses are converted to IPv6 by appending a predetermined 96-bit prefix to an existing IPv4 address. The P-GW allows the operator to configure a limited set of prefixes. The configured prefix is used to parse an IPv6 address and extract the IPv4 address. The extracted IPv4 address is then used in DPI rule matching at the gateway for NAT-based subscribers, while the IPv6 address is used for charging-based actions.

Note: This feature is currently applicable to Server-IP only.

Configuring NAT64 Support

The following configurations are required to implement NAT64:

- Configuring the Prefix Set
- Configuring Rulebase for a Prefix Set

Note: Once NAT64 support is configured, rule matching applies only to newly connected flows. Existing flows use the previous rule-matching configuration.

Configuring the Prefix Set

The prefix-set command is configured in the Active Charging Configuration mode. The configured prefix set is then associated with the rulebase. Use the following configuration to configure a prefix set:

configure
   require active-charging
   active-charging service service_name
      prefix-set prefix_set_name
         ipv6_prefix_address/mask_bits
         end

NOTES:

- prefix-set: Configures a list of prefixes used for DPI rule matching.
- ipv6_prefix_address/mask_bits: Specifies the IPv6 address along with the masked bits (96 bits).

Configuring Rulebase for a Prefix Set

The configured prefix set is applied in the rulebase using the following configuration. If the first 96 bits of the IPv6 address match one of the configured prefixes in the prefix set, the remaining 32 bits are converted to an IPv4 address and used in rule matching.

configure
   require active-charging
   active-charging service service_name
      rulebase rulebase_name
         strip server-ipv6 prefix-len prefix_length prefix-set prefix_set_name
         end

NOTES:

- strip: Extracts the IPv4 address from the IPv6 address.
- server-ipv6: Specifies the IPv6 address of the server.
- prefix-len: Specifies the length of the IPv6 prefix.
- prefix-set: Specifies the name of the prefix set to be used for rule matching.
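The strip operation amounts to the standard embedded-IPv4 extraction for a /96 prefix. The following sketch uses Python's ipaddress module to illustrate it; the prefix 64:ff9b::/96 is only the well-known NAT64 example prefix, not a value required by this feature.

```python
import ipaddress

def extract_server_ipv4(server_ipv6, prefix_set):
    """Return the IPv4 address carried in the low 32 bits of server_ipv6
    when its first 96 bits match a configured prefix; otherwise None."""
    addr = ipaddress.IPv6Address(server_ipv6)
    for prefix in prefix_set:
        net = ipaddress.IPv6Network(prefix)
        if net.prefixlen == 96 and addr in net:
            # The remaining 32 bits become the IPv4 address used for
            # DPI rule matching.
            return ipaddress.IPv4Address(int(addr) & 0xFFFFFFFF)
    return None
```

For example, extract_server_ipv4("64:ff9b::c000:221", ["64:ff9b::/96"]) yields 192.0.2.33, while an address outside every configured prefix yields None and is rule-matched as plain IPv6.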


CHAPTER 30
Non-MCDMA Cores for Crypto Processing

This chapter describes the following topics:

- Feature Summary and Revision History, on page 203
- Feature Changes, on page 204

Feature Summary and Revision History

Summary Data
  Applicable Product(s) or Functional Area: ePDG, IPSec
  Applicable Platform(s): VPC-DI, VPC-SI
  Feature Default: Enabled - Always-on
  Related Changes in This Release: Not Applicable
  Related Documentation: ePDG Administration Guide

Revision History

Important: Revision history details are not provided for features introduced before releases 21.2 and N5.1.

  All non-MCDMA IFTASK cores present in the system are used for crypto processing. - 21.8
  First introduced. - Pre 21.2

Feature Changes

Previously, only four cores in the VPC-DI/VPC-SI platforms were used for crypto processing, which limited throughput when using the software path for encryption/decryption. In this release, all non-MCDMA IFTASK cores present in the system are used for crypto processing instead of only four.

Previous Behavior: The core allocation for a particular SA was done based on its IPSec policy number, distributing crypto processing among four or fewer cores.

New Behavior: In this release, the SA index is used to distribute sessions across all non-MCDMA cores present in the system for crypto processing.

The following configuration is added to limit the number of cores to be used for crypto:

IFTASK_MAX_CRYPTO_CORES=<percentage>

By default, all non-MCDMA cores are used. The value is configured as a percentage of the maximum number of IFTASK cores present in the system. This configuration is added in the /boot1/param.cfg file under the debug shell of each SF before reload.

Customer Impact: Performance improves proportionally with the number of non-MCDMA IFTASK cores present in the system.
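A minimal sketch of the new distribution follows. The modulo mapping from SA index to core is an assumption for illustration; the release notes state only that the SA index spreads sessions across all non-MCDMA cores, not the exact hash.

```python
def crypto_core_for_sa(sa_index, non_mcdma_cores):
    """Pick the IFTASK core for an IPsec SA by SA index, spreading
    sessions across every non-MCDMA core instead of hashing the IPSec
    policy number onto at most four cores (assumed modulo mapping)."""
    if not non_mcdma_cores:
        raise ValueError("no non-MCDMA cores available")
    return non_mcdma_cores[sa_index % len(non_mcdma_cores)]
```

With eight non-MCDMA cores, consecutive SA indexes land on all eight cores in turn, which is why throughput scales with the number of such cores present.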

CHAPTER 31
Override Control Enhancement

This chapter describes the following topics:

- Feature Summary and Revision History, on page 205
- Feature Changes, on page 206
- Monitoring and Troubleshooting, on page 206

Feature Summary and Revision History

Summary Data
  Applicable Product(s) or Functional Area: P-GW, S-GW, SAEGW
  Applicable Platform(s): ASR 5500, VPC-DI, VPC-SI
  Feature Default: Disabled - Configuration Required
  Related Changes in This Release: Not applicable
  Related Documentation: ECS Administration Guide, Stats and Counters Reference Guide

Revision History

Important: Revision history details are not provided for features introduced before releases 21.2 and N5.1.

  This feature allows you to merge rule-level or charging-action-level override control with Wildcard OC. - 21.8
  First introduced. - Pre 21.2

Feature Changes

The Override Control (OC) feature, introduced in an earlier release, allows you to dynamically modify the parameters of static or predefined rules with parameters sent by the PCRF over the Gx interface. OC allows you to specify the overridden parameters with the ability to exclude certain rules. The PCRF sends these overrides as the Override-Control grouped AVP in a CCA or RAR message.

Currently, if rule-level or charging-action-level and Wildcard-level OC parameters are received in a single message, the rule-level or charging-action-level OC parameters are applied without merging any of the parameters with the Wildcard OC parameters.

A new Diameter AVP, "Override-Control-Merge-Wildcard", is added to the grouped AVP "Override-Charging-Action-Parameters" and included in the 'dpca-custom8' dictionary. This AVP indicates that an OC needs to be merged with a Wildcard OC. On receiving this new AVP, the gateway merges the parameters of the received OC with the Wildcard OC. The merged OC is applied to the rules matching the rule-name/ca-name criteria of the received OC. If the Wildcard OC is not present, the received OC is applied as-is.

Important: While applying OC on the rules of a rulebase, if a rule is present in the Exclude-Rule list of the Wildcard OC, then the unmerged (original) rule-level or charging-action-level OC is applied to that rule.

Monitoring and Troubleshooting

This section provides information on how to monitor and troubleshoot the Override Control Enhancement feature.
Show Commands and Outputs

This section provides information on show commands and their corresponding outputs for the Override Control Enhancement feature.

show active charging sessions full all

The following new fields are added to the output of this command:

- Merge with Wildcard
  - Received
  - Succeeded
  - Failed

show active-charging subscribers callid callid_name override-control

The following new field is added to the output of this command:

- Merge with Wildcard: TRUE/FALSE

show active-charging rulebase statistics name statistics_name

The following new fields are added to the output of this command:

- Merge with Wildcard
  - Received
  - Succeeded
  - Failed
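The merge decision these counters track can be sketched as a dictionary merge. This is an illustrative model under assumptions: the parameter names are hypothetical, and the precedence of the received OC over the Wildcard OC on overlapping parameters is an assumption, not stated behavior.

```python
def effective_override_control(received_oc, wildcard_oc,
                               merge_wildcard, rule_excluded):
    """Compute the OC applied to a rule matching the received OC.

    received_oc    -- rule-level or charging-action-level OC parameters
    wildcard_oc    -- Wildcard OC parameters, or None if absent
    merge_wildcard -- Override-Control-Merge-Wildcard AVP was received
    rule_excluded  -- rule is in the Wildcard OC's Exclude-Rule list
    """
    if wildcard_oc is None or not merge_wildcard or rule_excluded:
        # Unmerged/original OC is applied as-is.
        return dict(received_oc)
    merged = dict(wildcard_oc)
    merged.update(received_oc)  # assumed: received OC wins on overlap
    return merged
```

A merge attempt that reaches the final branch would increment the "Succeeded" counter; the excluded-rule path corresponds to the Important note above, where the original OC is applied unmerged.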


CHAPTER 32
Packet Count in G-CDR

This chapter describes the following topics:

- Feature Summary and Revision History, on page 209
- Feature Description, on page 210
- How It Works, on page 210
- Configuring Packet Count in G-CDR, on page 210
- Monitoring and Troubleshooting, on page 210

Feature Summary and Revision History

Summary Data
  Applicable Product(s) or Functional Area: P-GW
  Applicable Platform(s): ASR 5500, VPC-DI, VPC-SI
  Feature Default: Disabled - Configuration Required
  Related Changes in This Release: Not Applicable
  Related Documentation: Command Line Interface Reference, P-GW Administration Guide

Revision History
  First introduced. - 21.8

Feature Description

When an IoT UE is attached, it sends a message as needed and goes into Power Saving Mode (PSM) until it has to transmit the next message. The IoT UE does not detach (that is, terminate the session) after every message. By assessing the number of such messages through the CDRs generated for the IoT UE session, the operator can implement billing for IoT devices by including the packet counts in offline billing records.

Important: This feature is applicable to the custom24 GTPP dictionary.

How It Works

As part of this feature, two new attributes are introduced for the packet count: datapacketsfbcdownlink and datapacketsfbcuplink. These two attributes are CLI-controlled and visible only when the CLI is enabled. The existing attributes are not modified or removed.

Configuring Packet Count in G-CDR

This section provides information about the CLI commands available in support of the feature.

Enabling Packet Count in G-CDR

Use the following commands to enable or disable sending of packet counts in the G-CDR under the GTPP Server Group Configuration mode.

configure
   context context_name
      gtpp group group_name
         [ no ] gtpp attribute packet-count
         end

NOTES:

- no: Disables sending of uplink and downlink packet counts in the G-CDR.
- packet-count: Specifying this option includes the optional fields "datapacketfbcuplink" and "datapacketfbcdownlink" in the CDR.
- By default, the gtpp attribute packet-count CLI command is disabled.

Monitoring and Troubleshooting

This section provides information regarding CLI commands available for monitoring and troubleshooting the feature.

Show Command(s) and/or Outputs

This section provides information regarding show commands and/or their outputs in support of this feature.

show configuration

The output of this CLI command has been enhanced to display the following new field when the feature is enabled:

- gtpp attribute packet-count

show gtpp group name <name>

The output of this CLI command has been enhanced to display the following new field when the feature is enabled:

- Packet count present


CHAPTER 33
S6B-bypass Support for emps Session

This chapter describes the following topics:

- Feature Summary and Revision History, on page 213
- Feature Changes, on page 214
- Command Changes, on page 214
- Performance Indicator Changes, on page 215

Feature Summary and Revision History

Summary Data
  Applicable Product(s) or Functional Area: P-GW, SAE-GW
  Applicable Platform(s): ASR 5500, VPC-DI, VPC-SI
  Default Setting: Disabled - Configuration Required
  Related Changes in This Release: Not Applicable
  Related Documentation: Command Line Interface Reference, P-GW Administration Guide, SAE-GW Administration Guide

Revision History
  S6B bypass is implemented during S6B authorization/re-authorization failures for emps sessions. - 21.8

  First introduced. - 21.3

Feature Changes

Enhanced Multimedia Priority Services (emps) sessions can be created successfully in case of S6B authorization/re-authorization failures by using locally configured S6B values. To ensure continuity of emps sessions, S6B bypass is implemented during S6B authorization/re-authorization failures.

The S6B bypass is implemented in the following scenarios:

- For advance priority users when processing the Create Session Request (WPS ARP).
- When the P-GW performs authorization and re-authorization for existing emps sessions.

A new keyword, emps, is added as part of the failure-handling-template configuration in the AAA Server Group Configuration mode to provide S6B bypass only for emps subscribers during S6B failures. The operator should configure the emps keyword with the failure-handling-template to provide failure handling for emps subscribers. If the failure-handling-template is configured without the emps keyword, failure handling is applied to all subscribers.

Note: Failure handling for emps subscribers in the AAA group is implemented only with the failure-handling template mechanism.

The S6B-bypass for an emps session functionality includes the following:

- Once the P-GW assumes bypass for an emps session, it does not perform authorization/re-authorization in subsequent scenarios.
- In a new call establishment or in a Wi-Fi to LTE hand-off scenario, the S6B authorization/re-authorization AAR messages for emps are sent on the basis of the default bearer ARP only, because the type of dedicated bearer (whether emps or not) is unknown. The S6B authorization/re-authorization request initiation occurs before the PCRF communication for creating a dedicated bearer.
- Changes to the configuration during run time are applied to all S6B transactions initiated after the configuration change.

The S6B-bypass Support for emps Sessions feature is license controlled. Contact your Cisco account representative for detailed information on specific licensing requirements. For information on installing and verifying licenses, refer to the Managing License Keys section of the Software Management Operations chapter in the System Administration Guide.

Command Changes

This section describes the CLI configuration required to enable S6B-bypass Support for emps Sessions.

emps

The emps keyword implements the failure-handling template for emps subscribers during S6B authorization/re-authorization failures. Use the following configuration to enable failure handling for emps subscribers:

configure
   context context_name
      aaa group group_name
         diameter authentication failure-handling-template template_name emps
         [ no ] diameter authentication failure-handling-template emps
         end

NOTES:

- failure-handling-template: Associates a previously created failure handling template with the authentication application in the AAA group. template_name specifies the name of a pre-configured failure handling template and must be an alphanumeric string of 1 through 63 characters. By default, the template is not associated in the AAA group.
- emps: Specifies the failure-handling behavior for emps sessions, applicable during S6B authorization and re-authorization.
- no: Disassociates the failure-handling template from the AAA group authentication.

Performance Indicator Changes

This section provides information regarding show commands and/or their outputs in support of this feature.

show diameter aaa-statistics all

On executing the above command, the following new fields are displayed for this feature:

FH Behavior (emps)
  Continue
    With Retry
    Without Retry
  Retry and Terminate
    Retry and Terminate
    Retry Term without STR
  Termination
    Terminate
    Terminate without STR

Bulk Statistics

The following bulk statistics are added in the Diameter-Auth schema for the S6B-bypass Support for emps Sessions feature:

- fh-continue-retry-emps: Indicates the number of times the failure handling action "continue" is taken using the emps template.
- fh-continue-wo-retry-emps: Indicates the number of times the failure handling action "continue without retry" is taken using the emps template.
- fh-retry-and-term-emps: Indicates the number of times the failure handling action "retry and terminate" is taken using the emps template.
- fh-retry-and-term-wo-str-emps: Indicates the number of times the failure handling action "retry and terminate without STR" is taken using the emps template.
- fh-terminate-emps: Indicates the number of times the failure handling action "terminate" is taken using the emps template.
- fh-terminate-wo-str-emps: Indicates the number of times the failure handling action "terminate without STR" is taken using the emps template.

219 CHAPTER 34 Short Message Service This chapter describes the Short Message Service (SMS) feature in the following topics: Feature Summary and Revision History, on page 217 Feature Description, on page 218 How It Works, on page 218 Configuring SMS Support, on page 224 Monitoring and Troubleshooting, on page 226 Feature Summary and Revision History Summary Data Applicable Product(s) or Functional Area Applicable Platform(s) MME ASR 5500 VPC-DI VPC-SI Feature Default Disabled - Configuration Required Related Changes in This Release Not Applicable Related Documentation Command Line Interface Reference MME Administration Guide Statistics and Counters Reference Revision History Revision Details Release First introduced

220 Feature Description Short Message Service Feature Description Important The Short Message Service (SMS) feature is not fully qualified in this release. It is available only for testing purposes. For more information, contact your Cisco Account representative. The Short Message Service (SMS) is a means of sending messages of limited size to and from GSM/UMTS/EPS devices. SMS is a Store and Forward service, where messages are first sent to an entity called the Short Message Service Center (SMSC) and then forwarded to the recipient instead of transmitting directly to the destination. If the recipient is not connected, the message is saved in the SMSC and when the receiver becomes available, the network will contact the SMSC and forward the SMS. Thus, a GSM/UMTS/EPS PLMN supports the transfer of short messages between service centers and UEs. SMS is delivered over LTE through the following methods: SMS over SGs: The LTE UE device sends and retrieves circuit switched (CS) based SMS messages through the SGs interface. This method is already supported by the MME. SMS over IP: SIP based SMS messages are carried through IMS. The SMS to be transmitted is encapsulated in the SIP message. This method is not supported in this release. SMS in MME: SMS in MME delivers SMS services over the SGd interface to the SMSC. This method is intended for networks that do not deploy GERAN or UTRAN. This method is supported in this release. How It Works The SGd interface enables the transfer of short messages between the MME and the SMSC using Diameter protocol. SCTP is used as the transport protocol. The Short Message Control Protocol (SM-CP) and Short Message Relay Protocol (SM-RP) are traditional SMS protocols between MSC/VLR and UE. The SMS will be sent by the MME bypassing the MSC/VLR. SM-CP transmits the SMS and protects against loss caused by changing the dedicated channel. SM-RP manages the addressing and references. 
With the new interface configuration towards the SMSC, the MME sets up an SCTP association with the peer SMSC and the Diameter capability exchange is performed.

Limitations

The SMS feature has the following limitations:

Queueing of multiple MT messages per subscriber is not supported.
SMS is not processed while an MME common procedure is ongoing.
The Authentication procedure on receiving MO or MT SMS is not supported.
Collision scenarios, such as a context release while processing the SMS, are not supported.

Abort indication from the MME application while processing the SMS is not supported.
The MNRF flag in MME is not supported.
Long SMS is not supported.
Multiple SMSC-service associations are not supported.
Mapping the SMSC-address to a Diameter endpoint is not supported.
The TC1N, TR1N, TR2N, and MT QUEUE timers are set to default values of 5 seconds, 30 seconds, 30 seconds, and 30 seconds respectively. Configurable timer values under the SMSC service are not supported.
Heuristics paging for SGd SMS is not supported.
Statistics for SGd-triggered paging are not supported.
SMSC service statistics are supported, but the statistics are not pegged in this release.

Flows

This section describes the call flows related to the SMS feature.

Obtaining UE capability for SMS

SMS Capability with HSS

If the UE requests "SMS-only" in the Additional Update Type IE of a combined attach and the network accepts the Attach Request for EPS services and "SMS-only", the network indicates "SMS-only" in the Additional Update Result IE. If the SMS services are provided by SGd in the MME, the network provides a TMSI and a non-broadcast LAI in the Attach Accept message.

A UE supporting SMS in MME needs to perform a registration with the HSS. The following call flow illustrates the request for registration with the HSS.

Figure 10: SMS Capability with HSS

Step 1: The UE initiates a combined Attach or combined TAU/LAU to an MME.

Step 2: The MME sends an Update Location Request message to the HSS with the following data:
   SMS bit set in the Feature-List of the Supported-Features AVP; the Feature-List ID will be set to 2.
   "SMS-only" indication bit set in the ULR-Flags AVP.
   MME address for MT-SMS routing in the MME-Number-for-MT-SMS AVP.
   "SMS-only" indication set in the SMS-Register-Request AVP.

Step 3: The HSS registers the UE for SMS support in MME. If the HSS accepts registering the MME identity as an MSC identity for terminating SMS services, the HSS cancels the MSC/VLR registration from the HSS.

Step 4: For successful registrations, the HSS sends an Update Location Answer (indicating that the MME has registered for SMS) to the MME, with the "MME Registered for SMS" bit set in the ULA-Flags AVP.

HSS-initiated Removal of Registration for SMS

The following procedure is applied when the HSS needs to indicate to the MME that it is no longer registered for SMS.

Figure 11: Removal of Registration for SMS

Step 1: An event triggers the cancellation of the MME being registered for SMS; for example, removal of the SMS subscription for the UE, a CS location update, and so on.

Step 2: The HSS sends an Insert Subscriber Data Request (Remove SMS registration) message to inform the MME that it is no longer registered for SMS in MME.

Step 3: The MME sets the "MME Registered for SMS" parameter as not registered for SMS, considers the "SMS Subscription Data" invalid, and acknowledges with an Insert Subscriber Data Answer message to the HSS.

MO Forward Short Message Procedure

The MO Forward Short Message procedure is used between the serving MME and the SMSC to forward mobile originated short messages from a mobile user to a service center. The MME checks the SMS related subscription data and forwards the short message.

Figure 12: MO Forward Short Message Procedure

Step 1: The UE sends a mobile originated SMS, encapsulated in CP-DATA+RP-DATA, to the MME in the Uplink NAS Transport message.

Step 2: The MME encodes the message into an MO-Forward-Short-Message-Request (OFR) message and sends it to the SMSC.

Step 3: The MME acknowledges the received SMS by sending CP-ACK to the UE in the Downlink NAS Transport message.

Step 4: The SMSC processes the received OFR message and responds with an MO-Forward-Short-Message-Answer (OFA) message to the MME.

Step 5: The MME forwards the acknowledgement from the SMSC to the UE in CP-DATA+RP-ACK.

Step 6: The UE acknowledges the SMS delivery by sending CP-ACK to the MME in the Uplink NAS Transport message.

MT Forward Short Message Procedure

The MT Forward Short Message procedure is used between the SMSC and the serving MME to forward mobile terminated short messages.

When receiving the MT Forward Short Message Request, the MME checks whether the user is known. If the user is unknown, an Experimental-Result-Code set to DIAMETER_ERROR_USER_UNKNOWN is returned. The MME then attempts to deliver the short message to the UE. If the delivery of the short message to the UE is successful, the MME returns a Result-Code set to DIAMETER_SUCCESS. If the UE is not reachable via the MME, the MME sets the MNRF flag and returns an Experimental-Result-Code set to DIAMETER_ERROR_ABSENT_USER. If the delivery of the mobile terminated short message fails because the memory capacity is exceeded, a UE error occurs, or the UE is not SM-equipped, the MME returns an Experimental-Result-Code set to DIAMETER_ERROR_SM_DELIVERY_FAILURE with an SM Delivery Failure Cause indication.

Figure 13: MT Forward Short Message

Step 1: The SMSC sends a mobile terminated SMS to the MME in the MT-Forward-Short-Message-Request (TFR) message.

Step 2: If the UE is in IDLE mode, the MME initiates paging and establishes an S1AP connection, provided the UE replies with a paging response.

Step 3: Once the UE is in CONNECTED mode, the MME forwards the SMS in CP-DATA+RP-DATA to the UE using the Downlink NAS Transport message.

Step 4: The UE acknowledges the received message by sending CP-ACK in the Uplink NAS Transport message.

Step 5: The UE processes the received SMS and sends CP-DATA+RP-ACK to the MME.

Step 6: The MME sends the MT-Forward-Short-Message-Answer (TFA) command to the SMSC and forwards CP-ACK to the UE in the Downlink NAS Transport message.

Standards Compliance

The SMS feature complies with the following standards:

3GPP TS 24.301: Non-Access-Stratum (NAS) protocol for Evolved Packet System (EPS); Stage 3
3GPP TS 29.272: Evolved Packet System (EPS); Mobility Management Entity (MME) and Serving GPRS Support Node (SGSN) related interfaces based on Diameter protocol
3GPP TS 29.338: Diameter based protocols to support Short Message Service (SMS) capable Mobile Management Entities (MMEs)

Configuring SMS Support

This section provides information on the CLI commands to configure the SMSC service for SMS support in MME.

Creating and Configuring SMSC Service

Use the following configuration to enable the SMSC service and configure its parameters to support MO/MT SMS delivery between the SMSC, MME, and UE.

configure
   context context_name
      smsc-service smsc_svc_name
         diameter { dictionary standard | endpoint endpoint_name }
         mme-address mme_address
         tmsi tmsi_value non-broadcast mcc mcc_value mnc mnc_value lac lac_value
         default diameter dictionary

         no { diameter endpoint | mme-address | tmsi }
         end

NOTES:

context context_name: Creates or specifies an existing context and enters the Context Configuration mode. context_name specifies the name of a context entered as an alphanumeric string of 1 to 79 characters.

smsc-service smsc_svc_name: Creates and configures an SMSC peer service to allow communication with the SMSC peer. smsc_svc_name specifies the name of the SMSC service as an alphanumeric string of 1 to 63 characters. Entering this command in the Context mode results in the following prompt:

[context_name]host_name(config-smsc-service)#

diameter { dictionary standard | endpoint endpoint_name }: Configures the Diameter interface to be associated with the SMSC service.

dictionary standard: Configures the standard SGd dictionary.

endpoint endpoint_name: Enables Diameter to be used for accounting and specifies which Diameter endpoint to use. endpoint_name must be an alphanumeric string of 1 to 63 characters.

mme-address mme_address: Configures the MME address used to send SMS on the SGd interface. mme_address specifies the MME address (ISDN identity) as an integer of 1 to 15 digits.

tmsi tmsi_value non-broadcast mcc mcc_value mnc mnc_value lac lac_value: Configures the TMSI to be sent to the UE. tmsi_value specifies the 4-byte M-TMSI as an integer from 1 to 4294967295.

non-broadcast: Configures the non-broadcast Location Area Identifier (LAI).

mcc mcc_value: Configures the mobile country code (MCC) portion of the non-broadcast LAI for the SMSC service as an integer from 100 through 999.

mnc mnc_value: Configures the mobile network code (MNC) portion of the non-broadcast LAI for the SMSC service as a 2- or 3-digit integer from 00 through 999.

lac lac_value: Configures the location area code (LAC) value as an integer from 1 to 65535.

default: Configures the standard Diameter SGd dictionary by default.

no: Disables the specified configuration.
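Putting the commands above together, a minimal sketch of an SMSC service configuration might look as follows; all names, addresses, and identity values are hypothetical samples, and the referenced Diameter endpoint must be configured separately:

```
configure
   context ingress
      smsc-service smsc1
         diameter endpoint sgd-endpoint
         diameter dictionary standard
         mme-address 19195551234
         tmsi 1001 non-broadcast mcc 311 mnc 480 lac 2311
         end
```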
Verifying the Configuration

Use the following command to verify the configuration for all SMSC services or a specified SMSC service:

show smsc-service { all | name smsc_svc_name | statistics { all | name smsc_svc_name | summary } }

Configuring MME Preference for SMS

Use the following configuration to configure the MME preference for SMS and the SMSC address.

configure
   call-control-profile profile_name

      sms-in-mme { preferred [ smsc-address smsc_address ] | smsc-address smsc_address }
      no sms-in-mme { preferred [ smsc-address ] | smsc-address }
      end

NOTES:

call-control-profile profile_name: Creates an instance of a call control profile. profile_name specifies the name of a call control profile entered as an alphanumeric string of 1 to 64 characters.

sms-in-mme { preferred [ smsc-address smsc_address ] | smsc-address smsc_address }: Configures the SMS capability (SGd interface for SMS) in MME.

preferred: Configures the SMS preference in MME.

smsc-address smsc_address: Configures the SMSC address (ISDN identity) for the MME to send SMS on the SGd interface. smsc_address must be an integer of 1 to 15 digits.

no: Deletes the specified configuration.

Associating SMSC Service with MME Service

Use the following configuration to associate an SMSC service with the MME service.

configure
   context context_name
      mme-service service_name
         associate smsc-service smsc_svc_name [ context ctx_name ]
         end

NOTES:

context context_name: Creates or specifies an existing context and enters the Context Configuration mode. context_name specifies the name of a context entered as an alphanumeric string of 1 to 79 characters.

mme-service service_name: Creates an MME service or configures an existing MME service in the current context. service_name specifies the name of the MME service as an alphanumeric string of 1 to 63 characters.

associate smsc-service smsc_svc_name: Associates an SMSC service with the MME service. smsc_svc_name specifies the name of a pre-configured SMSC service to associate with the MME service as an alphanumeric string of 1 to 63 characters.

context ctx_name: Identifies a specific context name where the named service is configured. If this keyword is omitted, the named service must exist in the same context as the MME service. ctx_name must be an alphanumeric string of 1 to 63 characters.
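For example, the MME preference and the service association configured together might look like this sketch; the profile, service, context names, and SMSC address are hypothetical:

```
configure
   call-control-profile ccp1
      sms-in-mme preferred smsc-address 19195556789
      end
configure
   context ingress
      mme-service mme1
         associate smsc-service smsc1 context ingress
         end
```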
Monitoring and Troubleshooting This section provides information on the show commands and bulk statistics available for the SMS Support feature. 226

229 Short Message Service Show Commands and/or Outputs Show Commands and/or Outputs show call-control-profile full all This section provides information regarding show commands and their outputs for the SMS Support feature. The output of this command includes the following fields: SMS in MME Displays the configured value (preferred / not-preferred) for SMS in MME. SMSC Address Displays the configured SMSC address. show mme-service all The following new fields are added to the output of this command to display the SMSC statistics. SMSC Context Displays the name of the context in which SMSC service is configured. SMSC Service Displays the name of the SMSC service associated with the MME service. show smsc-service name <smsc_svc_name> The following fields are added to the output of this command: Service name Displays the name of the configured SMSC service. Context Displays the name of the configured context. Status Displays the status of the SMSC service. Diameter endpoint Displays the configured Diameter endpoint name. Diameter dictionary Displays the configured Diameter dictionary. Tmsi Displays the configured TMSI value. Non-broadcast-Lai Displays the configured non-broadcast MCC, MNC, and LAC values. MME-address Displays the configured MME address. show smsc-service statistics all The following fields are added to the output of this command: Session Stats: Total Current Sessions Displays the total number of current SMSC sessions. Sessions Failovers Displays the number of SMSC session failovers. Total Starts Displays the total number of SMSC session starts. Total Session Updates Displays the total number of SMSC session updates. Total Terminated Displays the total number of terminated SMSC sessions. Message Stats: 227

230 show smsc-service statistics all Short Message Service Total Messages Rcvd Displays the total number of messages received. Total Messages Sent Displays the total number of messages sent. OF Request Displays the total number of OF requests. OF Answer Displays the total number of OF answers. OFR Retries Displays the total number of OFR retries. OFR Timeouts Displays the total number of OFR timeouts. OFA Dropped Displays the total number of OFA dropped. TF Request Displays the total number of TF requests. TF Answer Displays the total number of TF answers. TFR Retries Displays the total number of TFR retries. TFA Timeouts Displays the total number of TFA timeouts. TFA Dropped Displays the total number of TFA dropped requests. AL Request Displays the total number of AL requests. AL Answer Displays the total number of AL answers. ALR Retries Displays the total number of ALR retries. ALR Timeouts Displays the total number of ALR timeouts. ALA Dropped Displays the total number of ALA dropped. Message Error Stats: Unable To Comply Displays the total number of message errors containing the result code "Unable To Comply". User Unknown Displays the total number of message errors containing the result code "User Unknown". User Absent Displays the total number of message errors containing the result code "User Absent". User Illegal Displays the total number of message errors containing the result code "User Illegal". SM Delivery Failure Displays the total number of message errors containing the result code "SM Delivery Failure". User Busy for MT SMS Displays the total number of message errors containing the result code "User Busy for MT SMS". Other Errors Displays the total number of message errors containing the result code "Other Errors". Bad Answer Stats: Auth-Application-Id Displays the absence or unexpected value in Auth-Application-Id AVP. Session-Id Displays the absence or unexpected value in Session-Id AVP. 228

231 Short Message Service show smsc-service statistics summary Origin-Host Displays the absence of Origin-Host AVP. Origin Realm Displays the absence of Origin-Realm AVP. Parse-Message-Errors Displays the total number of parse errors in the message. Parse-Mscc-Errors Displays the total number of parse errors in MSCC AVP. Miscellaneous Displays the total number of other miscellaneous errors. show smsc-service statistics summary The following fields are added to the output of this command: SMSC Session Stats: Total Current Sessions Displays the total number of current SMSC sessions. Sessions Failovers Displays the total number of SMSC session failovers. Total Starts Displays the total number of SMSC session starts. Total Session Updates Displays the total number of SMSC session updates. Total Terminated Displays the total number of terminated SMSC sessions. 229


CHAPTER 35 SNMP IF-MIB and Entity-MIB Support for DI-Network Interface Feature Summary and Revision History, on page 231 Feature Changes, on page 232 Command Changes, on page 232 SNMP MIB Object Changes, on page 232 Feature Summary and Revision History Summary Data Applicable Product(s) or Functional Area All Applicable Platform(s) VPC-DI Feature Default Enabled - Always-on Related Changes in This Release Not applicable Related Documentation SNMP MIB Reference Guide Statistics and Counters Reference Revision History Important Revision history details are not provided for features introduced before releases 21.2 and N5.1. Revision Details Release From this release, the SNMP IF-MIB and Entity-MIB include the DI-network interface statistics. 21.8

Revision Details: First introduced. Release: Pre 21.2

Feature Changes

The QVPC-DI platform provides Service and Management interface statistics. It does not include any Distributed Instance network (DI-network) interface statistics information.

Previous Behavior: Previously, IF-MIB and Entity-MIBs were not supported for the DI-network interface on the QVPC-DI platform.

New Behavior: From Release 21.8, IF-MIB and Entity-MIBs are supported for the DI-network interface on the QVPC-DI platform.

Customer Impact: These MIBs provide additional debug information about the DI-network interface.

Command Changes

show port dinet

This new show command is introduced with the following fields to display the DI-network port statistics:

counters SLOT/CPU/NPU
utilization SLOT/CPU/NPU
bps
pps
verbose

SNMP MIB Object Changes

This section displays a sample of the MIB information for the DI-network port details along with the management and service port details.

DiNet Virtual Ethernet - 1/0
Mgmt Virtual Ethernet - 1/1
DiNet Virtual Ethernet - 2/0
Mgmt Virtual Ethernet - 2/1

235 SNMP IF-MIB and Entity-MIB Support for DI-Network Interface SNMP MIB Object Changes DiNet Virtual Ethernet - 4/0 Srvc Virtual Ethernet - 4/10 Srvc Virtual Ethernet - 4/11 Note For information on SNMP MIBs changes for a specific release, refer to the SNMP MIB Changes in Release xx chapter of the appropriate version of the Release Change Reference. 233


CHAPTER 36 SNMP MIB Changes in StarOS 21.8 and USP 6.2

This chapter identifies SNMP MIB objects, alarms, and conformance statements added to, modified for, or deprecated from the StarOS 21.8 and Ultra Services Platform (USP) 6.2 software releases.

SNMP MIB Object Changes for 21.8, on page 235
SNMP MIB Alarm Changes for 21.8, on page 236
SNMP MIB Conformance Changes for 21.8, on page 237
SNMP MIB Object Changes for 6.2, on page 237
SNMP MIB Alarm Changes for 6.2, on page 238
SNMP MIB Conformance Changes for 6.2, on page 239

SNMP MIB Object Changes for 21.8

This section provides information on SNMP MIB object changes in release 21.8.

Important: For more information regarding SNMP MIB objects in this section, see the SNMP MIB Reference for this release.

New SNMP MIB Objects

This section identifies new SNMP MIB objects available in release 21.8. The following objects are new in this release:

starsxinterfacetype
starsxselfaddr
starsxpeeraddr
starsxpeernewrectimestamp
starsxpeeroldrectimestamp
starsxfailurecause

Modified SNMP MIB Objects

This section identifies SNMP MIB objects modified in release 21.8. The following objects have been modified in this release:

None in this release.

Deprecated SNMP MIB Objects

This section identifies SNMP MIB objects that are no longer supported in release 21.8. The following objects have been deprecated in this release:

None in this release.

SNMP MIB Alarm Changes for 21.8

This section provides information on SNMP MIB alarm changes in release 21.8.

Important: For more information regarding SNMP MIB alarms in this section, see the SNMP MIB Reference for this release.

New SNMP MIB Alarms

This section identifies new SNMP MIB alarms available in release 21.8. The following alarms are new in this release:

starsxpathfailure
starsxpathfailureclear
starthreshdataplanemonitor5minsloss
starthreshcleardataplanemonitor5minsloss
starthreshdataplanemonitor60minsloss
starthreshcleardataplanemonitor60minsloss
starthreshcontrolplanemonitor5minsloss
starthreshclearcontrolplanemonitor5minsloss
starthreshcontrolplanemonitor60minsloss
starthreshclearcontrolplanemonitor60minsloss

Modified SNMP MIB Alarms

This section identifies SNMP MIB alarms modified in release 21.8. The following alarms have been modified in this release:

None in this release.

Deprecated SNMP MIB Alarms

This section identifies SNMP MIB alarms that are no longer supported in release 21.8. The following alarms have been deprecated in this release:

None in this release.

SNMP MIB Conformance Changes for 21.8

This section provides information on SNMP MIB conformance statement changes in release 21.8.

Important: For more information regarding SNMP MIB conformance statements in this section, see the SNMP MIB Reference for this release.

New SNMP MIB Conformance

This section identifies new SNMP MIB conformance statements available in release 21.8. The following conformance statements are new in this release:

None in this release.

Modified SNMP MIB Conformance

This section identifies SNMP MIB conformance statements modified in release 21.8. The following conformance statements have been modified in this release:

None in this release.

Deprecated SNMP MIB Conformance

This section identifies SNMP MIB conformance statements that are no longer supported in release 21.8. The following conformance statements have been deprecated in this release:

None in this release.

SNMP MIB Object Changes for 6.2

This section provides information on SNMP MIB object changes in the Ultra M MIB corresponding to release 6.2.

240 SNMP MIB Alarm Changes for 6.2 SNMP MIB Changes in StarOS 21.8 and USP 6.2 Important For more information regarding SNMP MIB objects in this section, see the Ultra M Solutions Guide for this release. New SNMP MIB Objects This section identifies new SNMP MIB objects available in release 6.2. The following objects are new in this release: cultramsiteid Modified SNMP MIB Objects This section identifies SNMP MIB objects modified in release 6.2. The following objects have been modified in this release: cultramfaultindex The object ID was changed from 1 to 2. cultramnfvidenity The object ID was changed from 2 to 3. cultramfaultdomain The object ID was changed from 3 to 4. cultramfaultsource The object ID was changed from 4 to 5. cultramfaultcreationtime The object ID was changed from 5 to 6. cultramfaultseverity The object ID was changed from 6 to 7. cultramfaultcode The object ID was changed from 7 to 8. cultramfaultdescription The object ID was changed from 8 to 9. Deprecated SNMP MIB Objects This section identifies SNMP MIB objects that are no longer supported in release 6.2. The following objects have been deprecated in this release: None in this release. SNMP MIB Alarm Changes for 6.2 This section provides information on SNMP MIB alarm changes in the Ultra M MIB corresponding to release 6.2. Important For more information regarding SNMP MIB alarms in this section, see the Ultra M Solutions Guide for this release. 238

New SNMP MIB Alarms

This section identifies new SNMP MIB alarms available in release 6.2. The following alarms are new in this release:

None in this release.

Modified SNMP MIB Alarms

This section identifies SNMP MIB alarms modified in release 6.2. The following alarms have been modified in this release:

None in this release.

Deprecated SNMP MIB Alarms

This section identifies SNMP MIB alarms that are no longer supported in release 6.2. The following alarms have been deprecated in this release:

None in this release.

SNMP MIB Conformance Changes for 6.2

This section provides information on SNMP MIB conformance statement changes in the Ultra M MIB corresponding to release 6.2.

Important: For more information regarding SNMP MIB conformance statements in this section, see the Ultra M Solutions Guide for this release.

New SNMP MIB Conformance Statements

This section identifies new SNMP MIB conformance statements available in release 6.2. The following conformance statements are new in this release:

None in this release.

Modified SNMP MIB Conformance Statements

This section identifies SNMP MIB conformance statements that are modified in release 6.2. The following conformance statements have been modified in this release:

None in this release.

Deprecated SNMP MIB Conformance Statements

This section identifies SNMP MIB conformance statements that are no longer supported in release 6.2.

The following conformance statements have been deprecated in this release: None in this release.

243 CHAPTER 37 UAS and UEM Login Security Enhancements Feature Summary and Revision History, on page 241 Feature Description, on page 241 Feature Summary and Revision History Summary Data Applicable Product(s) or Functional Area Ultra Automation Services (UAS) Applicable Platform(s) UGP running on the Ultra M Solution Feature Default Enabled Always-on Related Features in this Release Not Applicable Related Documentation Ultra Services Platform Deployment Automation Guide Ultra M Solutions Guide Revision History Important Revision history details are not provided for features introduced before releases 21.2 and N5.1. Revision Details Release First introduced. 6.2 Feature Description For UAS and UEM components, the following login security restrictions are supported: 241

244 Feature Description UAS and UEM Login Security Enhancements You will be locked out of the system for 10 minutes upon the third incorrect attempt to login to a UAS or UEM VM. Should you need/want to change your password, the new password must be different than any of the last five previously configured passwords. 242

245 CHAPTER 38 UEM Patch Upgrade Process Feature Summary and Revision History, on page 243 Feature Description, on page 244 UEM Upgrade Workflow, on page 244 Initiating the UEM Patch Upgrade, on page 248 Limitations, on page 250 Feature Summary and Revision History Summary Data Applicable Product(s) or Functional Area All Applicable Platform(s) UGP Feature Default Disabled - Configuration required Related Features in this Release Not Applicable Related Documentation Ultra Gateway Platform System Administration Guide Ultra M Solutions Guide Ultra Services Platform Deployment Automation Guide Revision History Revision Details Release First introduced

246 Feature Description UEM Patch Upgrade Process Feature Description Important This feature is not fully qualified in this release. It is available only for testing purposes. For more information, contact your Cisco Accounts representative. In releases prior to 6.2, the USP-based VNF would have to be completely terminated in order to perform an upgrade of the UEM. Furthermore, the UEM patch upgrade functionality that existed was limited and manual. With this release, the UEM can optionally be upgraded as part of a rolling patch upgrade process in order to preserve the operational state of the VNF, UAS, and VNFM deployments. UEM Upgrade Workflow In the rolling patch upgrade process, each of the VMs in the UEM Zookeeper cluster (master, slave, and standby) is upgraded one at a time. By default, the upgrade attempts to upgrade the slave VM first and the Zookeeper-elected leader VM last as illustrated in Figure 15: UEM Patch Upgrade Process Flow, on page

247 UEM Patch Upgrade Process UEM Upgrade Workflow Figure 14: UEM VM Upgrade Order Important The UEM patch upgrade process is supported for Ultra M deployments that leverage the Hyper-Converged architecture and for stand-alone AutoVNF deployments. Figure 15: UEM Patch Upgrade Process Flow, on page 246 illustrates the UEM patch upgrade process for Ultra M deployments. For stand-alone AutoVNF deployments, the upgrade software image is uploaded to the onboarding server (step 1) and the upgrade command is executed from AutoVNF (step 3). 245

248 UEM Upgrade Workflow UEM Patch Upgrade Process Figure 15: UEM Patch Upgrade Process Flow 1. The new USP ISO containing the UEM upgrade image is onboarded to the Ultra M Manager node. 2. Update the deployment network service description (NSD) to identify the new package. Package information is defined in the VNF package descriptor (vnf-packaged) as follows: <---SNIP---> vnf-packaged <upgrade_package_descriptor_name> location <package_url> validate-signature false configuration staros external-url /home/ubuntu/system.cfg <---SNIP---> The package must then be referenced in the virtual descriptor unit (VDU) pertaining to the UEM: <---SNIP---> vdu em vdu-type element-manager login-credential em_login scm scm image vnf-package vnf-rack vnf-rack1 vnf-package primary <upgrade_package_descriptor_name> vnf-package secondary <previous_package_descriptor_name> <---SNIP---> 246

Important: The secondary image is used as a fallback in the event an issue is encountered during the upgrade process. If no secondary image is specified, the upgrade process stops and generates an error log.

3. The rolling upgrade request is triggered through AutoDeploy, which initiates the process with AutoVNF.

4. AutoVNF obtains the UEM HA VIP from the Oper data and communicates with the corresponding UEM to determine the IP addresses of the eth0 interface for each VM in the UEM cluster (master, slave, and standby). This information is maintained in a file on the VM named ip.txt. AutoVNF then uses the address information to communicate with each UEM to determine its Zookeeper state (master, slave, or standby). The upgrade order is illustrated in Figure 14: UEM VM Upgrade Order. The rest of this procedure assumes that the standby UEM VM is the Zookeeper-elected leader.

5. AutoVNF triggers the shutdown of the slave UEM VM via the VNFM.

6. The VNFM works with the VIM to remove the slave UEM VM.

7. AutoVNF waits until the VNFM confirms that the slave UEM VM has been completely terminated.

8. AutoVNF initiates the deployment of a new UEM VM via the VNFM using the upgrade image.

9. The VNFM works with the VIM to deploy the new UEM VM.

Important: If the ESC does not receive the SERVICE UPDATE notification for the newly added VM instances, the upgrade fails and requires manual intervention. If the ESC state (service/VM state) is not ACTIVE, the upgrade does not proceed. You must manually review the logs to determine the reason for the inactive state.

10. The slave UEM VM synchronizes data with the master UEM VM.

11. AutoVNF waits until the VNFM confirms that the new VM has been deployed and is in slave mode. If AutoVNF detects that there is an issue with the VM, it re-initiates the UEM VM with the previous image, provided that image was identified as a secondary image in the UEM VDU.
If no issues are detected, AutoVNF proceeds with the upgrade process.

12. Steps 4 through 10 are repeated for the UEM VM that is currently the master. Once the master goes down, the slave UEM becomes the master. If an issue is encountered during the upgrade of the second UEM VM (that is, the master UEM VM in this scenario), the process stops completely and AutoVNF upstart logs are generated.

13. Steps 4 through 8 are repeated for the standby VM. In this case, the UEM is re-deployed as the standby VM.
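The ordering and fallback behavior in the procedure above can be summarized in a short sketch: the slave is replaced first, the Zookeeper-elected leader last, and the secondary image is used when a VM fails to come back. The function names and state model here are illustrative; they are not AutoVNF's actual API.

```python
# Sketch of the rolling-upgrade loop described in steps 4-13. Illustrative only.
def rolling_upgrade_order(states, leader):
    """states maps VM name -> Zookeeper state ('master', 'slave', or 'standby').
    The elected leader sorts last; slaves sort first."""
    return sorted(states, key=lambda vm: (vm == leader, states[vm] != "slave"))

def rolling_upgrade(states, leader, primary, secondary, upgrade_vm):
    """upgrade_vm(vm, image) stands in for steps 5-9: terminate the VM and
    redeploy it with the given image via the VNFM/VIM."""
    for vm in rolling_upgrade_order(states, leader):
        try:
            upgrade_vm(vm, primary)
        except RuntimeError:
            if secondary is None:
                raise  # no fallback image: the process stops with an error log
            upgrade_vm(vm, secondary)  # re-initiate with the previous image
```

With a master/slave/standby cluster whose standby is the elected leader, the computed order is slave, master, standby, matching the scenario assumed in step 4.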

Initiating the UEM Patch Upgrade

UEM patch upgrades are initiated through a remote procedure call (RPC) executed from the ConfD command line interface (CLI) or via a NETCONF API.

Via the CLI

To perform an upgrade using the CLI, log in to AutoDeploy (Ultra M deployments) or AutoVNF (stand-alone AutoVNF deployments) as the ConfD CLI admin user and execute the following command:

update-sw nsd-id <nsd_name> rolling { true | false } vnfd <vnfd_name> vnf-package <pkg_id>

NOTES:

- <nsd_name> and <vnfd_name> are the names of the network service descriptor (NSD) file and the VNF descriptor (VNFD), respectively, in which the VNF component (VNFC) for the UEM is defined.
- If the rolling false operator is used, the upgrade terminates the entire deployment. In this scenario, the vnfd <vnfd_name> operator should not be included in the command. If it is included, a transaction ID is generated for the upgrade and the transaction fails. The AutoVNF upstart log reflects this status.
- <pkg_id> is the name of the USP ISO containing the upgraded UEM VM image.
- Ensure that the upgrade package is defined as a VNF package descriptor within the NSD and that it is specified as the primary package in the UEM VDU configuration.
- Ensure that the current (pre-upgrade) package is specified as the secondary package in the UEM VDU configuration in order to provide rollback support in the event of errors.

Via the NETCONF API

Operation: nsd:update-sw
Namespace: xmlns:nsd="
Parameters:

Parameter Name | Required | Type    | Description
nsd            | M        | string  | NSD name
rolling        | M        | boolean | Specifies whether rolling upgrade is enabled (true) or disabled (false)
vnfd           | M        | string  | VNFD name; mandatory in the case of a rolling upgrade

Parameter Name | Required | Type   | Description
package        | M        | string | Package descriptor name that should be used to update the VNFD instance identified by vnfd

NOTES:

- If the rolling false operator is used, the upgrade terminates the entire deployment. In this scenario, the vnfd <vnfd_name> operator should not be included in the command. If it is included, a transaction ID is generated for the upgrade and the transaction fails. The AutoVNF upstart log reflects this status.
- Ensure that the upgrade package is defined as a VNF package descriptor within the NSD and that it is specified as the primary package in the UEM VDU configuration.
- Ensure that the current (pre-upgrade) package is specified as the secondary package in the UEM VDU configuration in order to provide rollback support in the event of errors.

Example RPC

<nc:rpc message-id="urn:uuid:bac690a2-08af-4c9f c907d6e12ba" xmlns="...">
 <config>
  <nsd xmlns="...">
   <nsd-id>fremont-autovnf</nsd-id>
   <vim-identity>vim1</vim-identity>
   <vnfd xmlns="...">
    <vnfd-id>esc</vnfd-id>
    <vnf-type>esc</vnf-type>
    <version>6.0</version>
    <configuration>
     <boot-time>1800</boot-time>
     <set-vim-instance-name>true</set-vim-instance-name>
    </configuration>
    <external-connection-point>
     <vnfc>esc</vnfc>
     <connection-point>eth0</connection-point>
    </external-connection-point>
    <high-availability>true</high-availability>
    <vnfc>
     <vnfc-id>esc</vnfc-id>
     <health-check>
      <enabled>false</enabled>
     </health-check>
     <vdu>
      <vdu-id>esc</vdu-id>
     </vdu>
     <connection-point>
      <connection-point-id>eth0</connection-point-id>
      <virtual-link>
       <service-vl>mgmt</service-vl>
      </virtual-link>
     </connection-point>
     <connection-point>
      <connection-point-id>eth1</connection-point-id>
      <virtual-link>
       <service-vl>orch</service-vl>
      </virtual-link>
     </connection-point>
    </vnfc>
   </vnfd>
  </nsd>
  <vim xmlns="...">
   <vim-id>vim1</vim-id>
   <api-version>v2</api-version>
   <auth-url>...</auth-url>
   <user>vim-admin-creds</user>
   <tenant>abcxyz</tenant>
  </vim>
  <secure-token xmlns="...">
   <secure-id>vim-admin-creds</secure-id>
   <user>abcxyz</user>
   <password>******</password>
  </secure-token>
  <vdu xmlns="...">
   <vdu-id>esc</vdu-id>
   <vdu-type>cisco-esc</vdu-type>
   <flavor>
    <vcpus>2</vcpus>
    <ram>4096</ram>
    <root-disk>40</root-disk>
    <ephemeral-disk>0</ephemeral-disk>
    <swap-disk>0</swap-disk>
   </flavor>
   <login-credential>esc_login</login-credential>
   <netconf-credential>esc_netconf</netconf-credential>
   <image>
    <vnf-package>usp_throttle</vnf-package>
   </image>
   <vnf-rack>abcxyz-vnf-rack</vnf-rack>
   <vnf-package>
    <primary>usp_6_2t</primary>
    <secondary>usp_throttle</secondary>
   </vnf-package>
   <volume/>
  </vdu>
  <secure-token xmlns="...">
   <secure-id>esc_login</secure-id>
   <user>admin</user>
   <password>******</password>
  </secure-token>
  <secure-token xmlns="...">
   <secure-id>esc_netconf</secure-id>
   <user>admin</user>
   <password>******</password>
  </secure-token>
  <vnf-packaged xmlns="...">
   <vnf-package-id>usp_throttle</vnf-package-id>
   <location>...</location>
   <validate-signature>false</validate-signature>
   <configuration>
    <name>staros</name>
    <external-url>...</external-url>
   </configuration>
  </vnf-packaged>
 </config>
</nc:rpc>

Limitations

The following limitations exist with the UEM upgrade feature:

- This functionality is available only after upgrading to the 6.2 release.
- The rolling UEM patch upgrade process can only be used to upgrade to new releases that have a compatible database schema. As new releases become available, Cisco will provide information as to whether or not this functionality can be used to perform the upgrade.
- For Ultra M deployments, AutoDeploy and AutoIT must be upgraded before using this functionality. Upgrading these products terminates the VNF deployment.
- For stand-alone AutoVNF deployments, AutoVNF must be upgraded before using this functionality. Upgrading this product terminates the VNF deployment.
- Ensure that no other operations are running while performing the upgrade/rolling upgrade process. The upgrade/rolling upgrade procedure should be performed only during a maintenance window.
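The update-sw operation described earlier in this chapter can also be assembled programmatically before being sent over NETCONF. The sketch below builds the RPC payload from the documented parameters; the namespace URI was not preserved in this copy of the document, so NSD_NS is a placeholder that must be replaced with the value from the Cisco Ultra Services Platform NETCONF API Guide, and the element names are taken from the CLI syntax shown above.

```python
import xml.etree.ElementTree as ET

NSD_NS = "urn:example:nsd"  # placeholder: substitute the real nsd namespace

def build_update_sw(nsd, rolling, vnfd, package):
    """Assemble an update-sw payload from the parameters in the table above."""
    rpc = ET.Element("{%s}update-sw" % NSD_NS)
    ET.SubElement(rpc, "{%s}nsd-id" % NSD_NS).text = nsd
    ET.SubElement(rpc, "{%s}rolling" % NSD_NS).text = "true" if rolling else "false"
    if rolling:
        # vnfd is mandatory only for a rolling upgrade; including it with
        # rolling false causes the generated transaction to fail.
        ET.SubElement(rpc, "{%s}vnfd" % NSD_NS).text = vnfd
    ET.SubElement(rpc, "{%s}vnf-package" % NSD_NS).text = package
    return ET.tostring(rpc, encoding="unicode")
```

The rolling=false branch deliberately omits the vnfd element, mirroring the note that the vnfd operator must not be supplied when terminating the entire deployment.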


CHAPTER 39

Ultra M Manager Integration with AutoIT

Feature Summary and Revision History
Feature Changes

Feature Summary and Revision History

Summary Data

Applicable Product(s) or Functional Area: All
Applicable Platform(s): UGP
Feature Default: Disabled - Configuration Required
Related Features in this Release: Not Applicable
Related Documentation: Ultra Gateway Platform System Administration Guide, Ultra M Solutions Guide, Ultra Services Platform Deployment Automation Guide

Revision History

Revision Details | Release
First introduced. | N6.0
Though this feature was introduced in N6.0, it was not fully qualified. It is now fully qualified as of this release. | 6.2

Feature Changes

In previous software releases, the Ultra M Manager was distributed as an RPM bundle, both within the USP ISO and as a separate RPM.

In this release, the functionality enabled through the Ultra M Manager is part of AutoIT and AutoVNF. Once these UAS modules are installed, health monitoring features and functionality are available for configuration and use.

Important: The integrated Ultra M Manager functionality is currently supported only with Ultra M UGP VNF deployments that are based on OSP 10 and that leverage the Hyper-Converged architecture. The Ultra M Manager RPM is still distributed separately and is intended only for use in specific deployment scenarios. Contact your local sales or support representative for more information.

Event Aggregation Functionality

With its integration into AutoIT, the Ultra M Manager event aggregation (fault management) configuration process has also changed. Previously, Ultra M Manager event aggregation was configured through the ultram_cfg.yaml file. As of this release, this functionality is configured through NETCONF API-based remote procedure calls invoked via AutoIT, or through a network service descriptor (NSD) configuration file activated through AutoDeploy. In either scenario, the parameters related to this functionality are defined by/within the fault management descriptor (FMD).

This API is used to configure:

- Fault domains: The specific components within the Ultra M solution for which fault management is to be performed. During configuration, the individual domains for which you wish to perform event aggregation can be specified. Alternatively, if no domains are specified, the default behavior is to monitor all domains.

- Event severity: Configures the lowest-level severity of the events that are to be monitored. The severity can be specified as one of the following (arranged from highest to lowest severity):

  - emergency -- System-level fault impacting multiple VNFs/services
  - critical -- Critical fault specific to a VNF/service
  - major -- Component-level failure within a VNF/service
  - alert -- Warning condition for a service/VNF; may eventually impact service
  - informational -- Informational only; does not impact service

  Configuring the lowest-level severity means that events of that severity and higher are monitored. For example, if a severity of major is configured, then major, critical, and emergency events are monitored. If no severity is specified, informational is used as the default.

- SNMP descriptor: Configures parameters related to the version(s) of SNMP that are supported. SNMP v2c and v3 are supported.

- VIM parameters: Configures VIM parameters and the specific aspects of OpenStack to be monitored:

  - Modules (e.g. ceph, cinder, nova, etc.)
  - Controller services (e.g. cinder, glance, heat, etc.)

  - Compute services (e.g. ceph-mon.target, ceph-radosgw.target, ceph.target, etc.)
  - OSD compute services (e.g. ceph-mon.target, ceph-radosgw.target, ceph.target, etc.)

  Individual modules and services can be configured. If no specific modules are configured, then event aggregation is enabled for all modules.

Important: Controller, compute, and OSD compute services cannot be configured unless the systemctl module is enabled.

Though the FMD configuration can be included in the NSD configuration file, it is recommended that the configuration for this functionality be maintained in a separate, FMD-specific NSD configuration file. Refer to the Cisco Ultra Services Platform NETCONF API Guide for more information on configuring parameters related to the FMD.

Ultra M MIB Changes

A new object called cultramSiteId was added to the cultramFaultTable in the Ultra M MIB. The addition of this object resulted in the object IDs for all other objects in cultramFaultTable being incremented by 1. Refer to SNMP MIB Changes in StarOS 21.8 and USP 6.2 for details on this change.

Syslog Proxy Functionality

Syslog proxy functionality is supported at the following levels:

- Syslogging for UCS hardware: Syslogs for the UCS servers that comprise the Ultra M solution are proxied through AutoIT. The server list is based on the configuration specified in the VIM Orchestrator and VIM NSD configuration file. As such, syslog proxy functionality for the hardware must be configured after the VIM has been deployed.

- Syslogging for OpenStack services: AutoIT can be configured to serve as a proxy for OpenStack service syslogs. The list of servers on which OpenStack is running is based on the configuration specified in the VIM Orchestrator and VIM NSD configuration file. As such, syslog proxy functionality for these services must be configured after the VIM has been deployed.
  Syslogging is automatically enabled for the following services:

  - Nova
  - Cinder
  - Keystone
  - Glance
  - Ceph monitor (Controller nodes only)
  - Ceph OSD (OSD Compute nodes only)
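Returning to the event-severity model described under Event Aggregation Functionality, the "lowest-level severity" semantics can be sketched in a few lines. The severity ordering below is taken from the list in that section; the function name is illustrative, not part of the FMD API.

```python
# Sketch of lowest-level-severity filtering: configuring a severity monitors
# events of that severity and higher. Illustrative only.
SEVERITY_ORDER = ["informational", "alert", "major", "critical", "emergency"]

def is_monitored(event_severity, configured="informational"):
    """informational is the default when no severity is configured."""
    return SEVERITY_ORDER.index(event_severity) >= SEVERITY_ORDER.index(configured)
```

With major configured, major, critical, and emergency events pass the filter while alert and informational events do not, matching the example given in the Event severity description.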

- Syslogging for UAS modules: Each UAS software module can be configured to send logs and syslogs to one or more external collection servers.

AutoDeploy and AutoIT

Logs and syslogs are sent directly to one or more external syslog collection servers configured when these modules are first installed. The configured collection servers are also the receivers for the UCS server hardware and OpenStack services for which AutoIT is a proxy. The following logs are sent:

AutoDeploy:
- /var/log/upstart/autodeploy.log
- /var/log/syslog

AutoIT:
- /var/log/upstart/autoit.log
- /var/log/syslog

In order to support syslogging functionality, additional operators were added to the boot_uas.py script used to install these modules:

--syslog-ip <ext_syslog_server_address>
--port <syslog_port_number>
--severity <syslog_severity_to_send>

Multiple collection server addresses can be configured. Furthermore, the following additional AutoIT installation parameter has been added to the Provisioning Network Details:

- Provisional Network HA VIP: The VIP address to be assigned to AutoIT's provisional network interface.

AutoVNF

AutoVNF serves as the syslog proxy for the VNFM, UEM, and CF VNF components (VNFCs). It also sends its own logs to the same external syslog collection server:

- /var/log/upstart/autovnf.log
- /var/log/syslog

Syslogging for the AutoVNF module is configured through the AutoVNF VNFC configuration within the VNF Rack and VNF NSD configuration file.

- Syslogging for VNFM, UEM, and CF VNFCs: AutoVNF can be configured as the syslog proxy for the following VNFM, UEM, and CF VNF component (VNFC) logs:

VNFM (ESC):
- /var/log/messages

Note: escmanager and mona logs are not configured as part of syslog automation. ESC can be manually configured to send these logs to the syslog proxy or to an external syslog collection server.

UEM:
- /var/log/em/vnfm-proxy/vnfm-proxy
- /var/log/em/ncs/ncs-java-vm
- /var/log/em/zookeeper/zookeeper
- /var/log/syslog

CF:
- All syslogs configured within the StarOS.

Syslogging for the VNFM, UEM, and CF is configured through their respective VNFC configurations within the VNF Rack and VNF NSD configuration file.

UCS Server Utility Integration with AutoIT

The Ultra M Manager provided utilities to simplify the process of upgrading the UCS server software (firmware) within the Ultra M solution. The integration of the Ultra M Manager into AutoIT includes full support for these utilities. There is no difference in their function or use except that the utilities are executed from the AutoIT VM.

Refer to the Ultra M Solutions Guide for more information.
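The boot_uas.py syslog operators listed earlier in this chapter (--syslog-ip, --port, --severity) can be assembled programmatically when several collection servers are configured. The flag names below come from the text above; whether boot_uas.py accepts the flag group repeated once per server is an assumption for illustration.

```python
# Sketch: building the syslog-related operators for a boot_uas.py invocation
# with multiple collection servers. The repeated-group form is an assumption.
def syslog_args(collectors):
    """collectors: iterable of (ip, port, severity) tuples."""
    args = []
    for ip, port, severity in collectors:
        args += ["--syslog-ip", ip, "--port", str(port), "--severity", str(severity)]
    return args

# e.g. syslog_args([("10.1.1.10", 514, 5), ("10.1.1.11", 514, 5)])
```

The resulting list can be appended to the rest of the installation command line; the addresses, port, and severity values shown are placeholders.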


CHAPTER 40

Ultra M Manager SNMP Fault Suppression

Feature Summary and Revision History
Feature Changes

Feature Summary and Revision History

Summary Data

Applicable Product(s) or Functional Area: All
Applicable Platform(s): UGP
Feature Default: Disabled - Configuration Required
Related Features in this Release: Not Applicable
Related Documentation: Ultra Gateway Platform System Administration Guide, Ultra M Solutions Guide, Ultra Services Platform Deployment Automation Guide

Revision History

Important: Revision history details are not provided for features introduced before releases 21.2 and N5.1.

Revision Details: First introduced in an FCS release. (It had been previously released in the ER.)

Feature Changes

In past releases, the fault suppression feature was made available through the Ultra M Manager utility (Ultra M Manager RPM software). In this release, the fault suppression functionality is automated and managed through the NETCONF API. Fault suppression functionality is configured through a fault management descriptor (FMD) configuration file that is comprised of the required NETCONF parameters.

Depending on the configuration of this functionality, faults can be suppressed at the following levels:

- UCS server:
  - UCS cluster: All events for all UCS nodes are suppressed.
  - UCS fault object distinguished names (DNs): All events for one or more specified UCS object DNs are suppressed.
  - UCS faults: One or more specified UCS faults are suppressed.

  Important: Fault suppression can be simultaneously configured at both the UCS object DN and fault levels.

- UAS and VNF components:
  - UAS component cluster: All events for all UAS components are suppressed.
  - UAS component events: One or more specified UAS component events are suppressed.

When faults are suppressed, event monitoring occurs as usual and the log report file shows the faults. However, suppressed faults are not reported over SNMP. Within the log file, suppressed faults are preceded by the word Skipping.

Refer to the Ultra M Solutions Guide for details on configuring SNMP Fault Suppression.
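The reporting behavior described above (suppressed faults are logged with a "Skipping" prefix but not forwarded over SNMP) can be sketched as a simple filter. The fault model, DN values, and fault IDs below are illustrative simplifications, not the actual FMD schema.

```python
# Sketch of fault-suppression reporting: suppressed faults are written to the
# log report only; all other faults are sent as SNMP notifications.
def process_fault(fault, suppressed_dns, suppressed_fault_ids, log, send_trap):
    """fault is a dict with illustrative 'dn' and 'id' keys."""
    if fault["dn"] in suppressed_dns or fault["id"] in suppressed_fault_ids:
        log("Skipping %s on %s" % (fault["id"], fault["dn"]))  # logged only
    else:
        send_trap(fault)  # reported over SNMP
```

Because suppression can be configured at both the object-DN and fault levels simultaneously, the filter checks both sets before deciding to forward a fault.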

CHAPTER 41

USP Software Version Updates

Feature Summary and Revision History
Feature Changes

Feature Summary and Revision History

Summary Data

Applicable Product(s) or Functional Area: All
Applicable Platform(s): UGP
Feature Default: Disabled - Configuration Required
Related Features in this Release: Not Applicable
Related Documentation: Ultra Gateway Platform System Administration Guide, Ultra M Solutions Guide, Ultra Services Platform Deployment Automation Guide

Revision History

Revision Details | Release
First introduced. | 6.2

Feature Changes

Cisco ESC Software Version Update

The Cisco Elastic Services Controller (ESC) product is used as the virtual network function manager (VNFM) within the Ultra Services Platform.

5G NSA for MME. Feature Summary and Revision History

5G NSA for MME. Feature Summary and Revision History Feature Summary and Revision History, on page 1 Feature Description, on page 2 How It Works, on page 5 Configuring, on page 10 Monitoring and Troubleshooting, on page 13 Feature Summary and Revision History

More information

5G Non Standalone for SAEGW

5G Non Standalone for SAEGW This chapter describes the 5G Non Standalone (NSA) feature in the following sections: Feature Summary and Revision History, on page 1 Feature Description, on page 2 How It Works, on page 3 Configuring

More information

5G NSA(Non-Standalone Architecture)

5G NSA(Non-Standalone Architecture) This chapter describes the following topics: Feature Summary and Revision History, page 1 Feature Description, page 2 How It Works, page 2 Configuring DCNR, page 5 Monitoring and Troubleshooting, page

More information

Dedicated Core Networks on MME

Dedicated Core Networks on MME This chapter describes the Dedicated Core Networks feature in the following sections: Feature Summary and Revision History, page 1 Feature Description, page 2 How It Works, page 5 Configuring DECOR on

More information

Dedicated Core Networks on MME

Dedicated Core Networks on MME This chapter describes the Dedicated Core Networks feature in the following sections: Feature Summary and Revision History, on page 1 Feature Description, on page 2 How It Works, on page 4 Configuring

More information

Release Change Reference, StarOS Release 21.9/Ultra Services Platform Release 6.3

Release Change Reference, StarOS Release 21.9/Ultra Services Platform Release 6.3 Release Change Reference, StarOS Release 21.9/Ultra Services Platform Release 6.3 First Published: 2018-07-31 Last Modified: 2018-11-26 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San

More information

5G Non Standalone. Feature Summary and Revision History

5G Non Standalone. Feature Summary and Revision History This chapter describes the (NSA) feature in the following sections: Feature Summary and Revision History, on page 1 Feature Description, on page 2 Feature Summary and Revision History Summary Data Applicable

More information

5G NSA for SGSN. Feature Summary and Revision History

5G NSA for SGSN. Feature Summary and Revision History Feature Summary and Revision History, on page 1 Feature Description, on page 2 How It Works, on page 3 Configuring 5G Non Standalone in SGSN, on page 6 Monitoring and Troubleshooting, on page 7 Feature

More information

NB-IoT RAT and Attach Without PDN Connectivity Support

NB-IoT RAT and Attach Without PDN Connectivity Support NB-IoT RAT and Attach Without PDN Connectivity Support This feature chapter describes the MME support for the CIoT optimizations attach without PDN connectivity and NB-IoT RAT type. Feature Summary and

More information

LTE to Wi-Fi (S2bGTP) Seamless Handover

LTE to Wi-Fi (S2bGTP) Seamless Handover This chapter describes the following topics: Feature Summary and Revision History, page 1 Feature Description, page 2 How It Works, page 2 Configuring LTE to Wi-Fi Seamless Handover, page 4 Monitoring

More information

S11U Interface Support on S-GW for CIoT Devices

S11U Interface Support on S-GW for CIoT Devices SU Interface Support on S-GW for CIoT Devices Feature Summary and Revision History, page Feature Description, page 2 How It Works, page 4 Standards Compliance, page 9 Configuring SU Interface Support on

More information

HLCOM Support. Feature Summary and Revision History

HLCOM Support. Feature Summary and Revision History Feature Summary and Revision History, page 1 Feature Description, page 2 How It Works, page 3 Standards Compliance, page 11 Limitations and Restrictions, page 11 Monitoring and Troubleshooting, page 11

More information

GTP-based S2b Interface Support on the P-GW and SAEGW

GTP-based S2b Interface Support on the P-GW and SAEGW GTP-based S2b Interface Support on the P-GW and SAEGW This chapter describes the GTP-based S2b interface support feature on the standalone P-GW and the SAEGW. Feature, page 1 How the S2b Architecture Works,

More information

This chapter describes the support of Non-IP PDN on P-GW and S-GW.

This chapter describes the support of Non-IP PDN on P-GW and S-GW. This chapter describes the support of Non-IP PDN on P-GW and S-GW. Feature Summary and Revision History, page 1 Feature Description, page 2 How It Works, page 2 Configuring Non-IP PDN, page 8 Monitoring

More information

Small Data over NAS, S11-U and SGi Interfaces

Small Data over NAS, S11-U and SGi Interfaces The MME support for small data transmission over NAS, S11-U and SGi interfaces is described in this chapter. Feature Summary and Revision History, page 1 Feature Description, page 2 How it Works, page

More information

edrx Support on the MME

edrx Support on the MME This feature describes the Extended Discontinuous Reception (edrx) support on the MME in the following sections: Feature Summary and Revision History, page 1 Feature Description, page 2 How edrx Works,

More information

IxLoad LTE Evolved Packet Core Network Testing: enodeb simulation on the S1-MME and S1-U interfaces

IxLoad LTE Evolved Packet Core Network Testing: enodeb simulation on the S1-MME and S1-U interfaces IxLoad LTE Evolved Packet Core Network Testing: enodeb simulation on the S1-MME and S1-U interfaces IxLoad is a full-featured layer 4-7 test application that provides realworld traffic emulation testing

More information

Cisco Evolved Programmable Network System Test Topology Reference Guide, Release 5.0

Cisco Evolved Programmable Network System Test Topology Reference Guide, Release 5.0 Cisco Evolved Programmable Network System Test Topology Reference Guide, Release 5.0 First Published: 2017-05-30 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706

More information

Direct Tunnel for 4G (LTE) Networks

Direct Tunnel for 4G (LTE) Networks This chapter briefly describes support for direct tunnel (DT) functionality over an S12 interface for a 4G (LTE) network to optimize packet data traffic. Cisco LTE devices (per 3GPP TS 23.401 v8.3.0) supporting

More information

LTE EPC Emulators v10.0 Release Notes - Page 1 of 15 -

LTE EPC Emulators v10.0 Release Notes - Page 1 of 15 - LTE EPC Emulators v10.0 Release Notes - Page 1 of 15 - Version 10.0.0.7 Release Date: Feb 24, 2014 Components 1. LTE Emulators : MME (with internal HSS), SGW and PGW (with internal PCRF) 1. LTE Emulators

More information

MME Changes in Release 20

MME Changes in Release 20 This chapter identifies features and functionality added to, modified for, or deprecated from the MME in StarOS 20 software releases. Corrections have been made in the 20.1 content. The following has been

More information

Cisco FindIT Plugin for Kaseya Quick Start Guide

Cisco FindIT Plugin for Kaseya Quick Start Guide First Published: 2017-10-23 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE

More information

Certkiller 4A0-M02 140q

Certkiller 4A0-M02 140q Certkiller 4A0-M02 140q Number: 4A0-M02 Passing Score: 800 Time Limit: 120 min File Version: 16.5 http://www.gratisexam.com/ 4A0-M02 Alcatel-Lucent Mobile Gateways for the LTE Evolved Packet Core Added

More information

CPS UDC MoP for Session Migration, Release

CPS UDC MoP for Session Migration, Release CPS UDC MoP for Session Migration, Release 13.1.0 First Published: 2017-08-18 Last Modified: 2017-08-18 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

More information

Media Services Proxy Command Reference

Media Services Proxy Command Reference Media Services Proxy Command Reference Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883

More information

Cisco IOS HTTP Services Command Reference

Cisco IOS HTTP Services Command Reference Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

More information

Cisco UCS Performance Manager Release Notes

Cisco UCS Performance Manager Release Notes Cisco UCS Performance Manager Release Notes First Published: July 2017 Release 2.5.0 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel:

More information

HSS and PCRF Based P-CSCF Restoration Support

HSS and PCRF Based P-CSCF Restoration Support This feature enables support for HSS-based and PCRF-based P-CSCF restoration that helps to minimize the time a UE is unreachable for terminating calls after a P-CSCF failure. Feature Description, page

More information

Cisco Nexus 1000V for KVM Interface Configuration Guide, Release 5.x

Cisco Nexus 1000V for KVM Interface Configuration Guide, Release 5.x Cisco Nexus 1000V for KVM Interface Configuration Guide, Release 5.x First Published: August 01, 2014 Last Modified: November 09, 2015 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San

More information

Application Launcher User Guide

Application Launcher User Guide Application Launcher User Guide Version 1.0 Published: 2016-09-30 MURAL User Guide Copyright 2016, Cisco Systems, Inc. Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706

More information

Cisco Unified Communications Manager Device Package 8.6(2)( ) Release Notes

Cisco Unified Communications Manager Device Package 8.6(2)( ) Release Notes Cisco Unified Communications Manager Device Package 8.6(2)(26169-1) Release Notes First Published: August 31, 2015 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706

More information

Migration and Upgrade: Frequently Asked Questions

Migration and Upgrade: Frequently Asked Questions First Published: May 01, 2013 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE

More information

Cisco Terminal Services (TS) Agent Guide, Version 1.1

Cisco Terminal Services (TS) Agent Guide, Version 1.1 First Published: 2017-05-03 Last Modified: 2017-10-13 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387)

More information

E. The enodeb performs the compression and encryption of the user data stream.

E. The enodeb performs the compression and encryption of the user data stream. Volume: 140 Questions Question No: 1 Which of the following statements is FALSE regarding the enodeb? A. The enodebs maybe interconnect TEID with each other via anx2 interface. B. The enodeb is an element

More information

Closed Subscriber Groups

Closed Subscriber Groups Feature Description, page 1 How It Works, page 1 Configuring, page 6 Monitoring and Troubleshooting, page 7 Feature Description The MME provides support for (CSG). This enables the MME to provide access

More information

IT Certification Exams Provider! Weofferfreeupdateserviceforoneyear! h ps://www.certqueen.com

IT Certification Exams Provider! Weofferfreeupdateserviceforoneyear! h ps://www.certqueen.com IT Certification Exams Provider! Weofferfreeupdateserviceforoneyear! h ps://www.certqueen.com Exam : 4A0-M02 Title : Alcatel-Lucent Mobile Gateways for the LTE Evolved Packet Core Version : Demo 1 / 7

More information

IP Routing: ODR Configuration Guide, Cisco IOS Release 15M&T

IP Routing: ODR Configuration Guide, Cisco IOS Release 15M&T Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

More information

Considerations for Deploying Cisco Expressway Solutions on a Business Edition Server

Considerations for Deploying Cisco Expressway Solutions on a Business Edition Server Considerations for Deploying Cisco Expressway Solutions on a Business Edition Server December 17 2013 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA95134-1706 USA http://www.cisco.com

Cisco Jabber IM for iPhone Frequently Asked Questions

Cisco Jabber IM for iPhone Frequently Asked Questions Frequently Asked Questions 2 Basics 2 Connectivity 3 Contacts 4 Calls 4 Instant Messaging 4 Meetings 5 Support and Feedback

Non-IP Data Over SCEF

Non-IP Data Over SCEF This chapter describes the transfer of Non-IP data over SCEF using Cellular Internet of Things (CIoT) technology. This feature is discussed in the following sections: Feature Summary and Revision History,

NNMi Integration User Guide for CiscoWorks Network Compliance Manager 1.6

NNMi Integration User Guide for CiscoWorks Network Compliance Manager 1.6 NNMi Integration User Guide for CiscoWorks Network Compliance Manager 1.6 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000

Cisco 1000 Series Connected Grid Routers QoS Software Configuration Guide

Cisco 1000 Series Connected Grid Routers QoS Software Configuration Guide Cisco 1000 Series Connected Grid Routers QoS Software Configuration Guide January 17, 2012 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

S-GW Event Reporting

S-GW Event Reporting This chapter describes the record content and trigger mechanisms for S-GW event reporting. When enabled, the S-GW writes a record of session events and sends the resulting event files to an external file

Ultra IoT C-SGN Guide, StarOS Release 21.5

Ultra IoT C-SGN Guide, StarOS Release 21.5 First Published: 2017-11-30 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE

IP Addressing: Fragmentation and Reassembly Configuration Guide, Cisco IOS XE Release 3S (Cisco ASR 1000)

IP Addressing: Fragmentation and Reassembly Configuration Guide, Cisco IOS XE Release 3S (Cisco ASR 1000) IP Addressing: Fragmentation and Reassembly Configuration Guide, Cisco IOS XE Release 3S (Cisco ASR 1000) Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

Direct Upgrade Procedure for Cisco Unified Communications Manager Releases 6.1(2) 9.0(1) to 9.1(x)

Direct Upgrade Procedure for Cisco Unified Communications Manager Releases 6.1(2) 9.0(1) to 9.1(x) Direct Upgrade Procedure for Cisco Unified Communications Manager Releases 6.1(2) 9.0(1) to 9.1(x) First Published: May 17, 2013 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose,

HSS-based P-CSCF Restoration

HSS-based P-CSCF Restoration The Home Subscriber Server (HSS)-based Proxy Call Session Control Function (P-CSCF) Restoration is an optional mechanism used during a P-CSCF failure. It applies only when the UE is using 3GPP access technologies.

Cause Code #66. Feature Description

Cause Code #66. Feature Description Feature Description, page 1 How It Works, page 2 Configuring PDP Activation Restriction and Cause Code Values, page 2 Monitoring and Troubleshooting the Cause Code Configuration, page 7 Feature Description

Cisco IOS Flexible NetFlow Command Reference

Cisco IOS Flexible NetFlow Command Reference Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

Cisco Connected Mobile Experiences REST API Getting Started Guide, Release 10.2

Cisco Connected Mobile Experiences REST API Getting Started Guide, Release 10.2 Cisco Connected Mobile Experiences REST API Getting Started Guide, Release 10.2 First Published: August 12, 2016 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706

P-GW Service Configuration Mode Commands

P-GW Service Configuration Mode Commands The P-GW (PDN Gateway) Service Configuration Mode is used to create and manage the relationship between specified services used for either GTP or PMIP network traffic. Exec

Recovery Guide for Cisco Digital Media Suite 5.4 Appliances

Recovery Guide for Cisco Digital Media Suite 5.4 Appliances Recovery Guide for Cisco Digital Media Suite 5.4 Appliances September 17, 2012 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408

Cisco Unified Communications Self Care Portal User Guide, Release 10.0.0

Cisco Unified Communications Self Care Portal User Guide, Release 10.0.0 First Published: December 03, 2013 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

DAY 2. HSPA Systems Architecture and Protocols

DAY 2. HSPA Systems Architecture and Protocols DAY 2 HSPA Systems Architecture and Protocols 1 LTE Basic Reference Model UE: User Equipment S-GW: Serving Gateway P-GW: PDN Gateway MME : Mobility Management Entity enb: evolved Node B HSS: Home Subscriber

Cisco IOS HTTP Services Command Reference

Cisco IOS HTTP Services Command Reference Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

Cisco UCS Performance Manager Release Notes

Cisco UCS Performance Manager Release Notes Cisco UCS Performance Manager Release Notes First Published: November 2017 Release 2.5.1 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

Long Term Evolution - Evolved Packet Core S1 Interface Conformance Test Plan

Long Term Evolution - Evolved Packet Core S1 Interface Conformance Test Plan Long Term Evolution - Evolved Packet Core S1 Interface Conformance Test Plan Table of Contents 1 SCOPE... 10 2 REFERENCES... 10 3 ABBREVIATIONS... 11 4 OVERVIEW... 14 5 TEST CONFIGURATION... 16 5.1 NETWORK

Cisco Terminal Services (TS) Agent Guide, Version 1.1

Cisco Terminal Services (TS) Agent Guide, Version 1.1 First Published: 2017-05-03 Last Modified: 2017-12-19 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387)

Cisco UCS Performance Manager Release Notes

Cisco UCS Performance Manager Release Notes First Published: October 2014 Release 1.0.0 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408

LTE Emulators v10.2 Release Notes, Version 10.2.0.15, Release Date: Aug 28, 2015

LTE Emulators v10.2 Release Notes, Version 10.2.0.15, Release Date: Aug 28, 2015. Resolved Issues: 11336 MME does not release previous S1 association when UE Context Release Request procedure

Tetration Cluster Cloud Deployment Guide

Tetration Cluster Cloud Deployment Guide First Published: 2017-11-16 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE

CPS UDC SNMP and Alarms Guide, Release

CPS UDC SNMP and Alarms Guide, Release CPS UDC SNMP and Alarms Guide, Release 13.1.0 First Published: 2017-08-18 Last Modified: 2017-08-18 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

Installation and Configuration Guide for Visual Voicemail Release 8.5

Installation and Configuration Guide for Visual Voicemail Release 8.5 Installation and Configuration Guide for Visual Voicemail Release 8.5 Revised October 08, 2012 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

Location Services. Location Services - Feature Description

Location Services. Location Services - Feature Description LoCation Services (LCS) on the MME and SGSN is a 3GPP standards-compliant feature that enables the system (MME or SGSN) to collect and use or share location (geographical position) information for connected

Cisco IOS First Hop Redundancy Protocols Command Reference

Cisco IOS First Hop Redundancy Protocols Command Reference Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

P-GW Service Configuration Mode Commands

P-GW Service Configuration Mode Commands P-GW Service Configuration Mode Commands The P-GW (PDN Gateway) Service Configuration Mode is used to create and manage the relationship between specified services used for either GTP or PMIP network traffic.

IP Addressing: Fragmentation and Reassembly Configuration Guide

IP Addressing: Fragmentation and Reassembly Configuration Guide First Published: December 05, 2012 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883

Cisco Terminal Services (TS) Agent Guide, Version 1.0

Cisco Terminal Services (TS) Agent Guide, Version 1.0 First Published: 2016-08-29 Last Modified: 2018-01-30 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387)

Power Saving Mode (PSM) in UEs

Power Saving Mode (PSM) in UEs This feature describes the Power Saving Mode (PSM) support on the MME in the following sections: Feature Summary and Revision History, page 1 Feature Description, page 2 How It Works, page 4 Configuring

Cisco Nexus 7000 Series NX-OS Virtual Device Context Command Reference

Cisco Nexus 7000 Series NX-OS Virtual Device Context Command Reference Cisco Nexus 7000 Series NX-OS Virtual Device Context Command Reference July 2011 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408

Load Balance MME in Pool

Load Balance MME in Pool Load Balance MME in Pool Document ID: 119021 Contributed by Saurabh Gupta and Krishna Kishore DV, Cisco TAC Engineers. Jun 19, 2015 Contents Introduction S10 Interface and Configuration S10 Interface Description

Exam Questions 4A0-M02

Exam Questions 4A0-M02 Exam Questions 4A0-M02 Alcatel-Lucent Mobile Gateways for the LTE Evolved Packet Core https://www.2passeasy.com/dumps/4a0-m02/ 1.Which of the following statements is FALSE regarding the enodeb? A. The

SGSN-MME Combo Optimization

SGSN-MME Combo Optimization This section describes Combo Optimization available for a co-located SGSN-MME node. It also provides detailed information on the following: Feature Description, page 1 How It Works, page 2 Configuring

Cisco ASR 9000 Series Aggregation Services Router Netflow Command Reference, Release 4.3.x

Cisco ASR 9000 Series Aggregation Services Router Netflow Command Reference, Release 4.3.x Cisco ASR 9000 Series Aggregation Services Router Netflow Command Reference, Release 4.3.x First Published: 2012-12-01 Last Modified: 2013-05-01 Americas Headquarters Cisco Systems, Inc. 170 West Tasman

Cisco IOS Optimized Edge Routing Command Reference

Cisco IOS Optimized Edge Routing Command Reference First Published: 2007-01-29 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE

CPS MOG API Reference, Release

CPS MOG API Reference, Release CPS MOG API Reference, Release 13.1.0 First Published: 2017-08-18 Last Modified: 2017-08-18 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

CE Mode-B Device Support

CE Mode-B Device Support This chapter describes the CE Mode-B support for emtc devices on the MME in the following topics: Feature Summary and Revision History, page 1 Feature Description, page 2 How it Works, page 2 Configuring

Cisco Evolved Programmable Network Implementation Guide for Large Network with End-to-End Segment Routing, Release 5.0

Cisco Evolved Programmable Network Implementation Guide for Large Network with End-to-End Segment Routing, Release 5.0 Cisco Evolved Programmable Network Implementation Guide for Large Network with End-to-End Segment Routing, Release 5.0 First Published: 2017-06-22 Americas Headquarters Cisco Systems, Inc. 170 West Tasman

Cisco Unified Communications Manager Device Package 10.5(1)(11008-1) Release Notes

Cisco Unified Communications Manager Device Package 10.5(1)(11008-1) Release Notes First Published: September 02, 2014 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706

MSF Architecture for 3GPP Evolved Packet System (EPS) Access MSF-LTE-ARCH-EPS-002.FINAL

MSF Architecture for 3GPP Evolved Packet System (EPS) Access MSF-LTE-ARCH-EPS-002.FINAL MSF Architecture for 3GPP Evolved Packet System (EPS) Access MSF-LTE-ARCH-EPS-002.FINAL MultiService Forum Architecture Agreement Contribution Number: Document Filename: Working Group: Title: Editor: Contact

Cisco Jabber for Android 10.5 Quick Start Guide

Cisco Jabber for Android 10.5 Quick Start Guide Cisco Jabber for Android 10.5 Quick Start Guide Revised: August 21, 2014, Cisco Jabber Welcome to Cisco Jabber. Use this guide to set up the app and use some key features. After setup, learn more by viewing

Cisco IOS XR Carrier Grade NAT Command Reference for the Cisco CRS Router, Release 5.2.x

Cisco IOS XR Carrier Grade NAT Command Reference for the Cisco CRS Router, Release 5.2.x First Published: 2016-07-01 Last Modified: 2014-10-01 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San

NetFlow Configuration Guide

NetFlow Configuration Guide Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

Cisco Nexus 7000 Series Switches Configuration Guide: The Catena Solution

Cisco Nexus 7000 Series Switches Configuration Guide: The Catena Solution Cisco Nexus 7000 Series Switches Configuration Guide: The Catena Solution First Published: 2016-12-21 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 12.4

IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 12.4 IP Addressing: IPv4 Addressing Configuration Guide, Cisco IOS Release 12.4 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000

Embedded Packet Capture Configuration Guide

Embedded Packet Capture Configuration Guide Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883 THE SPECIFICATIONS AND INFORMATION

Quick Start Guide for Cisco Prime Network Registrar IPAM 8.0

Quick Start Guide for Cisco Prime Network Registrar IPAM 8.0 Quick Start Guide for Cisco Prime Network Registrar IPAM 8.0 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS

Cisco ASR 5000 Series Statistics and Counters Reference - Errata

Cisco ASR 5000 Series Statistics and Counters Reference - Errata Cisco ASR 5000 Series Statistics and Counters Reference - Errata Version 12.x Last Updated October 31, 2011 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

Single Radio Voice Call Continuity

Single Radio Voice Call Continuity Voice over IP (VoIP) subscribers anchored in the IP Multimedia Subsystem (IMS) network can move out of an LTE coverage area and continue the voice call over the circuit-switched (CS) network through the

egtp Service Configuration Mode Commands

egtp Service Configuration Mode Commands The egtp Service Configuration Mode is used to create and manage Evolved GPRS Tunneling Protocol (egtp) interface types and associated parameters. Command Modes Exec > Global Configuration > Context Configuration

Provisioning an Ethernet Private Line (EPL) Virtual Connection

Provisioning an Ethernet Private Line (EPL) Virtual Connection Provisioning an Ethernet Private Line (EPL) Virtual Connection Cisco EPN Manager 2.0 Job Aid Copyright Page THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE

show ims-authorization

show ims-authorization This chapter describes the outputs of the show ims-authorization command. policy-control statistics, page 1 policy-gate status full, page 12 policy-gate counters all, page 13 servers, page 14 service name, page 15 service name

SAML SSO Okta Identity Provider 2

SAML SSO Okta Identity Provider 2 SAML SSO Okta Identity Provider SAML SSO Okta Identity Provider 2 Introduction 2 Configure Okta as Identity Provider 2 Enable SAML SSO on Unified Communications Applications 4 Test SSO on Okta 4 Revised:

Cisco TEO Adapter Guide for Microsoft System Center Operations Manager 2007

Cisco TEO Adapter Guide for Microsoft System Center Operations Manager 2007 Cisco TEO Adapter Guide for Microsoft System Center Operations Manager 2007 Release 2.3 April 2012 Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com

PGW Functional Tester 11.0.0 Release Notes

PGW Functional Tester 11.0.0 Release Notes Introduction The PGW Functional Tester is an automated test suite for testing the correctness of an implementation of LTE PDN Gateway (PGW) according

Wireless Clients and Users Monitoring Overview

Wireless Clients and Users Monitoring Overview Wireless Clients and Users Monitoring Overview Cisco Prime Infrastructure 3.1 Job Aid Copyright Page THE SPECIFICATIONS AND INFORMATION REGARDING THE PRODUCTS IN THIS MANUAL ARE SUBJECT TO CHANGE WITHOUT

AsyncOS 11.0 API - Getting Started Guide for Email Security Appliances

AsyncOS 11.0 API - Getting Started Guide for Email Security Appliances First Published: 2017-12-27 Last Modified: -- Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706

Test-king QA

Test-king QA Test-king.600-212.70.QA Number: 600-212 Passing Score: 800 Time Limit: 120 min File Version: 6.1 http://www.gratisexam.com/ Provide the highest amount of valid questions with correct answers. This VCE

Cisco Prime Network Registrar IPAM 8.3 Quick Start Guide

Cisco Prime Network Registrar IPAM 8.3 Quick Start Guide Cisco Prime Network Registrar IPAM 8.3 Quick Start Guide Americas Headquarters Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA http://www.cisco.com Tel: 408 526-4000 800 553-NETS
