Contents

Configuring LLDP
    Overview
        Basic concepts
        Working mechanism
        Protocols and standards
    LLDP configuration task list
    Performing basic LLDP configurations
        Enabling LLDP
        Configuring the LLDP bridge mode
        Setting the LLDP operating mode
        Setting the LLDP reinitialization delay
        Enabling LLDP polling
        Configuring the advertisable TLVs
        Configuring the management address and its encoding format
        Setting other LLDP parameters
        Setting an encapsulation format for LLDP frames
    Configuring CDP compatibility
        Configuration prerequisites
        Configuration procedure
    Configuring DCBX
        DCBX configuration task list
        Enabling LLDP and DCBX TLV advertising
        Configuring APP parameters
        Configuring ETS parameters
        Configuring PFC parameters
        Configuring the 802.1p-to-local priority mapping on the local switch
        Configuring the DCBX version
    Configuring LLDP trapping and LLDP-MED trapping
    Displaying and maintaining LLDP
    LLDP configuration examples
        Basic LLDP configuration example
        DCBX configuration example

Configuring LLDP

Overview

In a heterogeneous network, a standard configuration exchange platform makes sure different types of network devices from different vendors can discover one another and exchange configuration information.

The Link Layer Discovery Protocol (LLDP) is specified in IEEE 802.1AB. The protocol operates on the data link layer to exchange device information between directly connected devices. With LLDP, a device sends local device information as TLV (type, length, and value) triplets in LLDP Data Units (LLDPDUs) to its directly connected devices. Local device information includes the system capabilities, management IP address, device ID, and port ID. The device stores the information carried in LLDPDUs received from its LLDP neighbors in a standard MIB. For more information about MIBs, see Network Management and Monitoring Configuration Guide. LLDP enables a network management system to quickly detect and identify Layer 2 network topology changes.

Basic concepts

LLDP agent

An LLDP agent is a mapping of an entity where LLDP runs. Multiple LLDP agents can run on the same interface. LLDP agents are divided into the following types:

- Nearest bridge agent.
- Nearest non-TPMR bridge agent.
- Nearest customer bridge agent.

A Two-Port MAC Relay (TPMR) is a type of bridge that has only two externally accessible bridge ports. It supports a subset of the functions of a MAC bridge. A TPMR is transparent to all frame-based media-independent protocols except for the following:

- Protocols destined for the TPMR itself.
- Protocols destined for reserved MAC addresses that the relay function of the TPMR is configured not to forward.

LLDP exchanges packets between neighbor agents and creates and maintains neighbor information for them. Figure 1 shows the neighbor relationships for these LLDP agents.

LLDP has two bridge modes: customer bridge (CB) and service bridge (SB).

Figure 1 LLDP neighbor relationships

LLDP frame formats

LLDP sends device information in LLDP frames. LLDP frames are encapsulated in Ethernet II or SNAP frames.

LLDP frame encapsulated in Ethernet II

Figure 2 Ethernet II-encapsulated LLDP frame

Table 1 Fields in an Ethernet II-encapsulated LLDP frame

- Destination MAC address: MAC address to which the LLDP frame is advertised. LLDP specifies different multicast MAC addresses as destination MAC addresses for LLDP frames destined for agents of different types. This helps distinguish between LLDP frames sent and received by different agent types on the same interface. The destination MAC address is fixed to one of the following multicast MAC addresses:
    - 0x0180-C200-000E for LLDP frames destined for nearest bridge agents.
    - 0x0180-C200-0000 for LLDP frames destined for nearest customer bridge agents.
    - 0x0180-C200-0003 for LLDP frames destined for nearest non-TPMR bridge agents.
- Source MAC address: MAC address of the sending port.
- Type: Ethernet type for the upper-layer protocol. It is 0x88CC for LLDP.
- Data: LLDPDU. An LLDP frame contains only one LLDPDU.
- FCS: Frame check sequence, a 32-bit CRC value used to determine the validity of the received Ethernet frame.

LLDP frame encapsulated in SNAP

Figure 3 SNAP-encapsulated LLDP frame

Table 2 Fields in a SNAP-encapsulated LLDP frame

- Destination MAC address: MAC address to which the LLDP frame is advertised. It is the same as that for Ethernet II-encapsulated LLDP frames.
- Source MAC address: MAC address of the sending port.
- Type: SNAP type for the upper-layer protocol. It is 0xAAAA-0300-0000-88CC for LLDP.
- Data: LLDPDU. An LLDP frame contains only one LLDPDU.
- FCS: Frame check sequence, a 32-bit CRC value used to determine the validity of the received Ethernet frame.

LLDPDUs

LLDP uses LLDPDUs to exchange information. An LLDPDU comprises multiple TLVs. Each TLV carries a type of device information, as shown in Figure 4.

Figure 4 LLDPDU encapsulation format

An LLDPDU can carry up to 32 types of TLVs. Mandatory TLVs include the Chassis ID TLV, Port ID TLV, Time to Live TLV, and End of LLDPDU TLV. Other TLVs are optional.

TLVs

A TLV is an information element that contains the type, length, and value fields.

LLDPDU TLVs include the following categories:

- Basic management TLVs.
- Organizationally (IEEE 802.1 and IEEE 802.3) specific TLVs.
- LLDP-MED (Link Layer Discovery Protocol Media Endpoint Discovery) TLVs.

Basic management TLVs are essential to device management. Organizationally specific TLVs and LLDP-MED TLVs are used for enhanced device management. They are defined by standardization or other organizations and are optional for LLDPDUs.

Basic management TLVs

Table 3 lists the basic management TLV types. Some of them are mandatory for LLDPDUs.

Table 3 Basic management TLVs

- Chassis ID: Specifies the bridge MAC address of the sending device. (Mandatory.)
- Port ID: Specifies the ID of the sending port. If the LLDPDU carries LLDP-MED TLVs, the port ID TLV carries the MAC address of the sending port. Otherwise, the port ID TLV carries the port name. (Mandatory.)
- Time to Live: Specifies the life of the transmitted information on the receiving device. (Mandatory.)
- End of LLDPDU: Marks the end of the TLV sequence in the LLDPDU. (Mandatory.)
- Port Description: Specifies the description for the sending port. (Optional.)
- System Name: Specifies the assigned name of the sending device. (Optional.)
- System Description: Specifies the description for the sending device. (Optional.)
- System Capabilities: Identifies the primary functions of the sending device and the enabled primary functions. (Optional.)
- Management Address: Specifies the management address of the local device, and the interface number and object identifier (OID) associated with the address. (Optional.)

IEEE 802.1 organizationally specific TLVs

Table 4 IEEE 802.1 organizationally specific TLVs

- Port VLAN ID: Specifies the port VLAN identifier (PVID).
- Port And Protocol VLAN ID: Indicates whether the device supports protocol VLANs and, if so, what VLAN IDs these protocols will be associated with.
- VLAN Name: Specifies the textual name of any VLAN to which the port belongs.
- Protocol Identity: Indicates the protocols supported on the port.
- DCBX: Data Center Bridging Exchange Protocol.
- Link Aggregation: Indicates whether the port supports link aggregation, and if yes, whether link aggregation is enabled.
- Management VID: Management VLAN ID.
- VID Usage Digest: VLAN ID usage digest.
- ETS Configuration: Enhanced Transmission Selection configuration.
- ETS Recommendation: ETS recommendation.
- PFC: Priority-based Flow Control.
- APP: Application protocol.
- QCN: Quantized Congestion Notification.

NOTE: Devices support only the port VLAN ID TLV, port and protocol VLAN ID TLV, VLAN name TLV, link aggregation TLV, and management VID TLV. Layer 3 Ethernet ports support only link aggregation TLVs.

IEEE 802.3 organizationally specific TLVs

Table 5 IEEE 802.3 organizationally specific TLVs

- MAC/PHY Configuration/Status: Contains the bit-rate and duplex capabilities of the sending port, support for autonegotiation, enabling status of autonegotiation, and the current rate and duplex mode.
- Power Via MDI: Contains the power supply capabilities of the port: port class (PSE or PD), power supply mode, whether PSE power supply is supported, whether PSE power supply is enabled, whether pair selection can be controlled, power supply type, power source, power priority, PD requested power, and PSE allocated power.
- Maximum Frame Size: Indicates the supported maximum frame size. It is now the MTU of the port.
- Power Stateful Control: Indicates the power state control configured on the sending port, including the power supply mode of the PSE/PD, PSE/PD priority, and PSE/PD power.
- Energy-Efficient Ethernet: Indicates Energy Efficient Ethernet (EEE).

NOTE: The power stateful control TLV is defined in IEEE P802.3at D1.0 and is not supported in later versions. H3C devices send this type of TLV only after receiving it.

LLDP-MED TLVs

LLDP-MED TLVs provide multiple advanced applications for voice over IP (VoIP), such as basic configuration, network policy configuration, and address and directory management. LLDP-MED TLVs provide a cost-effective and easy-to-use solution for deploying voice devices in Ethernet. LLDP-MED TLVs are listed in Table 6.

Table 6 LLDP-MED TLVs

- LLDP-MED Capabilities: Allows a network device to advertise the LLDP-MED TLVs that it supports.
- Network Policy: Allows a network device or terminal device to advertise the VLAN ID of a port, the VLAN type, and the Layer 2 and Layer 3 priorities for specific applications.
- Extended Power-via-MDI: Allows a network device or terminal device to advertise power supply capability. This TLV is an extension of the Power Via MDI TLV.
- Hardware Revision: Allows a terminal device to advertise its hardware version.
- Firmware Revision: Allows a terminal device to advertise its firmware version.
- Software Revision: Allows a terminal device to advertise its software version.
- Serial Number: Allows a terminal device to advertise its serial number.
- Manufacturer Name: Allows a terminal device to advertise its vendor name.
- Model Name: Allows a terminal device to advertise its model name.
- Asset ID: Allows a terminal device to advertise its asset ID. The typical case is that the user specifies the asset ID for the endpoint to facilitate directory management and asset tracking.
- Location Identification: Allows a network device to advertise the appropriate location identifier information for a terminal device to use in the context of location-based applications.

NOTE: If the MAC/PHY configuration/status TLV is not advertisable, none of the LLDP-MED TLVs will be advertised even if they are advertisable. If the LLDP-MED capabilities TLV is not advertisable, the other LLDP-MED TLVs will not be advertised even if they are advertisable.

Management address

The network management system uses the management address of a device to identify and manage the device for topology maintenance and network management. The management address is encapsulated in the management address TLV.

Working mechanism

LLDP operating modes

An LLDP agent can operate in one of the following modes:

- TxRx mode: An LLDP agent in this mode can send and receive LLDP frames.
- Tx mode: An LLDP agent in this mode can only send LLDP frames.
- Rx mode: An LLDP agent in this mode can only receive LLDP frames.
- Disable mode: An LLDP agent in this mode cannot send or receive LLDP frames.

Each time the LLDP operating mode of an LLDP agent changes, its LLDP protocol state machine reinitializes. A configurable reinitialization delay prevents frequent initializations caused by frequent changes to the operating mode. If you configure the reinitialization delay, an LLDP agent must wait for the specified amount of time to initialize LLDP after the LLDP operating mode changes.

Transmitting LLDP frames

An LLDP agent operating in TxRx mode or Tx mode sends LLDP frames to its directly connected devices both periodically and when the local configuration changes. To prevent LLDP frames from overwhelming the network during times of frequent changes to local device information, LLDP uses the token bucket mechanism to rate limit LLDP frames. For more information about the token bucket mechanism, see ACL and QoS Configuration Guide.

LLDP automatically enables the fast LLDP frame transmission mechanism in either of the following cases:

- A new LLDP frame is received and carries device information new to the local device.
- The LLDP operating mode of the LLDP agent changes from Disable or Rx to TxRx or Tx.

The fast LLDP frame transmission mechanism successively sends the specified number of LLDP frames at a configurable fast LLDP frame transmission interval. This mechanism helps LLDP neighbors discover the local device as soon as possible. Then, the normal LLDP frame transmission interval resumes.

Receiving LLDP frames

An LLDP agent operating in TxRx mode or Rx mode confirms the validity of the TLVs carried in every received LLDP frame. If the TLVs are valid, the LLDP agent saves the information and starts an aging timer. When the TTL value in the Time to Live TLV carried in the LLDP frame becomes zero, the information ages out immediately.

Protocols and standards

- IEEE 802.1AB-2005, Station and Media Access Control Connectivity Discovery
- IEEE 802.1AB-2009, Station and Media Access Control Connectivity Discovery
- ANSI/TIA-1057, Link Layer Discovery Protocol for Media Endpoint Devices
- DCB Capability Exchange Protocol Specification Rev 1.00
- DCB Capability Exchange Protocol Base Specification Rev 1.01
- IEEE Std 802.1Qaz-2011, Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks-Amendment 18: Enhanced Transmission Selection for Bandwidth Sharing Between Traffic Classes

LLDP configuration task list

Tasks at a glance:

- Performing basic LLDP configurations:
    - (Required.) Enabling LLDP
    - (Optional.) Configuring the LLDP bridge mode
    - (Optional.) Setting the LLDP operating mode
    - (Optional.) Setting the LLDP reinitialization delay
    - (Optional.) Enabling LLDP polling
    - (Optional.) Configuring the advertisable TLVs
    - (Optional.) Configuring the management address and its encoding format
    - (Optional.) Setting other LLDP parameters
    - (Optional.) Setting an encapsulation format for LLDP frames
- (Optional.) Configuring CDP compatibility
- (Optional.) Configuring DCBX
- (Optional.) Configuring LLDP trapping and LLDP-MED trapping

Performing basic LLDP configurations

Enabling LLDP

To make LLDP take effect on specific ports, you must enable LLDP both globally and on these ports.

To use LLDP together with OpenFlow, you must enable LLDP globally on OpenFlow switches. To prevent LLDP from affecting topology discovery of OpenFlow controllers, H3C recommends that you disable LLDP on the ports of OpenFlow instances. For more information about OpenFlow, see OpenFlow Configuration Guide.

To enable LLDP:

1. Enter system view.
   system-view
2. Enable LLDP globally.
   lldp global enable
   By default, LLDP is disabled globally.
3. Enter Layer 2/Layer 3 Ethernet or aggregate interface view.
   interface interface-type interface-number
4. Enable LLDP.
   lldp enable
   By default, LLDP is enabled on a port.

Configuring the LLDP bridge mode

The following LLDP bridge modes are available:

- Customer bridge mode: LLDP supports nearest bridge agents, nearest non-TPMR bridge agents, and nearest customer bridge agents. LLDP processes the LLDP frames with destination MAC addresses for these agents and transparently transmits the LLDP frames with other destination MAC addresses in the VLAN.
- Service bridge mode: LLDP supports nearest bridge agents and nearest non-TPMR bridge agents. LLDP processes the LLDP frames with destination MAC addresses for these agents and transparently transmits the LLDP frames with other destination MAC addresses in the VLAN.

To configure the LLDP bridge mode:

1. Enter system view.
   system-view

2. Configure LLDP to operate in service bridge mode.
   lldp mode service-bridge
   By default, LLDP operates in customer bridge mode.

Setting the LLDP operating mode

To set the LLDP operating mode:

1. Enter system view.
   system-view
2. Enter Layer 2/Layer 3 Ethernet or aggregate interface view.
   interface interface-type interface-number
3. Set the LLDP operating mode.
   In Layer 2/Layer 3 Ethernet interface view:
   lldp [ agent { nearest-customer | nearest-nontpmr } ] admin-status { disable | rx | tx | txrx }
   In Layer 2/Layer 3 aggregate interface view:
   lldp agent { nearest-customer | nearest-nontpmr } admin-status { disable | rx | tx | txrx }
   By default:
   - The nearest bridge agent operates in txrx mode.
   - The nearest customer bridge agent and nearest non-TPMR bridge agent operate in disable mode.
   In Ethernet interface view, if no agent type is specified, the command configures the operating mode for nearest bridge agents. In aggregate interface view, you can configure the operating mode only for nearest customer bridge agents and nearest non-TPMR bridge agents.

Setting the LLDP reinitialization delay

When the LLDP operating mode changes on a port, the port initializes the protocol state machines after an LLDP reinitialization delay. By adjusting the delay, you can avoid frequent initializations caused by frequent changes to the LLDP operating mode on a port.

To set the LLDP reinitialization delay for ports:

1. Enter system view.
   system-view
2. Set the LLDP reinitialization delay.
   lldp timer reinit-delay delay
   The default setting is 2 seconds.
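
For example, the following commands enable LLDP globally and on a port, and set the nearest bridge agent on that port to operate in Rx mode. This is a minimal sketch: the device name Sysname and interface Ten-GigabitEthernet1/0/1 are placeholders, not values mandated by the procedures above.

<Sysname> system-view
[Sysname] lldp global enable
[Sysname] interface Ten-GigabitEthernet1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp enable
[Sysname-Ten-GigabitEthernet1/0/1] lldp admin-status rx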

Enabling LLDP polling

With LLDP polling enabled, the switch periodically searches for local configuration changes. When the switch detects a configuration change, it sends LLDP frames to inform neighboring devices of the change.

To enable LLDP polling:

1. Enter system view.
   system-view
2. Enter Layer 2/Layer 3 Ethernet or aggregate interface view.
   interface interface-type interface-number
3. Enable LLDP polling and set the polling interval.
   In Layer 2/Layer 3 Ethernet interface view:
   lldp [ agent { nearest-customer | nearest-nontpmr } ] check-change-interval interval
   In Layer 2/Layer 3 aggregate interface view:
   lldp agent { nearest-customer | nearest-nontpmr } check-change-interval interval
   By default, LLDP polling is disabled.
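
For example, the following commands enable LLDP polling with a 30-second polling interval for the nearest bridge agent on a port (a minimal sketch; the device name, interface, and interval value are placeholders):

<Sysname> system-view
[Sysname] interface Ten-GigabitEthernet1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp check-change-interval 30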

Configuring the advertisable TLVs

To configure the advertisable TLVs:

1. Enter system view.
   system-view
2. Enter Layer 2/Layer 3 Ethernet or aggregate interface view.
   interface interface-type interface-number
3. Configure the advertisable TLVs (in Layer 2 Ethernet interface view).
   For the nearest bridge agent:
   lldp tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ip-address ] } | dot1-tlv { all | port-vlan-id | link-aggregation | protocol-vlan-id [ vlan-id ] | vlan-name [ vlan-id ] | management-vid [ mvlan-id ] } | dot3-tlv { all | mac-physic | max-frame-size | power } | med-tlv { all | capability | inventory | network-policy | power-over-ethernet | location-id { civic-address device-type country-code { ca-type ca-value }&<1-10> | elin-address tel-number } } }
   For nearest customer bridge or nearest non-TPMR bridge agents:
   lldp agent { nearest-customer | nearest-nontpmr } tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ip-address ] } | dot1-tlv { all | port-vlan-id | link-aggregation } }
   By default:
   - Nearest bridge agents can advertise all types of LLDP TLVs except the DCBX TLV, location identification TLV, port and protocol VLAN ID TLVs, VLAN name TLVs, and management VLAN ID TLVs.
   - Nearest non-TPMR bridge agents advertise no TLVs.
   - Nearest customer bridge agents can advertise basic TLVs and IEEE 802.1 organizationally specific TLVs (only the link aggregation TLV and port VLAN ID TLV).
4. Configure the advertisable TLVs (in Layer 3 Ethernet interface view).
   For the nearest bridge agent:
   lldp tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ip-address ] } | dot1-tlv { all | link-aggregation } | dot3-tlv { all | mac-physic | max-frame-size | power } | med-tlv { all | capability | inventory | power-over-ethernet | location-id { civic-address device-type country-code { ca-type ca-value }&<1-10> | elin-address tel-number } } }
   For nearest customer bridge or nearest non-TPMR bridge agents:
   lldp agent { nearest-nontpmr | nearest-customer } tlv-enable { basic-tlv { all | port-description | system-capability | system-description | system-name | management-address-tlv [ ip-address ] } | dot1-tlv { all | link-aggregation } }
   By default:
   - Nearest bridge agents can advertise all types of LLDP TLVs (only the link aggregation TLV among IEEE 802.1 organizationally specific TLVs) except the network policy TLV and location identification TLV.
   - Nearest non-TPMR bridge agents advertise no TLVs.
   - Nearest customer bridge agents can advertise basic TLVs and IEEE 802.1 organizationally specific TLVs (only the link aggregation TLV).

5. Configure the advertisable TLVs (in Layer 2 aggregate interface view).
   lldp agent { nearest-customer | nearest-nontpmr } tlv-enable { basic-tlv { all | management-address-tlv [ ip-address ] | port-description | system-capability | system-description | system-name } | dot1-tlv { all | port-vlan-id } }
   lldp tlv-enable dot1-tlv { protocol-vlan-id [ vlan-id ] | vlan-name [ vlan-id ] | management-vid [ mvlan-id ] }
   By default:
   - Nearest non-TPMR bridge agents advertise no TLVs.
   - Nearest customer bridge agents can advertise basic TLVs and IEEE 802.1 organizationally specific TLVs (only the port VLAN ID TLV).
   - Nearest bridge agents are not supported on Layer 2 aggregate interfaces.
6. Configure the advertisable TLVs (in Layer 3 aggregate interface view).
   lldp agent { nearest-nontpmr | nearest-customer } tlv-enable basic-tlv { all | management-address-tlv [ ip-address ] | port-description | system-capability | system-description | system-name }
   By default:
   - Nearest non-TPMR bridge agents advertise no TLVs.
   - Nearest customer bridge agents can advertise only basic TLVs.
   - Nearest bridge agents are not supported on Layer 3 aggregate interfaces.
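
For example, the following commands configure the nearest bridge agent on a Layer 2 Ethernet interface to advertise all basic management TLVs (a minimal sketch; the device name and interface are placeholders):

<Sysname> system-view
[Sysname] interface Ten-GigabitEthernet1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp tlv-enable basic-tlv all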

Configuring the management address and its encoding format

LLDP encodes management addresses in numeric or string format in management address TLVs. By default, management addresses are encoded in numeric format. If a neighbor encodes its management address in string format, configure the encoding format of the management address as string on the connecting port. This guarantees normal communication with the neighbor.

To configure a management address to be advertised and its encoding format on a port:

1. Enter system view.
   system-view
2. Enter Layer 2/Layer 3 Ethernet or aggregate interface view.
   interface interface-type interface-number
3. Allow LLDP to advertise the management address in LLDP frames and configure the advertised management address.
   In Layer 2/Layer 3 Ethernet interface view:
   lldp [ agent { nearest-customer | nearest-nontpmr } ] tlv-enable basic-tlv management-address-tlv [ ip-address ]
   In Layer 2/Layer 3 aggregate interface view:
   lldp agent { nearest-customer | nearest-nontpmr } tlv-enable basic-tlv management-address-tlv [ ip-address ]
   By default:
   - Nearest bridge agents and nearest customer bridge agents can advertise the management address in LLDP frames.
   - Nearest non-TPMR bridge agents cannot advertise the management address in LLDP frames.
4. Configure the encoding format of the management address as character string.
   In Layer 2/Layer 3 Ethernet interface view:
   lldp [ agent { nearest-customer | nearest-nontpmr } ] management-address-format string
   In Layer 2/Layer 3 aggregate interface view:
   lldp agent { nearest-customer | nearest-nontpmr } management-address-format string
   By default, the encoding format of the management address is numeric.
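
For example, the following commands advertise management address 192.168.1.1 and encode it in string format on a port (a minimal sketch; the device name, interface, and IP address are placeholders):

<Sysname> system-view
[Sysname] interface Ten-GigabitEthernet1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp tlv-enable basic-tlv management-address-tlv 192.168.1.1
[Sysname-Ten-GigabitEthernet1/0/1] lldp management-address-format string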

Setting other LLDP parameters

The Time to Live TLV carried in an LLDPDU determines how long the device information carried in the LLDPDU can be saved on a recipient device. By setting the TTL multiplier, you can configure the TTL of locally sent LLDPDUs. The TTL is calculated by using the following formula:

TTL = min(65535, TTL multiplier × LLDP frame transmission interval + 1)

For example, with the default TTL multiplier (4) and the default LLDP frame transmission interval (30 seconds), the TTL is min(65535, 4 × 30 + 1) = 121 seconds. As the formula shows, the TTL can be up to 65535 seconds. TTLs greater than 65535 are rounded down to 65535 seconds.

To change LLDP parameters:

1. Enter system view.
   system-view
2. Set the TTL multiplier.
   lldp hold-multiplier value
   The default setting is 4.
3. Set the LLDP frame transmission interval.
   lldp timer tx-interval interval
   The default setting is 30 seconds.
4. Set the token bucket size for sending LLDP frames.
   lldp max-credit credit-value
   The default setting is 5.
5. Set the number of LLDP frames sent each time fast LLDP frame transmission is triggered.
   lldp fast-count count
   The default setting is 4.
6. Set the interval for fast LLDP frame transmission.
   lldp timer fast-interval interval
   The default setting is 1 second.

Setting an encapsulation format for LLDP frames

LLDP frames can be encapsulated in either of the following formats:

- Ethernet II: With Ethernet II encapsulation configured, an LLDP port sends LLDP frames in Ethernet II frames.
- SNAP: With SNAP encapsulation configured, an LLDP port sends LLDP frames in SNAP frames.

Earlier versions of LLDP require the same encapsulation format on both ends to process LLDP frames. To successfully communicate with a neighboring device running an earlier version of LLDP, configure the local device with the same encapsulation format.

To set the encapsulation format for LLDP frames to SNAP:

1. Enter system view.
   system-view
2. Enter Layer 2/Layer 3 Ethernet or aggregate interface view.
   interface interface-type interface-number
3. Set the encapsulation format for LLDP frames to SNAP.
   In Layer 2/Layer 3 Ethernet interface view:
   lldp [ agent { nearest-customer | nearest-nontpmr } ] encapsulation snap
   In Layer 2/Layer 3 aggregate interface view:
   lldp agent { nearest-customer | nearest-nontpmr } encapsulation snap
   By default, the Ethernet II encapsulation format applies.
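
For example, the following commands set the TTL multiplier to 6, set the LLDP frame transmission interval to 20 seconds, and configure SNAP encapsulation on a port (a minimal sketch; the device name, interface, and values are placeholders). With these values, the advertised TTL is min(65535, 6 × 20 + 1) = 121 seconds.

<Sysname> system-view
[Sysname] lldp hold-multiplier 6
[Sysname] lldp timer tx-interval 20
[Sysname] interface Ten-GigabitEthernet1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp encapsulation snap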

Configuring CDP compatibility

To make your device work with Cisco IP phones, you must enable CDP compatibility.

If your LLDP-enabled device cannot recognize CDP packets, it does not respond to the requests of Cisco IP phones for the voice VLAN ID configured on the device. As a result, a requesting Cisco IP phone sends voice traffic without any tag to your device, and your device cannot differentiate the voice traffic from other types of traffic.

CDP compatibility enables your device to receive and recognize CDP packets from a Cisco IP phone and respond with CDP packets carrying TLVs with the voice VLAN configuration. According to these TLVs, the IP phone automatically configures the voice VLAN. As a result, the voice traffic is confined to the configured voice VLAN and is differentiated from other types of traffic.

The switch does not support the voice VLAN function. Therefore, after you configure CDP compatibility, the switch can only establish neighbor relationships with Cisco devices, but cannot advertise voice VLAN information.

Configuration prerequisites

Before you configure CDP compatibility, complete the following tasks:

- Globally enable LLDP.
- Enable LLDP on the port connecting to an IP phone.
- Configure LLDP to operate in TxRx mode on the port.

Configuration procedure

CDP-compatible LLDP operates in one of the following modes:

- TxRx: CDP packets can be transmitted and received.
- Disable: CDP packets cannot be transmitted or received.

To make CDP-compatible LLDP take effect on a port, follow these steps:

1. Enable CDP-compatible LLDP globally.
2. Configure CDP-compatible LLDP to operate in TxRx mode on the port.

The maximum TTL value that CDP allows is 255 seconds. To make CDP-compatible LLDP work correctly with Cisco IP phones, configure the LLDP frame transmission interval to be no more than 1/3 of the TTL value.

To enable LLDP to be compatible with CDP:

1. Enter system view.
   system-view
2. Enable CDP compatibility globally.
   lldp compliance cdp
   By default, CDP compatibility is disabled globally.
3. Enter Layer 2/Layer 3 Ethernet interface view.
   interface interface-type interface-number
4. Configure CDP-compatible LLDP to operate in TxRx mode.
   lldp compliance admin-status cdp txrx
   By default, CDP-compatible LLDP operates in disable mode.
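
For example, the following commands enable CDP compatibility globally and set CDP-compatible LLDP to operate in TxRx mode on the port connecting to an IP phone (a minimal sketch; the device name and interface are placeholders):

<Sysname> system-view
[Sysname] lldp compliance cdp
[Sysname] interface Ten-GigabitEthernet1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp compliance admin-status cdp txrx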

Configuring DCBX

Data Center Ethernet (DCE), also known as Converged Enhanced Ethernet (CEE), is an enhancement and expansion of traditional Ethernet local area networks for use in data centers. DCE uses the Data Center Bridging Exchange Protocol (DCBX) to negotiate and remotely configure the bridge capability of network elements.

DCBX has the following self-adaptable versions:

- DCB Capability Exchange Protocol Specification Rev 1.00.
- DCB Capability Exchange Protocol Base Specification Rev 1.01.
- IEEE Std 802.1Qaz-2011 (Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks-Amendment 18: Enhanced Transmission Selection for Bandwidth Sharing Between Traffic Classes).

DCBX offers the following functions:

- Discovers the capabilities of peer devices and determines whether devices at both ends support these capabilities.
- Detects configuration errors on peer devices.
- Remotely configures the peer device if the peer device accepts the configuration.

NOTE: H3C devices support only the remote configuration function.

Figure 5 DCBX application scenario (an access switch advertising DCBX TLVs between a server with an FCoE card and the data center network)

DCBX enables lossless packet transmission on DCE networks. As shown in Figure 5, DCBX applies to an FCoE-based data center network and operates on an access switch. DCBX enables the switch to control the server or storage adapter, which simplifies configuration and guarantees configuration consistency.

DCBX extends LLDP by using the IEEE 802.1 organizationally specific TLVs (DCBX TLVs) to transmit DCBX data, including:

- In DCBX Rev 1.00 and DCBX Rev 1.01:
    - Application Protocol (APP).
    - Enhanced Transmission Selection (ETS).
    - Priority-based Flow Control (PFC).

- In IEEE Std 802.1Qaz-2011:
    - ETS Configuration.
    - ETS Recommendation.
    - PFC.
    - APP.

H3C devices can send these types of DCBX information to a server or storage adapter supporting FCoE. However, H3C devices cannot accept these types of DCBX information.

DCBX configuration task list

Tasks at a glance:

- (Required.) Enabling LLDP and DCBX TLV advertising
- (Required.) Configuring APP parameters
- Configuring ETS parameters:
    - (Required.) Configuring the 802.1p-to-local priority mapping in the ETS parameters
    - (Required.) Configuring a queue scheduling profile
- (Required.) Configuring PFC parameters
- (Optional.) Configuring the 802.1p-to-local priority mapping on the local switch
- (Optional.) Configuring the DCBX version

Enabling LLDP and DCBX TLV advertising

To enable the device to advertise APP, ETS, and PFC data through an interface, perform the following tasks:

- Enable LLDP globally.
- Enable LLDP and DCBX TLV advertising on the interface.

To enable LLDP and DCBX TLV advertising:

1. Enter system view.
   system-view
2. Enable LLDP globally.
   lldp global enable
   By default, LLDP is disabled globally.
3. Enter Layer 2 Ethernet interface view.
   interface interface-type interface-number
4. Enable LLDP.
   lldp enable
   By default, LLDP is enabled on an interface.
5. Enable the interface to advertise DCBX TLVs.
   lldp tlv-enable dot1-tlv dcbx
   By default, DCBX TLV advertising is disabled on an interface.
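
For example, the following commands enable LLDP globally and enable DCBX TLV advertising on an interface (a minimal sketch; the device name and interface are placeholders):

<Sysname> system-view
[Sysname] lldp global enable
[Sysname] interface Ten-GigabitEthernet1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp enable
[Sysname-Ten-GigabitEthernet1/0/1] lldp tlv-enable dot1-tlv dcbx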

Configuring APP parameters

The device negotiates with the server adapter by using the APP parameters to achieve the following purposes:

- Control the 802.1p priority values of the protocol packets that the server adapter sends.
- Identify traffic based on the 802.1p priority values.

For example, the device can use the APP parameters to negotiate with the server adapter to set 802.1p priority 3 for all FCoE and FIP frames. When the negotiation succeeds, all FCoE and FIP frames that the server adapter sends to the device carry 802.1p priority 3.

Configuration restrictions and guidelines

When you configure APP parameters, follow these restrictions and guidelines:

- An Ethernet frame header ACL identifies application protocol packets by frame type. An IPv4 advanced ACL identifies application protocol packets by TCP/UDP port number.
- DCBX Rev 1.00 identifies application protocol packets only by data frame type and advertises TLVs with protocol number 0x8906 (FCoE) only.
- DCBX Rev 1.01 has the following attributes:
    - Supports identifying application protocol packets by both data frame type and TCP/UDP port number.
    - Does not restrict the protocol number or IP port number for advertising TLVs.
    - Can advertise up to 77 TLVs according to the remaining length of the current packet.
- In a QoS policy, you can configure multiple class-behavior associations. A packet might match multiple 802.1p priority marking or mapping actions, and the one configured first takes effect.

Configuration procedure

1. Enter system view.
   system-view

2. Create an Ethernet frame header ACL or an IPv4 advanced ACL and enter ACL view.
   acl number acl-number [ name acl-name ] [ match-order { auto | config } ]
   An Ethernet frame header ACL number is in the range of 4000 to 4999. An IPv4 advanced ACL number is in the range of 3000 to 3999.
   DCBX Rev 1.00 supports only Ethernet frame header ACLs. DCBX Rev 1.01 and IEEE Std 802.1Qaz-2011 support both Ethernet frame header ACLs and IPv4 advanced ACLs.
3. Create a rule for the ACL.
   For the Ethernet frame header ACL:
   rule [ rule-id ] permit type protocol-type ffff
   For the IPv4 advanced ACL:
   rule [ rule-id ] permit { tcp | udp } destination-port eq port
   Create rules according to the type of the ACL previously created.
4. Return to system view.
   quit
5. Create a class, specify the operator of the class as OR, and enter class view.
   traffic classifier classifier-name operator or
6. Use the specified ACL as the match criterion of the class.
   if-match acl acl-number
7. Return to system view.
   quit
8. Create a traffic behavior and enter traffic behavior view.
   traffic behavior behavior-name
9. Configure the behavior to mark packets with an 802.1p priority.
   remark dot1p 8021p
10. Return to system view.
    quit
11. Create a QoS policy and enter QoS policy view.
    qos policy policy-name
12. Associate the class with the traffic behavior in the QoS policy, and apply the association to DCBX.
    classifier classifier-name behavior behavior-name mode dcbx
13. Return to system view.
    quit
14. Apply the QoS policy by using one of the following methods:
    - To the outgoing traffic of all ports:
      qos apply policy policy-name global outbound
    - To the outgoing traffic of a Layer 2 Ethernet interface:
      a. Enter Layer 2 Ethernet interface view:
         interface interface-type interface-number
      b. Apply the QoS policy to the outgoing traffic:
         qos apply policy policy-name outbound
    Configurations made in system view take effect on all ports. Configurations made in Layer 2 Ethernet interface view take effect only on the interface.

For more information about the acl, rule, traffic classifier, if-match, traffic behavior, remark dot1p, qos policy, classifier behavior, qos apply policy global, and qos apply policy commands, see ACL and QoS Command Reference.
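
Putting the procedure together, the following commands mark FCoE frames (frame type 0x8906) with 802.1p priority 3 and apply the policy globally in the outbound direction (the first method of step 14). This is a minimal sketch: the ACL number and the class, behavior, and policy names (all "fcoe") are illustrative placeholders.

<Sysname> system-view
[Sysname] acl number 4000
[Sysname-acl-ethernetframe-4000] rule permit type 8906 ffff
[Sysname-acl-ethernetframe-4000] quit
[Sysname] traffic classifier fcoe operator or
[Sysname-classifier-fcoe] if-match acl 4000
[Sysname-classifier-fcoe] quit
[Sysname] traffic behavior fcoe
[Sysname-behavior-fcoe] remark dot1p 3
[Sysname-behavior-fcoe] quit
[Sysname] qos policy fcoe
[Sysname-qospolicy-fcoe] classifier fcoe behavior fcoe mode dcbx
[Sysname-qospolicy-fcoe] quit
[Sysname] qos apply policy fcoe global outbound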

Configuring ETS parameters

ETS provides committed bandwidth. To avoid packet loss caused by congestion, the device performs the following tasks:

- Uses the ETS parameters to negotiate with the server adapter.
- Controls the server adapter's transmission speed of the specified type of traffic.
- Guarantees that the transmission speed is within the committed bandwidth of the interface.

To configure ETS parameters, you must configure the 802.1p-to-local priority mapping in the ETS parameters and then configure a queue scheduling profile.

Configuring the 802.1p-to-local priority mapping in the ETS parameters

1. Enter system view.
   system-view
2. Enter the view of the 802.1p-to-local priority mapping table for the outgoing traffic.
   qos map-table outbound dot1p-lp
3. Configure the priority mapping table to map the specified 802.1p priority values to a local precedence value.
   import import-value-list export export-value
   For information about the default priority mapping tables, see ACL and QoS Configuration Guide.

For more information about the qos map-table and import commands, see ACL and QoS Command Reference.
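
For example, the following commands map 802.1p priority 3 to local precedence 1 in the outgoing-traffic mapping table (a minimal sketch; the priority values are placeholders):

<Sysname> system-view
[Sysname] qos map-table outbound dot1p-lp
[Sysname-maptbl-out-dot1p-lp] import 3 export 1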

Configuring a queue scheduling profile

You can configure a queue scheduling profile to allocate bandwidth. For more information about the following commands, see ACL and QoS Command Reference.

To configure a queue scheduling profile:

1. Enter system view.
   system-view
2. Create a queue scheduling profile and enter its view.
   qos qmprofile profile-name
   By default, no user-defined queue scheduling profile exists.
3. Configure queue scheduling parameters for the queue scheduling profile.
   To configure SP queuing:
   queue queue-id sp
   To configure WRR queuing:
   queue queue-id wrr group 1 byte-count schedule-value
   By default, a queue scheduling profile uses SP queuing for all queues. You can configure only one queuing type for a queue. In a queue scheduling profile, you can configure different queuing types for different queues.
4. Return to system view.
   quit
5. Enter Layer 2 Ethernet interface view.
   interface interface-type interface-number
6. Apply the queue scheduling profile to the interface.
   qos apply qmprofile profile-name
   By default, the queues of an interface use SP queuing. You can apply only one queue scheduling profile to an interface.
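
For example, the following commands create a queue scheduling profile that schedules queue 1 by WRR with a scheduling value of 20, and apply the profile to an interface (a minimal sketch; the profile name, queue ID, scheduling value, and interface are placeholders):

<Sysname> system-view
[Sysname] qos qmprofile example
[Sysname-qmprofile-example] queue 1 wrr group 1 byte-count 20
[Sysname-qmprofile-example] quit
[Sysname] interface Ten-GigabitEthernet1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] qos apply qmprofile example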

Configuring PFC parameters

To prevent packets carrying a specific 802.1p priority value from being dropped, enable PFC for that 802.1p priority value. When network congestion occurs, this feature reduces the sending rate of packets carrying this priority. The device uses PFC parameters to negotiate with the server adapter and to enable PFC for the specified 802.1p priorities on the server adapter.

To configure PFC parameters:

1. Enter system view.
   system-view
2. Enter Layer 2 Ethernet interface view.
   interface interface-type interface-number
3. Enable PFC in auto mode on the Ethernet interface.
   priority-flow-control auto
   By default, PFC is disabled. To advertise the PFC data, you must enable PFC in auto mode.
4. Enable PFC for the specified 802.1p priorities.
   priority-flow-control no-drop dot1p dot1p-list
   By default, PFC is disabled for all 802.1p priorities.

For more information about the priority-flow-control and priority-flow-control no-drop dot1p commands, see Interface Command Reference.
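
For example, the following commands enable PFC in auto mode and enable PFC for 802.1p priority 3 on an interface (a minimal sketch; the device name, interface, and priority are placeholders):

<Sysname> system-view
[Sysname] interface Ten-GigabitEthernet1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] priority-flow-control auto
[Sysname-Ten-GigabitEthernet1/0/1] priority-flow-control no-drop dot1p 3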

Configuring the 802.1p-to-local priority mapping on the local switch

The ETS parameters that the switch sends to the peer end contain the 802.1p-to-local priority mapping and queue scheduling parameters. The queue scheduling parameters take effect on the interfaces of the switch. However, the 802.1p-to-local priority mapping in the ETS parameters does not take effect on the local switch. As a result, the switch does not schedule packets according to the ETS parameters sent to the peer end. To solve this problem, configure the same 802.1p-to-local priority mapping table on the local switch as that in the ETS parameters. You can configure the 802.1p-to-local priority mapping by using the priority mapping table method.

Configuration restrictions and guidelines

When you configure the 802.1p-to-local priority mapping on the local switch, follow these restrictions and guidelines:

- The FCoE packets that each peer device sends to the local switch must carry the same 802.1p priority value.
- The FCoE packets and the non-FCoE packets that the local switch receives must carry different 802.1p priority values.
- In IRF mode, make sure all IRF member devices operate in FCoE mode (FCF, NPV, or Transit). Otherwise, packets might fail to be forwarded based on the configured 802.1p-to-local priority mapping table during cross-device forwarding.

Configuring the 802.1p priority mapping by using the priority mapping table method

1. Enter system view.
   system-view
2. Enter the view of the 802.1p-to-local priority mapping table for the incoming traffic.
   qos map-table inbound dot1p-lp
3. Configure the priority mapping table to map the specified 802.1p priority values to a local precedence value.
   import import-value-list export export-value
   For more information about the default priority mapping tables, see ACL and QoS Configuration Guide.
4. Enter the view of the incoming interface.
   interface interface-type interface-number
5. Configure the incoming interface to trust the 802.1p priorities carried in incoming packets.
   qos trust dot1p
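
For example, the following commands map incoming 802.1p priority 3 to local precedence 1 and configure the incoming interface to trust 802.1p priorities (a minimal sketch; the priority values, interface, and intermediate prompt are placeholders):

<Sysname> system-view
[Sysname] qos map-table inbound dot1p-lp
[Sysname-maptbl-in-dot1p-lp] import 3 export 1
[Sysname-maptbl-in-dot1p-lp] quit
[Sysname] interface Ten-GigabitEthernet1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] qos trust dot1p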

Configuring the DCBX version

DCBX has three versions: DCBX Rev 1.00, DCBX Rev 1.01, and IEEE Std 802.1Qaz-2011 (the standard version). An H3C switch supports autonegotiation of the three versions with the peer and uses the standard version as the initial version for negotiation.

When an H3C switch is connected to a DCBX-enabled peer device, the following rules apply:

- The H3C switch changes its DCBX version to match that on the peer device if the peer device does not support autonegotiation.
- The standard version is the negotiated result if the peer device meets the following requirements:
    - Supports autonegotiation.
    - Uses the standard version as the initial version for negotiation.
- The standard version or the initial version the peer device uses is the negotiated result if the peer device meets the following requirements:
    - Supports autonegotiation.
    - Uses a DCBX version other than the standard version as the initial version for negotiation.

When the negotiated result is not the expected one, you can configure the expected DCBX version. To view the DCBX version, use the display lldp local-information command. The Oper version field of the DCBX Control subtlv info part in the output shows the DCBX version.

Configuration prerequisites

Before you configure the DCBX version, complete the following tasks:

- Enable LLDP globally and configure the interface to advertise DCBX TLVs.
- Configure the APP parameters, ETS parameters, or PFC parameters to be advertised on the interface.

Configuration procedure

1. Enter system view.
   system-view
2. Enter Layer 2 Ethernet interface view.
   interface interface-type interface-number
3. Configure the DCBX version.
   dcbx version { rev100 | rev101 | standard }
   By default, the DCBX version is autonegotiated by the two interfaces, with the standard version as the initial version for negotiation at the local end.

Configuring LLDP trapping and LLDP-MED trapping

LLDP trapping or LLDP-MED trapping notifies the network management system of events such as newly detected neighboring devices and link failures. To prevent excessive LLDP traps from being sent when the topology is unstable, set a trap transmission interval for LLDP.

To configure LLDP trapping and LLDP-MED trapping:

1. Enter system view.
   system-view
2. Enter Layer 2/Layer 3 Ethernet or aggregate interface view.
   interface interface-type interface-number
3. Enable LLDP trapping.
   In Layer 2/Layer 3 Ethernet interface view:
   lldp [ agent { nearest-customer | nearest-nontpmr } ] notification remote-change enable
   In Layer 2/Layer 3 aggregate interface view:
   lldp agent { nearest-customer | nearest-nontpmr } notification remote-change enable
   By default, LLDP trapping is disabled.
4. Enable LLDP-MED trapping (in Layer 2/Layer 3 Ethernet interface view).
   lldp notification med-topology-change enable
   By default, LLDP-MED trapping is disabled.
5. Return to system view.
   quit
6. (Optional.) Set the LLDP trap and LLDP-MED trap transmission interval.
   lldp timer notification-interval interval
   The default setting is 30 seconds.
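
For example, the following commands enable LLDP trapping and LLDP-MED trapping on a port and set the trap transmission interval to 60 seconds (a minimal sketch; the device name, interface, and interval value are placeholders):

<Sysname> system-view
[Sysname] interface Ten-GigabitEthernet1/0/1
[Sysname-Ten-GigabitEthernet1/0/1] lldp notification remote-change enable
[Sysname-Ten-GigabitEthernet1/0/1] lldp notification med-topology-change enable
[Sysname-Ten-GigabitEthernet1/0/1] quit
[Sysname] lldp timer notification-interval 60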

Displaying and maintaining LLDP

Execute display commands in any view.

- Display local LLDP information:
  display lldp local-information [ global | interface interface-type interface-number ]
- Display the information contained in the LLDP TLVs sent from neighboring devices:
  display lldp neighbor-information [ [ [ interface interface-type interface-number ] [ agent { nearest-bridge | nearest-customer | nearest-nontpmr } ] [ verbose ] ] | list [ system-name system-name ] ]
- Display LLDP statistics:
  display lldp statistics [ global | [ interface interface-type interface-number ] [ agent { nearest-bridge | nearest-customer | nearest-nontpmr } ] ]
- Display the LLDP status of a port:
  display lldp status [ interface interface-type interface-number ] [ agent { nearest-bridge | nearest-customer | nearest-nontpmr } ]
- Display the types of advertisable optional LLDP TLVs:
  display lldp tlv-config [ interface interface-type interface-number ] [ agent { nearest-bridge | nearest-customer | nearest-nontpmr } ]

LLDP configuration examples

Basic LLDP configuration example

By default, Ethernet, VLAN, and aggregate interfaces are down. To configure such an interface, bring the interface up by executing the undo shutdown command.

Network requirements

As shown in Figure 6, the NMS and Switch A are located in the same Ethernet network. Enable LLDP globally on Switch A and Switch B to perform the following tasks:

- Monitor the link between Switch A and Switch B on the NMS.
- Monitor the link between Switch A and the MED device on the NMS.

Figure 6 Network diagram

Configuration procedure

1. Configure Switch A:

# Enable LLDP globally.
<SwitchA> system-view
[SwitchA] lldp global enable

# Enable LLDP on Ten-GigabitEthernet 4/0/1. By default, LLDP is enabled on ports.
[SwitchA] interface Ten-GigabitEthernet4/0/1
[SwitchA-Ten-GigabitEthernet4/0/1] lldp enable

# Set the LLDP operating mode to Rx.
[SwitchA-Ten-GigabitEthernet4/0/1] lldp admin-status rx
[SwitchA-Ten-GigabitEthernet4/0/1] quit

# Enable LLDP on Ten-GigabitEthernet 4/0/2. By default, LLDP is enabled on ports.
[SwitchA] interface Ten-GigabitEthernet4/0/2
[SwitchA-Ten-GigabitEthernet4/0/2] lldp enable

# Set the LLDP operating mode to Rx.
[SwitchA-Ten-GigabitEthernet4/0/2] lldp admin-status rx
[SwitchA-Ten-GigabitEthernet4/0/2] quit

2. Configure Switch B:

# Enable LLDP globally.
<SwitchB> system-view
[SwitchB] lldp global enable

# Enable LLDP on Ten-GigabitEthernet 4/0/1. By default, LLDP is enabled on ports.
[SwitchB] interface Ten-GigabitEthernet4/0/1
[SwitchB-Ten-GigabitEthernet4/0/1] lldp enable

# Set the LLDP operating mode to Tx.
[SwitchB-Ten-GigabitEthernet4/0/1] lldp admin-status tx
[SwitchB-Ten-GigabitEthernet4/0/1] quit

Verifying the configuration

# Verify that:

- Ten-GigabitEthernet 4/0/1 of Switch A connects to a MED device.
- Ten-GigabitEthernet 4/0/2 of Switch A connects to a non-MED device.
- Both ports operate in Rx mode, and they can receive LLDP frames but cannot send LLDP frames.

[SwitchA] display lldp status
Global status of LLDP: Enable
Bridge mode of LLDP: customer-bridge
The current number of LLDP neighbors: 2
The current number of CDP neighbors: 0
LLDP neighbor information last changed time: 0 days, 0 hours, 4 minutes, 40 seconds
Transmit interval : 30s
Fast transmit interval : 1s
Transmit credit max : 5

Hold multiplier : 4
Reinit delay : 2s
Trap interval : 30s
Fast start times : 4

LLDP status information of port 1 [Ten-GigabitEthernet4/0/1]:
LLDP agent nearest-bridge:
Port status of LLDP : Enable
Admin status : RX_Only
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 1
Number of MED neighbors : 1
Number of CDP neighbors : 0
Number of sent optional TLV : 21
Number of received unknown TLV : 0

LLDP agent nearest-customer:
Port status of LLDP : Enable
Admin status : Disable
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 16
Number of received unknown TLV : 0

LLDP status information of port 2 [Ten-GigabitEthernet4/0/2]:
LLDP agent nearest-bridge:
Port status of LLDP : Enable
Admin status : RX_Only
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 1
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 21
Number of received unknown TLV : 3

LLDP agent nearest-nontpmr:
Port status of LLDP : Enable
Admin status : Disable
Trap flag : No
MED trap flag : No

Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 1
Number of received unknown TLV : 0

LLDP agent nearest-customer:
Port status of LLDP : Enable
Admin status : Disable
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 16
Number of received unknown TLV : 0

# Remove the link between Switch A and Switch B.

# Verify that Ten-GigabitEthernet 4/0/2 of Switch A does not connect to any neighboring devices.
[SwitchA] display lldp status
Global status of LLDP: Enable
The current number of LLDP neighbors: 1
The current number of CDP neighbors: 0
LLDP neighbor information last changed time: 0 days, 0 hours, 5 minutes, 20 seconds
Transmit interval : 30s
Fast transmit interval : 1s
Transmit credit max : 5
Hold multiplier : 4
Reinit delay : 2s
Trap interval : 30s
Fast start times : 4

LLDP status information of port 1 [Ten-GigabitEthernet4/0/1]:
LLDP agent nearest-bridge:
Port status of LLDP : Enable
Admin status : RX_Only
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 1
Number of MED neighbors : 1
Number of CDP neighbors : 0
Number of sent optional TLV : 0
Number of received unknown TLV : 5

LLDP agent nearest-nontpmr:

Port status of LLDP : Enable
Admin status : Disable
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 1
Number of received unknown TLV : 0

LLDP status information of port 2 [Ten-GigabitEthernet4/0/2]:
LLDP agent nearest-bridge:
Port status of LLDP : Enable
Admin status : RX_Only
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 0
Number of received unknown TLV : 0

LLDP agent nearest-nontpmr:
Port status of LLDP : Enable
Admin status : Disable
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 1
Number of received unknown TLV : 0

LLDP agent nearest-customer:
Port status of LLDP : Enable
Admin status : Disable
Trap flag : No
MED trap flag : No
Polling interval : 0s
Number of LLDP neighbors : 0
Number of MED neighbors : 0
Number of CDP neighbors : 0
Number of sent optional TLV : 16
Number of received unknown TLV : 0