OpenFabrics Alliance Interoperability Logo Group (OFILG) Dec 2011 Logo Event Report

UNH-IOL 121 Technology Drive, Suite 2, Durham, NH 03824
+1-603-862-0090
OpenFabrics Interoperability Logo Group (OFILG)
ofalab@iol.unh.edu

Cover Page

Amit Krig
Mellanox Technologies, Ltd.
Beit Mellanox
Yokneam, Israel 20692

Date: 16 Mar 2012
Report Revision: 1.2
OFED Version on Compute Nodes: 1.5.4
Operating System on Compute Nodes: SL 6.1

Enclosed are the results from OFA Logo testing performed on the following devices under test (DUTs):
Mellanox MCX353A-FCBT
Mellanox MCX354A-FCBT

FDR devices are not yet supported by the logo program, but they were tested at the December 2011 event and are listed in the Beta section for informative purposes. This is because the IBTA FDR specification was not finalized in time for this event and none of the applicable test plan procedures had ever been run using FDR products.

The test suite referenced in this report is available at the IOL website. Release 1.4 (2011-Oct-25) was used.
http://www.iol.unh.edu/services/testing/ofa/testsuites/ofa-iwg_interoperability_test_plan-v1.40.pdf

The logo document referenced in this report is available at the IOL website. Release 1.14 (2011-Mar-01) was used.
http://www.iol.unh.edu/services/testing/ofa/logoprogram/ofa-unh-iol_logo_program-v1.14.pdf

The following table highlights the mandatory test results required for the OpenFabrics Interoperability Logo for the DUTs per the test plan referenced above and the current OpenFabrics Interoperability Logo Program (OFILP). Additional beta testing was performed on the DUTs beyond what is reflected in this report; a separate report will outline those results.

Mandatory Test Summary

Test Procedures                        IWG Test Status    Result/Notes
10.1: Link Initialization              Mandatory          Refer to Comments
10.2: IB Fabric Initialization         Mandatory          PASS
10.3: IPoIB Connected Mode             Mandatory          PASS
10.4: IPoIB Datagram Mode              Mandatory          PASS
10.5: SM Failover and Handover         Mandatory          PASS
10.6: SRP                              Mandatory          PASS
12.1: TI iSER                          Mandatory          Not Available
12.2: TI NFS over RDMA                 Mandatory          PASS
12.3: TI RDS                           Mandatory          PASS
12.4: TI SDP                           Mandatory          PASS
12.5: TI uDAPL                         Mandatory          PASS
12.6: TI RDMA Basic Interop            Mandatory          PASS
12.8: TI RDMA Stress                   Mandatory          PASS
12.11: TI MPI Open                     Mandatory          PASS
12.12: TI MPI OSU                      Mandatory          PASS

A summary of all results follows on the second page of this report. For specific details regarding issues, please see the corresponding test result.

Testing Completed: 05 Jan 2012     Nickolas Wood ndv2@iol.unh.edu
Review Completed: 16 March 2012    Bob Noseworthy ren@iol.unh.edu

Result Summary

The following table summarizes all results from the event pertinent to this IB device class.

Test Procedures                          IWG Test Status    Result/Notes
10.1: Link Initialization                Mandatory          Refer to Comments
10.2: IB Fabric Initialization           Mandatory          PASS
10.3: IPoIB Connected Mode               Mandatory          PASS
10.4: IPoIB Datagram Mode                Mandatory          PASS
10.5: SM Failover and Handover           Mandatory          PASS
10.6: SRP                                Mandatory          PASS
10.7: Ethernet Gateway                   Beta               Not Tested
10.8: FibreChannel Gateway               Beta               Not Tested
12.1: TI iSER                            Mandatory          Not Available
12.2: TI NFS over RDMA                   Mandatory          PASS
12.3: TI RDS                             Mandatory          PASS
12.4: TI SDP                             Mandatory          PASS
12.5: TI uDAPL                           Mandatory          PASS
12.6: TI RDMA Basic Interoperability     Mandatory          PASS
12.8: TI RDMA Stress                     Mandatory          PASS
12.10: TI MPI Intel                      Beta               Not Tested
12.11: TI MPI Open                       Mandatory          PASS
12.12: TI MPI OSU                        Mandatory          PASS

Digital Signature Information

This document was signed using an Adobe Digital Signature. A digital signature helps to ensure the authenticity of the document, but only in this digital format. For information on how to verify this document's integrity, proceed to the following site:
http://www.iol.unh.edu/certifydoc/certificates_and_fingerprints.php

If the document status still indicates "Validity of author NOT confirmed", then please contact the UNH-IOL to confirm the document's authenticity. To further validate the certificate integrity, Adobe 9.0 should report the following fingerprint information:

MD5 Fingerprint: B4 7E 04 FE E8 37 D4 D2 1A EA 93 7E 00 36 11 F3
SHA-1 Fingerprint: 50 E2 CB 10 21 32 33 56 4A FC 10 4F AD 24 6D B3 05 22 7C C0

Report Revision History

OFA Logo Event Report Dec 2011
v1.0  Initial working copy
v1.1  Revised working copy
v1.2  Post arbitration resolution update

Configuration Files (attachments)

Scientific Linux 6.1 Configuration File
OFED 1.5.4 Configuration File

Result Key

The following table contains possible results and their meanings:

Result               Description
PASS                 The Device Under Test (DUT) was observed to exhibit conformant behavior.
PASS with Comments   The DUT was observed to exhibit conformant behavior; however, an additional explanation of the situation is included.
FAIL                 The DUT was observed to exhibit non-conformant behavior.
Warning              The DUT was observed to exhibit behavior that is not recommended.
Informative          Results are for informative purposes only and are not judged on a pass or fail basis.
Refer to Comments    From the observations, a valid pass or fail could not be determined. An additional explanation of the situation is included.
Not Applicable       The DUT does not support the technology required to perform this test.
Not Available        Due to testing station limitations or time limitations, the tests could not be performed.
Borderline           The observed values of the specific parameters are valid at one extreme and invalid at the other.
Not Tested           Not tested due to the time constraints of the test period.

DUT and Test Setup Information

Figure 1: The IB fabric configuration utilized for any tests requiring a multi-switch configuration.

DUT #1 Details
Manufacturer: Mellanox
Model: MCX353A-FCBT
Speed: FDR
Firmware Revision: 2.10.1050
Hardware Revision: 0
Located in Host: Hati
Firmware MD5sum: add85cf4b5d8c8d19c1c14a2f0cb8777
Additional Comments / Notes:

DUT #2 Details
Manufacturer: Mellanox
Model: MCX354A-FCBT
Speed: FDR
Firmware Revision: 2.10.1050
Hardware Revision: 0
Located in Host: Titan
Firmware MD5sum: c7457b82cdc1c1d6a16bba0d084fe496
Additional Comments / Notes:
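The firmware MD5sums above identify the exact images tested. Below is a minimal sketch of checking a downloaded firmware image against the DUT #1 value; the firmware file path is hypothetical, and only the expected digest comes from this report.

```python
#!/usr/bin/env python3
"""Minimal sketch: verify a firmware image against the MD5sum listed for DUT #1."""
import hashlib

EXPECTED = "add85cf4b5d8c8d19c1c14a2f0cb8777"   # DUT #1 firmware MD5sum from this report
FIRMWARE = "/tmp/fw-ConnectX3.bin"              # hypothetical path to the firmware image

md5 = hashlib.md5()
with open(FIRMWARE, "rb") as f:
    # Hash the image in 1 MiB chunks to avoid loading it all into memory.
    for chunk in iter(lambda: f.read(1 << 20), b""):
        md5.update(chunk)

print("match" if md5.hexdigest() == EXPECTED else "MISMATCH")
```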

Mandatory Tests - IB Device Test Results

10.1: Link Initialization

Results, Part #1: Refer to Comments

Discussion: The Mellanox FDR HCAs were unable to properly link with the DDN S2A9900 and NetApp XBB2 SRP targets using the firmware initially provided to the UNH-IOL by Mellanox. A subsequent firmware that fixed the problem was provided by Mellanox; however, it arrived after all testing was completed, so only a link initialization spot check was performed with the new firmware.

Link Partner                                      MCX353A-FCBT        MCX354A-FCBT
QLogic 12200 (Switch, QDR)                        PASS                PASS
Mellanox SX6025 (Switch, FDR)                     PASS                PASS
Mellanox SX6036 (Switch, FDR)                     PASS                PASS
Mellanox IS-5030 (Switch, QDR)                    PASS                PASS
Mellanox BX5020 (Gateway, QDR)                    PASS                PASS
DataDirect Networks SFA10000 (SRP Target, QDR)    PASS                PASS
DataDirect Networks S2A9900 (SRP Target, DDR)     Refer to Comments   Refer to Comments
NetApp Pikes Peak (SRP Target, QDR)               PASS                PASS
NetApp XBB2 (SRP Target, DDR)                     Refer to Comments   Refer to Comments
Host: Themis, HCA: MHQH29C-XTR (QDR)              PASS                PASS
Host: Pan, HCA: MHQH19B-XTR (QDR)                 PASS                PASS
Host: Hati, HCA: MCX353A-FCBT (FDR)               NA                  PASS
Host: Titan, HCA: MCX354A-FCBT (FDR)              PASS                NA

10.2: IB Fabric Initialization

All subnet managers used while testing with OFED 1.5.4 were able to correctly configure the selected topology.

10.3: IPoIB Connected Mode

Part A: ping; Part B: SFTP; Part C: SCP
IPoIB ping, SFTP, and SCP transactions completed successfully between all HCAs; each HCA acted as both a client and a server for all tests.
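The IPoIB Connected Mode test above exercises ping, SFTP, and SCP over the IPoIB interfaces of each HCA pair. The following is a minimal sketch of that kind of spot check, not the OFILG harness: the interface name ib0, the peer address, and passwordless SSH/SCP between hosts are assumptions; switching /sys/class/net/ib0/mode to "connected" with a 65520-byte MTU reflects the standard Linux IPoIB connected-mode setup.

```python
#!/usr/bin/env python3
"""Minimal sketch of an IPoIB connected-mode spot check (requires root)."""
import subprocess

IPOIB_IF = "ib0"           # hypothetical IPoIB interface name
PEER = "192.168.10.2"      # hypothetical IPoIB address of the link partner

def run(cmd):
    """Run a command, echo it, and raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Switch the interface to connected mode and raise the MTU accordingly.
with open(f"/sys/class/net/{IPOIB_IF}/mode", "w") as f:
    f.write("connected")
run(["ip", "link", "set", IPOIB_IF, "mtu", "65520"])

# Part A: ping the peer over IPoIB.
run(["ping", "-c", "4", PEER])

# Part C: copy a file to the peer over SCP (SFTP would be exercised similarly).
run(["scp", "/tmp/testfile", f"root@{PEER}:/tmp/testfile"])
```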

10.4: IPoIB Datagram Mode

Part A: ping; Part B: SFTP; Part C: SCP
IPoIB ping, SFTP, and SCP transactions completed successfully between all HCAs; each HCA acted as both a client and a server for all tests.

10.5: SM Failover and Handover

SM Pairings: OpenSM (OFED 1.5.4) with OpenSM (OFED 1.5.4)
Result: PASS
OpenSM was able to properly handle SM priority and state rules.

10.6: SRP

SRP communications between all HCAs and all SRP targets succeeded while the above-mentioned SMs were in control of the fabric.

12.1: TI iSER

Result: Not Tested
This test was not performed as there are no devices that support the iSER test procedure present in the event topology.

12.2: TI NFS over RDMA

Connectathon was used to test NFS over RDMA; each HCA acted as both a client and a server.
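The NFS over RDMA runs above were driven by the Connectathon suite itself. The sketch below only illustrates how an NFS-over-RDMA mount of the kind exercised here might be brought up and sanity-checked: the server address, export path, and mount point are hypothetical, while port 20049 is the conventional NFS/RDMA port.

```python
#!/usr/bin/env python3
"""Minimal sketch of an NFS-over-RDMA mount check (not the Connectathon suite)."""
import os
import subprocess

SERVER = "192.168.10.2"      # hypothetical NFS/RDMA server (IPoIB address)
EXPORT = "/export"           # hypothetical export path
MOUNTPOINT = "/mnt/nfsrdma"  # hypothetical mount point

os.makedirs(MOUNTPOINT, exist_ok=True)

# Mount the export over the RDMA transport on the conventional port 20049.
subprocess.run(
    ["mount", "-t", "nfs", "-o", "rdma,port=20049",
     f"{SERVER}:{EXPORT}", MOUNTPOINT],
    check=True,
)

# Trivial write/read-back to confirm the mount is usable.
path = os.path.join(MOUNTPOINT, "probe.txt")
with open(path, "w") as f:
    f.write("nfs over rdma probe\n")
with open(path) as f:
    assert f.read() == "nfs over rdma probe\n"

subprocess.run(["umount", MOUNTPOINT], check=True)
print("NFS over RDMA mount check completed")
```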

12.3: TI RDS

Part A: ping; Part B: stress
The Reliable Datagram Sockets (RDS) protocol was tested between all HCAs; all communications completed successfully.

12.4: TI SDP

Part A: netperf; Part B: SFTP; Part C: SCP
All communications using the SDP protocol completed successfully; each HCA acted as both a client and a server for all tests.

12.5: TI uDAPL

All communications using DAPL were seen to complete successfully as described in the referenced test plan; each HCA acted as both a client and a server for all tests.

12.6: TI RDMA Basic Interoperability

All devices were shown to correctly exchange core RDMA operations across a simple network path under nominal (unstressed) conditions; each HCA acted as both a client and a server for all tests.

12.8: TI RDMA Stress

All IB switches were seen to properly handle a large load, as indicated by the successful completion of control communications between two HCAs while all other HCAs in the fabric were used to generate traffic and place a high load on the switch. Each HCA acted as both a client and a server for the control connection.
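The RDMA basic interoperability and stress tests above follow the procedures in the referenced test plan. The sketch below is only an approximation of the kind of point-to-point client/server RDMA exchange involved, using the perftest utilities (ib_send_bw, ib_send_lat) that ship with OFED; the remote host name, the use of SSH to start the server side, and the fixed start-up delay are assumptions.

```python
#!/usr/bin/env python3
"""Minimal sketch of a point-to-point RDMA send check with OFED perftest tools."""
import subprocess
import time

REMOTE = "titan"             # hypothetical remote host reachable over passwordless SSH

for tool in ("ib_send_bw", "ib_send_lat"):
    # Start the server side on the remote node over SSH.
    server = subprocess.Popen(["ssh", REMOTE, tool])
    time.sleep(2)            # crude wait for the server to start listening
    # Run the client side locally against the remote node.
    subprocess.run([tool, REMOTE], check=True)
    server.wait()

print("perftest exchanges completed")
```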

12.11: TI MPI Open

Parts A and B
Complete heterogeneity; 1 MPI process per OFED 1.5.4 deployed system as described in the cluster topology (red and purple system icons), IB device vendor agnostic.

12.12: TI MPI OSU

Parts A and B
Complete heterogeneity; 1 MPI process per OFED 1.5.4 deployed system as described in the cluster topology (red and purple system icons), IB device vendor agnostic.
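Both MPI tests above run one MPI process per OFED 1.5.4 system across a heterogeneous set of HCAs. The sketch below shows how such a run might be launched with Open MPI and the OSU micro-benchmarks; it is not the OFILG procedure itself, and the hostfile path and the assumption that osu_latency and osu_bw are on every node's PATH are hypothetical.

```python
#!/usr/bin/env python3
"""Minimal sketch of launching OSU micro-benchmarks with Open MPI across two nodes."""
import subprocess

# Hypothetical hostfile, e.g. containing "hati slots=1" and "titan slots=1".
HOSTFILE = "/tmp/hosts"

for bench in ("osu_latency", "osu_bw"):
    # One MPI process per node, two processes total.
    subprocess.run(
        ["mpirun", "-np", "2", "--hostfile", HOSTFILE, bench],
        check=True,
    )

print("OSU micro-benchmark runs completed")
```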

Beta Tests - IB Device Test Results

10.7: IB Ethernet Gateway

Result: Not Tested
This test was not performed as there are no devices that support the Ethernet Gateway test procedure present in the event topology.

10.8: IB FibreChannel Gateway

Result: Not Tested
This test was not performed as there are no devices that support the FibreChannel Gateway test procedure present in the event topology.

12.10: MPI Intel

Result: Not Tested
This test was not performed as the binaries for Intel MPI are not present on the compute nodes in the event topology.