Windows OpenFabrics (WinOF)


Gilad Shainer, Mellanox
Ishai Rabinovitz, Mellanox
Stan Smith, Intel
April 2008

Windows OpenFabrics (WinOF)
- Collaborative effort to develop, test, and release OFA software for Windows
- Components in both kernel and user space
- Broader test participation
- Add-on components for vendors to differentiate above WinOF

Supported Platforms
- Architectures: x86, x86_64, IA64
- Operating systems: Windows XP (32- and 64-bit), Windows Server 2003, Windows Compute Cluster Server 2003, Windows Server 2008*, Windows HPC Server 2008*
- WHQL'ed: Windows Server 2003, Windows Compute Cluster Server 2003

WinOF Software Stack
[Slide shows a diagram of the WinOF software stack.]

WinOF Maintainers
- IBAL, HCA drivers: Leonid Keller (Mellanox)
- NetworkDirect: Leonid Keller (Mellanox)
- SRP: Leonid Keller (Mellanox) & Eleanor Witiak (QLogic)
- IPoIB (and CM): Tzachi Dar (Mellanox) & Anh Duong (QLogic)
- WSD: Tzachi Dar (Mellanox)
- OpenSM: Yevgeny Kliteynik (Mellanox)
- DAPL/DAT, installer: Stan Smith (Intel)
- VNIC: Alex Estrin (QLogic)
- Qualification sites: Intel, Mellanox, QLogic, Voltaire, and others

WinOF 1.0.1 Release
Components:
- HCA drivers
- Access Layer (IBAL)
- NDIS 5.1 (IPoIB)
- WSD provider *
- SRP initiator **
- uDAPL
- OpenSM
- VNIC
- MSI installer
- Tools: verbs performance benchmarks, FW update tools, more
Add-ons:
- MPI components (Microsoft, Intel, HP)
- SDP
- Cluster management
Components supported by individual vendors may vary.
* Not available on Windows XP
** Not available on Windows XP 32-bit

Microsoft WHQL Logo
- WHQL is Microsoft's qualification program
- WHQL certification can be obtained only by an IHV; OFA cannot WHQL WinOF
- IB WHQL covers IPoIB and WSD, plus the low-level drivers and hardware
- WinOF 1.0 is the WHQL version

WinOF 1.1 Plans
- Release schedule: April
- OSs: Windows Server 2003 and XP
- Mellanox ConnectX support (desirable)
- IPoIB partitioning support
- WSD fixes
- VNIC (QLogic) fixes
- SRP fixes and performance optimizations
- uDAT/uDAPL: socket CM DAPL provider
- Device drivers digitally signed by OFA
Components: HCA drivers, Access Layer (IBAL), NDIS 5.1 (IPoIB), WSD provider *, SRP initiator **, uDAPL, OpenSM, VNIC, MSI installer, tools
* Not available on Windows XP
** Not available on Windows XP 32-bit

Futures
WinOF 1.2 (July 2008):
- ConnectX support
- Windows Server 2008 (HPC) and Vista 32/64 support
- NetworkDirect component
- Performance optimization for Windows HPC 2008
- IPoIB CM
WinOF 1.3 (November 2008):
- WinVerbs 1.0
- Diagnostics

We Need Your Help
- Developing code: send patches and comments to the mailing list (ofw@lists.openfabrics.org)
- Doing QA: open bugs in Bugzilla (https://bugs.openfabrics.org/); when opening a new bug you can choose "OpenFabrics Windows"

Windows OpenFabrics (WinOF): WinVerbs
Sean Hefty, Intel

WinVerbs Goals
- Provide a solution optimized for Windows
- Support multiple RDMA transports
- Ensure existing stack stability; evolutionary development
- Simplify porting applications/tools between Linux and Windows

Current WinOF Driver Stack
[Stack diagram: in user space, the IBAL library with the IB UVP and CompLib; in the kernel, filter/function/bus driver layers in which the SRP, VNIC, and IOU drivers (each with CompLib) attach to IOC/IOU PDOs, IPoIB attaches to its own PDO, and all sit above the IB Bus driver (IBAL, CompLib) and the IB HCA driver (CompLib).]

WinVerbs
- Adds an asynchronous userspace interface: the application controls all threading, using standard Windows OVERLAPPED I/O structures and operations (see the sketch after this list)
- Adds RDMA CM interfaces, kept generic for multiple applications: Network Direct, DAPL, OFED RDMA CM interfaces
- Minimizes changes to existing infrastructure
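As a rough illustration of the OVERLAPPED pattern the slide refers to, the C sketch below opens a device for overlapped access, issues an asynchronous DeviceIoControl, and decides for itself when to wait for completion. The device path and IOCTL code are hypothetical placeholders; the actual WinVerbs interface is not shown in these slides.

/* Sketch of asynchronous (OVERLAPPED) device I/O on Windows.
 * The device name and IOCTL code are hypothetical, not WinVerbs'. */
#include <windows.h>
#include <stdio.h>

#define IOCTL_EXAMPLE_QUERY CTL_CODE(FILE_DEVICE_UNKNOWN, 0x800, \
                                     METHOD_BUFFERED, FILE_ANY_ACCESS)

int main(void)
{
    /* Open the device for asynchronous access. */
    HANDLE hDev = CreateFileW(L"\\\\.\\ExampleRdmaDevice",
                              GENERIC_READ | GENERIC_WRITE,
                              0, NULL, OPEN_EXISTING,
                              FILE_FLAG_OVERLAPPED, NULL);
    if (hDev == INVALID_HANDLE_VALUE)
        return 1;

    BYTE out[256];
    DWORD bytes = 0;
    OVERLAPPED ov = {0};
    ov.hEvent = CreateEventW(NULL, TRUE, FALSE, NULL); /* manual-reset */

    /* Issue the request; with FILE_FLAG_OVERLAPPED it typically returns
     * immediately with ERROR_IO_PENDING, leaving the thread to the app. */
    if (!DeviceIoControl(hDev, IOCTL_EXAMPLE_QUERY, NULL, 0,
                         out, sizeof(out), &bytes, &ov)) {
        if (GetLastError() != ERROR_IO_PENDING) {
            CloseHandle(ov.hEvent);
            CloseHandle(hDev);
            return 1;
        }
        /* Do other work here, then block for completion. */
        GetOverlappedResult(hDev, &ov, &bytes, TRUE /* wait */);
    }
    printf("completed, %lu bytes\n", (unsigned long)bytes);

    CloseHandle(ov.hEvent);
    CloseHandle(hDev);
    return 0;
}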

Stack Enhancements
- Follow standard Windows device stacks: migrate the IB Bus driver over HCA devices, replace WinOF PnP with Windows PnP
- Optimize user-to-kernel transitions: continue to provide a common IOCTL framework, let the UVP control issuing IOCTLs, and take asynchronous I/O down to the driver (see the sketch after this list)
- Requires restructuring of the kernel bus driver and enhancements to the UVP
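Taking asynchronous I/O down to the driver generally means the kernel side pends a request instead of blocking the issuing thread, so the userspace OVERLAPPED call above completes asynchronously. A minimal, hypothetical WDM-style sketch of that pattern follows; the dispatch routine and queue are illustrative, not WinVerbs source.

/* Generic WDM pattern for pending an IOCTL so the caller's OVERLAPPED
 * request completes asynchronously. Illustrative sketch only: real code
 * needs locking and cancel-safe queuing. */
#include <ntddk.h>

/* Hypothetical driver-global queue, protected elsewhere by a lock. */
extern LIST_ENTRY g_PendingIrps;

NTSTATUS ExampleDispatchDeviceControl(PDEVICE_OBJECT DeviceObject, PIRP Irp)
{
    UNREFERENCED_PARAMETER(DeviceObject);

    /* Mark the IRP pending and queue it; a DPC or worker routine later
     * completes it with IoCompleteRequest(Irp, IO_NO_INCREMENT). */
    IoMarkIrpPending(Irp);
    InsertTailList(&g_PendingIrps, &Irp->Tail.Overlay.ListEntry);

    /* STATUS_PENDING returns control immediately; the user thread is
     * free to do other work or wait on its OVERLAPPED event. */
    return STATUS_PENDING;
}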

WinOF Total Solution
[Stack diagram: NDI, DAPL, and OFED abstractions sit on a common library over WinVerbs and IBAL in user space, with an iWARP UVP and an IB UVP; in the kernel, SRP, VNIC, IPoIB, and IOU drivers sit above a possible iWARP bus ("iWARP Bus?": WinVerbs, iWARP driver) and the IB Bus (WinVerbs, IBAL) over the IB HCA driver (CompLib).]

THANK YOU!