InfiniBand* Architecture


1 InfiniBand* Architecture Irv Robinson - Intel Mike Krause - Hewlett Packard Dennis Miller - Intel Arland Kunz - Intel Intel Labs

2 Server Labs Wednesday 10:15-12:15 IPMI Part 1 Platform Management Technologies Technical details on the IPMI 1.5 LAN and Modem platform management specifications Plus related initiatives such as DMTF Pre-OS Working Group, Metolious-2, and a proposed PCI management bus 2:15-4:15 IPMI Part 2 IPMI v1.0 Conformance Test Lab. Hands-on with the latest IPMI v1.0 Conformance Test Tool Plus labs on IPMI and IPMB messaging, information on test customization, and application of the tool for IPMI-based systems development Intel Labs

3 Agenda Architecture Overview Enabling of Flexible Solutions High Speed Serial Links Intel's Industry Enabling Efforts Intel Labs

4 InfiniBand* Architecture Overview Irv Robinson Intel Corporation Co-Chair - Technical Working Group InfiniBand* Trade Association

5 Agenda Why InfiniBand* Architecture is Important InfiniBand Architecture and Components The Fabric as the Core Concept InfiniBand Hosts, Targets & Management The Fundamentals of Workflow Specification Milestones

6 InfiniBand* Architecture and Components InfiniBand Architecture Describes a System Area Network Unified Fabric for use between elements of computer systems Support for multiple concurrent workloads 1st Order Fabric (memory interface semantics described in the specifications) Abstracted usage model for sophisticated computing hardware & software Message Passing foundation Peer-to-peer capabilities Virtual memory and multi-threading aware

7 InfiniBand* Architecture and Components So What? The application world is moving to distributed message passing applications The demand for clustered & networked systems is increasing rapidly Everything needs to be connected Scalability has got to move out of the box Downtime is out - Hot Plug is in Flexibility is required to accommodate technologies evolving at different rates & in different ways I/O infrastructure separation from processor complex Fabric interface closer to memory & processors Increased processing capabilities in all units

8 InfiniBand* Architecture and Components Scalable Systems are a Pain Today [Diagram: computing nodes each wired to separate IPC Interconnect, IP Network, and Storage Area Network fabrics - "System Area Spaghetti"] All fabrics are connected to every node Multiple, function-specific fabric types are required, with: Multiple technology evolution trends Multiple administration strategies Multiple interfaces And they all funnel into...

9 InfiniBand* Architecture and Components Legacy Host Architecture Slot-based I/O connection Interface to memory via an I/O Bus Low level of abstraction (load/store) CPU CPU Host Interconnect Mem Cntlr Sys Mem I/O Bus Adapter Scaling strategy is adding buses Function-specific fabric

10 InfiniBand* Architecture and Components An InfiniBand Unified Fabric System Computing Node Computing Node Computing Node Eliminates the need for slot-based computing nodes Hot Plug scalability can be practical IP Network InfiniBand Switched Fabric Storage Area Network Much reduced intra-system spaghetti Function-specific fabrics at edges of system. Less intrusive evolution model.

11 InfiniBand* Architecture and Components The InfiniBand Architecture Model [Diagram: CPUs, Mem Cntlr, and Sys Mem attach to the fabric through an HCA; Targets attach through TCAs; Switches route links within the subnet and Routers connect to an external Network]

12 InfiniBand* Architecture and Components The Fabric is the Core Concept [Diagram: switches interconnected by multiple links] Switch Routes packets based on destination Local ID, service level Links 2.5 Gbit/s signaling rate Dual Simplex Multiple link widths: 1x, 4x, 12x Auto-negotiation to mutually acceptable width and signaling rate Common backplane connector(s)

13 InfiniBand* Architecture and Components Transport Concepts [Diagram: a Transaction comprises Messages; each Message is segmented into Packets] Packets Routable unit of transfer Messages consist of multiple packets Automatic segmentation & re-assembly Virtual Lanes & Service Levels for QoS & Traffic Shaping
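The "automatic segmentation & re-assembly" bullet can be modeled in a few lines. This is an illustrative sketch only, not the IBA wire format; packet headers, CRCs, and real MTU negotiation are omitted:

```python
# Toy model of message segmentation and reassembly: a message is cut
# into packets no larger than the path MTU and rebuilt in sequence
# order on the receiving side.

def segment(message: bytes, mtu: int):
    """Split a message into (sequence_number, payload) packets."""
    return [(seq, message[i:i + mtu])
            for seq, i in enumerate(range(0, len(message), mtu))]

def reassemble(packets):
    """Rebuild the message; packets may arrive out of order."""
    return b"".join(payload for _, payload in sorted(packets))

msg = b"x" * 10_000
pkts = segment(msg, mtu=4096)            # 4096 + 4096 + 1808 bytes
assert reassemble(reversed(pkts)) == msg  # order is restored by seq no.
```

In the architecture this work is done by the channel adapter hardware, which is why applications see only whole messages.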

14 InfiniBand* Architecture Fabric Partitions [Diagram: Hosts A and B and I/O units A-D grouped into Partitions 1-3 across the InfiniBand Fabric]

15 InfiniBand* Architecture Subnets & Routers Router Transports packets between subnets Intra-Router links may be InfiniBand links or other

16 InfiniBand* Architecture and Components InfiniBand Host Architecture Host Channel Adapter Connects memory controller to fabric through one or more links Provides work queues for posting work requests Provides RDMA engines Manages transport functions Supports memory translation and protection

17 InfiniBand* Architecture and Components I/O Concepts I/O Unit One Target Channel Adapter One or more I/O Controllers Target Channel Adapter Provides link and transport services to I/O Controllers Peer-to-peer enabled I/O Controller Target of I/O request messages May be storage or network Request protocols may be standard or proprietary, at low levels or high

18 InfiniBand* Architecture and Components Management Enabling Industry Standards for Network & Sys Management IPMI SNMP CIM ACPI DMI & many more Configuration Management Determination of end node attributes System diagnostics Fabric Management Discovery Switch provisioning Fabric operations Partition Management Creation & Administration of Partitions

19 InfiniBand* Architecture and Components Work Queues The basic mechanism for inter-endpoint communications Dual Simplex model Work Queues come in pairs - Queue Pairs (QPs) Work is scheduled by posting the request to the queue Poster notified when request completes [Diagram: an outbound queue holding Send 1, Send 2, RDMA Rd 3, RDMA Wr 4, Send 5 and an inbound queue holding Recv 1-3, processed by an HCA or TCA between Memory and the Link]
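The posting model on this slide can be sketched as a toy queue pair. Names like `post_send` and `hw_process_one` are illustrative stand-ins, not the Verbs interface, and only the send side is modeled:

```python
# Toy model of the work-queue flow: work requests are posted to a send
# queue, a stand-in for the channel adapter consumes them, and the
# poster learns of completion through a completion queue.
from collections import deque

class QueuePair:
    def __init__(self):
        self.send_q = deque()           # posted, not yet processed
        self.completion_q = deque()     # finished work, newest last

    def post_send(self, wr):
        """Poster schedules work by enqueuing a request."""
        self.send_q.append(wr)

    def hw_process_one(self):
        """Stand-in for HCA/TCA hardware consuming one request."""
        wr = self.send_q.popleft()
        self.completion_q.append({"wr": wr, "status": "success"})

qp = QueuePair()
for op in ("Send 1", "Send 2", "RDMA Rd 3", "RDMA Wr 4", "Send 5"):
    qp.post_send(op)
while qp.send_q:                        # "hardware" drains the queue
    qp.hw_process_one()
assert qp.completion_q[-1]["wr"] == "Send 5"
```

The key property the sketch shows: the poster never blocks on the transfer itself, it only observes completions.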

20 Work Queues, Channels, and Connections Connections A connection is a logical association of work request producer / consumer entities Over one or more channels [Diagram: a Host Driver and an IO Controller joined by a Connection carried over a Channel]

21 Work Queues, Channels, and Connections Channels and Service Types A channel is an association of QPs InfiniBand* architecture supports several communication service types: Reliable Connected Unreliable Connected Reliable Datagram Unreliable Datagram Raw Datagram

22 InfiniBand* Architecture Partitioning & Software Work Flow Verbs enqueue Work Requests Verbs abstract the HCA from the application, and the API from the HCA Not directly visible to end-user [Diagram: a Work Request passes through Verbs to become WQEs on a Work Queue consumed by HCA Hardware; Work Completions are returned as CQEs on a Completion Queue]

23 InfiniBand* Architecture The Verbs Model [Diagram: O/S, O/S Interface (Driver), HCA Interface, HCA Hardware (Vendor Specific)] An InfiniBand architecture-aware O/S assumes capabilities described by Verbs, but assumes its own abstracted view An O/S specific driver interacts with HCA, and abstracts HCA functions and parameters to O/S view The HCA provides a vendor-specific interface with functions and parameters. Non-abstracted view of commands & descriptors. The HCA hardware must provide a specific set of functional capabilities dictated by Verbs

24 Summary InfiniBand* Fabrics are System Area Networks for interconnecting elements of computing systems A message-based, highly abstracted architecture with hardware assists and direct memory interface Host and Target interface capabilities specified independent of hardware and software

25 Specification Milestones Q Working Draft Available to all members via web site Q Final Draft Functionally complete Release for comments Mid-2000 Release

26 Agenda Architectural Review Enabling of Flexible Solutions High Speed Serial Links Intel's Industry Enabling Efforts Intel Labs

27 Flexible Interconnect - Flexible Solutions Michael Krause Hewlett Packard Company Link Working Group Co-Chair InfiniBand SM Trade Association

28 What is a flexible interconnect? A technology inflection point Changes the way we think about problems Delivers innovative solutions to customer problems Creates new customer and industry opportunities Built with an eye on the future Well-defined, layered architecture Architecture evolves at the rate of technology Open multi-vendor inter-operability from day one Strong customer investment protection Can be tailored to fit the customer requirements

29 Paradigm Shift

30 Paradigm Shift Paradigm shift is: Single, cohesive, open paradigm: RAS Management Application Communication Point-to-point, switch-based Message-based communications Applications direct map to H/W Standardized hardware semantics H/W Semantics == Application paradigm H/W implements standard comms ops Frees software to focus on application requirements True peer-to-peer communication [Diagram: end nodes attached to switches forming Subnet A]

31 Reduce Application Design Impact Map Application paradigm to hardware transport services Reliable Connection (RC) Unreliable Connection (UC) Reliable Datagram (RD) Unreliable Datagram (UD) Multicast (optional) Raw Packet (optional) Off-load application operation with Reliable communication: HW generates acknowledgments for every packet. HW generates / checks packet sequence numbers HW rejects duplicates, detects missing packets Client transparent recovery from most fabric level errors.

32 Innovative Fabric-level Services: Virtual Lane (VL) [Diagram: packets multiplexed onto and demultiplexed off a single physical link] Multiplex multiple independent data streams onto a single physical link which provides: Differentiated services on a packet-boundary basis Increase fabric utilization in the face of head-of-line blocking on a given VL and via VL-based routing across multiple paths Support for up to 16 VLs with 1 VL reserved for fabric management Implementations shall support a minimum of 1 VL for application usage and 1 VL for fabric management IBA defines a VL mapping algorithm to ensure inter-operability between endnodes which support different numbers of VLs.
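The VL mapping idea can be illustrated with a deliberately simplified scheme. The modulo fold below is an assumption for illustration only; the actual SLtoVL mapping tables are defined in the IBA specification:

```python
# Sketch of service-level-to-virtual-lane mapping: when an endnode
# supports fewer data VLs than there are service levels, several SLs
# fold onto one VL. The modulo rule here is illustrative, not the
# spec's SLtoVL table mechanism.

VL15_MGMT = 15   # VL 15 is reserved for fabric management

def sl_to_vl(service_level: int, data_vls_supported: int) -> int:
    """Map a service level (0-15) onto one of the supported data VLs."""
    assert 0 <= service_level <= 15
    assert 1 <= data_vls_supported <= 15
    return service_level % data_vls_supported

# A node with 4 data VLs folds all 16 SLs onto VLs 0-3:
assert {sl_to_vl(sl, 4) for sl in range(16)} == {0, 1, 2, 3}
# A minimal node (1 data VL) carries all application traffic on VL 0:
assert all(sl_to_vl(sl, 1) == 0 for sl in range(16))
```

Whatever the exact table, the point of the mechanism is the same: two endnodes with different VL counts can still agree on which lane each packet travels.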

33 Flexible Topology Building Blocks Switch Routes packets within a subnet. May support VLs Multiple link width - 1 / 4 / 12 Partitioning / Multicast (optional) Router Routes packets between subnets IBA subnet-to-IBA subnet IBA subnet-to-disparate fabric-to-IBA subnet e.g. 10 GbE joining two InfiniBand subnets IBA subnet-to-disparate (multi-protocol) e.g. 10 GbE / OC 192 subnet [Diagram: Subnets A and B of switched end nodes joined by a Router]

34 Flexible Topologies Match topology to customer reqs Single-board Integrated multi-module Active or passive backplanes Active or passive chassis Single-subnet Multiple subnets Disparate fabric linking IBA subnets Variety of network topologies Trees Loops K-ary cubes [Diagram: CPUs, MEM CTL, and HCA feeding switches that fan out to Enet, FC, GigE, and SCSI adapters]

35 Innovative Fabric-level Protection: Partitioning H/W enforced protection Enumeration Only see what you're allowed while transparently sharing common fabric / endnodes Per QP Packet-filtering P_Key (16-bit) Endnode / Switch / Router Silent discard - opt. Alarm gen Partition Manager Within / across subnets Single-point management [Diagram: end nodes across Subnets A and B sharing common switches and routers]
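The P_Key packet-filtering described here can be sketched as follows. The 16-bit key layout (15-bit partition number plus a membership bit) follows IBA; the table representation is illustrative, and the full/limited-member interaction rule is omitted for brevity:

```python
# Sketch of H/W-enforced partition filtering: each packet carries a
# 16-bit P_Key; the low 15 bits name the partition, the top bit is the
# (full/limited) membership bit. A port silently discards packets whose
# partition is not in its P_Key table. Simplified: the rule that two
# limited members may not communicate is not modeled.

MEMBERSHIP_BIT = 0x8000

def pkey_match(packet_pkey: int, port_pkey_table: set) -> bool:
    """True if the packet's partition appears in the port's table."""
    base = packet_pkey & ~MEMBERSHIP_BIT
    return any((p & ~MEMBERSHIP_BIT) == base for p in port_pkey_table)

table = {0x8001, 0x0002}             # full member of 1, limited of 2
assert pkey_match(0x0001, table)     # same partition: accepted
assert not pkey_match(0x0003, table) # unknown partition: silent discard
```

Because the check is a table lookup per packet, it can live in endnode, switch, or router silicon with no software on the data path.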

36 I/O Sharing Increased resource utilization via I/O sharing For example, shared console, boot device, 10 GbE, etc. I/O Sharing resource requirements function of: Transport service (e.g. QP / connection) Partition - QP per supported partition Each QP may be assigned a different QoS Bandwidth, priority of service, etc. Customer benefits: Reduced total cost of ownership Reduced number of endnodes to manage Improved load-sharing for high-speed devices / backbones

37 RAS Benefits Leverages industry's Five 9's Experience Fault-zone isolation Redundant components / fabrics Multi-port channel adapters Multipath support Within and across subnets Mirroring Load-balancing Active-Active paradigm Delivers customer with maximum value while providing transparent fail-over solutions Hot-plug / removal of all IBA components CPU / Mem Complexes HCA Switch TCA ENET Chassis

38 Paradigm Shift - Customer Benefits Enables widest application / management base Protects customer investment Supports new application paradigms Tailor solutions to meet customer environment Improved latency tolerance Improved system efficiency / performance New hardware topologies Semantics remain constant independent of distance Subnets to improve access control, management, QoS, performance Shared I/O endnodes to reduce total cost of ownership Single-point and distributed management support Price / Performance / RAS / Management Trade-off capabilities to meet operational requirements

39 Hardware Model & Benefits

40 Component Communication Component communication via: ASIC-to-ASIC Board-to-board Chassis-to-chassis via cables Standardized: Signaling Cables - Copper, Optical Connectors - Backplane, face-plate Customer benefits: Multiple solution sources Open standard - inter-operability Flexible solution deployment [Diagram: CPUs, MEM CTL, and HCA connected through a Switch and Router to TCAs for FC, SCSI, and ENET]

41 Multiple Form Factors Four adapter form factors supported: Single-wide / single-height (100 x 240 mm), Single-wide / double-height Double-wide / single-height, Double-wide / double-height Large variety of adapter solutions possible, e.g. array controllers Face plate can support quad SCSI / HSSDC connectors Optional redundant backplane connector [Diagram: adapter carrier board with cover, face-plate, and front-to-back airflow]

42 Flexible H/W Management Built-in In-band management Defined management operations transmitted via InfiniBand* link Optional I2C bridge in InfiniBand* ASIC On-board power regulation Eliminates power mgmt issues as technology evolves Aux power for system management / low-power devices Board I2C access to VPD, LEDs, sensors, etc. Backplane I2C access to chassis VPD, slot In-band ACPI & wake-on LAN / Link Out-of-band board & chassis power control

43 Actively Managed Chassis Provides Increased Functionality Detect presence Control power Control Hot-swap In-band view of: Chassis state Slot population Private Chassis Devices [Diagram: switch and modules in a chassis with a private management entity reached over IB-ML links and InfiniBand* inter-chassis links]

44 Hardware Model Benefits Open, inter-operable hardware components Multiple sources - Improved competition Reduced customer / vendor costs Development, manufacturing, support, etc. Variety of h/w communication paradigms Innovative and varied designs / packaging Multiple adapter form factors Tailor design to optimal form factors Switch / router may be implemented in an adapter Passive backplane for improved RAS Integrated management for adapters, chassis, backplanes Complete solution from day one

45 Software Model & Benefits

46 Verbs / Channel Interface [Diagram: an Upper Layer Protocol issues Work Requests through Verbs; the Channel Interface turns them into Work Queue Elements with Data Segments on a Send or Receive Queue, and returns Work Completions as Completion Queue Elements on a Completion Queue] Verbs define abstract h/w semantics - do not define an API Channel interface is implementation-specific - combination of HCA Driver / HCA H/W

47 Memory Management Controls HCA access to user and system memory Provides memory access using virtually contiguous ranges Allows both drivers and user-level applications to initiate and control access to memory Local memory protection Prevents applications from disturbing each other's data Remote memory access control Controls access from other hosts and devices to local memory Memory management provides OS-bypass operation support.

48 Memory Mgmt Verb Categories Protection Domains ( PDs ) PDs provide a fundamental basis for access control by associating Queue Pairs, Regions, and Windows Memory Regions ( Regions ) Regions enable and control HCA access to host memory, both for local and remote operations Memory Windows ( Windows ) Windows enable enhanced control over remote access to host memory Fine-grain memory protection (byte-level) Refined remote key (R_Key) semantics

49 Memory Region Registration & Local Operation Access [Diagram: a Consumer Process registers a virtual address range of its Data Buffer through an Operating System Service, which passes (VA range, PAs, L_Key) to the HCA; the consumer receives the L_Key and later posts work requests as (VA, L_Key), which the HCA resolves to physical addresses in System Memory]
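The registration flow on this slide can be modeled as a toy HCA translation table: register a virtual range, get back an L_Key, and have later accesses validated as (VA, length, L_Key). All names here are illustrative, not the Verbs API:

```python
# Toy model of memory region registration and local access checking.
# Physical-address translation is omitted; only range/key validation
# is shown.
import itertools

class HcaMemoryTable:
    def __init__(self):
        self._keys = itertools.count(1)
        self._regions = {}              # l_key -> (start_va, length)

    def register_region(self, start_va: int, length: int) -> int:
        """Record a virtually contiguous range; hand back its L_Key."""
        l_key = next(self._keys)
        self._regions[l_key] = (start_va, length)
        return l_key

    def check_access(self, va: int, length: int, l_key: int) -> bool:
        """Validate a work request's (VA, length, L_Key) tuple."""
        if l_key not in self._regions:
            return False
        start, size = self._regions[l_key]
        return start <= va and va + length <= start + size

hca = HcaMemoryTable()
key = hca.register_region(0x10000, 4096)
assert hca.check_access(0x10100, 256, key)          # inside the region
assert not hca.check_access(0x10100, 256, key + 1)  # wrong L_Key
```

This is the mechanism that lets a user-level process post work directly while the HCA, not the OS, enforces protection on each access.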

50 Memory Region Registration & Remote Operation Access [Diagram: as in the local case, the Consumer Process registers a VA range and receives both an L_Key and an R_Key; a Remote Agent then issues RDMA operations as (VA, R_Key), which the HCA validates and resolves to physical addresses in System Memory]

51 Memory Window Allocation, Binding, & Remote Access [Diagram: the Consumer Process allocates a Window through the Operating System Service and receives an R_Key, binds the window as (VA, R_Key) to part of its Data Buffer, and a Remote Agent's RDMA op (VA, R_Key) is checked against the bound range before the HCA accesses System Memory]
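The window allocate/bind/access sequence can be sketched the same way. In the real architecture the bind is itself a work request posted to the send queue; this illustrative model shows only the resulting access check:

```python
# Toy model of a memory window: allocated with an R_Key, then bound to
# a byte range inside an already-registered region. Remote RDMA ops are
# checked against the bound range, giving byte-level control over what
# a remote agent may touch.

class MemoryWindow:
    def __init__(self, r_key: int):
        self.r_key = r_key
        self.range = None               # unbound until bind() is called

    def bind(self, start_va: int, length: int):
        """Bind (or re-bind) the window to a byte range of a region."""
        self.range = (start_va, start_va + length)

    def allows_rdma(self, va: int, length: int, r_key: int) -> bool:
        """Validate a remote (VA, length, R_Key) RDMA request."""
        return (self.range is not None and r_key == self.r_key
                and self.range[0] <= va and va + length <= self.range[1])

w = MemoryWindow(r_key=42)
w.bind(0x2000, 512)                       # expose only 512 bytes
assert w.allows_rdma(0x2000, 128, 42)
assert not w.allows_rdma(0x2200, 128, 42) # outside the window
```

Re-binding the window changes what the same R_Key exposes, which is the "refined R_Key semantics" the previous slide refers to.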

52 Software Model Benefits Verbs define s/w view of abstract InfiniBand* h/w semantics Used with HCAs to provide complete architecture application access Allows ISVs and OSVs to implement API over common set of h/w semantics Improved application portability ISVs / OSVs free to implement APIs to meet their needs Verbs may be implemented under legacy APIs Transparent legacy application support Two levels of memory protection Allow vendors to tailor solution to meet customer price / performance / protection requirements

53 Management Model & Benefits

54 Comprehensive Management Infrastructure Fabric Management Fabric Topology & Fabric Health Fabric Administration Configuration Query & Update Partition Management Communication policy enforcement Connection Management Connection / RD establishment primitives Device Management Device Enumeration, diagnostics Baseboard Mgmt In-line baseboard instrumentation Subnet Services Layer 2 to Layer 3 services SNMP Tunneling Native Support of SNMP

55 Defined Fabric Management Operations Notices are polled via FabricGet(Notice) operation Traps are asynchronously sent to well known target Consistent management operations across all endnodes [Diagram: Notice Polling Model - FabricGet() / FabricGetResp() between endnodes; Trap forwarding model - FabricTrap(Error) to the Fabric Manager, FabricReport() to subscribed endnodes, acknowledged with FabricReport(ACK)]

56 Management: RAS Features Diagnostics Fault isolation Trap and Event forwarding Access Control Partitions Management keys Dynamic configuration Hot-plug / removal Channel Migration Redundant fabric support Fabric fail-over support Transparent client fail-over Managed / Automatic

57 Communications are well known VL 15, QP 0 is reserved for fabric management QP 1 is the General Service Interface (GSI) GSI can forward packets to interested requesters Can also send redirect to requester to refer to appropriate QP [Diagram: on each node, a Fabric Mgmt Agent behind QP 0 and a Redirector behind QP 1 (GSI) sit above the verbs layer, reached by the Fabric Mgr, Perform Mgr, and Connect Mgr]

58 Management Benefits Management designed in - not an afterthought Supports current and future management technologies (e.g. SNMP, CIM) Reduced development costs Reduce total cost of ownership Complete infrastructure solution Status, control, hardware, software, chassis, etc.

59 InfiniBand TM Architecture Benefits Summary

60 Benefits - Fabric Level Hardware-implemented Transport services Link, Network, and transport layers implemented in HW for faster data movement with lower overhead. Supports recovery from permanent errors when redundant paths configured. Dual routing mechanisms (switches & routers) Fast, efficient local routing based on small local headers. Global routing uses standard RFC2460 IPv6 header Large address space Eliminates having to implement complex protocols to preserve addresses Can address trillions of I/O devices Robust, high-performance link layer. Provides link-level flow-control and QoS End-to-end dynamic rate control and congestion management in next release

61 Benefits - Application Level Addressing uses IPv6 Smooth integration into Internet world Hides address / fabric complexity from applications Wide range of transport services in hardware Reliable / Unreliable Connection, Reliable / Unreliable Datagram Multicast, Raw packet Wide range of Message Types Sends, RDMAs, Atomics Multiple memory management paradigms / protection Memory regions and Memory windows Region-level and byte-level Efficient work element post model Matches current application / driver base Encourages and allows innovation

62 Benefits - Solution Level I/O, IPC and raw protocol support are all part of fundamental architecture. Don't re-architect Device Driver Architecture. Only providing I/O bus replacement for data transport services Hides fabric complexity from endnodes End-node apps know other endnodes only by addresses - paths and fabric-level local addressing is hidden from drivers / adapters Central fabric management Error recovery handled in fabric management / channel adapters Reduced total cost of ownership Single, cohesive RAS architecture Architecture built for the long-haul Forward and backward compatibility

63 Agenda Architectural Review Architectural Key Topics High Speed Serial Links Intel's Industry Enabling Efforts Intel Labs

64 InfiniBand* Interconnect Architecture Overview Dennis Miller Sr. Signal Integrity Engineer Intel Corp. Server Architecture Lab Intel Labs

65 Agenda Interconnect Architectural Overview Interconnect Key Topics Boards Cable/Connectors Serial/De-serializer Component Interconnect Considerations Intel Labs

66 InfiniBand* Architecture Interconnect 1X, 4X, 12X Focus [Diagram: host with CPUs, Mem Cntlr, Sys Mem, and an HCA ASIC with SerDes; boards, cables, and connectors link through a Switch and Router to TCAs and targets] HCA = Host Channel Adapter TCA = Target Channel Adapter Src: InfiniBand* Trade Association Intel Labs

67 Interconnect Goals Competitive cost for every server market tier 1X, 4X, 12X backplane/cable connections Interconnect to support optical and copper Expandability to future 2.5 Gbit/sec+ serial technologies Interconnect electricals support integrated or external serial/de-serializers Intel Labs

68 InfiniBand* Interconnect Concept [Diagram: a server rack connected by InfiniBand links to a router, a storage subsystem with RAID and FC disks, an I/O chassis with power supply and IB-to-SCSI / Fibre Channel bridges, a disk array, and a remote client over 2.5 Gb optic links] Intel Labs

69 Board Focus [Diagram: differential Tx/Rx path across an InfiniBand* ASIC with integrated SerDes (250 MHz clock, 10-bit parallel bus), blocking capacitors, vias, connector, and cable; callouts for eye diagram, crosstalk, jitter, reflections due to impedance mismatch, trace loss and corners, EMI, equalization (resistor/capacitor network to keep loss frequency-independent), and cable gauge/length and bend radius] Intel Labs

70 Cable/Connector Focus 2.5 Gb/s common backplane connectors and 1X, 4X, 12X cable connector(s) High volume manufacturing Connector type and size Board design rules and assembly Co-axial cable-type considerations Quad, twin lead, spectra-strip Cable mechanical considerations Multiple lengths, bends, bundling Cable/connector environmental performance Intel Labs

71 SerDes Component Focus [Diagram: roadmap over time from 1.25 Gb/s external single-port SerDes, through 2.5 Gb/s external and integrated multi-port SerDes, to 2.5+ Gb/s external/internal multi-port parts, optic and copper; solid lines are committed products, dotted lines potential products] SerDes Considerations: Silicon technology Voltage Package type/size Port size External/integrated Intel Labs

72 Interconnect Considerations Interconnect Single-ended to differential Equivalent circuit to S-parameter characterization of discontinuities Simulation models and tools Behavioral to transistor model 2D to 2.5D and 3D tools Measurements Microwave engineering set of measurements Differential measurements Intel Gb/s Physical Link Design Rules IDF InfiniBand* Physical Link Workshop February from 4:30 to 6:30 pm, Mesquite A Intel Labs

73 Agenda Architectural Review Architectural Key Topics High Speed Serial Links Intel's Industry Enabling Efforts Intel Labs

74 Intel's Enabling Plans Arland Kunz Sr. Technical Marketing Engineer Fabric Component Division Intel Corporation Intel Labs

75 IHV H/W Enabling Strategy Focus on two objectives: Short Term - Enable for launch in 01 Storage Networking Long Term - Enable for cost and performance in 02/03 Storage Networking Vendors Can Make or Buy TCA Logic "All dates are provided for planning purposes only and are subject to change." Intel Labs

76 Today [Demo diagram: Dell* and Compaq* servers joined by 2.5 Gbit/sec InfiniBand* Architecture prototype links to a Crossroads* Storage Router and LSI Logic* Controller, driving FC-AL and Ultra 2 SCSI JBOD arrays] World's First 2.5Gbit/Sec I/O Solution! Intel Labs

77 From Prototypes to Products Specs Development Kits (Host & Switch SDK, SDV) Samples (Host & Switch) Solutions H1 00 H2 00 H1 01 H2 01 Kits & Samples are Limited - Contact your Intel Representative Delivered by the InfiniBand* Trade Association "All dates are provided for planning purposes only and are subject to change." Intel Labs

78 Call To Action Pick up serial link & demo whitepapers Attend the link lab Visit the demo pavilion Speak to your Intel representative about SDK, SDV & sample availability Get InfiniBand* products on your roadmap for 01 Intel Labs

79 Intel Labs


Informatix Solutions INFINIBAND OVERVIEW. - Informatix Solutions, Page 1 Version 1.0 INFINIBAND OVERVIEW -, 2010 Page 1 Version 1.0 Why InfiniBand? Open and comprehensive standard with broad vendor support Standard defined by the InfiniBand Trade Association (Sun was a founder member,

More information

PEX 8636, PCI Express Gen 2 Switch, 36 Lanes, 24 Ports

PEX 8636, PCI Express Gen 2 Switch, 36 Lanes, 24 Ports Highlights PEX 8636 General Features o 36-lane, 24-port PCIe Gen2 switch - Integrated 5.0 GT/s SerDes o 35 x 35mm 2, 1156-ball FCBGA package o Typical Power: 8.8 Watts PEX 8636 Key Features o Standards

More information

NTRDMA v0.1. An Open Source Driver for PCIe NTB and DMA. Allen Hubbe at Linux Piter 2015 NTRDMA. Messaging App. IB Verbs. dmaengine.h ntb.

NTRDMA v0.1. An Open Source Driver for PCIe NTB and DMA. Allen Hubbe at Linux Piter 2015 NTRDMA. Messaging App. IB Verbs. dmaengine.h ntb. Messaging App IB Verbs NTRDMA dmaengine.h ntb.h DMA DMA DMA NTRDMA v0.1 An Open Source Driver for PCIe and DMA Allen Hubbe at Linux Piter 2015 1 INTRODUCTION Allen Hubbe Senior Software Engineer EMC Corporation

More information

Accelerating Real-Time Big Data. Breaking the limitations of captive NVMe storage

Accelerating Real-Time Big Data. Breaking the limitations of captive NVMe storage Accelerating Real-Time Big Data Breaking the limitations of captive NVMe storage 18M IOPs in 2u Agenda Everything related to storage is changing! The 3rd Platform NVM Express architected for solid state

More information

NFS/RDMA over 40Gbps iwarp Wael Noureddine Chelsio Communications

NFS/RDMA over 40Gbps iwarp Wael Noureddine Chelsio Communications NFS/RDMA over 40Gbps iwarp Wael Noureddine Chelsio Communications Outline RDMA Motivating trends iwarp NFS over RDMA Overview Chelsio T5 support Performance results 2 Adoption Rate of 40GbE Source: Crehan

More information

Low latency, high bandwidth communication. Infiniband and RDMA programming. Bandwidth vs latency. Knut Omang Ifi/Oracle 2 Nov, 2015

Low latency, high bandwidth communication. Infiniband and RDMA programming. Bandwidth vs latency. Knut Omang Ifi/Oracle 2 Nov, 2015 Low latency, high bandwidth communication. Infiniband and RDMA programming Knut Omang Ifi/Oracle 2 Nov, 2015 1 Bandwidth vs latency There is an old network saying: Bandwidth problems can be cured with

More information

Advancing RDMA. A proposal for RDMA on Enhanced Ethernet. Paul Grun SystemFabricWorks

Advancing RDMA. A proposal for RDMA on Enhanced Ethernet.  Paul Grun SystemFabricWorks Advancing RDMA A proposal for RDMA on Enhanced Ethernet Paul Grun SystemFabricWorks pgrun@systemfabricworks.com Objective: Accelerate the adoption of RDMA technology Why bother? I mean, who cares about

More information

Next Generation Computing Architectures for Cloud Scale Applications

Next Generation Computing Architectures for Cloud Scale Applications Next Generation Computing Architectures for Cloud Scale Applications Steve McQuerry, CCIE #6108, Manager Technical Marketing #clmel Agenda Introduction Cloud Scale Architectures System Link Technology

More information

The desire for higher interconnect speeds between

The desire for higher interconnect speeds between Evaluating high speed industry standard serial interconnects By Harpinder S. Matharu The desire for higher interconnect speeds between chips, boards, and chassis continues to grow in order to satisfy the

More information

2017 Storage Developer Conference. Mellanox Technologies. All Rights Reserved.

2017 Storage Developer Conference. Mellanox Technologies. All Rights Reserved. Ethernet Storage Fabrics Using RDMA with Fast NVMe-oF Storage to Reduce Latency and Improve Efficiency Kevin Deierling & Idan Burstein Mellanox Technologies 1 Storage Media Technology Storage Media Access

More information

Module 2 Storage Network Architecture

Module 2 Storage Network Architecture Module 2 Storage Network Architecture 1. SCSI 2. FC Protocol Stack 3. SAN:FC SAN 4. IP Storage 5. Infiniband and Virtual Interfaces FIBRE CHANNEL SAN 1. First consider the three FC topologies pointto-point,

More information

Server System Infrastructure (SM) (SSI) Blade Specification Technical Overview

Server System Infrastructure (SM) (SSI) Blade Specification Technical Overview Server System Infrastructure (SM) (SSI) Blade Specification Technical Overview May 2010 1 About SSI Established in 1998, the Server System Infrastructure (SM) (SSI) Forum is a leading server industry group

More information

Introduction Electrical Considerations Data Transfer Synchronization Bus Arbitration VME Bus Local Buses PCI Bus PCI Bus Variants Serial Buses

Introduction Electrical Considerations Data Transfer Synchronization Bus Arbitration VME Bus Local Buses PCI Bus PCI Bus Variants Serial Buses Introduction Electrical Considerations Data Transfer Synchronization Bus Arbitration VME Bus Local Buses PCI Bus PCI Bus Variants Serial Buses 1 Most of the integrated I/O subsystems are connected to the

More information

PEX 8696, PCI Express Gen 2 Switch, 96 Lanes, 24 Ports

PEX 8696, PCI Express Gen 2 Switch, 96 Lanes, 24 Ports , PCI Express Gen 2 Switch, 96 Lanes, 24 Ports Highlights General Features o 96-lane, 24-port PCIe Gen2 switch - Integrated 5.0 GT/s SerDes o 35 x 35mm 2, 1156-ball FCBGA package o Typical Power: 10.2

More information

Storage Area Networks SAN. Shane Healy

Storage Area Networks SAN. Shane Healy Storage Area Networks SAN Shane Healy Objective/Agenda Provide a basic overview of what Storage Area Networks (SAN) are, what the constituent components are, and how these components fit together to deliver

More information

Performance Analysis and Evaluation of Mellanox ConnectX InfiniBand Architecture with Multi-Core Platforms

Performance Analysis and Evaluation of Mellanox ConnectX InfiniBand Architecture with Multi-Core Platforms Performance Analysis and Evaluation of Mellanox ConnectX InfiniBand Architecture with Multi-Core Platforms Sayantan Sur, Matt Koop, Lei Chai Dhabaleswar K. Panda Network Based Computing Lab, The Ohio State

More information

Fibre Channel Gateway Overview

Fibre Channel Gateway Overview CHAPTER 5 This chapter describes the Fibre Channel gateways and includes the following sections: About the Fibre Channel Gateway, page 5-1 Terms and Concepts, page 5-2 Cisco SFS 3500 Fibre Channel Gateway

More information

Introduction to the Catalyst 3920

Introduction to the Catalyst 3920 CHAPTER 1 Introduction to the Catalyst 3920 This chapter contains the following information about the Catalyst 3920: Product Overview Physical Characteristics of the Catalyst 3920 System Architecture Product

More information

OpenFabrics Interface WG A brief introduction. Paul Grun co chair OFI WG Cray, Inc.

OpenFabrics Interface WG A brief introduction. Paul Grun co chair OFI WG Cray, Inc. OpenFabrics Interface WG A brief introduction Paul Grun co chair OFI WG Cray, Inc. OFI WG a brief overview and status report 1. Keep everybody on the same page, and 2. An example of a possible model for

More information

MOVING FORWARD WITH FABRIC INTERFACES

MOVING FORWARD WITH FABRIC INTERFACES 14th ANNUAL WORKSHOP 2018 MOVING FORWARD WITH FABRIC INTERFACES Sean Hefty, OFIWG co-chair Intel Corporation April, 2018 USING THE PAST TO PREDICT THE FUTURE OFI Provider Infrastructure OFI API Exploration

More information

The Exascale Architecture

The Exascale Architecture The Exascale Architecture Richard Graham HPC Advisory Council China 2013 Overview Programming-model challenges for Exascale Challenges for scaling MPI to Exascale InfiniBand enhancements Dynamically Connected

More information

IBM Europe Announcement ZG , dated February 13, 2007

IBM Europe Announcement ZG , dated February 13, 2007 IBM Europe Announcement ZG07-0221, dated February 13, 2007 Cisco MDS 9200 for IBM System Storage switches, models 9216i and 9216A, offer enhanced performance, scalability, multiprotocol capabilities, and

More information

Infiniband Fast Interconnect

Infiniband Fast Interconnect Infiniband Fast Interconnect Yuan Liu Institute of Information and Mathematical Sciences Massey University May 2009 Abstract Infiniband is the new generation fast interconnect provides bandwidths both

More information

Leveraging the PCI Support in Windows 2000 for StarFabric-based Systems

Leveraging the PCI Support in Windows 2000 for StarFabric-based Systems Leveraging the PCI Support in Windows 2000 for StarFabric-based Systems Mark Overgaard President, Pigeon Point Systems mark@pigeonpoint.com, 831-438-1565 Agenda Background StarFabric Bus Driver for Windows

More information

Managing Large Data Centers

Managing Large Data Centers Managing Large Data Centers July 9, 2003 Arland Kunz Agenda Enterprise landscape What What do we have today How can you manage this today Management trends What What to expect in the future nterprise Ecosystem

More information

The Promise of Unified I/O Fabrics

The Promise of Unified I/O Fabrics The Promise of Unified I/O Fabrics Two trends are challenging the conventional practice of using multiple, specialized I/O fabrics in the data center: server form factors are shrinking and enterprise applications

More information

I/O Considerations for Server Blades, Backplanes, and the Datacenter

I/O Considerations for Server Blades, Backplanes, and the Datacenter I/O Considerations for Server Blades, Backplanes, and the Datacenter 1 1 Contents Abstract 3 Enterprise Modular Computing 3 The Vision 3 The Path to Achieving the Vision 4 Bladed Servers 7 Managing Datacenter

More information

Voltaire. Fast I/O for XEN using RDMA Technologies. The Grid Interconnect Company. April 2005 Yaron Haviv, Voltaire, CTO

Voltaire. Fast I/O for XEN using RDMA Technologies. The Grid Interconnect Company. April 2005 Yaron Haviv, Voltaire, CTO Voltaire The Grid Interconnect Company Fast I/O for XEN using RDMA Technologies April 2005 Yaron Haviv, Voltaire, CTO yaronh@voltaire.com The Enterprise Grid Model and ization VMs need to interact efficiently

More information

Modular Platforms Market Trends & Platform Requirements Presentation for IEEE Backplane Ethernet Study Group Meeting. Gopal Hegde, Intel Corporation

Modular Platforms Market Trends & Platform Requirements Presentation for IEEE Backplane Ethernet Study Group Meeting. Gopal Hegde, Intel Corporation Modular Platforms Market Trends & Platform Requirements Presentation for IEEE Backplane Ethernet Study Group Meeting Gopal Hegde, Intel Corporation Outline Market Trends Business Case Blade Server Architectures

More information

access addresses/addressing advantages agents allocation analysis

access addresses/addressing advantages agents allocation analysis INDEX A access control of multipath port fanout, LUN issues, 122 of SAN devices, 154 virtualization server reliance on, 173 DAS characteristics (table), 19 conversion to SAN fabric storage access, 105

More information

Containing RDMA and High Performance Computing

Containing RDMA and High Performance Computing Containing RDMA and High Performance Computing Liran Liss ContainerCon 2015 Agenda High Performance Computing (HPC) networking RDMA 101 Containing RDMA Challenges Solution approach RDMA network namespace

More information

Comparing Server I/O Consolidation Solutions: iscsi, InfiniBand and FCoE. Gilles Chekroun Errol Roberts

Comparing Server I/O Consolidation Solutions: iscsi, InfiniBand and FCoE. Gilles Chekroun Errol Roberts Comparing Server I/O Consolidation Solutions: iscsi, InfiniBand and FCoE Gilles Chekroun Errol Roberts SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies

More information

Extending InfiniBand Globally

Extending InfiniBand Globally Extending InfiniBand Globally Eric Dube (eric@baymicrosystems.com) com) Senior Product Manager of Systems November 2010 Bay Microsystems Overview About Bay Founded in 2000 to provide high performance networking

More information

Networking for Data Acquisition Systems. Fabrice Le Goff - 14/02/ ISOTDAQ

Networking for Data Acquisition Systems. Fabrice Le Goff - 14/02/ ISOTDAQ Networking for Data Acquisition Systems Fabrice Le Goff - 14/02/2018 - ISOTDAQ Outline Generalities The OSI Model Ethernet and Local Area Networks IP and Routing TCP, UDP and Transport Efficiency Networking

More information

Microsoft Office SharePoint Server 2007

Microsoft Office SharePoint Server 2007 Microsoft Office SharePoint Server 2007 Enabled by EMC Celerra Unified Storage and Microsoft Hyper-V Reference Architecture Copyright 2010 EMC Corporation. All rights reserved. Published May, 2010 EMC

More information

Sun Fire V880 System Architecture. Sun Microsystems Product & Technology Group SE

Sun Fire V880 System Architecture. Sun Microsystems Product & Technology Group SE Sun Fire V880 System Architecture Sun Microsystems Product & Technology Group SE jjson@sun.com Sun Fire V880 Enterprise RAS Below PC Pricing NEW " Enterprise Class Application and Database Server " Scalable

More information

InfiniBand* Software Architecture Access Layer High Level Design June 2002

InfiniBand* Software Architecture Access Layer High Level Design June 2002 InfiniBand* Software Architecture June 2002 *Other names and brands may be claimed as the property of others. THIS SPECIFICATION IS PROVIDED "AS IS" WITH NO WARRANTIES WHATSOEVER, INCLUDING ANY WARRANTY

More information

SUN CUSTOMER READY HPC CLUSTER: REFERENCE CONFIGURATIONS WITH SUN FIRE X4100, X4200, AND X4600 SERVERS Jeff Lu, Systems Group Sun BluePrints OnLine

SUN CUSTOMER READY HPC CLUSTER: REFERENCE CONFIGURATIONS WITH SUN FIRE X4100, X4200, AND X4600 SERVERS Jeff Lu, Systems Group Sun BluePrints OnLine SUN CUSTOMER READY HPC CLUSTER: REFERENCE CONFIGURATIONS WITH SUN FIRE X4100, X4200, AND X4600 SERVERS Jeff Lu, Systems Group Sun BluePrints OnLine April 2007 Part No 820-1270-11 Revision 1.1, 4/18/07

More information

All Roads Lead to Convergence

All Roads Lead to Convergence All Roads Lead to Convergence Greg Scherer VP, Server and Storage Strategy gscherer@broadcom.com Broadcom Corporation 2 Agenda The Trend Toward Convergence over Ethernet Reasons for Storage and Networking

More information

Performance monitoring in InfiniBand networks

Performance monitoring in InfiniBand networks Performance monitoring in InfiniBand networks Sjur T. Fredriksen Department of Informatics University of Oslo sjurtf@ifi.uio.no May 2016 Abstract InfiniBand has quickly emerged to be the most popular interconnect

More information

Pass-Through Technology

Pass-Through Technology CHAPTER 3 This chapter provides best design practices for deploying blade servers using pass-through technology within the Cisco Data Center Networking Architecture, describes blade server architecture,

More information

NC-SI 1.2 Topics- Work-In- Progress. Version 0.10 September 13, 2017

NC-SI 1.2 Topics- Work-In- Progress. Version 0.10 September 13, 2017 NC-SI 1.2 Topics- Work-In- Progress Version 0.10 September 13, 2017 Disclaimer The information in this presentation represents a snapshot of work in progress within the DMTF. This information is subject

More information

V.I.B.E. Virtual. Integrated. Blade. Environment. Harveenpal Singh. System-x PLM

V.I.B.E. Virtual. Integrated. Blade. Environment. Harveenpal Singh. System-x PLM V.I.B.E. Virtual. Integrated. Blade. Environment. Harveenpal Singh System-x PLM x86 servers are taking on more demanding roles, including high-end business critical applications x86 server segment is the

More information

A Dell Technical White Paper Dell Virtualization Solutions Engineering

A Dell Technical White Paper Dell Virtualization Solutions Engineering Dell vstart 0v and vstart 0v Solution Overview A Dell Technical White Paper Dell Virtualization Solutions Engineering vstart 0v and vstart 0v Solution Overview THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES

More information

New Interconnnects. Moderator: Andy Rudoff, SNIA NVM Programming Technical Work Group and Persistent Memory SW Architect, Intel

New Interconnnects. Moderator: Andy Rudoff, SNIA NVM Programming Technical Work Group and Persistent Memory SW Architect, Intel New Interconnnects Moderator: Andy Rudoff, SNIA NVM Programming Technical Work Group and Persistent Memory SW Architect, Intel CCIX: Seamless Data Movement for Accelerated Applications TM Millind Mittal

More information

1 Copyright 2011, Oracle and/or its affiliates. All rights reserved.

1 Copyright 2011, Oracle and/or its affiliates. All rights reserved. 1 Copyright 2011, Oracle and/or its affiliates. All rights ORACLE PRODUCT LOGO Solaris 11 Networking Overview Sebastien Roy, Senior Principal Engineer Solaris Core OS, Oracle 2 Copyright 2011, Oracle and/or

More information

Technical Computing Suite supporting the hybrid system

Technical Computing Suite supporting the hybrid system Technical Computing Suite supporting the hybrid system Supercomputer PRIMEHPC FX10 PRIMERGY x86 cluster Hybrid System Configuration Supercomputer PRIMEHPC FX10 PRIMERGY x86 cluster 6D mesh/torus Interconnect

More information

PCI Express: Evolution, Deployment and Challenges

PCI Express: Evolution, Deployment and Challenges PCI Express: Evolution, Deployment and Challenges Nick Ma 马明辉 Field Applications Engineer, PLX Freescale Technology Forum, Beijing Track: Enabling Technologies Freescale Technology Forum, Beijing - November

More information

Fabric Interfaces Architecture. Sean Hefty - Intel Corporation

Fabric Interfaces Architecture. Sean Hefty - Intel Corporation Fabric Interfaces Architecture Sean Hefty - Intel Corporation Changes v2 Remove interface object Add open interface as base object Add SRQ object Add EQ group object www.openfabrics.org 2 Overview Object

More information

Building a Low-End to Mid-Range Router with PCI Express Switches

Building a Low-End to Mid-Range Router with PCI Express Switches Building a Low-End to Mid-Range Router with PCI Express es Introduction By Kwok Kong PCI buses have been commonly used in low end routers to connect s and network adapter cards (or line cards) The performs

More information

TABLE I IBA LINKS [2]

TABLE I IBA LINKS [2] InfiniBand Survey Jeremy Langston School of Electrical and Computer Engineering Tennessee Technological University Cookeville, Tennessee 38505 Email: jwlangston21@tntech.edu Abstract InfiniBand is a high-speed

More information

DRAM and Storage-Class Memory (SCM) Overview

DRAM and Storage-Class Memory (SCM) Overview Page 1 of 7 DRAM and Storage-Class Memory (SCM) Overview Introduction/Motivation Looking forward, volatile and non-volatile memory will play a much greater role in future infrastructure solutions. Figure

More information

<Insert Picture Here> Exadata Hardware Configurations and Environmental Information

<Insert Picture Here> Exadata Hardware Configurations and Environmental Information Exadata Hardware Configurations and Environmental Information Revised July 1, 2011 Agenda Exadata Hardware Overview Environmental Information Power InfiniBand Network Ethernet Network

More information

Memory Management Strategies for Data Serving with RDMA

Memory Management Strategies for Data Serving with RDMA Memory Management Strategies for Data Serving with RDMA Dennis Dalessandro and Pete Wyckoff (presenting) Ohio Supercomputer Center {dennis,pw}@osc.edu HotI'07 23 August 2007 Motivation Increasing demands

More information

Improving Blade Economics with Virtualization

Improving Blade Economics with Virtualization Improving Blade Economics with Virtualization John Kennedy Senior Systems Engineer VMware, Inc. jkennedy@vmware.com The agenda Description of Virtualization VMware Products Benefits of virtualization Overview

More information

PCI EXPRESS TECHNOLOGY. Jim Brewer, Dell Business and Technology Development Joe Sekel, Dell Server Architecture and Technology

PCI EXPRESS TECHNOLOGY. Jim Brewer, Dell Business and Technology Development Joe Sekel, Dell Server Architecture and Technology WHITE PAPER February 2004 PCI EXPRESS TECHNOLOGY Jim Brewer, Dell Business and Technology Development Joe Sekel, Dell Server Architecture and Technology Formerly known as 3GIO, PCI Express is the open

More information

STORAGE NETWORKING TECHNOLOGY STEPS UP TO PERFORMANCE CHALLENGES

STORAGE NETWORKING TECHNOLOGY STEPS UP TO PERFORMANCE CHALLENGES E-Guide STORAGE NETWORKING TECHNOLOGY STEPS UP TO PERFORMANCE CHALLENGES SearchStorage S torage network technology is changing and speed is the name of the game. To handle the burgeoning data growth, organizations

More information

STORAGE CONSOLIDATION WITH IP STORAGE. David Dale, NetApp

STORAGE CONSOLIDATION WITH IP STORAGE. David Dale, NetApp STORAGE CONSOLIDATION WITH IP STORAGE David Dale, NetApp SNIA Legal Notice The material contained in this tutorial is copyrighted by the SNIA. Member companies and individuals may use this material in

More information

QuickSpecs. Overview. HPE Ethernet 10Gb 2-port 535 Adapter. HPE Ethernet 10Gb 2-port 535 Adapter. 1. Product description. 2.

QuickSpecs. Overview. HPE Ethernet 10Gb 2-port 535 Adapter. HPE Ethernet 10Gb 2-port 535 Adapter. 1. Product description. 2. Overview 1. Product description 2. Product features 1. Product description HPE Ethernet 10Gb 2-port 535FLR-T adapter 1 HPE Ethernet 10Gb 2-port 535T adapter The HPE Ethernet 10GBase-T 2-port 535 adapters

More information

RapidIO.org Update. Mar RapidIO.org 1

RapidIO.org Update. Mar RapidIO.org 1 RapidIO.org Update rickoco@rapidio.org Mar 2015 2015 RapidIO.org 1 Outline RapidIO Overview & Markets Data Center & HPC Communications Infrastructure Industrial Automation Military & Aerospace RapidIO.org

More information

Intel Enterprise Processors Technology

Intel Enterprise Processors Technology Enterprise Processors Technology Kosuke Hirano Enterprise Platforms Group March 20, 2002 1 Agenda Architecture in Enterprise Xeon Processor MP Next Generation Itanium Processor Interconnect Technology

More information

Application Acceleration Beyond Flash Storage

Application Acceleration Beyond Flash Storage Application Acceleration Beyond Flash Storage Session 303C Mellanox Technologies Flash Memory Summit July 2014 Accelerating Applications, Step-by-Step First Steps Make compute fast Moore s Law Make storage

More information

Cisco Quantum Policy Suite for Mobile

Cisco Quantum Policy Suite for Mobile Data Sheet Cisco Quantum Policy Suite for Mobile The Cisco Quantum Policy Suite for Mobile is a proven carrier-grade policy, charging, and subscriber data management solution that enables service providers

More information

The Virtual Machine Aware SAN

The Virtual Machine Aware SAN The Virtual Machine Aware SAN What You Will Learn Virtualization of the data center, which includes servers, storage, and networks, has addressed some of the challenges related to consolidation, space

More information

Chapter 8. Network Troubleshooting. Part II

Chapter 8. Network Troubleshooting. Part II Chapter 8 Network Troubleshooting Part II CCNA4-1 Chapter 8-2 Network Troubleshooting Review of WAN Communications CCNA4-2 Chapter 8-2 WAN Communications Function at the lower three layers of the OSI model.

More information

The Tofu Interconnect 2

The Tofu Interconnect 2 The Tofu Interconnect 2 Yuichiro Ajima, Tomohiro Inoue, Shinya Hiramoto, Shun Ando, Masahiro Maeda, Takahide Yoshikawa, Koji Hosoe, and Toshiyuki Shimizu Fujitsu Limited Introduction Tofu interconnect

More information

Future Routing Schemes in Petascale clusters

Future Routing Schemes in Petascale clusters Future Routing Schemes in Petascale clusters Gilad Shainer, Mellanox, USA Ola Torudbakken, Sun Microsystems, Norway Richard Graham, Oak Ridge National Laboratory, USA Birds of a Feather Presentation Abstract

More information

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini

Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini White Paper Design a Remote-Office or Branch-Office Data Center with Cisco UCS Mini February 2015 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public. Page 1 of 9 Contents

More information

Creating an agile infrastructure with Virtualized I/O

Creating an agile infrastructure with Virtualized I/O etrading & Market Data Agile infrastructure Telecoms Data Center Grid Creating an agile infrastructure with Virtualized I/O Richard Croucher May 2009 Smart Infrastructure Solutions London New York Singapore

More information

OPTIMIZING MOBILITY MANAGEMENT IN FUTURE IPv6 MOBILE NETWORKS

OPTIMIZING MOBILITY MANAGEMENT IN FUTURE IPv6 MOBILE NETWORKS OPTIMIZING MOBILITY MANAGEMENT IN FUTURE IPv6 MOBILE NETWORKS Sandro Grech Nokia Networks (Networks Systems Research) Supervisor: Prof. Raimo Kantola 1 SANDRO GRECH - OPTIMIZING MOBILITY MANAGEMENT IN

More information

SAS Standards and Technology Update Harry Mason LSI Corp. Marty Czekalski Seagate

SAS Standards and Technology Update Harry Mason LSI Corp. Marty Czekalski Seagate SAS Standards and Technology Update Harry Mason LSI Corp. Marty Czekalski Seagate SAS Update SAS Overview SAS Performance Roadmap and 12Gb/sec SAS staging MultiLink SAS TM and Advanced Connectivity Connectivity

More information

Sun Dual Port 10GbE SFP+ PCIe 2.0 Networking Cards with Intel GbE Controller

Sun Dual Port 10GbE SFP+ PCIe 2.0 Networking Cards with Intel GbE Controller Sun Dual Port 10GbE SFP+ PCIe 2.0 Networking Cards with Intel 82599 10GbE Controller Oracle's Sun Dual Port 10 GbE PCIe 2.0 Networking Cards with SFP+ pluggable transceivers, which incorporate the Intel

More information

CERN openlab Summer 2006: Networking Overview

CERN openlab Summer 2006: Networking Overview CERN openlab Summer 2006: Networking Overview Martin Swany, Ph.D. Assistant Professor, Computer and Information Sciences, U. Delaware, USA Visiting Helsinki Institute of Physics (HIP) at CERN swany@cis.udel.edu,

More information

GEN-Z AN OVERVIEW AND USE CASES

GEN-Z AN OVERVIEW AND USE CASES 13 th ANNUAL WORKSHOP 2017 GEN-Z AN OVERVIEW AND USE CASES Greg Casey, Senior Architect and Strategist Server CTO Team DellEMC March, 2017 WHY PROPOSE A NEW BUS? System memory is flat or shrinking Memory

More information

MULTICAST USE IN THE FINANCIAL INDUSTRY

MULTICAST USE IN THE FINANCIAL INDUSTRY 12 th ANNUAL WORKSHOP 2016 MULTICAST USE IN THE FINANCIAL INDUSTRY Christoph Lameter GenTwo [ April, 5 th, 2016 ] OVERVIEW Multicast and the FSI (Financial Services Industry) Short refresher on Multicast

More information

Toward a unified architecture for LAN/WAN/WLAN/SAN switches and routers

Toward a unified architecture for LAN/WAN/WLAN/SAN switches and routers Toward a unified architecture for LAN/WAN/WLAN/SAN switches and routers Silvano Gai 1 The sellable HPSR Seamless LAN/WLAN/SAN/WAN Network as a platform System-wide network intelligence as platform for

More information

Sugon TC6600 blade server

Sugon TC6600 blade server Sugon TC6600 blade server The converged-architecture blade server The TC6600 is a new generation, multi-node and high density blade server with shared power, cooling, networking and management infrastructure

More information