Server System Infrastructure (SM) (SSI) Blade Specification Technical Overview


May 2010

About SSI
Established in 1998, the Server System Infrastructure (SM) (SSI) Forum is a leading server industry group that drives server infrastructure standards. For ten years SSI created standards for redundant server power systems, rack-mount server chassis, power control and management, and other components and services that simplify the build of server solutions. In recent years SSI has extended its standardization to add blade-based server standards that address customer and ecosystem challenges. SSI's goal is to enable future server market growth by standardizing interfaces between components, including boards, chassis, and power supplies, and by developing common server hardware elements.

The SSI Ecosystem
Since 1999, SSI (SM) has delivered over 45 industry specifications, enabling more than 125 companies to deliver standards-based systems. The SSI organization was extended in preparation for the next wave of server growth into HPC, blades, data centers, virtualized environments, and cloud computing. This creates a massive opportunity for enterprise compute platform development, with potential industry savings of $1B or more for the server market over the next decade. Industry is responding to SSI's market opportunity, with more than 40 members engaged and more joining weekly.

Modular Server Specification Philosophy of SSI (SM)
- Focus on the needs of midsize market segments and the channel ecosystem
- Optimize the bladed platform for existing data center power/cooling infrastructures
- Strong focus on cost structure and rack motherboard design re-use
- Headroom for at least 3 generations of processor and fabric technology
- Deliver improved and simplified management & diagnostics
- Allow member innovation and differentiation

Server Specification Philosophy of SSI (SM)
Strong focus on cost structure and rack motherboard design re-use.
Compute Module Specification: leverage standard 1U designs (thermal solutions and standard DIMMs). Module dimensions: 407.90mm x 279.40mm x 41.70mm.
[Chart: blade vs 1U cost/price crossover — average cost per server ($0 to $3,000) versus number of servers (1 to 10), comparing a standard 1U rack server against a blade system. Source: Intel EPSD Finance.]
Volume economics = lower costs.
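The crossover chart above can be sketched numerically: a blade chassis adds a fixed up-front cost that amortizes as blades are added, while 1U servers cost the same per unit. The dollar figures below are hypothetical placeholders chosen for illustration, not values from the slide:

```python
# Illustrative blade vs. 1U cost crossover model.
# All dollar figures are hypothetical, not SSI or Intel data.

CHASSIS_COST = 4000      # one-time blade chassis (midplane, fans, CMM)
BLADE_COST = 1800        # per compute blade
RACK_1U_COST = 2500      # per standalone 1U server

def avg_cost_per_server(n: int) -> tuple[float, float]:
    """Average cost per server for n servers: (1U, blade)."""
    cost_1u = RACK_1U_COST
    cost_blade = (CHASSIS_COST + n * BLADE_COST) / n
    return cost_1u, cost_blade

# Find the crossover point where blades become cheaper per server.
crossover = next(n for n in range(1, 11)
                 if avg_cost_per_server(n)[1] < avg_cost_per_server(n)[0])
print(crossover)  # → 6: the chassis cost amortizes below the 1U price here
```

With these placeholder numbers the blade system wins from the sixth server onward, which is the same qualitative shape the slide's chart shows.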

SSI (SM) Compute Module Specification
Leverage standard 1U designs: thermal solutions & standard DIMMs.
Maximum power: 450 Watts.
Dimensions — H: 279.4mm, D: 407.9mm, P: 41.7mm.

Compute Module Connectors
- Hi-Speed Mezzanine Signal Connector
- Power Connector
- CPU Blade Signal Connector
- Optional CPU Blade Signal Connector
- Guide Module Connector

Mezzanine Card Overview
Mezzanine board highlights:
- Width: 155mm
- Total PCB area: 20.4 square inches
- Power: 25 Watts, 3W standby power
- 2 x8 PCI-e links from the CPU board
- 2 x4 high-speed links to the Midplane
- 2 x1 high-speed links to the Midplane
- IPMI (SMB) based management interface to the CPU blade
Connectors: blade-to-mezzanine board connector; mezzanine board-to-midplane connector.

SSI (SM) Chassis Manager Architecture Overview
[Diagram: the Chassis Manager bridges a public network of external management clients (over the primary Ethernet interface) and a private network of management controllers on the compute blades, switch blades, cooling modules, and power modules, reached via the BMI and internal Ethernet interfaces.]
Inside the box — industry standard protocols:
- IPMI 2.0 for power-up, discovery, configuration, and HW monitoring
- Ethernet for rich functions like SOL and KVM redirection
Outside the box — industry standard protocols:
- Management dashboard (SNMP 2.0 MIBs), WS-MAN/CIM, SMASH profiles, SM/CLP
- OEM extensions: CEL, file storage, OEM features
Chassis Manager functional blocks: power management, cooling management, IPMI module management, file access, extension environment, BMI interface to all managed modules, and Ethernet interface to switches and blades.
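As a rough illustration of the management hierarchy described above — a chassis manager discovering and polling the modules on its private network, within the capacity limits given on the next slide — here is a toy model. The class and method names are invented for illustration and are not part of the SSI specification:

```python
# Toy model of a chassis manager and its managed modules.
# Names and structure are illustrative only, not from the SSI spec.

from dataclasses import dataclass, field

@dataclass
class ManagedModule:
    slot: int
    kind: str                      # "blade", "switch", or "io"
    sensors: dict = field(default_factory=dict)

class ChassisManager:
    """Discovers modules via IPMI-style polling. Per the CMM overview,
    one manager serves up to 10 blades, 4 switches, and 2 I/O modules."""
    LIMITS = {"blade": 10, "switch": 4, "io": 2}

    def __init__(self):
        self.modules: list[ManagedModule] = []

    def discover(self, slot: int, kind: str) -> ManagedModule:
        # Enforce the chassis capacity for each module type.
        count = sum(1 for m in self.modules if m.kind == kind)
        if count >= self.LIMITS[kind]:
            raise ValueError(f"chassis already holds {count} {kind} modules")
        mod = ManagedModule(slot, kind)
        self.modules.append(mod)
        return mod

    def poll_sensors(self) -> dict:
        # Stand-in for IPMI sensor-reading sweeps over the BMI bus.
        return {m.slot: m.sensors for m in self.modules}

cm = ChassisManager()
for slot in range(1, 11):
    cm.discover(slot, "blade")
print(len(cm.poll_sensors()))  # → 10
```

In a real CMM the discovery and sensor sweeps would run over IPMI 2.0 and the 10/100 management Ethernet rather than in-process calls; the sketch only captures the inventory-and-poll shape of the design.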

CMM Overview
- Total PCB area: ~54 square inches
- Power: 50W max
- Management provision for 10 blades, 4 switches, and 2 custom I/O modules
- IPMI (SMB) based management interface to all modules, with 10/100 Ethernet to blades, switches, and I/O
- Redundant CMM failover comprehended
- 2 x 120-pin Airmax* connectors for signal and power connections to the Midplane
- Guide pin for CMM module alignment with the Midplane

Backplane Interconnect in Blades
Provide flexibility, scalability, and headroom for the future through use of standard interfaces.
Flexible fabric solution:
- Primary Ethernet: dual 4x differential link pairs; supports IEEE 802.3ap data rates (max. 10Gbps); payload bandwidth max 80Gbps
- Optional Flexi Channels through mezzanine interconnect: dual 4x differential link pairs; Ethernet, PCIe*, or InfiniBand*
- Reserved and expansion interconnects for OEM differentiation
[Diagram: on the CPU blade, the primary (Ethernet) fabric is carried as 4x XAUI or 10GbE and 4x XAUI or SFP links plus 1x Ethernet links, with reserved 1x lanes; on the mezzanine card, the optional secondary and OEM fabrics are carried as 8x and 1x links running PCIe Gen2, Ethernet, or InfiniBand.]
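The 80Gbps payload figure follows directly from the link arithmetic, assuming the 10Gbps 802.3ap maximum applies per lane (as with 10GBASE-KR signaling):

```python
# Payload bandwidth check for the primary Ethernet fabric:
# dual 4x differential link pairs, each lane at the 802.3ap
# maximum data rate of 10 Gbps.

LINKS = 2           # dual link pairs
LANES_PER_LINK = 4  # 4x differential links
GBPS_PER_LANE = 10  # IEEE 802.3ap maximum (KR-style per-lane rate)

payload_gbps = LINKS * LANES_PER_LINK * GBPS_PER_LANE
print(payload_gbps)  # → 80
```

This matches the slide's stated maximum payload bandwidth of 80Gbps.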

1x Switch Module Overview — Lower Speeds (x1)
- VHDM connector: 14 x1 internal ports; dual I2C busses (to each Management Module)
- 45 Watts; side-to-side cooling
- 2 power domains (management, full-on)
- FC, 1GbE, IB available
- Uplinks available in copper and fiber optic
- SNMP/web management
- Standard OK/Fault LEDs
- Dimensions: 112mm x 29mm x 259.8mm

4x Switch Module Overview — High Speed (x4)
- Single height (20mm) or double height (41mm)
- GBX connector: 14 x4 internal ports; dual I2C busses (to each Management Module)
- 60 Watts
- 10GbE, IB, PCIe available
- SNMP/web management
- Standard OK/Fault LEDs
- Dimensions: 20mm x 260.93mm x 239.9mm

SSI (SM) Midplane Electrical Specification
The SSI Midplane Ethernet Electrical Specification ensures 802.3ap compliance:
- Simulation complete for KX and KX-4
- Hardware verification of KX and KX-4 ongoing
- Simulation for KR ongoing
It provides standards-based requirements for a complete end-to-end SERDES channel for SSI-based chassis designs and defines test criteria for ensuring compliance. Verification of these criteria will be validated by the SSI Compliance and Interoperability Lab.

SSI (SM) Midplane Design Guide
Mechanical guidance:
- Specifies the connectors needed on the Midplane
- Compute blades utilize Airmax* series connectors
- Switch modules utilize VHDM (1x) and GbX (4x) connectors
- Provides connector placement locations
Electrical guidance:
- Sample board stackup
- Power distribution
- High-speed routing
- PCB technology choices based upon system requirements
Channel guidance:
- End-to-end channel lengths
- Routing techniques
- Via stub reduction
- Length matching/spacing
System guidance:
- I2C topology
- Thermal considerations

SSI (SM) Midplane Design Guide
[Diagram: Midplane Connectivity Guide — 1. Blade, 2. Midplane, 3. Switch. PCIe: PCI Express* Technology.]

SSI (SM) Midplane Design Guide — Midplane Route Length
The end-to-end channel runs Blade (Airmax connector) -> Midplane -> Switch (VHDM, Airmax, or GBX connector), with a 100-ohm impedance target at the blade, midplane, and switch. Maximum route lengths and revision status vary by channel technology and design point (LC, MC, HC):

Channel     Blade      Switch     Max Len by Design Point    Rev by Design Point
Technology  Connector  Connector  LC     MC     HC           LC    MC    HC
KX          Airmax     VHDM       16.5   19.5   22.5         0.8   0.0   0.0
KX4         Airmax     Airmax     10     14     16           1.0   1.0   0.0
KX4         Airmax     GBX        9      10.5   12           0.5   0.5   0.5
KR          Airmax     Airmax     6      10     16           0.0   0.0   0.0
KR          Airmax     GBX        4      8      12           0.0   0.0   0.0

Rev definitions:
0.0 - Engineering targets to meet timings and noise margins
0.5 - Simulations complete pending Blade, Switch, or Midplane specification changes
0.8 - Simulations complete without outstanding specification changes
1.0 - Simulations correlated with hardware
2.0 - Demonstrated in Compatibility and Interoperability Lab
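The route-length matrix above can be encoded as a simple lookup table; a sketch follows, where the values are transcribed from the slide and the function name is purely illustrative:

```python
# Max midplane route length per channel technology, switch connector,
# and PCB design point (LC/MC/HC), transcribed from the slide's table.
MAX_LEN = {
    ("KX",  "VHDM"):   {"LC": 16.5, "MC": 19.5, "HC": 22.5},
    ("KX4", "Airmax"): {"LC": 10.0, "MC": 14.0, "HC": 16.0},
    ("KX4", "GBX"):    {"LC": 9.0,  "MC": 10.5, "HC": 12.0},
    ("KR",  "Airmax"): {"LC": 6.0,  "MC": 10.0, "HC": 16.0},
    ("KR",  "GBX"):    {"LC": 4.0,  "MC": 8.0,  "HC": 12.0},
}

def max_route_length(tech: str, switch_conn: str, design_point: str) -> float:
    """Look up the maximum end-to-end route length for a channel."""
    return MAX_LEN[(tech, switch_conn)][design_point]

# Faster signaling shortens reach: 10Gbps-per-lane KR on a low-cost
# board is far more constrained than 1Gbps-per-lane KX.
print(max_route_length("KR", "GBX", "LC"))   # → 4.0
print(max_route_length("KX", "VHDM", "LC"))  # → 16.5
```

The pattern in the data is the usual signal-integrity trade-off: moving to a higher-cost design point (better PCB materials and stackup) buys back channel length at every signaling rate.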

Blade Specifications Scope

Opportunity for Innovation with Open Specs
Ability to add unique value within SSI (SM):
- Match system size to application: 3 blades and up
- Compute blade spec supports an optional fabric: 10G Ethernet, Fibre Channel, InfiniBand*, etc.
- Compute blade mezzanine can support multiple technologies
- Add value to the backplane or chassis, e.g. virtualized storage
- Flexible Chassis Management Module specification supports user feature differentiation
Cost-effective standards, unique OEM value.

For More Info
Visit the SSI Forum website to learn more about:
- The SSI (SM) organization
- Current status and downloads of all specifications
- Benefits of joining SSI at different membership levels
- Demos and trade show activities
- Companies participating in SSI
The Compute, Mezzanine, Chassis Management Module, and Switch specifications are available for design starts. Participate in the Server, Architecture, C&I, and Marketing work groups.
Contact information: chairman@ssiforum.org (jim.ryan@intel.com)
Thank You!