OCP-T Spec Open Pod Update

OCP-T Spec Open Pod Update
Defining the next generation of OCP and RSD Open Pods
Jeff Sharpe, January 2017

Scope of the OCP-T Pod Spec for Frame/Appliance Layouts

Current OCP-T spec (frame, power, interconnect and pod dimension spec submitted by Radisys):
- Defines the dimensions of two pods: ½-width and full-width 2U pods
- Pre-defines interfaces and DC power requirements into the frame
- Does not specify the characteristics of the pod's internal component layout

ADLINK proposed OCP-T pod spec:
- Specs serve as a guideline for appliance delivery based on the Modular Industrial Compute Architecture (MICA): CPU, appliance, and future switching (MR-IOV PLX and RRC as examples)
- Defines the zones, what each zone is for, and the parameters for each zone, using MICA as a reference
- Power, mezzanine, and interconnections between components are defined
- Both ½-width and full-width pod specs are planned

OCP-T Server Spec Overview: Benefits
- Enables a common architecture for OCP-T suppliers to build a standard central-office product
- Provides both ½-width and full-width pods for a multitude of options
- Defines the zones of each pod so operators can be confident the pod is open, while still allowing many pod use cases
- Dedicates the front part of the sled (Zone 5) and the mezzanine (Zone 4) to server, ARM, HW acceleration, front-panel options, storage, etc.
- A future mezzanine adds the ability to host multi-host controllers, MR-IOV capabilities, switching, and HW acceleration
- Utilizes Radisys' best-in-class spec for frame and sled interconnection, power, and physical dimensions

OCP Telecom Standardization Based on MICA
- 19" OCP-T frames, pods for OCP-T & OpenRack, reusable sleds
- OCP-T specification: ½-width and full-width POD specs covering POD zones & layout, POD power, POD I/O, and POD environmental requirements

Open Pod Zone Specification
The pod spec:
- Defines pod zones, dimensions, and layout for ½-width and full-width sleds
- Defines pod power requirements
- Defines pod I/O options, mezzanine, and NIC
- Defines pod environmental requirements for cooling and heat dissipation
- Provides interface connections between each zone
- Provides a mezzanine interface for additional multi-host options
- Enables multiple server options

Proposed OCP-T OpenPod Layout
- OCP-T pod zone and layer definitions apply across both ½-width and full-width OpenPods
- Shared functions: 1x OCP power bus connector, 8x optical data port connectors
- Fits into the OCP-T CG OpenRack spec utilizing MICA: PCIe midplane, CPU and NIM sizes
- Dimensions and interconnects defined by the OCP-T CG OpenRack spec
- Defines interconnections between each zone
- Utilizes OCP-T power and frame interconnect as defined by the OCP-T CG OpenRack spec
- Zones 1-3 define the foundation of the pod: system connections, cooling, power, and mid-plane
- Zones 4-5 define the compute and optional interfaces connecting into the pod base
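To make the zone breakdown above easier to reference, here is a minimal sketch (not part of the OCP-T spec) of the pod layout expressed as structured data. The zone roles and example contents come from this and the preceding slides; the Python representation, class, and field names are purely illustrative.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PodZone:
        """One zone of an OCP-T OpenPod, per the proposed pod spec above."""
        zone: str
        role: str
        example_contents: List[str] = field(default_factory=list)

    # Zone roles and example contents as described on these slides; the names
    # and data layout here are illustrative, not part of the specification.
    OPEN_POD_ZONES = [
        PodZone("Zones 1-3", "Pod foundation",
                ["system connections", "cooling", "power", "mid-plane"]),
        PodZone("Zone 4", "Mezzanine",
                ["multi-host controllers", "MR-IOV", "switching", "HW acceleration"]),
        PodZone("Zone 5", "Front of sled",
                ["server", "ARM", "HW acceleration", "front-panel options", "storage"]),
    ]

    # Shared functions common to the half-width and full-width OpenPods.
    SHARED_CONNECTORS = {"OCP power bus connector": 1, "optical data port connector": 8}

    if __name__ == "__main__":
        for z in OPEN_POD_ZONES:
            print(f"{z.zone}: {z.role} -> {', '.join(z.example_contents)}")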

Full-Width (2U) OCP-T Tray with 4 CPU Sleds
- Full mezzanine enabling passive connections for pod switching: multi-host controller and PCIe switching within the pod as a single appliance
- Could also be a complete storage sled
- Each half provides 1x OCP power bus connector and 8x optical data port connectors

M.I.C.A. Plan of Intent (2017-2020)
- Sleds: Purley / FPGA / tick-tock cadence
- 5G acceleration
- NFV/SDN/IoT acceleration
- Continued increase of switching throughput
- SSD/HDD storage
- GPGPU systems
- OCP Telecom
- Hyper-converged systems
- Backplane enhancements
- Continued platform s/w expansion

Network Alliance Program
Industry leaders as part of our Modular Industrial Compute Alliance:
- Alliance application partners
- Integration, testing & support
- Middleware

Backup

End-to-End Solutions: Application-Ready Platforms
- vCPE
- Network security & DPI
- Video streaming and transcoding
- Telecom network solutions
- IoT compute & analytics
- Mobile edge computing

Mix & Match: Rich System Configuration Choice
- CSA-7200: 8 NIMs, 128 Ethernet ports (rich I/O)
- CSA-7400: 4 compute nodes, 360G throughput (high compute performance and throughput)
- Motherboard: MCN-2600
- CSA-7x00: 24x HDD + 1x CSA-7400 (large storage; storage, compute, and switch hyper-converged)
- OCP-T compute sled: 2 compute nodes

The Next-Gen Network Communication Platform with Flexible Modular Building Blocks

Compute nodes:
- Broadwell, Skylake, Purley (Skylake + FPGA)
- ½- and ¼-width sleds for the needed core density
- Flexibility: mix and match E3 and E5
- Integrated NFVi software and platform s/w
- Models: MCN-2600 (Broadwell), MCN-2610T (Purley), MCN-1500 (Greenlow GT4e), MCN-1510 (Purley)

Switching nodes:
- Intel Red Rock Canyon & Broadcom PEX
- Extension trays for flexible I/O configs
- System management and a wide range of SW
- Models: MXN-3610 (RRC), MXN-3620 (PCIe switch)
- Planning: HW accelerators & ARM

I/O nodes:
- Optional network interface modules
- Optical, copper, with and without bypass
- ¼-width size to support large I/O options
- Models: CSA-Z5C4F, CSA-ZBX10, CSA-Z5C2F, CSA-Z5C8F+, CSA-Z4X01

Complete systems:
- 19" 2U/4U and OCP-Telecom versions
- Data center 2U, AC/DC power, NEBS-ready, customizable
- Multiple options based on the required solution
- Models: CSA-7200/7400, MCS-2080, OCP, custom

Compute Nodes MCN-2600 and MCN-1500 (available now)

MCN-2600 (½ width):
- Intel Grantley Refresh platform, 2x E5-2600 v3 CPUs (1 CPU optional)
- 12 DIMMs, 192 GB 1600 MHz DDR4
- 2x 2.5" hot-swappable HDDs
- PCIe Gen3 x16 base connection, up to x64 expansion
- Redundant M.2 SSD
- Module management, IPMI 2.0
- Planned for 2017: MCN-2610T (Purley)

MCN-1500 (¼ width):
- Intel Skylake platform, E3-1500 v5 CPU (GT4e Greenlow)
- 2 DIMMs, 64 GB 1600 MHz DDR4
- 1x MiniSATA
- PCIe Gen3 x8 backplane bandwidth
- Module management, IPMI 2.0

MCN-2610T: Xeon + FPGA + NVMe

[Block diagram: SKL#1 and SKL#2 Skylake server CPUs, each with 6x DDR4 channels, linked by UPI; PCIe 3.0 x16 links to the backplane; FPGA attached via UPI and 2x PCIe 3.0 x8 plus a programming interface, with HSSI to 4x SFP+ or 2x QSFP on the front panel]

- Co-layout of SKL non-FPGA and FPGA SKUs
- 1.5x memory bandwidth with 6 vs. 4 memory channels
- Richer set of available I/Os, including more PCIe lanes, eSPI, etc.
- Integrated 4x10GbE: cost, power, and area advantages vs. discrete Ethernet silicon
- Single standard server development: accelerates the NFV transition
- Reduced total platform investment with application, control, and data plane workload consolidation

Specifications:
- Cores: up to 28C with Intel HT Technology
- FPGA: Altera Arria 10 GX 1150
- Socket TDP: shared socket TDP, up to 165 W (SKL) and up to 90 W (FPGA)
- Socket scalability: Socket P; up to 2S with SKL-SP or SKL + FPGA SKUs
- PCH: Lewisburg (DMI3 4 lanes; 14x USB2 ports; up to 10x USB3, 14x SATA3, 20x PCIe 3.0; new: Innovation Engine, 4x10GbE ports, Intel QuickAssist Technology)
- Memory: for CPU, 6 channels DDR4 (RDIMM, LRDIMM, Apache Pass DIMMs), 2666 at 1 DPC, 2133/2400 at 2 DPC; for FPGA, low-latency access to system memory via the UPI & PCIe interconnect
- Intel UPI: for CPU, 2 channels (10.4, 9.6 GT/s); for FPGA, 1 channel (9.6 GT/s)
- PCIe 3.0 (8.0, 5.0, 2.5 GT/s): for CPU, 32 lanes per CPU, bifurcation support x16/x8/x4; for FPGA, 16 lanes per FPGA (2x PCIe 3.0 x8), bifurcation support x8
- High-speed serial interface (different board design based on HSSI config): for CPU, N/A; for FPGA, direct Ethernet (4x10 GbE, 2x40 GbE, 10x10 GbE, 2x25 GbE)
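As a quick sanity check on the "1.5x memory bandwidth" bullet above, the short sketch below (illustrative only, not from the deck) computes the 6-vs-4 channel ratio and the theoretical peak DDR4 bandwidth per socket implied by the DDR4-2666, 1 DPC configuration.

    # Back-of-the-envelope check of the memory figures quoted above.
    # Peak DDR4 bandwidth = transfer rate (MT/s) x 8 bytes per transfer x channels.

    def peak_ddr4_gbps(mt_per_s: int, channels: int) -> float:
        """Theoretical peak memory bandwidth per socket, in GB/s."""
        return mt_per_s * 8 * channels / 1000.0

    print(6 / 4)                    # 1.5 -> the "1.5x" channel-count ratio vs. 4-channel platforms
    print(peak_ddr4_gbps(2666, 6))  # ~128 GB/s per Skylake-SP socket at DDR4-2666, 1 DPC
    print(peak_ddr4_gbps(2400, 4))  # ~77 GB/s for a 4-channel DDR4-2400 platform (e.g. Broadwell-EP), for comparison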

Compute Node MCN-2610T (Purley & Purley w/ FPGA)

Highlights:
- ½-width Skylake server CPU node, Intel Purley platform
- Dual Skylake server CPUs, or dual Skylake server CPUs + FPGA
- IPMI 2.0 compliant
- Fully compliant with the Modular Industrial Cloud Architecture CPU module definition
- Same size as the MCN-2600T: ½ width, 204 x 430 x 44 mm (W x D x H)
- Co-layout: dual Skylake server CPUs or dual Skylake server CPUs + FPGA
- 2x 40G or 4x 10G front I/O direct to the FPGA
- Lewisburg PCH
- 12x DDR4 RDIMMs, up to 384 GB
- Hot-swappable CPU node
- 2x M.2 redundant SSDs on board
- Front-panel I/O: 2x RJ45 for management, 1x RJ45 console port, 1x VGA, 4x SFP+ or 2x QSFP (direct to FPGA only), 3x LEDs (APP, Power, Status)
- 1-2x internal SATA HDDs
- Node management, IPMI 2.0 compliant
- NEBS Level 3 design; FCC/CE/UL
- Planned 1H'17; EA Q1'17

Switch Node MXN-3610 and Extension Card (available now)

Specifications:
- ½ width in a 19" rack
- Intel RRC FM10840 (960 Mpps)
- 4x PCIe x8 to 4x CPU modules
- Up to 360G for the network interface: 20x 10G on board; 16x 10G from the I/O extension card, or to the backplane
- Optional 1U front panel with 40G/100G for OCP
- COMe Type 6 for CPP: Intel Core i7/i5/i3 v4 platform or Atom platform
- 1x RJ45 management port for CPP remote management
- 1x console port for CMM and CPP local debug
- 2x mSATA for redundant storage
- Chassis management (CMM), IPMI 2.0: 1x RJ45 management port for CMM remote management; inventory info collection (product serial, mfg date, ...); remote reset/power up/down, adaptive fan; reset of the CPP via the CMM

Highlights:
- PCIe x32 Gen3 connectors to the CPU modules & 200G interconnection to another switch module; total BW ~456 Gbps
- RRC chipset
- I/O extension card: NIM-1610
- ½-width sled with signal connector and COMe Type 6

Switch Node MXN-3620 w/ MR-IOV Support (available 1H'17)

Specifications:
- ½ width in a 19" rack; 1U/2U depending on needs
- PLX (Broadcom) PEX87xx/97xx PCIe Gen3 switch
- 10x PCIe x8 to 10x CPU modules
- Up to 160G for the network interface: 4x 40G QSFP+ on the face plate
- 1x PCIe x4 to another switch module
- COMe Type 7 for CPP: Intel Xeon Processor D
- 1x RJ45 management port for CPP remote management
- 1x console port for CMM and CPP local debug
- 2x mSATA for redundant storage
- Chassis management (CMM), IPMI 2.0: 1x RJ45 management port for CMM remote management; inventory info collection (product serial, mfg date, ...); remote reset/power up/down, adaptive fan; reset of the CPP via the CMM

Highlights:
- PCIe x8 Gen3 connectors to up to 10 CPU modules & x4 Gen3 interconnection to another switch module; total BW ~640 Gbps
- ½-width sled with high-speed signal connector and an I/O extension with 4x 40G QSFP+

I/O Nodes (available now)
- CSA-Z4X01 (NIM1): 4-port GbE copper with LAN bypass
- CSA-Z5C4F (NIM2): 4-port GbE SFP without LAN bypass
- CSA-Z8X10 (NIM3): 8-port GbE copper with LAN bypass
- CSA-Z5C2F (NIM4): 2-port SFP+ without LAN bypass
- CSA-Z5C8F (NIM5): 8-port GbE SFP without LAN bypass
- CSA-Z5C8F+ (NIM5): 8-port GbE SFP+ without LAN bypass

Highlight: each NIM attaches via the I/O signal connector (PCIe x8 Gen3), total BW ~64 Gbps
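The aggregate-bandwidth figures quoted for the MXN-3610, MXN-3620, and I/O node slides above are all consistent with the nominal PCIe Gen3 rate of 8 GT/s per lane. The sketch below (illustrative, not from the deck) reproduces that arithmetic, ignoring 128b/130b encoding overhead.

    # Nominal PCIe Gen3 bandwidth: 8 GT/s per lane (128b/130b encoding overhead ignored,
    # which matches the rounded figures quoted on the switch and I/O node slides).

    def pcie_gen3_gbps(lanes: int) -> int:
        """Raw PCIe Gen3 bandwidth in Gbps for a link of the given width."""
        return 8 * lanes

    # MXN-3610: 4x PCIe Gen3 x8 links to CPU modules + 200G switch-to-switch interconnect.
    print(4 * pcie_gen3_gbps(8) + 200)   # 456 -> matches "total BW ~456 Gbps"

    # MXN-3620: 10x PCIe Gen3 x8 links to CPU modules.
    print(10 * pcie_gen3_gbps(8))        # 640 -> matches "total BW ~640 Gbps"

    # A single I/O node on its PCIe Gen3 x8 signal connector.
    print(pcie_gen3_gbps(8))             # 64 -> matches "total BW ~64 Gbps"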

Proven Software Infrastructure
- Apps: vCPE, analytics, open-source VMs, security, media, MEC apps
- Middleware: ADLINK PacketManager (Layer 2/3 switch, VLAN, GRE, etc.)
- Intel FPGA / DPDK