Jeff Kash, Dan Kuchta, Fuad Doany, Clint Schow, Frank Libsch, Russell Budd, Yoichi Taira, Shigeru Nakagawa, Bert Offrein, Marc Taubenblatt

Transcription:

IBM Research Optical PCB Overview
Jeff Kash, Dan Kuchta, Fuad Doany, Clint Schow, Frank Libsch, Russell Budd, Yoichi Taira, Shigeru Nakagawa, Bert Offrein, Marc Taubenblatt
November 2009, IBM Corporation

Electrical BW Bottlenecks vs. Optics Opportunities
- Electrical buses become increasingly difficult at high data rates (physics): increasing losses and crosstalk, and frequency resonances affect data transmission. Optics: better power efficiency, much less lossy, not plagued by resonant effects.
- The physical size of electrical connections (BGA, connector) limits the number of connections; optical density is ~10X higher.
- [Figure: optics at the rack, backplane, card, and module levels, linking microprocessors (μP) on a multi-chip module and circuit board.]

Evolution of Rack-to-Rack Optics in Supercomputers
VCSEL-based optics has displaced electrical cables today.
- 2002: NEC Earth Simulator: no optics.
- 2005: IBM Federation Switch for ASCI Purple (LLNL): a combination of electrical and optical cabling; copper for short-distance links (≤10 m), optics for longer links (20-40 m); ~3000 parallel optical links, 12+12 channels @ 2 Gb/s/channel.
- 2008 (1 PFLOP/s): IBM Roadrunner (LANL, http://www.lanl.gov/roadrunner/): introduced in 2008, still #1 as of June 2009; 4X DDR InfiniBand (5 Gb/s); 55 miles of active optical cables.
- Cray Jaguar (ORNL, http://www.nccs.gov/jaguar/): #2 as of June 2009; InfiniBand; 3 miles of optical cables, longest = 60 m.

Exponential Growth in Supercomputing Power
- 10X performance every 4 years: 1 ExaFlop (10^18 floating point operations/sec) by ~2020.
- Top500 list (19 June 2009, http://www.top500.org): sum of performance of the top 500 machines (39% of the Flops are IBM); #1 machine: Roadrunner, 1 PFlop; the #500 machine lags the #1 machine by 5-6 years (but there are a lot more of these machines).
- BW requirements must scale with system performance, ~1 B/FLOP (memory & network).
- This requires exponential increases in communication bandwidth at all levels of the system: inter-rack, backplane, card, chip.
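
The extrapolation on this slide is simple compound growth. Below is a minimal illustrative Python sketch (not from the original slides) that applies the 10X-every-4-years trend to the 2008 Roadrunner baseline together with the ~1 Byte/FLOP bandwidth rule of thumb quoted above; the constants come from the slide, the code itself is only an illustration.

    # Compound-growth sketch of the Top500 trend and the ~1 B/FLOP bandwidth rule.
    PERF_2008 = 1e15        # Roadrunner: ~1 PFLOP/s in 2008
    GROWTH_PER_4YRS = 10    # "10X performance every 4 years"
    BYTES_PER_FLOP = 1.0    # "~1 B/FLOP (memory & network)"

    for year in (2008, 2012, 2016, 2020):
        perf = PERF_2008 * GROWTH_PER_4YRS ** ((year - 2008) / 4)
        bw = BYTES_PER_FLOP * perf   # required aggregate bandwidth, bytes/s
        print(f"{year}: {perf:.0e} FLOP/s -> ~{bw / 1e15:.0f} PB/s aggregate bandwidth")
    # e.g. 2020: 1e+18 FLOP/s -> ~1000 PB/s aggregate bandwidth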

The Road to Exascale: Cost and Power of a Supercomputer

  Year   Peak Performance   Machine Cost   Total Power Consumption
  2008   1 PF               $150M          2.5 MW
  2012   10 PF              $225M          5 MW
  2016   100 PF             $340M          10 MW
  2020   1000 PF (1 EF)     $500M          20 MW

Assumptions, based on typical industry trends (see, e.g., top500.org and green500.org):
- 10X performance / 4 yrs (from the Top500 chart)
- 10X performance costs 1.5X more
- 10X performance consumes 2X more power

The Road to Exascale: Total Bandwidth, Cost and Power for Optics in a Machine

  Year   Peak Performance   (Bidi) Bandwidth             Optics Power   Optics Cost
  2008   1 PF               0.012 PB/s (1.2x10^5 Gb/s)   0.012 MW       $2.4M
  2012   10 PF              1 PB/s (10^7 Gb/s)           0.5 MW         $22M
  2016   100 PF             20 PB/s (2x10^8 Gb/s)        2 MW           $68M
  2020   1000 PF (1 EF)     400 PB/s (4x10^9 Gb/s)       8 MW           $200M

- Require >0.2 Byte/FLOP I/O bandwidth and >0.2 Byte/FLOP memory bandwidth.
- 2008: optics replaces electrical cables (0.012 Byte/FLOP, 40 mW/Gb/s).
- 2012: optics replaces the electrical backplane (0.1 Byte/FLOP, 10% of power/cost).
- 2016: optics replaces the electrical PCB (0.2 Byte/FLOP, 20% of power/cost).
- 2020: optics on-chip (or to memory) (0.4 Byte/FLOP, 40% of power/cost).
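
Both tables follow mechanically from the stated assumptions. The sketch below (illustrative Python, not an IBM model) multiplies cost by 1.5X and power by 2X for every 10X in performance, and computes the bidirectional optics bandwidth from the Byte/FLOP fraction listed for each generation; it reproduces the rows above to within rounding.

    # Derive the exascale cost/power/bandwidth roadmap from the slide's assumptions.
    byte_per_flop = {2008: 0.012, 2012: 0.1, 2016: 0.2, 2020: 0.4}  # optics B/FLOP per slide

    perf, cost, power = 1e15, 150e6, 2.5e6   # 2008 baseline: 1 PF, $150M, 2.5 MW
    for year in (2008, 2012, 2016, 2020):
        bidi_bw = byte_per_flop[year] * perf          # bidirectional optics bandwidth, B/s
        print(f"{year}: {perf:.0e} FLOP/s, ${cost/1e6:.0f}M, {power/1e6:.1f} MW, "
              f"{bidi_bw/1e15:.3g} PB/s optics")
        perf *= 10       # 10X performance every 4 years
        cost *= 1.5      # 10X performance costs 1.5X more
        power *= 2       # 10X performance consumes 2X more power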

In the HPC Space, Increased Need for and Use of Optics: Cost and Power Must Decrease
(per-bit unidirectional values shown)

  Year   Peak Performance   Number of Optical Channels   Optics Power             Optics Cost
  2008   1 PF               48,000 (@5 Gb/s)             50 mW/Gb/s (50 pJ/bit)   $10/Gb/s
  2012   10 PF              2x10^6 (@10 Gb/s)            25 mW/Gb/s               $1.1/Gb/s
  2016   100 PF             4x10^7 (@10 Gb/s)            5 mW/Gb/s                $0.17/Gb/s
  2020   1000 PF (1 EF)     8x10^8 (@10 Gb/s)            1 mW/Gb/s                $0.025/Gb/s

- Industry-trend-derived roadmap, not IBM product plans; the table is based on historical trends for HPCs.
- To get optics to millions of units in HPC, need ~$1/Gb/s unidirectional; cost targets continue to decrease with time below that.
- Power is OK for 2012; sharp reductions will be needed after that.

HPC Driving Volume Optics: Higher Volumes Lower Cost
- A single machine in the next few years could be similar to today's worldwide parallel optics volume.
- [Figure: number of channels (10^3 to 10^7, log scale) vs. year (2004-2020), showing ASCI Purple, MareNostrum, and Roadrunner as single HPC machines, the worldwide volume in 2008, and commercial use (e.g., 100GE).]
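
The totals on the previous slide can be cross-checked from this per-channel table: aggregate unidirectional bandwidth is channels times line rate, and power and cost scale with the per-Gb/s figures. A small illustrative check (values from the table, arithmetic only):

    # Cross-check optics power and cost from channel counts and per-Gb/s targets.
    # year: (channels, Gb/s per channel, mW per Gb/s, $ per Gb/s) -- unidirectional
    roadmap = {
        2008: (48_000, 5, 50, 10.0),
        2012: (2e6,   10, 25, 1.1),
        2016: (4e7,   10, 5,  0.17),
        2020: (8e8,   10, 1,  0.025),
    }
    for year, (channels, rate, mw_per_gbps, usd_per_gbps) in roadmap.items():
        total_gbps = channels * rate                      # aggregate unidirectional Gb/s
        power_megawatts = total_gbps * mw_per_gbps / 1e9  # mW total -> MW
        cost_millions = total_gbps * usd_per_gbps / 1e6   # $ total -> $M
        print(f"{year}: {total_gbps:.1e} Gb/s -> {power_megawatts:.3g} MW, ${cost_millions:.3g}M")
    # Matches the 0.012/0.5/2/8 MW and $2.4M/$22M/$68M/$200M figures on the previous slide.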

Optical Printed Circuit Boards and Components: Enabling Mass Manufacturing
- Electronics evolved from wires and discretes to printed circuit boards with electrical components.
- Optics: from fibers and modules (e.g., 12x3 Gbps optics modules; 4x12 OE + IC, 48 parallel channels; a 2D waveguide array with 35 x 35 μm cores on a 62.5 μm pitch, 3.9 mm wide) to integrated waveguides on PCBs with optical components.

Optical Waveguide Interconnects: The Terabus Project and Related Work
- [Figure: Optochip (CMOS IC with VCSEL and PD OE arrays), Optomodule (Optochip on an SLC carrier with a waveguide lens array), and Optocard (polymer waveguides).]
- Dense hybrid integration: demonstrate a low-cost packaging approach compatible with conventional PCB manufacturing and surface-mount board assembly.
- Future vision, circa 2014-2016: optically enabled modules; a transceiver Optochip and other chips on an organic chip carrier, on a circuit board with both electrical traces and optical waveguides.
- Low-density, conventional electrical interface for power and control; high-density, wide and fast optical interfaces for data I/O; much higher off-module bandwidth at low cost in $$ and power.
- OE XCVR targets: power <10 mW/Gbps (EOE); cost <$0.25/Gbps (TRx + on-card optics); data rate 20 Gbps/channel; density 2 Tbps/cm^2 (on module); reliability <10 FIT per channel.
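
As a sanity check on the 2 Tbps/cm^2 on-module density target: at 20 Gbps per channel it implies roughly 100 optical channels per cm^2, which a 2D OE array on the 250 μm pitch quoted elsewhere in these slides provides with large margin. A minimal illustrative calculation (the pitch is the only assumed input):

    # Rough check of the 2 Tbps/cm^2 on-module bandwidth density target.
    TARGET_TBPS_PER_CM2 = 2.0
    GBPS_PER_CHANNEL = 20
    channels_needed = TARGET_TBPS_PER_CM2 * 1000 / GBPS_PER_CHANNEL   # channels per cm^2

    oe_pitch_um = 250                         # 2D OE array pitch (from other slides)
    oe_per_cm2 = (10_000 / oe_pitch_um) ** 2  # 40 x 40 = 1600 elements per cm^2
    print(f"need {channels_needed:.0f} channels/cm^2; a 250 um 2D array offers {oe_per_cm2:.0f}")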

Full Terabus Link (985 nm): 2 Transceiver Optomodules on an Optocard
- TRX1: 16 TX + 16 RX, with a 4x4 VCSEL array; TRX2: 16 TX + 16 RX, with a 4x4 PD array.
- 16 channels TRX1 -> TRX2 at 10 Gb/s, plus 16 channels TRX2 -> TRX1 at 10 Gb/s.

Optical Interconnect Density Exceeds Electrical: Another Reason for Optics
- Electrical packaging is running out of pins; optics can help.
- For each 10 Gb/s signal you need 3 or 4 electrical BGA pins (differential pair) on an 800 μm pitch, OR 1 OE array element on a 250 μm pitch.
- Optical density exceeds electrical by more than 10X.
- [Figure: equivalent bandwidths. Top: ICs on two multi-chip modules connected over an all-electrical circuit board. Bottom: the same modules with Optochips (OEs on the IC, in a cutout) coupling into multimode polymer waveguides in the circuit board.]
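
The ">10X" density claim follows directly from the pitches quoted above if each connection is charged one pitch-squared cell of board area; that area model is a simplification added here for illustration, while the pin counts and pitches are from the slide.

    # Board area per 10 Gb/s signal: electrical BGA pins vs. one OE array element.
    bga_pitch_mm, pins_per_signal = 0.8, 3    # differential pair (+ return), 800 um pitch
    oe_pitch_mm = 0.25                        # one OE array element, 250 um pitch

    elec_area = pins_per_signal * bga_pitch_mm ** 2   # mm^2 per 10 Gb/s, electrical
    opt_area = oe_pitch_mm ** 2                       # mm^2 per 10 Gb/s, optical
    print(f"electrical {elec_area:.2f} mm^2 vs optical {opt_area:.4f} mm^2 "
          f"-> ~{elec_area / opt_area:.0f}x denser")  # ~31x (3 pins) to ~41x (4 pins)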

Comparison of Electrical and Optical Interconnect Density
- [Figure: cross-section view of chip, C4s, module, LGA or BGA connector, and PCB, with the connector pin map showing signal (S) and power (P) pin assignments; and a 50 mm organic module with CMOS TRX and OE on an SLC with a lens array, carrying four 1 cm^2 OE TRXes, on a printed circuit board (PCB) with optical waveguides and turning mirrors.]

Bandwidth limit (Tb/s) of packaging elements (50x50 mm module, 20 Gb/s bit rate):

  LGA             12.6
  Card            70
  C4s             160
  Module wiring   114
  OE modules      76.8
  Waveguides      192

- Electrical packaging is mature: the number of signal pins and the BW/pin are not increasing at the rate of silicon, so all-electrical packages will not have enough pins or per-pin BW.
- The chokepoint is at the module-to-circuit-board interface, and optics density is much greater than electrical there.

Advantages of Polymer Waveguide Technology Compared to Parallel Fiber Optics
- Integrated mass manufacturing: board-, sheet-, and film-level processing of optical interconnects; lower assembly and waveguide jumper costs; costs for wide busses should scale better.
- Simple assembly: avoids fiber handling; assembly procedures similar to those for electrical components and boards; electrical and optical connections established simultaneously; avoids a separate optical layer (if integrated with the board).
- New or compact functionality supporting new architectures: shuffles, crossings, splitters; an enabler for multi-drop splitting and the complex re-routing that is expensive in fiber; higher density, with waveguide pitch < 125 μm (the best future fiber pitch).
- Cost: higher bandwidth density, fewer signal layers; reduced optics module cost AND jumper/connector costs (both important); possibly lower maintenance costs.
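
The 192 Tb/s waveguide entry in the bandwidth-limit table above is consistent with a simple escape-bandwidth estimate: 62.5 μm pitch waveguides at 20 Gb/s leaving all four edges of a 50 mm module. The number of waveguide layers is an assumption added here (the slide does not state it), chosen so the estimate lands on the table's value; treat the sketch as illustrative only.

    # Escape bandwidth off a 50 x 50 mm module via polymer waveguides (illustrative).
    edge_mm, wg_pitch_um, gbps = 50, 62.5, 20
    layers = 3                                   # assumed number of waveguide routing layers
    wg_per_edge = edge_mm * 1000 / wg_pitch_um   # 800 waveguides per edge per layer
    total_tbps = 4 * wg_per_edge * layers * gbps / 1000
    print(f"{wg_per_edge:.0f} waveguides/edge/layer x 4 edges x {layers} layers "
          f"-> ~{total_tbps:.0f} Tb/s")          # ~192 Tb/s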

Optical PCB Roadmap
- Initial focus, 2012-2014: discrete active optical flex assemblies; easily replaced, less disruptive.
- 2014-2016: optical waveguides/modules on the PCB (which could be a flex laminate) once the technology matures to sufficient reliability; the low-cost PCB card is used for control signals, power, and ground (figure shows waveguides, electrical and optical connectors, and DIMMs on the card).
- 2018: waveguides embedded in the card with optical vias become a complete replacement for all high-speed copper lines; OE elements sit at the processor socket on the PCB, and all off-module links are optical.

Use of Active Waveguide Flex and Fiber Cable in Systems
TODAY
- Drawer/backplane: electrical connectors; electrical links run to the back of the drawer, to the backplane, and to an electrical connector for an active optical cable; drawers attach to an electrical backplane (PCB).
- Rack to rack: active fiber-based cable (electrical connector, active optics); rugged (may be routed under the floor), 3-100 m lengths.
FUTURE with OPCB
- Drawer/backplane: active waveguide flex to the back of the drawer, which may include channel shuffles; optical connector (e.g., MT); a cable-based optical backplane, which may be passive waveguide flex cables or fiber cables (initially more likely) and may include channel shuffles, typically < 1-2 m.
- Rack to rack: passive fiber-based cable; rugged (may be routed under the floor), 3-100 m lengths.

Optical Printed Circuit Boards
- IBM Research has invested heavily over the past 5 years in optical printed circuit board technology based on multimode polymer waveguides, partially funded by the US Government (Terabus program).
- We believe this technology will be needed to provide the bandwidth required by future server generations, to allow highly integrated electrical-optical links, and to provide a path to much lower-cost optical links.
- We are interested in establishing a market ecosystem that will provide components, standards, and specifications for this technology.