Roadrunner and hybrid computing

Roadrunner and hybrid computing. Ken Koch, Roadrunner Technical Manager; Scientific Advisor, CCS-DO. August 22, 2007. LA-UR-07-6213

Slide 2 Outline
1. Roadrunner goals and thrusts
2. Hybrid computing and the Cell processor
3. Roadrunner system and Cell-acceleration architecture
4. Overview of programming concepts
5. Overview of algorithms & applications

Roadrunner is essential for fulfilling our science and stewardship missions for the weapons program

Slide 4 Petascale computing is essential to ensuring a sustainable nuclear deterrence. New critical experiments are coming on line, and computational physics and high performance computing are the transformational tools. [Diagram: Z, Omega, NIF, and DARHT experiments alongside Blue Gene and Roadrunner computing feed predictive capability for the current and future stockpile and weapons science, enhancing deterrence against external threats.]

Slide 5 The Cell-accelerated Roadrunner targets two goals.
- Science @ Scale: multi-scale unit physics for weapons & open science — validate model assumptions, better understand physics, and cross-validate physics models at overlapping resolutions; run at Petascale (25% to 80+% of the machine).
- Advanced Architecture for algorithms and applications: target select physics and work on algorithms and implementations — convert algorithms to Cell & Roadrunner, or use an alternative or modified parallel algorithm — to provide faster solutions or improved accuracy; incrementally update existing ASC integrated codes to target key uncertainties — focused simulations, not general usage, with a focus more on predictive science than on speeding up production jobs. More on this at the end of the talk.

Heterogeneous & hybrid computing is an industry trend. The Cell processor is a powerful hybrid processor that can accelerate algorithms.

Slide 7 Microprocessor trends are changing. Moore's law still holds, but is now being realized differently: frequency, power, & instruction-level parallelism (ILP) have all plateaued; multi-core is here today and many-core (≥32) looks to be the future; memory bandwidth and capacity per core are headed downward (caused by increased core counts). Key findings of the Jan. 2007 IDC study "Next Phase in HPC": new ways of dealing with parallelism will be required, and the focus must shift more heavily to bandwidth (flow of data) and less to the processor. [Chart: clock, transistor, power, and ILP trends across the 386, Pentium, and Montecito generations; from Burton Smith, LASCI-06 keynote, with permission.]

Slide 8 Industry presentations show changing trends in processors: Intel's Microprocessor Research Lab, AMD Fusion, Intel's Visual Computing Group (Larrabee), and NVIDIA's G80 (2006). Taken from publicly available information.

Slide 9 The change is already happening.
- New processors & accelerators: multi-core to many-core (8, 16, 32, ... 80, 128, 1000?) — IBM Cell, AMD Fusion, Intel Polaris, NVIDIA G8800 — with distributed memory & caches at the core level; also FPGAs, GPGPU, Clearspeed CSX600, IBM Cell, XtremeData XD1000, NVIDIA G80, and the AMD Stream Processor.
- Connection standards: AMD Torrenza, Intel/IBM Geneseo, AMD HyperTransport Initiative.
- Programming: IBM Roadrunner Cell libraries, RapidMind, PeakStream, Impulse C, Stanford's Sequoia, NVIDIA CUDA, Clearspeed C, Mercury MFC, stream programming.
- Heterogeneous architectures: clusters of mixed nodes; hybrid accelerated nodes (e.g. Roadrunner, Clearspeed, FPGA); hybrid on the same bus (e.g. CPU+GPUs, Intel's Geneseo, AMD's Torrenza); within the processor itself (e.g. Cell, AMD Fusion). [Diagram: CPU + GPU + Cell/DSP/FPGA = hybrid.]

Slide 10 There were hybrid computers before Roadrunner:
- Floating Point Systems FPS array processors (AP-120B, FPS-164/264) (circa 1976-1982) http://en.wikipedia.org/wiki/floating_point_systems
- Deep Blue for chess (IBM SP-2: 30 RS6K nodes + 480 chess chips) (circa 1997) http://en.wikipedia.org/wiki/deep_blue
- Grape-6 for stellar dynamics (w/ custom chips) (circa 2000-2004) http://grape.astron.s.u-tokyo.ac.jp/~makino/grape6.html
- Various FPGA supercomputers from system vendors: SRC-6 (w/ MAP), Cray XD1 (w/ Application Acceleration), SGI Altix (w/ RASC)
- Titech TSUBAME (w/ some Clearspeed) (2006) http://www.gsic.titech.ac.jp/english/publication/pressrelease.html.en
- RIKEN MDGrape-3 Protein Explorer (w/ custom chips) (2006) http://mdgrape.gsc.riken.jp/modules/tinyd0/index.php
- Terra Soft's Cell E.coli/Amoeba PS3 cluster (cluster of 1U PlayStation 3 development systems) (2007) http://www.hpcwire.com/hpc/967146.html

Slide 11 Roadrunner is paving the way along an alternate path for the future of HPC. Roadrunner is the first of a new breed of high performance computers: it is sooner, cheaper, and smaller than building a petascale machine in the conventional way. "Roadrunner at Los Alamos attacks now the unavoidable software challenge early. The Labs must be in the game now." — LANS Independent Functional Management Assessment of the RR project (May 2007). [Chart: Comparison of Recent High End Systems — speed of processor (Gigaflop/s, 1 to 1,000) versus number of processors (1,000 to 1,000,000), with the Petaflop/s barrier marked; systems plotted include Roadrunner Base (2007), Roadrunner Final (2008), Blue Gene/L (2005), Jaguar (2006), Red Storm (2006), CEA (2006), Earth Simulator (2002), Purple (2006), and Barcelona (2006).]

Slide 12 The Cell is a powerful hybrid multi-processor chip. The Cell Broadband Engine* (Cell BE) was developed under the Sony-Toshiba-IBM effort, and the current Cell chip is used in the Sony PlayStation 3. It is an 8-way heterogeneous parallel engine: a PowerPC CPU plus 8 SPE units, with connections out to PCIe/IB. Each of the 8 SPEs is a 128-bit (e.g. 2-way DP-FP) vector engine w/ 256 KB of Local Store (LS) memory & a DMA engine; the SPEs can operate together or independently (SPMD or MPMD). ~200 GF/s single precision, ~15 GF/s double precision (current chip). * Trademark of Sony Computer Entertainment, Inc.
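
Those throughput figures are consistent with a simple peak-rate estimate (a back-of-the-envelope sketch; the 3.2 GHz clock and the roughly 7-cycle double-precision issue rate of the first-generation chip are assumptions on my part, not stated on the slide):

```latex
\text{SP peak:}\quad 8\ \text{SPEs} \times 8\,\tfrac{\text{flops}}{\text{cycle}} \times 3.2\ \text{GHz} \approx 205\ \text{GF/s}
\qquad
\text{DP peak:}\quad 8\ \text{SPEs} \times \tfrac{4\ \text{flops}}{7\ \text{cycles}} \times 3.2\ \text{GHz} \approx 15\ \text{GF/s}
```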

Slide 13 Cell Broadband Engine block diagram: a PPU (PowerPC & cache) and 8 SPUs attached to the interconnect bus, together with the memory interface and input/output units. Heterogeneous: 1 PPU + 8 SPUs.

Slide 14 A new Cell revision was needed for Roadrunner. [Cell BE roadmap (Version 5.0, 24-Jul-2006), spanning 2006-2010 along axes of performance enhancements/scaling and cost reduction: Cell BE (1+8) at 90 nm SOI, ~15 GF/s double precision; Enhanced Cell BE (1+8 eDP) at 65 nm SOI plus a continued shrink; the Cell HPC chip at ~100 GF/s double precision; and a 1 TF processor at 45 nm SOI. All future dates are estimates only and subject to change without notice.]

Slide 15 The Cell local store architecture has a performance-per-area advantage. [Die comparison: Cell BE / Cell HPC at ~100 GF/s versus contemporary AMD, IBM, and Intel processors at ~20 GF/s each.]

Slide 16 Cell blades and software improve during the Roadrunner development period (2006-2008).
- Cell QS20 Blade (currently available): 2 Cell BE processors, good single-precision floating point, 1 GB memory, up to 4X PCI Express; SDK 2.1 target availability: Oct.
- Advanced Cell Blade (target availability: 2H07): 2 Cell BE processors, good single-precision floating point, 2 GB memory, up to 16X PCI Express; SDK 3.0 target availability: 1H08.
- Cell HPC Blade (target availability: 1H08): 2 Cell HPC processors, good SP & DP floating point, up to 16 GB memory (8 GB for Roadrunner), up to 16X PCI Express; SDK 4.0.
All future dates are estimates only and subject to change without notice.

Roadrunner is a hybrid architecture for 2008 deployment achieving a sustained PetaFlop/s performance level

Slide 18 The Roadrunner project is a partnership with IBM; the contract was signed September 8, 2006. It is a critical component of stockpile stewardship. Phase 1 (Base system) supports near-term mission deliverables; Phase 2 (Cell prototypes) supports the pre-final assessment; Phase 3 (Hybrid final system) achieves the PetaFlop/s level of performance and demonstrates a new paradigm for high performance computing — an accelerated vision of the future. [Illustration: a Cell processor (2007, 100 GF) beside a CM-5 board (1994, 1 GF), 8 vector units each: 100x in 14 years.]

Slide 19 Status of the Roadrunner phases.
- Phase 1, known as the Base system: 14 clusters of Opteron-only nodes; in classified operation now, with general availability in early September; already contributing to DSW efforts (the Application Readiness Team, ART, provided early support).
- Phase 2, known as AAIS (Advanced Architecture Initial System): 6 Opteron nodes and 14 Cell blades on InfiniBand; in use since January for Cell & hybrid application prototyping.
- Phase 3: contract option for 2008 delivery; two technical assessments scheduled for this October; a redesigned Cell-accelerated system for better performance, no longer an upgrade to the Base system.

Slide 20 Roadrunner Phase 1 (Base) is deployed now as a capacity resource: fourteen InfiniBand-interconnected clusters of Opteron nodes providing 70 Teraflop/s of needed capacity computing resources, in classified operation now with early-September general availability. Each of the 14 clusters has 144 4-socket, dual-core, 32 GB-memory nodes on IB 4X SDR, tied together by a 2nd-stage InfiniBand interconnect (multiple switches).

Slide 21 Roadrunner Phase 3 is Cell-accelerated, not a cluster of Cells: Cells are added to each individual compute node. [Diagram: multi-socket, multi-core Opteron cluster nodes (100's of such cluster nodes) and I/O gateway nodes hanging off a scalable-unit cluster interconnect switch/fabric, with each compute node Cell-accelerated.] This is what makes Roadrunner different!

Slide 22 Phase 3 was redesigned to be a better system. Phase 3 was originally to be an upgrade: simply add InfiniBand-connected Cell blades to the Phase 1 single-data-rate 8-way nodes. Phase 3 is now an entirely new additional system that uses smaller 4-way nodes with PCI Express-connected Cell blades and double-data-rate InfiniBand, giving better performance on the same schedule. LANL keeps the existing Base system: classified capacity work is not disturbed by a major upgrade, and science runs and a presence in the open are now possible. [Diagram: the original single-data-rate 8-way node with eight IB-attached Cells versus the redesigned double-data-rate 4-way nodes with four PCIe-attached Cells each.]

Slide 23 Roadrunner uses Cells to make nodes ~30x faster: 400+ GFlop/s of performance per hybrid node, with one Cell chip per Opteron core. [Node diagram: a 4-core AMD Opteron node connects to two dual-Cell-HPC blades over four independent PCIe x8 links (2 GB/s, ~1-2 us each), and to the cluster fabric through a ConnectX 2nd-generation IB 4x DDR HCA (2 GB/s). Redesign changes shown in red: 4x more bandwidth per flop and better latencies.]
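
The ~30x and 400+ GFlop/s figures follow from the chip-level numbers (a rough sketch; the ~3.6 GF/s per Opteron core is inferred from the ~50 TF/s Opteron total on the next slide, not stated here):

```latex
4\ \text{Cells} \times \sim\!100\ \text{GF/s} \approx 400\ \text{GF/s per hybrid node},
\qquad
\frac{400\ \text{GF/s}}{4\ \text{cores} \times 3.6\ \text{GF/s}} \approx 28 \approx 30\times
```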

Slide 24 Roadrunner is a Petascale system in 2008. Each Connected Unit cluster has 192 Opteron nodes, 180 of them with 2 dual-Cell blades connected over 4 PCIe x8 links. Across ~18 clusters that is ~7,000 dual-core Opterons delivering ~50 TeraFlop/s and ~13,000 Cell HPC chips delivering 1.4 PetaFlop/s, tied together by a 2nd-stage InfiniBand 4x DDR interconnect (18 sets of 12 links to 8 switches, 2nd-generation IB 4X DDR).
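
The headline counts check out against the per-cluster figures (a quick sanity check, assuming two dual-core sockets per 4-core node; not from the slide itself):

```latex
18 \times 192 \times 2 = 6{,}912 \approx 7{,}000\ \text{dual-core Opterons},
\qquad
18 \times 180 \times 4 = 12{,}960 \approx 13{,}000\ \text{Cells},
\quad
12{,}960 \times \sim\!100\ \text{GF/s} \approx 1.3\text{--}1.4\ \text{PF/s}
```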

Slide 25 The Roadrunner procurement is tracked like a construction project via DOE Order 413 w/ NNSA.
- Stage 1 (delivery FY06): Base Capacity System, Phase 1 — >71 TFs of secure cycles + 10 TFs of unclassified cycles — plus an initial Advanced Architecture technology delivery.
- Accelerator Technology Assessment (Phase 2) and accelerator technology refreshes (delivery FY07), followed by the final system assessment & review.
- Stage 2 (delivery FY08): Roadrunner Final System, Phase 3 — final delivery of the Advanced Architecture system up to a sustained Petaflop.
- Critical decisions: CD-0 & CD-1 (approved), CD-2 (approved), CD-3a (approved), CD-4a (acceptance of Base system, planned 07/07), CD-3b (planned 10/07, includes criteria for CD-4b), CD-4b (11/08).
- Project status/milestones: CD-1 approval May 4, 2006; RFP released May 9, 2006; proposals received June 5, 2006; selection made in July; contract signed Sept. 8, 2006; software architecture design; certification work on Roadrunner; option to be executed in early FY08; final system delivery; final system acceptance test.

Slide 26 Roadrunner Phase 3 is an option with planned assessment reviews. Two assessments are planned for October 2007: a LANL-chartered assessment committee and NNSA ASC-chartered independent assessments by HPC experts. Assessment metrics:
- Performance: future workload (e.g. Science@Scale & Advanced Architecture); expected Linpack of 1.0 PF sustained.
- Usability and manageability: system management and integration at scale; API for programming the hybrid system.
- Technology: delivery of advanced technology.

Programming Roadrunner is tractable

Slide 28 Three levels of parallelism are exposed, along with remote offload of an algorithm solver (a host-side sketch of this structure follows below):
- MPI message passing is still used between nodes and within the node (2 levels). Global Arrays, IPC, UPC, Global Address Space (GAS) languages, etc. also remain possible choices. Additional parallelism can be introduced within the node (divide & conquer), but Roadrunner does not require this due to its modest scale.
- Large-grain, computationally intense algorithms are offloaded for Cell acceleration within a node process. This is equivalent to function offload and similar to client-server & RPCs: one Cell per one Opteron core (1:1 process ratio); the Opteron would typically block, but could do concurrent work; embedded MPI communications are possible via a relay approach.
- Threaded fine-grained parallelism within the Cell itself (1 level): create many-way parallel pipelined work units for SPMD on the SPEs; MPMD, RPCs, streaming, etc. are also possible.
This is consistent with future trends in heterogeneous chips; considerable flexibility and opportunities exist.
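
To make the function-offload idea concrete, here is a minimal host-side sketch in C. The MPI calls are standard; the `cell_*` routines are hypothetical stand-ins for whatever offload layer is used (DaCS, an MPI relay, or a custom transport), not an actual Roadrunner API.

```c
/* Hypothetical host (Opteron) side of the Roadrunner offload pattern:
 * MPI between node processes, with the compute-intensive solver shipped
 * to the Cell paired 1:1 with this Opteron core.                        */
#include <mpi.h>
#include <stdlib.h>

/* Placeholder offload layer: no-op stubs standing in for a real
 * transport (DaCS, MPI relay, etc.) -- assumed, not an actual API.      */
static void cell_attach(int local_cell_id)            { (void)local_cell_id; }
static void cell_upload(const double *d, size_t n)    { (void)d; (void)n; }
static void cell_run_solver(const char *kernel)       { (void)kernel; }
static void cell_download(double *d, size_t n)        { (void)d; (void)n; }

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    size_t n = 1 << 20;
    double *field = malloc(n * sizeof *field);
    for (size_t k = 0; k < n; ++k) field[k] = 1.0;   /* trivial initial state */

    cell_attach(rank % 4);                /* one Cell per Opteron core (1:1)      */

    for (int step = 0; step < 100; ++step) {
        cell_upload(field, n);            /* large-grain offload, like an RPC     */
        cell_run_solver("sweep");         /* Opteron typically blocks here, but   */
                                          /* could do concurrent work instead     */
        cell_download(field, n);

        /* cluster-wide communication stays on the host CPUs */
        double local_norm = 0.0, global_norm = 0.0;
        for (size_t k = 0; k < n; ++k) local_norm += field[k] * field[k];
        MPI_Allreduce(&local_norm, &global_norm, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);
        if (global_norm < 1e-12) break;   /* e.g. convergence check */
    }

    free(field);
    MPI_Finalize();
    return 0;
}
```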

Slide 29 Three types of processors work together.
- Remote communication to/from the Cell covers data communication & synchronization, process management & synchronization, topology description, and error handling: IBM is developing the DaCS library, and OpenMPI has proven useful for early remote-computation prototyping, with parts being ported to PCIe.
- Parallel computing on the Cell covers task/data partitioning & work-queue pipelining, process management, and error handling: IBM provides the Cell libspe library (threads & DMAs), IBM is developing ALF (the Accelerator Library Framework), and many 3rd-party runtimes/languages are in development.
[Software stack diagram: Opteron — OpenMPI (cluster), x86 compiler; PPE — DaCS (OpenMPI), PowerPC compiler; SPE (8) — ALF or libspe, SPE compiler.]
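
For the middle layer, a minimal PPE-side sketch using the libspe2 interface from the Cell SDK is shown below: it launches one SPE context per SPE and hands each a pointer to its work descriptor. This reflects the generic libspe2 usage pattern, not Roadrunner's actual code; the embedded SPE program name `solver_spu` is a placeholder.

```c
/* PPE side: create one SPE context per SPE, load the embedded SPE
 * program, and run each context in its own pthread (spe_context_run
 * blocks until the SPE program stops).                               */
#include <libspe2.h>
#include <pthread.h>
#include <stdio.h>

#define NUM_SPES 8

extern spe_program_handle_t solver_spu;  /* embedded SPE ELF image (placeholder name) */

typedef struct { spe_context_ptr_t ctx; void *argp; } spe_thread_arg;

static void *run_spe(void *p)
{
    spe_thread_arg *a = p;
    unsigned int entry = SPE_DEFAULT_ENTRY;
    if (spe_context_run(a->ctx, &entry, 0, a->argp, NULL, NULL) < 0)
        perror("spe_context_run");
    return NULL;
}

int launch_spe_solvers(void *work_descs[NUM_SPES])
{
    pthread_t threads[NUM_SPES];
    spe_thread_arg args[NUM_SPES];

    for (int i = 0; i < NUM_SPES; ++i) {
        args[i].ctx  = spe_context_create(0, NULL);
        args[i].argp = work_descs[i];        /* per-SPE work descriptor in main memory */
        spe_program_load(args[i].ctx, &solver_spu);
        pthread_create(&threads[i], NULL, run_spe, &args[i]);
    }
    for (int i = 0; i < NUM_SPES; ++i) {     /* wait for all SPEs, then clean up */
        pthread_join(threads[i], NULL);
        spe_context_destroy(args[i].ctx);
    }
    return 0;
}
```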

Slide 30 Three types of processors work together. [Data-flow diagram, from the LA-UR-07-2919 Salishan Roadrunner talk: MPI on the host CPU; upload/download over DaCS to the Cell PPE; work units fanned out to the 8 SPEs; relay of MPI messages via DaCS. On each SPE the pipelined work units overlap DMAs and compute (ALF does this automatically!): get, switch buffers, get (prefetch), wait, compute, put (write-behind), wait (previous), switch buffers, wait.]
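
The get / compute / put-with-write-behind pipeline on each SPE is classic double buffering with the MFC DMA intrinsics. The sketch below (plain SPU C with `spu_mfcio.h`, written by hand rather than via ALF) shows the idea under stated assumptions; `process()` and the fixed-size work-unit layout are illustrative, not Roadrunner's actual kernels.

```c
/* SPU side: double-buffered streaming of fixed-size work units from
 * main memory through the 256 KB local store and back, overlapping
 * the DMA of block i+1 with the compute on block i.                  */
#include <spu_mfcio.h>

#define CHUNK 4096                        /* bytes per work unit (16-byte multiple) */

static volatile char buf[2][CHUNK] __attribute__((aligned(128)));

static void process(volatile char *data, unsigned int n)
{   /* illustrative compute: bump each byte (stand-in for real work) */
    for (unsigned int k = 0; k < n; ++k) data[k] = (char)(data[k] + 1);
}

void stream_work(unsigned long long ea_in, unsigned long long ea_out, int nchunks)
{
    int cur = 0, nxt = 1;

    /* prime the pipeline: fetch the first chunk on tag 0 */
    mfc_get(buf[cur], ea_in, CHUNK, cur, 0, 0);

    for (int i = 0; i < nchunks; ++i) {
        if (i + 1 < nchunks)              /* fenced prefetch: ordered after the earlier
                                             put on this buffer (same tag group)       */
            mfc_getf(buf[nxt], ea_in + (unsigned long long)(i + 1) * CHUNK,
                     CHUNK, nxt, 0, 0);

        mfc_write_tag_mask(1 << cur);     /* wait for this chunk's get to complete */
        mfc_read_tag_status_all();

        process(buf[cur], CHUNK);         /* compute while the next DMA is in flight */

        mfc_put(buf[cur], ea_out + (unsigned long long)i * CHUNK, CHUNK, cur, 0, 0);

        cur ^= 1;                         /* switch buffers */
        nxt ^= 1;
    }

    mfc_write_tag_mask(3);                /* drain any outstanding puts before returning */
    mfc_read_tag_status_all();
}
```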

Slide 31 Message-passing MPI programs can evolve. Key concepts:
- Pair one Cell core with one Opteron core.
- Move an entire compute-intensive function/algorithm & its associated data onto the Cell: this can be implemented one function or algorithm at a time, using message relay to/from the Cells to the cluster-wide MPI on the host CPUs.
- Identify & expose many-way fine-grain parallelism for the SPEs: add SPE-specific optimizations, many of which are also good on multi-core commodity processors, some of which are more SPE-specific. Current SPE programming is limited (C/C++).
- Retain main code control, I/O, and MPI communications on the host CPUs.
[Diagram: CPU cluster and node memory on the host side, connected via DaCS to Cell shared memory, the PPE, and the SPEs (ALF/DMAs), with MPI on the hosts.]

And what about applications?

Slide 33 A few key algorithms are being targeted:
- Transport: PARTISN (neutron transport via Sn) — Sweep3D and a sparse solver (PCG); MILAGRO (IMC).
- Particle methods: molecular dynamics (SPaSM, with its data-parallel CM-5 implementation); particle-in-cell (VPIC).
- Eulerian hydro: Direct Numerical Simulation.
- Linear algebra: LINPACK; Preconditioned Conjugate Gradient (PCG).
[Process diagram: refine algorithm and data structures → Cell algorithm design & implementation → hybrid algorithm design & implementation → parallel hybrid algorithm design & implementation.]

Slide 34 We are making good progress on applications. [Status matrix (as of 8/07) tracking each targeted code — CellMD (a molecular dynamics reimplementation of SPaSM), VPIC (particle-in-cell), Sweep3D/PAL-Sweep3D and PARTISN (deterministic neutron transport, discrete ordinates S_N), the Milagro rewrite (stochastic radiation transport, implicit Monte Carlo), Eulerian hydro (RAGE), and a research code with CG and GMRES (sparse linear algebra, Krylov methods) — through the stages of Cell design & implementation, serial hybrid design & implementation, parallel hybrid design & implementation, and target full application (SPaSM, VPIC, PARTISN, MILAGRO, RAGE, and eventually PARTISN, RAGE, Truchas, etc.). Implementation approaches include SPE threads, SPE sweeps, JK-diagonals, domain decomposition, and SIMD coding. Statuses range from re-design in progress, through hybrid coding completed and near coding completion, to items on hold, late due to re-design, or curtailed, with further optimizations possible; a Cell cluster version already exists for one code.]

Slide 35 Roadrunner hybrid implementations are faster. Speedups so far range from 1x (disappointing: redesign & more optimization needed) to 9x (very good); reference performance is taken as the performance on a 2.2 GHz Opteron core or a cluster of such Opterons. Science codes are faring the best. Extensive performance modeling, with key measurements on hardware prototypes, will be used to project the final Roadrunner performance of these applications: the AAIS machine has IB-connected Cell blades; a few Cells are connected to Opterons via PCIe at the IBM Rochester site; IBM is building a ConnectX IB cluster for testing at Poughkeepsie; a few prototype hybrid nodes with current Cell chips will be available in late August in Rochester; and a couple of new Cell HPC chips are available at IBM Austin in test rigs. Much work is yet to be accomplished by the October assessments.

Slide 36 Roadrunner targets two application areas. Science@Scale: targeting the VPIC & SPaSM codes for weapons science; cross-validate physics models at overlapping resolutions; run at Petascale (~75% of the machine, for 1½-to-2-week durations, is minimally needed for each VPIC ICF study). Guy Dimonte will talk next about VPIC and its role for Science@Scale. [Illustration: multi-scale ICF, a NIF implosion at increasing fidelity.]

Slide 37 Roadrunner targets two application areas. Advanced Architecture for algorithms and applications: targeting unclassified transport algorithms initially; provide faster solutions or improved accuracy; advanced parallel hybrid algorithm and application development; demonstrate a path to incrementally updating existing ASC integrated codes (target key physics or simulation uncertainties, not general DSW/baselining usage; potentially target 3D safety).

Slide 38 More information is available on the LANL Roadrunner home page, http://www.lanl.gov/roadrunner/: Roadrunner architecture, other Roadrunner talks, computing trends, and related Internet links.

The End