High-Performance Computing and Data-Management Architectures in HEP, IFAE 2006, Pavia


1 High-Performance Computing and Data-Management Architectures in HEP, IFAE 2006, Pavia. Marco Briscolini, Deep Computing Sales, Marco_briscolini@it.ibm.com

2 IBM Pathways to Deep Computing
- Single integrated systems: single-system-image SMPs; clusters of dense systems
- Grids: access to remote resources available on demand, for computational and data-intensive work
- Autonomic computing: self-managing and self-healing systems that react to conditions and redistribute workload
- On demand computing: delivery of standardized processes, applications and infrastructure over the network; a utility-like model with quick access to incremental capacity

3 [Chart: compound annual growth of supercomputer performance in TeraFlops (IBM Deep Blue, ASCI machines, IBM Blue Gene) compared against estimated lizard-, mouse- and human-brain operation rates.]

4 [Chart: growth of life-sciences data and compute demand (human genome, SNPs, ESTs, proteins, metabolic pathways, pharmacogenomics, combinatorial chemistry/HTS, computational biology), measured in petabytes and MIPS, outpacing Moore's Law.]

5 Intel Xeon (including EM64T) is the processor in 51% of TOP500 systems, but IBM POWER architectures account for 15.4% of them. Source: TOP500 processor technology, June 2005.

  Processor   Systems   Share
  Xeon          175      36%
  Itanium2       79      16%
  EM64T          76      15%
  POWER4         37       7%   (includes 2 Hitachi SR11000 systems)
  PA-RISC        36       7%
  Opteron        25       5%
  PPC 440        16       3%
  Cray            9       2%
  PowerPC         9       2%   (includes 3 Apple Xserve systems)
  POWER5          8       2%
  POWER3          7       1%
  NEC             7       1%
  Alpha           5       1%
  SPARC           5       1%
  Other           4       1%
  MIPS            1       0%

  POWER architecture combined: 15.4%

6 IBM has 6 of the TOP10. With the DOE/NNSA/LLNL BlueGene/L machine, IBM has doubled the result of the #1 spot, and together with the other BlueGene/L clusters and the JS20 cluster at Barcelona, IBM has six TOP10 entries, the only vendor with more than one. The ten systems (Rmax values not preserved in the transcription): DOE/NNSA/LLNL (IBM, 32 racks BlueGene/L); BlueGene at Watson (IBM, 20 racks BlueGene/L); NASA/Columbia (SGI, Itanium2); Japan Earth Simulator (NEC); Barcelona Supercomputer (IBM, JS20); ASTRON Netherlands (IBM, 6 racks BlueGene/L); LLNL Thunder (California Digital, Itanium2); EPFL Switzerland (IBM, 4 racks BlueGene/L); AIST Japan (IBM, 4 racks BlueGene/L); Sandia National Laboratories (Cray, Red Storm, Cray XT3). [Chart: TOP10 entry counts by vendor across the Jun 04, Nov 04 and Jun 05 lists.]

7 IBM Systems: Industry Leadership & Choice
- Scale up / SMP computing: p595 (high bandwidth, single system image, LPAR, RAS)
- Clusters / virtualization: Linux Cluster 1350, AIX Cluster 1600, p575 (high-density, POWER5-based), High Performance Switch
- Scale out / distributed computing: BladeCenter (Intel-, POWER- and Opteron-based), x336 (1U/2-processor, Intel-based), e326 (1U, Opteron-based), IntelliStation; high-density rack mount, denser form factors, rapid deployment, flexible architectures, switch integration, Linux

8 Dual Core. As frequency increases are limited by power constraints, dual core is a way to double peak performance per chip (and per cycle), at the expense of frequency (around 20% lower). Other features also increase flops per cycle: FMA for POWER (and IA64) gives 4 flops/cycle/core; VMX for PowerPC gives 8 flops/cycle/core (32-bit only). IBM implements all of these technologies: POWER4 introduced dual core in 2001, JS20 introduced dual heterogeneous cores in 2003, and JS21 will have quadruple heterogeneous cores in 1Q06.
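To make the flops-per-cycle arithmetic concrete, here is a minimal sketch of the peak-rate calculation; the clock frequencies used are illustrative assumptions, not figures from the slide:

```c
#include <stdio.h>

/* Peak floating-point rate = cores/chip x flops/cycle/core x clock (GHz).
 * The frequencies below are assumed for illustration only. */
static double peak_gflops(int cores, int flops_per_cycle, double ghz)
{
    return cores * flops_per_cycle * ghz;
}

int main(void)
{
    /* POWER5: dual core, FMA gives 4 flops/cycle/core; assume 1.9 GHz */
    printf("POWER5 chip : %.1f GFlop/s peak\n", peak_gflops(2, 4, 1.9));
    /* PowerPC 970 with VMX: 8 flops/cycle/core (32-bit); assume 2.2 GHz */
    printf("PPC970 VMX  : %.1f GFlop/s peak\n", peak_gflops(1, 8, 2.2));
    return 0;
}
```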

9 Multi-Core and Multi-Chip. There are two ways to put more CPUs in a package: multi-core puts several cores on the same die, while multi-chip glues several chips onto the same module. One or the other can be used depending on timing: multi-core is more costly to develop, since it is a new chip design. Multi-core and multi-chip can also be used simultaneously. (Examples: single chip with dual core; dual chip on a single module.)

10 Multi-Core and Multi-Chip. Multi-core: AMD and Intel call "dual processor" what IBM calls a dual-core chip (a chip with two processors), so comparison is tricky; what they call four dual-core processors is in fact an 8-way. Multi-chip: Intel also calls "multi-chip processor" what we call a multi-chip module. For example, the p550Q uses two quad-chip modules built from dual-core chips: an 8-way with only two modules.

11 POWER Processor Roadmap
- POWER4 (180 nm): two 1+ GHz cores, shared L2, distributed switch; chip multiprocessing, dynamic LPARs (16)
- POWER4+ (130 nm): two 1.2+ GHz cores, shared L2, distributed switch; reduced size, lower power, larger L2, more LPARs (32)
- POWER5 (130 nm): two higher-frequency cores, shared L2, distributed switch; simultaneous multi-threading, sub-processor partitioning, dynamic firmware updates, enhanced scalability and parallelism
- POWER5+ (90 nm): two still-faster cores, shared L2, distributed switch; high throughput performance, enhanced cache/memory subsystem
- POWER6 (65 nm): ultra-high-frequency cores, L2 caches, advanced system features, autonomic computing enhancements

12 PowerPC: The Most Scalable Architecture. A single binary-compatible architecture spans embedded (PPC 401, 405GP, 440GP, 440GX), desktop and games (PPC 603e, 750, 750CXe, 750FX, 750GX, 970FX), and servers (POWER2, POWER3, POWER4, POWER5, POWER6).

13 BlueGene/L

14 BlueGene/L Hardware Principles. A large number of nodes (65,536); low-power nodes for density; high floating-point performance; system-on-a-chip technology. Nodes are interconnected as a 64x32x32 three-dimensional torus: it is easy to build large systems, as each node connects only to its six nearest neighbors, with full routing in hardware. Cross-section bandwidth per node is proportional to n^2/n^3 = 1/n. Auxiliary networks handle I/O and global operations. Applications consist of multiple processes with message passing, strictly one process per node.
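The six-nearest-neighbor wiring can be stated precisely. A minimal sketch of neighbor addressing on the 64x32x32 torus (the coordinate convention is an assumption for illustration):

```c
#include <stdio.h>

enum { NX = 64, NY = 32, NZ = 32 };   /* 64 x 32 x 32 = 65,536 nodes */

/* Fill nbr[6][3] with the six nearest neighbors of node (x,y,z).
 * The modulo arithmetic implements the torus wrap-around links. */
static void torus_neighbors(int x, int y, int z, int nbr[6][3])
{
    const int d[6][3] = { {+1,0,0},{-1,0,0},{0,+1,0},{0,-1,0},{0,0,+1},{0,0,-1} };
    for (int i = 0; i < 6; i++) {
        nbr[i][0] = (x + d[i][0] + NX) % NX;
        nbr[i][1] = (y + d[i][1] + NY) % NY;
        nbr[i][2] = (z + d[i][2] + NZ) % NZ;
    }
}

int main(void)
{
    int nbr[6][3];
    torus_neighbors(0, 0, 0, nbr);    /* corner node: wrap-around kicks in */
    for (int i = 0; i < 6; i++)
        printf("neighbor %d: (%d,%d,%d)\n", i, nbr[i][0], nbr[i][1], nbr[i][2]);
    return 0;
}
```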

15 BlueGene/L Compute System-on-a-Chip ASIC
- Two PowerPC 440 cores, each with 32k/32k L1 caches and a double FPU; one core serves as an I/O processor; 5.6 GF peak per node
- L2 with snoop logic and a multiported shared SRAM buffer; shared L3 directory for the embedded DRAM, with ECC
- 4 MB EDRAM L3 cache (or memory) at 22 GB/s
- DDR controller with ECC, 144-bit wide, 256 MB external DDR
- On-chip networks: torus (6 out and 6 in, each at 1.4 Gb/s), tree (3 out and 3 in, each at 2.8 Gb/s), 4 global barriers or interrupts, Gbit Ethernet, JTAG access
- Other bus bandwidths from the diagram: 5.5 GB/s PLB (4:1), 2.7 GB/s, 11 GB/s

16 BlueGene/L Interconnection Networks
- 3-dimensional torus: interconnects all 65,536 compute nodes; virtual cut-through hardware routing; 1.4 Gb/s on all 12 node links (2.1 GB/s per node); 350/700 GB/s bisection bandwidth; the communications backbone for computations
- Global tree: interconnects all compute and I/O nodes (1,024 I/O nodes); one-to-all broadcast and reduction functionality; 2.8 Gb/s of bandwidth per link; tree-traversal latency on the order of 5 µs
- Ethernet: incorporated into every node ASIC, active in the I/O nodes (1:64 ratio); carries all external communication (file I/O, control, user interaction, etc.)
- Low-latency global barrier and interrupt control network
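The 2.1 GB/s per-node figure and the 350/700 GB/s bisection figure both follow from the link rate and the torus geometry; a short sketch of the arithmetic:

```c
#include <stdio.h>

int main(void)
{
    const double link_MB_s = 1.4e9 / 8 / 1e6;   /* 1.4 Gb/s per link direction = 175 MB/s */

    /* Per node: 6 links, each 1.4 Gb/s in and 1.4 Gb/s out = 12 x 175 MB/s. */
    printf("per-node bandwidth: %.1f GB/s\n", 12 * link_MB_s / 1000.0);

    /* Bisection: cut the 64x32x32 torus across X. Each of the 32x32 (y,z)
     * positions has an X ring, and a ring is cut in two places, so two
     * links per position cross the cut in each direction. */
    double uni = 32 * 32 * 2 * link_MB_s / 1024.0; /* GB/s, one direction;
                                                      the 1024 divisor
                                                      reproduces the slide's
                                                      rounding */
    printf("bisection: %.0f GB/s unidirectional, %.0f GB/s bidirectional\n",
           uni, 2 * uni);                          /* matches 350/700 GB/s */
    return 0;
}
```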

17 BlueGene/L System Software Architecture. User applications execute exclusively on the compute nodes and see only the application volume as exposed by the user-level APIs. The outside world interacts only with the I/O nodes, and with the processing sets (one I/O node plus its compute nodes) they represent, through the operational surface; functionally, the machine behaves as a cluster of I/O nodes. Internally, the machine is controlled through the service nodes in the control surface; the goal is to hide this surface as much as possible.

18 Software Stack in the BlueGene/L Compute Node. CNK (the compute node kernel) controls all access to hardware and enables a bypass for application use: user-space libraries and applications can directly access the torus and tree through the bypass. As a policy, user-space code should not directly touch hardware, but there is no enforcement of that policy. Application code can use both processors in a compute node. (Stack: application code and user-space libraries over CNK with bypass, on the BlueGene/L ASIC.)

19 High Performance and Scalability on LINPACK. [Chart: LINPACK scalability on BlueGene/L, percentage of peak versus number of nodes. Performance (Rmax): 1435 Gflop/s; problem size (N) and TOP500 rank not preserved in the transcription.]

20 MPI Latency and Bandwidth on BlueGene/L
- Half round-trip latency: 3,000 cycles (about 4.3 µs at the 700 MHz core clock), measured with Dave Turner's benchmark in heater mode; bound to increase a bit in co-processor mode. Measured on nearest neighbors; hardware latency is only about 1,200 cycles.
- Composition of the round-trip latency: hardware 32%, packet overheads 29%, high level (MPI) 26%, message layer 13%.
- Bandwidth: measured with a custom-made program that sends nearest-neighbor messages, in heater mode. Only the eager protocol, suboptimally implemented (224-byte packet payload instead of 240). Max bandwidth: 864 MB/s (send plus receive).
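The half-round-trip figure is the standard ping-pong measurement. A minimal MPI sketch of that measurement (a generic illustration, not the benchmark the slide used):

```c
#include <mpi.h>
#include <stdio.h>

/* Half-round-trip (ping-pong) latency between ranks 0 and 1. */
int main(int argc, char **argv)
{
    int rank;
    char byte = 0;
    const int iters = 10000;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; i++) {
        if (rank == 0) {             /* ping, then wait for the pong */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {      /* echo each message back */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0)                   /* one-way latency = round trip / 2 */
        printf("half round-trip latency: %.2f us\n",
               (t1 - t0) / iters / 2.0 * 1e6);
    MPI_Finalize();
    return 0;
}
```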

21 Artist's Rendition of the Full BlueGene/L System. [Image; source link not preserved in the transcription.]

22 PowerPC 970 and POWER4 Comparison
- PowerPC 970: single processor core; memory bandwidth 6.4 GB/s; 512 KB L2 cache; bus interface unit; 2 load/store units, 2 fixed-point units, 2 IEEE floating-point units, 2 SIMD sub-units (VMX, 32-bit), a branch unit and a condition-register unit.
- POWER4+: SMP-optimized, balanced system/bus design; one or two processor cores; memory bandwidth 12.8 GB/s; shared L2 with chip-to-chip interconnect, backed by L3 cache and main memory; 8 execution pipelines: 2 load/store units, 2 fixed-point units, 2 double-precision multiply-add units, 1 branch-resolution unit and 1 CR execution unit; 8 prefetching streams.

23 BladeCenter Server Overview. Enterprise-class shared infrastructure: shared power, cooling, cabling and switches mean reduced cost and improved availability, and enable consolidation of many servers for an improved utilization curve. High performance and density: 168 processors in a 42U rack (six 7U chassis of 14 two-socket blades).

24 The portfolio continues to build...
- HS20 (2-way Xeon): Intel Xeon DP with EM64T; mainstream rack-dense blade; high-availability apps; optional hot-swap HDD. Target apps: edge and mid-tier workloads, collaboration, web serving.
- HS40 (4-way Xeon): Intel Xeon MP processors; 4-way SMP capability; supports Windows, Linux and NetWare. Target apps: back-end workloads, large mid-tier apps.
- JS20 (PowerPC): two PowerPC 970 processors; 32-bit/64-bit solution for Linux and AIX 5L; performance for deep-computing clusters. Target apps: 32-bit/64-bit HPC, UNIX server consolidation.
- LS20 (AMD blade): two-socket AMD, dual-core ready; similar feature set to the HS20. Target apps: 32- or 64-bit HPC stellar performer.
All share a common chassis and infrastructure.

25

26

27 Cell Processor Based Workstation (CPBW) (Sony Group and IBM). First prototype powered on. 16 TFlops in a rack (estimated), which equals 1 PFlop in 64 racks. Optimized for digital content creation, including computer entertainment, movies, real-time rendering and physics simulation. (Board: Cell processor, 2-way SMP, with memory, high-bandwidth system networks and a management I/O bridge; 16 TFlop rack with storage.)

28 Very Basic Cluster Design. A cluster designed like this is ready to perform a variety of compute functions: a management node and a network switch front a set of storage nodes and compute nodes.

29 Could I have an example as a picture? Sure... [Diagram: a management node on a management VLAN (100 Mbit Ethernet, RS-485 to terminal servers), Ethernet switches, storage controllers, and cluster nodes on a cluster VLAN, each with integrated 100 Mbit Ethernet, Gigabit Ethernet and Myrinet 2000 links to the Myrinet 2000 interconnect.]

30 Interconnect Performance

  Interconnect            Bi-dir BW*          Uni-dir BW*         Small-msg latency   I/O bus equivalent
  Gigabit Ethernet        250 MB/s (184)      125 MB/s (111)      (not preserved)     66 MHz PCI
  Single-port Myrinet     500 MB/s (463)      250 MB/s (229)      ~4.5 µs             100 MHz PCI-X
  Dual-port Myrinet       1000 MB/s (>850)    500 MB/s (448)      ~4.2 µs             133 MHz PCI-X
  Quadrics                1067 MB/s (950)     1067 MB/s (950)     ~1.5 µs             133 MHz PCI-X
  Single-port InfiniBand  2000 MB/s (1750)    1000 MB/s (914)     ~3.9 µs             4x PCI-E

  * Bandwidth quoted as peak and (payload).

31 InfiniBand - TopSpin. [Charts: MPI bandwidth (MB/s) and latency (µs) versus message size (bytes) for MPI over InfiniBand, Myrinet and Quadrics.]

  Metric               InfiniBand   Quadrics   Myrinet   GigE
  Throughput           840 MBps     300 MBps   220 MBps  120 MBps
  Latency (small msg)  5.5 µs       5 µs       7 µs      70 µs
  CPU utilization      1-3%         n/a        n/a       50%

  Source: Ohio State and Topspin

32 Systems Management for HPC: CSM Clusters. The HPC stack (Parallel Environment, GPFS, LoadLeveler) sits on CSM, which provides diagnostics, monitoring and sensors; deployment and integration across AIX, SUSE and Red Hat; interoperability, adapter setup and monitoring for Myrinet, GigE, Federation and InfiniBand; and hardware control and deployment for pSeries Cluster 1600, BlueGene/L, BladeCenter (x and p), and xSeries (Intel, AMD) Cluster 1350.

33 How does GPFS work? Two deployment models: Network Shared Disk (NSD) and Direct Attach (DA).

34 How does GPFS work? Each file spans multiple servers and disks; imagine a big virtual RAID across servers and disks. With the Network Shared Disk (NSD) concept, disks appear to be local, with transparent I/O over the cluster interconnect. (Diagram: on each Linux node the application sits on the GPFS daemon and NSD layer, over the interconnect and disk device drivers, down to the interconnect and the disks.)
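The "file spans multiple servers" idea is essentially block-level striping. A hypothetical sketch of a round-robin block-to-server mapping; real GPFS allocation is considerably more sophisticated, and the names and block size here are illustrative only:

```c
#include <stdio.h>

/* Hypothetical round-robin striping: which NSD server holds byte
 * offset 'off' of a file, given a block size and a server count.
 * This only illustrates how one file can span every server. */
static int nsd_server_for(long long off, long long block_size, int n_servers)
{
    long long block = off / block_size;   /* which file block the byte is in */
    return (int)(block % n_servers);      /* blocks dealt out round-robin */
}

int main(void)
{
    const long long bs = 256 * 1024;      /* assumed 256 KB block size */
    for (long long off = 0; off < 8 * bs; off += bs)
        printf("offset %8lld -> NSD server %d\n",
               off, nsd_server_for(off, bs, 4));
    return 0;
}
```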

35 GPFS: Storage Nodes
- Data access from any GPFS client via NSD
- Disk processing off-loaded from the application through the NSD servers
- High availability and recoverability
- A high-speed, high-bandwidth network is suggested for scalability; Myrinet/InfiniBand ready

36 I/O Network

37 GPFS in GRID environment
