NCEP HPC Transition. 15th ECMWF Workshop on the Use of HPC in Meteorology. Allan Darling, Deputy Director, NCEP Central Operations
1 NCEP HPC Transition. 15th ECMWF Workshop on the Use of HPC in Meteorology. Allan Darling, Deputy Director, NCEP Central Operations
2 WCOSS (NOAA Weather and Climate Operational Supercomputing System): CURRENT OPERATIONAL CHALLENGE
3 NCEP Operational Forecast Skill: 36- and 72-hour 500 mb forecasts over North America [100 * (1 - S1/70) method]. [Chart: 36-hour and 72-hour forecast skill plotted across successive operational systems: IBM 701, IBM 704, IBM 7090, IBM 7094, CDC 6600, IBM 360/195, CYBER 205, CRAY Y-MP, CRAY C90, IBM SP, IBM P690, IBM P655+, IBM Power]
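The bracketed expression in the chart title is the conversion NCEP uses to turn the S1 gradient-error score into a 0-100 skill scale. A minimal sketch of that formula (the sample S1 values below are hypothetical, not values from the chart):

```python
def skill_from_s1(s1: float) -> float:
    """Convert an S1 score to the chart's 0-100 skill scale: 100 * (1 - S1/70).

    S1 = 70, historically treated as a useless forecast, maps to 0;
    S1 = 0, a perfect gradient forecast, maps to 100.
    """
    return 100.0 * (1.0 - s1 / 70.0)

# Hypothetical S1 values, for illustration only:
print(skill_from_s1(70.0))  # 0.0   -> no skill
print(skill_from_s1(35.0))  # 50.0
print(skill_from_s1(0.0))   # 100.0 -> perfect
```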
4 NCEP HPC Growth: TeraFLOPs per operational system [chart]
5 HPC Production HWM Utilization (02/16/2010). [Chart: high-water-mark utilization across the 24 hours of February 16, 2010]
6 Production Utilization, 08/27: 69.77% production / 11.33% development. [Chart: utilization sampled across the 24-hour cycle]
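The utilization slide splits total use into production and development shares, with the total utilized figure being their sum. A minimal sketch, with node-hour inputs chosen only to reproduce the quoted percentages (they are not the system's actual node counts):

```python
def utilization_breakdown(prod_node_hours: float,
                          dev_node_hours: float,
                          total_node_hours: float):
    """Return (production %, development %, total utilized %) for an interval."""
    prod = 100.0 * prod_node_hours / total_node_hours
    dev = 100.0 * dev_node_hours / total_node_hours
    return prod, dev, prod + dev

# Illustrative inputs matching the slide's 69.77% prod / 11.33% dev split:
prod, dev, total = utilization_breakdown(6977.0, 1133.0, 10000.0)
print(f"{prod:.2f}% prod / {dev:.2f}% dev -> {total:.2f}% utilized")
```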
7 May [chart]
8 FY11 On-Time Product Generation
Average on-time product generation, Oct-July
/27/2010: Storage failure
/6/2010 and 10/7/2010: Job failures
2/16/2011: Failed node
9 FY12 On-Time Product Generation
99.35% average on-time product generation, Oct-Aug 2012
/18/2011 & 10/19/2011: Implementation error
/10/2011: Facility power outage, requiring emergency system switch
12/24/2011 & 12/25/2011: Model failure
2/1/2012: DNS issue
2/6/2012: HW issue leading to emergency system switch
/7/2012, 2/8/2012 & 2/9/2012: System performance issue (impact worsened due to system load)
/14/2012 & 2/15/2012: Implementation error
2/16/2012: Hardware issue
2/18/2012: System hardware failure
3/20/2012: Hardware
5/13/2012: Storage failure & emergency system switch
6/9/2012: Model failure
8/1/2012: Power loss at facility and emergency system switch
/14/2012: Job failure
8/16/2012 & 8/17/2012: Hardware repair issue
8/28/2012 & 8/29/2012: System management loading issue
10 WCOSS (NOAA Weather and Climate Operational Supercomputing System): CONTRACT & SYSTEM OVERVIEW
11 WCOSS Contract Overview
Sustains current operational systems during build, acceptance, and transition to the new systems
Bridge contract expires September 2013
One 5-year base period, one 3-year option period, one 2-year transition option period
Provides phased delivery of new Primary and Backup systems in the 5-year base period
Will transition from IBM/Power6/AIX to IBM/Intel/Linux by end of FY13
Provides new sites: Primary in Reston, VA; Backup in Orlando, FL
12 Bridge Contract: Current NOAA Weather and Climate Operational Supercomputer System
Location: Primary in Gaithersburg, MD (IBM-provided facility); Backup in Fairmont, WV (GFE NASA IV&V facility)
Configuration (identical systems per site): IBM Power6/P575/AIX; 73.9 trillion calculations/sec; 5,314 processing cores; 0.8 petabytes of storage
Performance requirements: minimum 99.0% operational use time, 99.0% on-time product generation, 99.0% development use time, 99.0% system availability; failover tested regularly
Significance: where the United States weather forecast process starts, for the protection of lives and livelihood; produces model guidance at global, national, and regional scales (examples: hurricane forecasts, aviation/transportation, air quality, fire weather); contract ensures no gap in operations
Inputs and outputs: processes 3.5 billion observations/day; produces over 15 million products/day
13 WCOSS Contract: Future NOAA Weather and Climate Operational Supercomputer System
Location: Primary in Reston, VA (IBM-provided facility); Backup in Orlando, FL (IBM-provided facility)
Configuration (identical systems per site): IBM iDataPlex/Intel Sandy Bridge/Linux; 208 trillion calculations/sec; 10,048 processing cores; 2.59 petabytes of storage
Performance requirements: minimum 99.9% operational use time, 99.0% on-time product generation, 99.0% development use time, 99.0% system availability; failover tested regularly
Significance: where the United States weather forecast process starts, for the protection of lives and livelihood; produces model guidance at global, national, and regional scales (examples: hurricane forecasts, aviation/transportation, air quality, fire weather)
Inputs and outputs: processes 3.5 billion observations/day; produces over 15 million products/day
14 Key Performance Metrics
Bridge: at least 99.0% operational use time
WCOSS: at least 99.9% operational use time
WCOSS and Bridge: at least 99.0% system availability
Sustain on-time generation of model guidance products at a rate of 99% or better, within 15 minutes of target completion times
No degradation in product delivery to occur during planned, routine failover testing between the Primary and Backup WCOSS
Delivery of committed computing capability upgrades objectively measured and accepted through the use of a benchmark suite
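The 99% metric counts a product as on time when it is generated within 15 minutes of its target completion time. A minimal sketch of how such a rate could be computed (the timestamps are hypothetical, not actual product data):

```python
from datetime import datetime, timedelta

ON_TIME_WINDOW = timedelta(minutes=15)  # contract allowance past the target time

def on_time_rate(products) -> float:
    """products: iterable of (target_time, actual_time) datetime pairs.
    Returns the percentage generated within 15 minutes of target."""
    results = [actual <= target + ON_TIME_WINDOW for target, actual in products]
    return 100.0 * sum(results) / len(results)

# Hypothetical product timestamps, for illustration only:
t = datetime(2012, 8, 1, 12, 0)
sample = [
    (t, t + timedelta(minutes=5)),   # on time
    (t, t + timedelta(minutes=14)),  # on time
    (t, t + timedelta(minutes=30)),  # late
    (t, t - timedelta(minutes=2)),   # early, counts as on time
]
print(on_time_rate(sample))  # 75.0
```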
15 WCOSS Locations [map]
16 System Characteristics (per site)
Bridge System: life cycle Oct-Sep 2013; Power6; AIX; average capability 1.0X; average capacity 1.0X; 5,314 cores; 73.9 TF
WCOSS Phase 1: planned acceptance Dec 2012, FOC Aug 2013; iDataPlex; Linux (RHEL); average capability *2.0X over Bridge P6; average capacity *2.3X over Bridge P6; 10,048 cores; 208 TF
WCOSS Phase 2: planned acceptance Dec 2014, FOC Jul 2015; iDataPlex; Linux (RHEL); average capability *1.9X over Phase 1; average capacity *4.4X over Phase 1; *44,400 cores; *920 TF
*Estimated values; assumes FY13 NWS Reallocation Budget
17 System Characteristics, cont. (per site)
Bridge System: life cycle Oct-Sep 2013; 99.0% operational use time requirement; 0.80 PB usable storage; 689 KW power; 3,100 SF floor space
WCOSS Phase 1: planned acceptance Dec 2012, FOC Aug 2013; 99.9% operational use time requirement; 2.59 PB usable storage; 469 KW power; 4,060 SF floor space
WCOSS Phase 2: planned acceptance Dec 2014, FOC Jul 2015; 99.9% operational use time requirement; *7.2 PB usable storage; *1,050 KW power; 4,060 SF floor space
*Estimated values; assumes FY13 NWS Reallocation Budget
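The quoted multipliers can be sanity-checked against the per-site peak TeraFLOP figures. The asterisked multipliers are benchmark-based estimates, so they need not match peak ratios exactly; this is only illustrative arithmetic:

```python
# Per-site peak TF figures from the characteristics tables:
bridge_tf, phase1_tf, phase2_tf = 73.9, 208.0, 920.0

# Peak ratio of Phase 1 over Bridge (quoted estimated capacity: *2.3X):
print(round(phase1_tf / bridge_tf, 1))  # 2.8

# Peak ratio of Phase 2 over Phase 1 (quoted estimated capacity: *4.4X):
print(round(phase2_tf / phase1_tf, 1))  # 4.4

# Chaining the two quoted per-phase capacity estimates gives a rough
# cumulative multiplier of Phase 2 over the Bridge system:
print(round(2.3 * 4.4, 1))  # 10.1
```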
18 WCOSS HPC Growth: TeraFLOPs per operational system
Built into the contract are capability and storage increases approximately every 3 years, with no increase in funding
NOAA modeling plans rely heavily on built-in contractual increases
[Chart: actual and projected growth]
19 WCOSS (NOAA Weather and Climate Operational Supercomputing System): SCHEDULE & TRANSITION
20 Bridge Contract, WCOSS Contract, and Transition from Bridge to New WCOSS Services
[Timeline chart: FY11-FY14, showing the previous contract, Bridge contract, new WCOSS contract, operations, transition activities, and milestone dates]
Major milestones:
Aug 2011: Awarded 2-year Bridge Contract
Sept 2011: Previous WCOSS contract expired
Nov 2011: Awarded new WCOSS ID/IQ Contract
Apr 2012: Installed early access system & started early testing with new architecture
Sep 2012: Complete installation of Primary system (Reston, VA)
Nov 2012: Complete installation of Backup system (Orlando, FL)
Dec 2012: Acceptance of all systems completed
May 2013: Achieve Authority to Operate (ATO) for new WCOSS (NOAA8866)
July 2013: Complete transition porting, testing, validation, and tuning
Aug 2013: Go operational on new WCOSS (ops on new WCOSS Contract)
Sep 2013: Close out Bridge contract
21 Transition End-to-End Validation
Development Organizations: Porting, Testing and Source Code Certification ($DEV). Branch Chiefs certify the new baseline of software systems and applications. Steps: vertical structure; re-compile; integrate into production scheduler; tuning for resources and timing; post-production / product generation; validate output; downstream testing for other software systems
NCEP Operations & Development Organizations: Production Validation. NCO/PMB Branch certifies operational product equivalence. Steps: vertical structure; dataflow actions; validate network capacity
NCEP Operations & Development Organizations & Customers: Customer Validation. NCO Director certifies. Steps: notify customers; receive customer feedback; make decisions and changes based on customer feedback
GO LIVE, using the WCOSS Transition Test Checklist & Certification Document: Part I, approval for planned changes; Part II, evaluation and certification
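Production validation hinges on demonstrating operational product equivalence between the Bridge and WCOSS systems. Bit-identical output is not expected when moving between architectures and compilers, so a tolerance-based comparison is more realistic. A pure-Python sketch of the idea; the field values and tolerances are illustrative assumptions, not NCO's actual procedure:

```python
from math import isclose

def fields_equivalent(legacy, new, rel_tol=1e-6, abs_tol=1e-9) -> bool:
    """Compare a decoded product field from the legacy (Bridge/AIX) system
    with the same field from the new (WCOSS/Linux) system, element-wise,
    within a numerical tolerance."""
    if len(legacy) != len(new):
        return False
    return all(isclose(a, b, rel_tol=rel_tol, abs_tol=abs_tol)
               for a, b in zip(legacy, new))

# Hypothetical decoded field values (e.g. surface pressure in hPa),
# for illustration only:
legacy_field = [1013.25, 998.40, 1005.10]
new_field    = [1013.25, 998.40, 1005.1000001]
print(fields_equivalent(legacy_field, new_field))  # True
```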
22 WCOSS (NOAA Weather and Climate Operational Supercomputing System): SYSTEM DETAILS
23 Phase 1 and 2 Software Overview
Red Hat Linux OS
IBM xCAT
IBM Parallel Environment, including runtime and developer editions
Platform Computing Load Sharing Facility (LSF)
ECMWF ecFlow workflow manager
IBM General Parallel File System (GPFS)
Intel Cluster Studio
Rogue Wave TotalView debugger
IDL
Community-supported software, e.g., BUFR, GRIB, netCDF, GEMPAK, GIS, Google Maps, GrADS, NCAR Graphics, Vis5D, Subversion
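LSF and ecFlow together coordinate job ordering across the production suite: each task runs only after its upstream dependencies complete. As a concept-only illustration of dependency-driven ordering (pure stdlib Python with hypothetical task names; this is not the ecFlow API):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical production-suite dependency graph, for illustration only:
# each task maps to the set of tasks that must finish before it starts.
suite = {
    "decode_obs": set(),
    "data_assimilation": {"decode_obs"},
    "model_forecast": {"data_assimilation"},
    "post_processing": {"model_forecast"},
    "product_generation": {"post_processing"},
}

# Resolve a valid execution order, as a workflow manager conceptually does:
order = list(TopologicalSorter(suite).static_order())
print(order)
```

For this linear chain the resolved order is fully determined; real suites have fan-out and fan-in, which the same topological-sort idea still handles.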
24 System Architecture
IBM dx360 M4 iDataPlex servers based on the Intel Sandy Bridge processor
Clustered with a Mellanox InfiniBand FDR fabric
Cluster components include the compute nodes, login nodes, high-performance interconnect, and the management infrastructure
Configured with hot spares for high system availability
Storage based on the IBM DCS3700 disk subsystem and GPFS
Includes services covering implementation, acceptance testing, and training for the transition from the current to new systems
25 Phase 1 System Architecture [diagram]
26 Phase 2 System Architecture [diagram]
27 WCOSS (NOAA Weather and Climate Operational Supercomputing System): WRAP-UP
28 Transition Key Considerations
Schedule constraint with high-impact consequences: transition must finish before the Bridge contract expires in September 2013; current operational systems are at peak load and nearing end of life
Resource dependencies external to this investment: transition is highly dependent on resources from the NOAA Data Assimilation & Modeling/Science investment
Technical: NOAA is transitioning operational software systems/applications from legacy systems (Bridge) to a new, different architecture (WCOSS); technical challenges with integration of IBM's new hardware and system software architecture
29 Summary
System growth curve delayed, but resuming and accelerating (for now)
Model implementations continued at the cost of system recoverability
Contract architecture allows for agile response to budget changes
System architecture designed with agility to match contract capability
30 WCOSS (NOAA Weather and Climate Operational Supercomputing System): QUESTIONS
More informationHPC Architectures. Types of resource currently in use
HPC Architectures Types of resource currently in use Reusing this material This work is licensed under a Creative Commons Attribution- NonCommercial-ShareAlike 4.0 International License. http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_us
More informationHow to Derive Value from Business Continuity Planning
How to Derive Value from Continuity Planning Presented by Randall J. Till, Principal Till Continuity Group Spring World 2011 Disaster Recovery Journal March 28, 2011 1 BCM Challenges BCM funding is limited
More informationPreparing GPU-Accelerated Applications for the Summit Supercomputer
Preparing GPU-Accelerated Applications for the Summit Supercomputer Fernanda Foertter HPC User Assistance Group Training Lead foertterfs@ornl.gov This research used resources of the Oak Ridge Leadership
More informationIntegrated Water Resources Science and Services (IWRSS)
TOO MUCH POOR QUALITY TOO LITTLE Integrated Water Resources Science and Services (IWRSS) Collaborative Science, Services and Tools to Support Integrated and Adaptive Water Resources Management April, 2011
More informationEmerging Technologies for HPC Storage
Emerging Technologies for HPC Storage Dr. Wolfgang Mertz CTO EMEA Unstructured Data Solutions June 2018 The very definition of HPC is expanding Blazing Fast Speed Accessibility and flexibility 2 Traditional
More informationGOES R Ground Segment Project Update
GOES R Ground Segment Project Update Vanessa Griffin Giffi Project Manager, GOES R Ground Segment Project Atlanta, Georgia 2010 GOES R Ground Segment Objectives Support the primary mission operations goals
More informationBill Boroski LQCD-ext II Contractor Project Manager
Bill Boroski LQCD-ext II Contractor Project Manager boroski@fnal.gov Robert D. Kennedy LQCD-ext II Assoc. Contractor Project Manager kennedy@fnal.gov USQCD All-Hands Meeting Jefferson Lab April 28-29,
More informationIBM Enterprise X-Architecture Technology
Mainframe-inspired technologies for industry-standard servers IBM Enterprise X-Architecture Technology Highlights Innovative technology provides revolutionary scalability to help stretch your IT dollars
More informationThe Energy Challenge in HPC
ARNDT BODE Professor Arndt Bode is the Chair for Computer Architecture at the Leibniz-Supercomputing Center. He is Full Professor for Informatics at TU Mü nchen. His main research includes computer architecture,
More informationIBM CORAL HPC System Solution
IBM CORAL HPC System Solution HPC and HPDA towards Cognitive, AI and Deep Learning Deep Learning AI / Deep Learning Strategy for Power Power AI Platform High Performance Data Analytics Big Data Strategy
More informationIBM HPC DIRECTIONS. Dr Don Grice. ECMWF Workshop November, IBM Corporation
IBM HPC DIRECTIONS Dr Don Grice ECMWF Workshop November, 2008 IBM HPC Directions Agenda What Technology Trends Mean to Applications Critical Issues for getting beyond a PF Overview of the Roadrunner Project
More informationData storage services at KEK/CRC -- status and plan
Data storage services at KEK/CRC -- status and plan KEK/CRC Hiroyuki Matsunaga Most of the slides are prepared by Koichi Murakami and Go Iwai KEKCC System Overview KEKCC (Central Computing System) The
More informationMellanox InfiniBand Training IB Professional, Expert and Engineer Certifications
About Mellanox On-Site courses Mellanox Academy offers on-site customized courses for maximum content flexibility to make the learning process as efficient and as effective as possible. Content flexibility
More informationThe Last Bottleneck: How Parallel I/O can improve application performance
The Last Bottleneck: How Parallel I/O can improve application performance HPC ADVISORY COUNCIL STANFORD WORKSHOP; DECEMBER 6 TH 2011 REX TANAKIT DIRECTOR OF INDUSTRY SOLUTIONS AGENDA Panasas Overview Who
More informationOSIsoft at Lawrence Livermore National Laboratory
OSIsoft at OSIsoft Federal Workshop 2014 June 3, 2014 Marriann Silveira, PE This work was performed under the auspices of the U.S. Department of Energy by under Contract DE-AC52-07NA27344. Lawrence Livermore
More informationLRZ SuperMUC One year of Operation
LRZ SuperMUC One year of Operation IBM Deep Computing 13.03.2013 Klaus Gottschalk IBM HPC Architect Leibniz Computing Center s new HPC System is now installed and operational 2 SuperMUC Technical Highlights
More informationDoD Environmental Security Technology Certification Program (ESTCP) Tim Tetreault DoD August 15, 2017
DoD Energy Testbed DoD Environmental Security Technology Certification Program (ESTCP) Tim Tetreault DoD August 15, 2017 Tampa Convention Center Tampa, Florida About ESTCP Established in 1995 to: Improve
More informationIBM Cluster Systems Management V1.7 extends hardware and operating system support
IBM United States Announcement 207-276, dated November 6, 2007 IBM Cluster Systems Management V1.7 extends hardware and operating system support Reference information... 2 Technical information...2 Ordering
More informationBrutus. Above and beyond Hreidar and Gonzales
Brutus Above and beyond Hreidar and Gonzales Dr. Olivier Byrde Head of HPC Group, IT Services, ETH Zurich Teodoro Brasacchio HPC Group, IT Services, ETH Zurich 1 Outline High-performance computing at ETH
More information