CERN openlab II CERN openlab and Intel: Today and Tomorrow Sverre Jarp CERN openlab CTO 16 September 2008

Overview of CERN

What is CERN? CERN is the world's largest particle physics centre. Particle physics is about:
- elementary particles, the constituents all matter in the Universe is made of
- fundamental forces which hold matter together
Particle physics requires special tools to create and study new particles:
- Accelerators
- Particle Detectors
- Powerful computers
CERN is also:
- 2500 staff (physicists, engineers, technicians, ...)
- some 6500 visiting scientists (half of the world's particle physicists), coming from 500 universities and representing 80 nationalities

The CERN Site [Aerial view: the CERN sites between downtown Geneva and Mont Blanc (4810 m), with the locations of the ALICE, ATLAS, CMS and LHCb experiments.]

CERN underground

What is the LHC? The Large Hadron Collider can collide beams of protons at an energy of 14 TeV (inaugurated on 10 September!). Using the latest superconducting technologies, it operates at about -271 °C, just above absolute zero. Four experiments, with detectors as big as cathedrals: ALICE, ATLAS, CMS, LHCb. With its 27 km circumference, the accelerator is the largest superconducting installation in the world.

ATLAS: a general-purpose LHC detector weighing 7000 tons.

Particle Physics - 101: the four LHC experiments (ALICE, ATLAS, CMS, LHCb).

Data management and computing

LHC data (simplified), per experiment:
- 40 million beam interactions per second
- After filtering, 100 collisions of interest per second
- A Megabyte of digitized information for each collision = recording rate of 0.1 Gigabytes/sec
- 1 billion collisions recorded = 1 Petabyte/year
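As a quick sanity check of these rates, the sketch below (Python, not part of the original slides) redoes the arithmetic using the rounded values quoted above; the constants are the slide's simplified figures, not official experiment parameters.

```python
# Rough sketch of the per-experiment LHC data-rate arithmetic quoted above.
# All inputs are the rounded slide values, not official experiment figures.

EVENT_SIZE_MB = 1.0          # ~1 Megabyte of digitized information per collision
FILTERED_RATE_HZ = 100       # ~100 collisions of interest per second after filtering
COLLISIONS_PER_YEAR = 1e9    # ~1 billion collisions recorded per year

recording_rate_gb_s = FILTERED_RATE_HZ * EVENT_SIZE_MB / 1000   # MB/s -> GB/s
annual_volume_pb = COLLISIONS_PER_YEAR * EVENT_SIZE_MB / 1e9    # MB -> PB

print(f"Recording rate: {recording_rate_gb_s:.1f} GB/s")   # ~0.1 GB/s
print(f"Annual volume : {annual_volume_pb:.1f} PB/year")    # ~1 PB/year
```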

Computing at CERN today: high-throughput computing based on reliable commodity technology. About 3500 multi-socket, multi-core PC servers running Linux; more than 10 Petabytes of data on tape, 30% cached on disk. Nowhere near enough!

Expected power evolution: demand will grow continuously through the LHC era. [Chart: input power evolution in MW, 2007-2020, split into processor power, disk power and other services, compared against the power limit of the present Computer Centre.]

LHC Computing Grid: the largest Grid service in the world! Almost 200 sites in 39 countries, 100 000 IA processor cores (running Linux), tens of petabytes of storage.

Background to the CERN openlab. Information Technology has ALWAYS moved at an incredible pace. During the LEP era (1989-2001) CERN changed its computing infrastructure twice: Mainframes (1x) -> RISC servers (30x) -> PC servers (1000x). In openlab, we collaborate to harness the advantages of a continuous set of innovations for improving scientific computing, such as: 10 Gigabit networks, 64-bit computing, Virtualization, Performance improvements (Moore's law) in HW and SW, Many-core throughput increase, Thermal optimization. We work with a long-term perspective: LHC will operate until at least 2020!

The CERN and Intel collaboration in openlab

Platform Competence Centre. Related to today's Xeon 7400 announcement. Intel-related activities:
- Thermal optimization (servers and the entire Computer Centre)
- Virtualization
- Multi-core performance/throughput improvements
- 10 Gb networking
- Sneak peeks at new technologies

Thermal management. [Chart: comparison of Netburst vs. Core2 Quad in SPECint/Watt and SPECint/(Watt*GHz), with differences of a factor of 5.76 and 4.8.]
- Joint white paper on Computer Centre efficiency: based on issues with the existing building; just issued, see next slide.
- Complementary project to understand the thermal characteristics of each server component: processors (frequencies and SKUs), memory (type and size), disks, I/O cards, power supplies.
- Project for the new Computer Centre: understand all relevant issues (before starting); aim at 2.5 + 2.5 MW.
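For illustration, the sketch below computes the two efficiency metrics used in the Netburst vs. Core2 comparison, SPECint/Watt and SPECint/(Watt*GHz), for two hypothetical processors; the scores, power draws and clock frequencies are made-up placeholders, not the measured values behind the chart.

```python
# Minimal sketch of the efficiency metrics behind the Netburst vs. Core2 comparison.
# All numbers are illustrative placeholders, not the measured values from the study.

def efficiency(specint, watts, ghz):
    """Return performance-per-watt and per-watt-per-GHz for one processor."""
    return {
        "specint_per_watt": specint / watts,
        "specint_per_watt_ghz": specint / (watts * ghz),
    }

old = efficiency(specint=10.0, watts=110.0, ghz=3.6)   # Netburst-class placeholder
new = efficiency(specint=25.0, watts=50.0, ghz=2.6)    # Core2-class placeholder

for metric in ("specint_per_watt", "specint_per_watt_ghz"):
    factor = new[metric] / old[metric]
    print(f"{metric}: improvement factor {factor:.2f}")
```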

Joint white paper just issued: "Reducing Data Center Energy Consumption", http://download.intel.com/products/processor/xeon5000/cern_whitepaper_r04.pdf

Virtualization. Multiple aims:
- Server consolidation
- System testing (used in LCG)
- Improved flexibility and security, also in the Grid
- Personalization of images
Also: development of complementary tools, OSFarm and the Content-Based Transfer mechanism. [Chart: hypothetical maximum vs. measured bandwidth.] [Chart: comparative performance of hypervisors/kernels (KVM 2.6.23-rc3, Xen PVM 2.6.18-xen, Xen HVM SLC4 SMP, 2.6.18, SLC4 2.6.9-55.0.2.EL.cernsmp, 2.6.18 non-SMP); normalized performance on ROOT real time, ROOT CPU time, ROOTMARKS, Geant4 and MMU tests.]
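The hypervisor comparison above is expressed as performance normalized to a bare-metal kernel; a minimal sketch of that normalization, with invented benchmark names and scores rather than the measured KVM/Xen results, could look like this:

```python
# Sketch: normalize benchmark results of several hypervisors/kernels against a
# bare-metal reference, as in the comparative performance chart above.
# All names and scores are invented placeholders (higher = better).

baseline = {"ROOT stress": 100.0, "Geant4 sim": 200.0}   # bare-metal reference

candidates = {
    "KVM":     {"ROOT stress": 96.0, "Geant4 sim": 188.0},
    "Xen PVM": {"ROOT stress": 98.0, "Geant4 sim": 194.0},
    "Xen HVM": {"ROOT stress": 90.0, "Geant4 sim": 176.0},
}

for name, scores in candidates.items():
    normalized = {b: 100.0 * scores[b] / baseline[b] for b in baseline}
    summary = ", ".join(f"{b}: {v:.0f}%" for b, v in normalized.items())
    print(f"{name:8s} {summary}")
```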

Cores: from Multi to Many. Our high-throughput computing is ideally suited: independent processes can run on each core, provided that main memory is added and bandwidth to main memory remains reasonable. In openlab, we have had early access to multiple generations: Woodcrest, Clovertown, Harpertown, Dunnington; Montecito. Already in November 2006, we were proud to be part of Intel's movement to Quad-core, and all recent acquisitions have been Quad-core systems. And we are ready for the next step! [Diagram: possible evolution from dual-core to multi-core to many-core arrays and many/mixed cores.]
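The throughput model described here, one independent job per core rather than a single parallel program spanning cores, can be sketched with Python's standard multiprocessing module; the per-event workload below is a dummy stand-in for a real reconstruction or simulation job.

```python
# Sketch of the high-throughput model: independent worker processes, one per core,
# each handling its own share of events. The workload is a dummy placeholder.
import multiprocessing as mp

def process_events(event_range):
    """Stand-in for an independent physics job (reconstruction, simulation, ...)."""
    return sum(e * e % 7 for e in event_range)   # dummy CPU-bound work

if __name__ == "__main__":
    cores = mp.cpu_count()
    # Split a batch of events into one independent chunk per core.
    chunks = [range(i, 1_000_000, cores) for i in range(cores)]
    with mp.Pool(processes=cores) as pool:
        results = pool.map(process_events, chunks)
    total = sum(len(c) for c in chunks)
    print(f"{cores} cores processed {total} events (checksum {sum(results)})")
```

Scaling this model to more cores only requires enough memory and memory bandwidth per additional worker, which is exactly the caveat stated on the slide.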

Multicore comparisons

Conclusions. As a flagship scientific instrument, the Large Hadron Collider and the corresponding Computing Grid will be around for 15 years. They will rely on continued innovation in Information Technology: thermal improvements, multicore throughput improvements, virtualization improvements. Today's Xeon announcement is a great step in this direction!

CERN and Intel in openlab: People, Motivation, Technology, Science, Innovation.

Backup