Tackling tomorrow's computing challenges today at CERN. Maria Girone, CERN openlab CTO


2 Tackling tomorrow's computing challenges today at CERN. Maria Girone, CERN openlab CTO

3 CERN is the European Laboratory for Particle Physics. CERN openlab CTO

4 The laboratory straddles the Franco-Swiss border near Geneva.

5 A worldwide endeavour: 22 member states, 8 associate member states (including those in the pre-stage to membership), 3 observers, plus cooperation agreements. Budget (2017): 1100 MCHF. It has 22 member states and supports a global community of 15,000 researchers.

6 Looking for antimatter; understanding the very first moments of our Universe after the Big Bang; understanding dark matter. These researchers are probing the fundamental structure of the Universe.

7 Advance the frontiers of knowledge, e.g. the secrets of the Big Bang: what was matter like within the first moments of the Universe's existence? Develop new technologies for accelerators and detectors: information technology (the Web and the Grid), and medicine (diagnosis and therapy). Train the scientists and engineers of tomorrow. Unite people from different countries and cultures. CERN's mission: research, technology, education, and collaboration.

8 Tackling tomorrow's computing challenges today at CERN. CERN openlab CTO

9 The LHC is the world's largest and most powerful particle accelerator.

10 CMS ALICE ATLAS LHCb The LHC is the world's largest and most powerful particle accelerator.

11 CMS ALICE ATLAS LHCb It is built around 100 m underground and has a circumference of 27 km.

12 CMS ALICE ATLAS LHCb The particles are accelerated to close to the speed of light.

13 The fastest racetrack on the planet; the most powerful magnets; the most sophisticated detectors ever built; the highest vacuum; the hottest spots in the galaxy; colder temperatures than outer space. The LHC is a machine of records!

14 What may come next.

15 Tackling tomorrow's computing challenges today at CERN. CERN openlab CTO

16 The detectors are like gigantic digital cameras built in cathedral-sized caverns.

17 Experiments are run by collaborations of scientists from institutes all over the world.

18 ATLAS: 46 m long, 25 m diameter, weighing thousands of tonnes, with 100 million electronic channels and thousands of kilometres of cables. CMS: 22 m long, 15 m diameter, also weighing thousands of tonnes, with the most powerful superconducting solenoid ever built. Two general-purpose detectors cross-confirm discoveries, such as the Higgs boson.

19 ALICE studies the «quark-gluon plasma», a state of matter which existed moments after the Big Bang. LHCb studies the behaviour difference between the b quark and the anti-b quark to explain the matter-antimatter asymmetry in the Universe. The ALICE and LHCb experiments have detectors specialised in studying specific phenomena.

20 Tackling tomorrow's computing challenges today at CERN. CERN openlab CTO

21 Collisions generate particles that decay in complex ways into even more particles.

22 Up to about 1 billion particle collisions can take place every second.

23 Data are generated 40 million times per second (PB/s); 100,000 selections per second (TB/s); 1,000 selections per second (GB/s). This can generate up to a petabyte of data per second. The data are filtered in real time, selecting potentially interesting events (the trigger).
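
As a rough cross-check of these orders of magnitude, here is a short Python sketch; the ~1 MB per selected event used below is an assumed, illustrative figure, not an official experiment number.

```python
# Back-of-the-envelope throughput at each trigger stage.
# Assumptions (illustrative only): ~1 MB recorded per selected event;
# stage rates as quoted on the slide. The raw, unfiltered detector stream
# is far larger per event, which is what pushes the front end towards PB/s.

EVENT_SIZE_BYTES = 1e6   # assumed ~1 MB per event

stages = {
    "collisions, 40 MHz": 40e6,
    "after hardware selection, 100 kHz": 100e3,
    "after software selection, 1 kHz": 1e3,
}

for name, rate_hz in stages.items():
    throughput_gb_s = rate_hz * EVENT_SIZE_BYTES / 1e9
    print(f"{name:35s} -> {throughput_gb_s:10.1f} GB/s")
```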

24 Tackling tomorrow's computing challenges today at CERN. CERN openlab CTO

25 The CERN data centre processes hundreds of petabytes of data every year.

26 MEYRIN CENTRE (CH): 300,000 processor cores, 180 PB on disk, 230 PB on tape. WIGNER CENTRE (H): 100,000 processor cores, 100 PB on disk. The two centres are connected by three 100 Gb/s fibre-optic links. CERN's data centre in Meyrin is the heart of the laboratory's computing infrastructure.

27 MEYRIN CENTRE (CH): 300,000 processor cores, 180 PB on disk, 230 PB on tape. WIGNER CENTRE (H): 100,000 processor cores, 100 PB on disk. The two centres are connected by three 100 Gb/s fibre-optic links. The Wigner data centre in Budapest serves as an extension to the one in Meyrin.

28 Tackling tomorrow's computing challenges today at CERN. CERN openlab CTO

29 Physicists must sift through the petabytes of data produced annually by the LHC experiments.

30 Online (real time): collisions at ~40 MHz (~PB/s) pass the L1 hardware trigger (~100 kHz) and the high-level software trigger (~1 kHz); raw data flow to the WLCG at ~1-10 GB/s for offline, asynchronous processing. Physicists must sift through the petabytes of data produced annually by the LHC experiments.

31 Tier-0 (CERN and Hungary): data recording, reconstruction and distribution. Tier-1: permanent storage, re-processing, analysis. Tier-2: simulation, end-user analysis. The Worldwide LHC Computing Grid integrates computer centres worldwide, combining computing and storage resources into a single infrastructure accessible by all LHC physicists. The WLCG gives thousands of physicists across the globe near real-time access to the data.

32 With 170 computing centres in 42 countries, the WLCG is the grid that never sleeps!

33 The size of the WLCG: ~170 sites in 42 countries, ~1M CPU cores, ~1 EB of storage, more than 2 million jobs per day, ~3 PB of data moved per day, multi-Gb/s links including 340 Gb/s of transatlantic capacity.
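
To turn the daily figures into sustained rates, a quick, purely arithmetical Python sketch using the numbers quoted above:

```python
# Average rates implied by the WLCG scale figures (illustrative arithmetic only).
PB = 1e15                    # bytes per petabyte (decimal)
SECONDS_PER_DAY = 24 * 3600

daily_volume_bytes = 3 * PB                  # ~3 PB moved per day
avg_gbps = daily_volume_bytes * 8 / SECONDS_PER_DAY / 1e9
print(f"3 PB/day  ~ {avg_gbps:.0f} Gb/s sustained on average")      # ~278 Gb/s

jobs_per_day = 2e6                           # >2 million jobs per day
print(f"2M jobs/day ~ {jobs_per_day / SECONDS_PER_DAY:.0f} jobs started per second")
```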

34 Making hundreds of petabytes of data accessible globally to scientists is one of the biggest challenges of the WLCG. [Diagram: WLCG sites, commercial clouds and HPC centres, each with compute, storage or caches, connected by 1 to 10 Tb links.] Data Organization, Management and Access in the WLCG.

35 Tackling tomorrow's computing challenges today at CERN. CERN openlab CTO

36 The LHC has been designed to follow a carefully set out programme of upgrades.

37 Run 3: ALICE & LHCb upgrades. Run 4: ATLAS & CMS upgrades. The planned upgrades will greatly increase the scientific reach.

38 The rate of new physics is tiny: selecting a new-physics event is like choosing 1 grain of sand in 20 volleyball courts. More collisions help physicists to observe rare processes and study them with greater precision.

39 CMS: an event from 2017 with 78 reconstructed vertices. ATLAS: a simulation for the HL-LHC with 200 vertices. The HL-LHC will bring more collisions and more complex data.

40 LHCb and ALICE will move offline processing closer to the online data-collection chain, performing processing and data analysis in near real time. Solutions under investigation include new HLT farms for Run 3 and flexible, efficient facilities with an ambitious PUE ratio. The ALICE and LHCb experiments will increase their data acceptance rates for Run 3.

41 By Run 4, the detectors will become more granular and more radiation hard. Reconstructing more particles with more granular detectors will be computationally more expensive. The ATLAS and CMS experiments will be significantly upgraded for the HL-LHC.

42 [ATLAS Preliminary plots: CPU resources (kHS06 × 1000) and disk storage (PB) versus year, comparing resource needs under the 2017 computing model with a flat budget model (+20%/year for CPU, +15%/year for disk) across Run 2, Run 3 and Run 4.] Using current techniques, the required computing capacity increases many times over.

43 [The same ATLAS Preliminary resource projections.] Data storage needs are expected to be of the order of exabytes by this time.

44 [The same ATLAS Preliminary projections, annotated with the factor 4 and factor 8 gaps between projected needs and a flat budget.] It is vital to explore new technologies and methodologies.
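
A minimal sketch of why the gap opens up, assuming (for illustration only) that a flat budget buys roughly 20% more capacity each year while needs grow to several times today's capacity by the HL-LHC:

```python
# Illustrative only: these are assumed numbers, not official ATLAS projections.
YEARS = 10                  # horizon to the HL-LHC era (assumed)
ANNUAL_GAIN = 0.20          # capacity growth per year at flat cost (assumed)
NEEDS_FACTOR = 8            # needs relative to today's capacity (assumed)

flat_budget = (1 + ANNUAL_GAIN) ** YEARS
print(f"Flat budget buys ~{flat_budget:.1f}x today's capacity after {YEARS} years")
print(f"Needs of {NEEDS_FACTOR}x leave a shortfall of ~{NEEDS_FACTOR / flat_budget:.1f}x")
```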

45 Tackling tomorrow's computing challenges today at CERN. CERN openlab CTO


47 Technology evolution and improvements: improvements in hardware performance and capacity. Software innovation, new architectures, techniques and methods: innovation and revolutionary thinking. Closing the resource gap in the next decade requires close collaboration with industry.

48 Management, joint R&D, innovation & knowledge transfer, communication, education: CERN openlab is a unique science-industry partnership, fostering research and innovation.

49 Computing challenges: scale out capacity with public clouds, HPC and new architectures; increase data centre performance with hardware accelerators (FPGAs, GPUs, ...) and optimized software; develop new techniques with machine learning, deep learning and advanced data analytics. Three main areas of research and development.

50 Tackling tomorrow's computing challenges today at CERN. CERN openlab CTO

51 Data centre technologies and infrastructures.

52 Faced with a resource gap of this magnitude: 1. fully exploit available hardware; 2. expand dynamically to new computing environments. Data centre technologies and infrastructures.

53 CERN is one of the early adopters of, and largest contributors to, OpenStack. 90% of the resources (320k cores) are provided through a private cloud, which allows flexible and dynamic deployment; CERN is moving to containers for even more flexibility, with current investigations within CERN openlab. [Diagram: end users, volunteer computing, batch systems (HTCondor, LSF), experiment pilot factories, public cloud, bare metal and HPC feed into CI/CD, containers, APIs, CLIs, GUIs, IT & experiment services and VMs, all on top of OpenStack resource provisioning spanning more than one physical data centre.] Layered, virtualized services provide flexibility and efficiency.
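
For readers unfamiliar with OpenStack, here is a minimal sketch of what "flexible and dynamic deployment" looks like programmatically, using the openstacksdk Python client; the cloud, image, flavor and network names are hypothetical placeholders, not CERN's actual configuration.

```python
import openstack

# Connect using credentials from a clouds.yaml entry named "mycloud" (hypothetical).
conn = openstack.connect(cloud="mycloud")

# Look up an image, flavor and network by name (all placeholder names).
image = conn.compute.find_image("CC7-x86_64")
flavor = conn.compute.find_flavor("m2.medium")
network = conn.network.find_network("internal")

# Boot a virtual machine and wait until it is active.
server = conn.compute.create_server(
    name="batch-worker-001",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```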

54 [Plots: 300k cores; 80k cores.] Experiments have demonstrated that it is possible to elastically and dynamically expand production resources to commercial clouds. Large-scale tests with commercial clouds.

55 Joint procurement of R&D cloud services for scientific research.

56 HPC facilities are significant resources and are being tested by the experiments; they are optimized for highly parallel applications. [Plots: cores by processing type (MC simulation, MC reconstruction, grid, data derivation, HLT, cloud) and cores by resource type, reaching ~500k.] ATLAS reached more than 200k traditional x86 HPC cores for simulation workflows, and Tier-0 ran smoothly on 23k cores. All experiments are exploring the use of heterogeneous HPC architectures, and CERN will partner with EU-PRACE on optimizing the use of HPC resources. Demonstrations with large-scale, dedicated HPC resources, too.

57 DEEP-EST: Dynamical Exascale Entry Platform - Extreme Scale Technologies CERN is a partner of DEEP-EST, a blueprint project for heterogeneous HPC systems.

58 Tackling tomorrow's computing challenges today at CERN. CERN openlab CTO

59 Computing performance and software. Exploiting heterogeneous resources.

60 HEP has a vast investment in software. Significant effort is going into efficient multi-threaded and vectorized CPU code. Accelerated computing devices (GPUs, FPGAs) offer a different model, with the added complexity of heterogeneous architectures. Lower-performance but lower-power alternatives like ARM are being explored simultaneously (figure courtesy of C. Leggett, LBNL). Software optimization can gain factors in performance.
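
A toy illustration of what vectorization buys (not HEP production code): computing transverse momentum for a million tracks with a plain Python loop versus a single NumPy expression.

```python
import math
import time

import numpy as np

# Toy example: pT = sqrt(px^2 + py^2) for one million tracks.
rng = np.random.default_rng(42)
px = rng.normal(size=1_000_000)
py = rng.normal(size=1_000_000)

t0 = time.perf_counter()
pt_loop = [math.sqrt(x * x + y * y) for x, y in zip(px, py)]   # interpreted loop
t1 = time.perf_counter()
pt_vec = np.sqrt(px ** 2 + py ** 2)                            # vectorized
t2 = time.perf_counter()

print(f"python loop: {t1 - t0:.3f} s, vectorized: {t2 - t1:.3f} s")
print("results agree:", np.allclose(pt_loop, pt_vec))
```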

61 The landscape is shifting at all levels.

62 Higher data rates require more selective triggering and faster reconstruction. LHCb is investigating FPGAs and GPUs to allow reconstruction of 5 GB/s of events in real time. CMS is porting heavy offline tasks to real-time processing for the HL-LHC, integrating GPUs in the HLT farm to give high-quality reconstruction within 100 ms latency (as opposed to tens of seconds). [Diagram (LHCb dataflow): detector readout at 5 TB/s, HLT1 partial reconstruction, HLT2 full reconstruction, ~5 GB/s output split into 5% FULL, 85% TURBO & real-time analysis, and 10% CALIB streams.] Exploiting co-processors for software-based filtering and real-time reconstruction.
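
A quick sanity check on what a 100 ms latency budget implies for the size of such a farm, using Little's law with an assumed 100 kHz input rate to the software trigger (illustrative numbers only):

```python
# Little's law: events in flight = arrival rate x time spent per event.
input_rate_hz = 100e3    # events/s entering the software trigger (assumed)
latency_s = 0.100        # reconstruction latency per event (assumed)

in_flight = input_rate_hz * latency_s
print(f"~{in_flight:,.0f} events must be reconstructed concurrently")  # ~10,000
```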

63 CERN openlab is engaging in quantum computing with industry. Quantum computing could substantially speed up the training of deep-learning models and combinatorial searches; it is well suited for fitting, minimization and optimization; and it can directly describe basic interactions as well as lattice QCD calculations. Quantum computing is also on the horizon.
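
None of those applications reduce to a toy example, but as a hint of the programming model, here is the "hello world" of quantum computing written with Qiskit: a circuit preparing a two-qubit Bell state.

```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                       # put qubit 0 into superposition
qc.cx(0, 1)                   # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])    # measurement yields 00 or 11 with equal probability

print(qc.draw())
```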

64 Tackling tomorrow's computing challenges today at CERN. CERN openlab CTO

65 Machine learning and advanced data analytics.

66 Experiment and accelerator operations face challenges similar to industrial applications, with a multitude of industrial control systems (cooling & ventilation, vacuum, cryogenics, gas, electric grid, LHC circuits, QPS, WIC, PIC). The health of detector and accelerator infrastructure needs to be monitored, the quality of produced data needs to be validated, and resource usage needs to be optimized. CERN is working with industry partners to deploy similar techniques and automation. Monitoring, automation and anomaly detection.
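
A minimal sketch of the kind of anomaly detection referred to here, using scikit-learn's IsolationForest on an invented single-channel sensor series (real control-system data has many correlated channels and domain-specific preprocessing):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic "sensor" readings: mostly nominal values plus a few injected faults.
rng = np.random.default_rng(0)
nominal = rng.normal(loc=20.0, scale=0.5, size=(1000, 1))   # e.g. a temperature channel
faults = np.array([[25.0], [14.0], [30.0]])
readings = np.vstack([nominal, faults])

model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(readings)          # -1 marks suspected anomalies

flagged = readings[labels == -1].ravel()
print(f"flagged {flagged.size} readings, e.g. {flagged[:5]}")
```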

67 With current software and computers, reconstructing an HL-LHC-like event takes tens of seconds. Identification: examine the detector hit information and use 3D image-recognition techniques to identify objects, recognizing physics objects from patterns learned from the input images. This might dramatically increase the speed of reconstruction. Exploring image recognition for reconstruction and object identification.
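
A minimal sketch of treating detector hits as images and classifying them with a small convolutional network (Keras, trained here on random stand-in data purely to show the shape of the approach):

```python
import numpy as np
import tensorflow as tf

# Stand-in "detector images": 32x32 single-channel arrays with random content,
# and random labels for three hypothetical object classes. Illustrative only.
x = np.random.rand(256, 32, 32, 1).astype("float32")
y = np.random.randint(0, 3, size=256)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),   # three object classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=1, batch_size=32, verbose=1)
```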

68 Main R&D areas: adapting the existing code to new computing architectures; replacing complex algorithms with deep-learning approaches (fast simulation); looking at adversarial networks to improve speed without giving up the accuracy of simulated events, where one network attempts to simulate events that match a data distribution while a second network tries to distinguish data from simulation. Simulation is one of the most resource-intensive computing applications.
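
A compact sketch of the adversarial idea described above, with a one-dimensional Gaussian standing in for a simulated detector quantity (invented toy data; real fast-simulation GANs generate full shower images):

```python
import numpy as np
import tensorflow as tf

latent_dim = 8

# Generator: maps random noise to candidate "simulated" values.
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(latent_dim,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
# Discriminator: tries to tell real samples from generated ones.
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Stacked model used to train the generator; the discriminator is frozen here.
discriminator.trainable = False
gan = tf.keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

rng = np.random.default_rng(0)
for step in range(500):
    real = rng.normal(loc=3.0, scale=0.5, size=(64, 1))        # target distribution
    fake = generator.predict(rng.normal(size=(64, latent_dim)), verbose=0)
    discriminator.train_on_batch(np.vstack([real, fake]),
                                 np.vstack([np.ones((64, 1)), np.zeros((64, 1))]))
    # Train the generator to fool the discriminator (labels flipped to "real").
    gan.train_on_batch(rng.normal(size=(64, latent_dim)), np.ones((64, 1)))

samples = generator.predict(rng.normal(size=(1000, latent_dim)), verbose=0)
print(f"generated mean ~{samples.mean():.2f}, std ~{samples.std():.2f} (target 3.0, 0.5)")
```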

69 Tackling tomorrow's computing challenges today at CERN. CERN openlab CTO

70 CERN is collaborating with other communities who share similar computing challenges.

71 The Square Kilometre Array (SKA) observatory's two telescopes, in South Africa and Western Australia, will enable astronomers to study the sky in unprecedented detail. The first phase will be operational in the mid-2020s; the observatory will function for 50 years. A joint exascale data-storage and processing challenge between the HL-LHC and SKA.

72 CERN-MEDICIS: production of innovative isotopes for medical research. Accelerator design for future hadron-therapy facilities. Medical imaging. Dosimetry. Computing & simulation for health applications (BioDynaMo). Accelerating innovation and knowledge transfer to medical applications.

73 Tackling tomorrow's computing challenges today at CERN. CERN openlab CTO

74 Tackling tomorrow's computing challenges today at CERN. CERN has been pushing the boundaries of knowledge and technology for more than 60 years. The next phase of the programme will include unprecedented computing challenges. We look forward to tackling these challenges through open collaboration and innovation with industry and other scientific communities. CERN openlab CTO

75 Tackling tomorrow's computing challenges today at CERN. «Magic is not happening at CERN, magic is being explained at CERN.» Tom Hanks. Thank you! Credit for the slide layout to Andrew Purcell, CERN. CERN openlab CTO

76 Tackling tomorrow's computing challenges today at CERN. Home page: home.cern. Whitepaper: openlab.cern/whitepaper. Education: openlab.cern/education. CERN openlab CTO

77 Tackling tomorrow's computing challenges today at CERN. Backup slides. CERN openlab CTO

78 The introduction of NVRAM provides much faster access to data and applications in memory. High-performance, long-endurance SSDs improve access to large datasets. Low-cost, high-capacity SSDs could revolutionize long-term storage. Larger spinning disks bring us to exabytes of storage. Storage has many more layers and improvements in access.

79 Raw data: was a detector element hit? How much energy? What time? Reconstructed data: momentum of tracks (4-vectors), origin of collision, energy in clusters (jets), type of particle, calibration information. 150 million sensors deliver data 40 million times per second. The data at the LHC.
