Data Handling for LHC: Plans and Reality

1 Data Handling for LHC: Plans and Reality. Tony Cass, Leader, Database Services Group, Information Technology Department. 11th July

2 Outline: HEP, CERN, LHC and LHC Experiments; LHC Computing Challenge; The Technique (in outline, in more detail); Towards the Future; Summary 2

4 Familiar, but not Fundamental Periodic Table courtesy of wikipedia 4


6 The Standard Model: fundamental and well tested, but... Why do particles have mass? Why is there no antimatter? Are these the only particles? A 4th generation? (LEP discovery) Do fermions have bosonic partners and vice-versa? How does gravity fit in? 6

7 Other interesting questions: How do quarks and gluons behave at ultra-high temperatures and densities (LHC)? What is dark matter? Supersymmetric particles? 7

8 How to find the answers? Smash things together! Images courtesy of hyperphysics 8

9 CERN Methodology: the fastest racetrack on the planet. Trillions of protons will race around the 27km ring in opposite directions over 11,000 times a second, travelling at very nearly the speed of light. 9

10 Energy of a 1TeV Proton 10 10

11 Energy of 7TeV Beams Two nominal beams together can melt ~1,000kg of copper. Current beams: ~100kg of copper

12 CERN Methodology The emptiest space in the solar system To accelerate protons to almost the speed of light requires a vacuum as empty as interplanetary space. There is 10 times more atmosphere on the moon than there will be in the LHC. 12

13 CERN Methodology One of the coldest places in the universe With an operating temperature of about -271 degrees Celsius, just 1.9 degrees above absolute zero, the LHC is colder than outer space. 13

14 CERN Methodology The hottest spots in the galaxy When two beams of protons collide, they will generate temperatures 1000 million times hotter than the heart of the sun, but in a minuscule space. 14

15 CERN Methodology The biggest most sophisticated detectors ever built To sample and record the debris from up to 600 million proton collisions per second, scientists are building gargantuan devices that measure particles with micron precision. 15

16 Compact Detectors! 16



19 Outline: HEP, CERN, LHC and LHC Experiments; LHC Computing Challenge; The Technique (in outline, in more detail); Towards the Future; Summary 19

20 We are looking for rare events! Number of events = luminosity × cross-section. 2010 luminosity: 45 pb⁻¹; with a total cross-section of many billions of pb this gives ~3 trillion events* (~250x more events to date). *N.B. only a very small fraction is saved! Higgs (m_H = 120 GeV): 17 pb → ~750 events, i.e. potentially ~1 Higgs in every 300 billion interactions! (Emily Nurse, ATLAS) 20
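
As a back-of-the-envelope check of the arithmetic above, the sketch below simply multiplies the quoted 2010 integrated luminosity by the quoted Higgs cross-section; it is a minimal illustration in Python, with both values taken straight from the slide.

```python
# Rough event-count arithmetic: N = integrated luminosity x cross-section.
# Values quoted on the slide: 2010 luminosity ~45 pb^-1, Higgs (m_H = 120 GeV) ~17 pb.

LUMINOSITY_PB_INV = 45.0   # integrated luminosity, inverse picobarns
HIGGS_XSEC_PB = 17.0       # Higgs production cross-section, picobarns

higgs_events = LUMINOSITY_PB_INV * HIGGS_XSEC_PB
print(f"Expected Higgs events produced: ~{higgs_events:.0f}")  # ~765, i.e. the ~750 on the slide
```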

21 So the four LHC Experiments... ATLAS: general purpose (origin of mass, supersymmetry); 2,000 scientists from 34 countries. CMS: general purpose (origin of mass, supersymmetry); 1,800 scientists from over 150 institutes. ALICE: heavy-ion collisions, to create quark-gluon plasmas; 50,000 particles in each collision. LHCb: to study the differences between matter and antimatter; will detect over 100 million b and b-bar mesons each year. 21

22 So the four LHC Experiments 22

23 ... generate lots of data. The accelerator generates 40 million particle collisions (events) every second at the centre of each of the four experiments' detectors. 23

24 ... generate lots of data, reduced by online computers to a few hundred good events per second, which are recorded on disk and magnetic tape at 100-1,000 MegaBytes/sec: ~15 PetaBytes per year for all four experiments. Current forecast: ~ PB/year, M files/year, ~20-25K 1TB tapes/year; the archive will need to store 0.1 EB in 2014, ~1 billion files. [Chart: CASTOR data written, 01/01/2010 to 29/06/2012, in PB, by experiment (ALICE, AMS, ATLAS, CMS, COMPASS, LHCB, NA48, NA61, NTOF, USER). Image: ATLAS Z→μμ event from 2012 data with 25 reconstructed vertices.]
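
To see how the quoted rates and volumes relate, here is a small conversion sketch; the ~10^7 seconds of effective data taking per year is an assumed rule of thumb, not a number from the slide.

```python
# Convert sustained write rates (MB/s) into archive volume per year, and volumes into
# 1 TB cartridges, using the figures quoted on the slide plus one assumption: roughly
# 1e7 seconds of effective data taking per year.

SECONDS_PER_YEAR_LIVE = 1.0e7   # assumed effective data-taking time per year
MB_PER_PB = 1.0e9
TAPE_CAPACITY_TB = 1.0          # 1 TB cartridges, as quoted on the slide

def pb_per_year(rate_mb_per_s: float) -> float:
    """Archive volume per year for a given sustained write rate."""
    return rate_mb_per_s * SECONDS_PER_YEAR_LIVE / MB_PER_PB

def tapes_needed(volume_pb: float) -> int:
    """Number of cartridges needed to hold a given volume."""
    return int(volume_pb * 1000 / TAPE_CAPACITY_TB)

for rate in (100, 1000):        # the 100-1,000 MB/s range from the slide
    print(f"{rate:5d} MB/s sustained -> ~{pb_per_year(rate):.1f} PB/year")
print(f"15 PB/year -> ~{tapes_needed(15.0):,} x 1 TB cartridges")
```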

25 Outline: HEP, CERN, LHC and LHC Experiments; LHC Computing Challenge; The Technique (in outline, in more detail); Towards the Future; Summary 25

26 What is the technique? Break up a Massive Data Set 26

27 What is the technique? into lots of small pieces and distribute them around the world 27

28 What is the technique? analyse in parallel 28

29 What is the technique? gather the results 29

30 What is the technique? ... and discover the Higgs boson. Nice result, but is it novel? 30
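
A minimal sketch of the split / distribute / analyse-in-parallel / gather pattern described on slides 26-29, with a local process pool standing in for the worldwide grid; the dataset and the event selection are made-up placeholders.

```python
# Split a "massive" dataset into chunks, analyse the chunks in parallel, then gather
# the partial results -- the pattern on slides 26-29, with a process pool playing
# the role of the grid sites.
from multiprocessing import Pool

def analyse(chunk):
    """Placeholder analysis: count 'events' passing a dummy selection."""
    return sum(1 for event in chunk if event % 7 == 0)

if __name__ == "__main__":
    dataset = list(range(1_000_000))          # stand-in for the event data
    n_sites = 8
    chunk_size = len(dataset) // n_sites
    chunks = [dataset[i:i + chunk_size] for i in range(0, len(dataset), chunk_size)]

    with Pool(processes=n_sites) as pool:
        partial_counts = pool.map(analyse, chunks)   # analyse in parallel

    print("Selected events:", sum(partial_counts))   # gather the results
```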

31 Is it Novel? Maybe not novel as such, but the implementation is: Terascale computing that is widely appreciated! 31

32 Outline: HEP, CERN, LHC and LHC Experiments; LHC Computing Challenge; The Technique (in outline, in more detail); Towards the Future; Summary 32

33 Computing Challenges: Requirements! From ~100,000 PCs and PB/year to tape to O(100PB) of disk cache. [Table from the LCG TDR (June): Summary of Computing Resource Requirements for all experiments, giving CPU (MSPECint2000s), Disk (PetaBytes) and Tape (PetaBytes) at CERN, all Tier-1s, all Tier-2s and in total; 4,000 HS06 = 1 MSPECint2000.] A Problem and a Solution: a Worldwide Collaboration of Tier1s. 33

34 Timely Technology! The Grid: the WLCG project was deployed to meet LHC computing needs; the EDG and EGEE projects organised development in Europe (OSG and others in the US). 34

35 Grid Middleware Basics. Compute Element: standard interface to local workload management systems (batch schedulers). Storage Element: standard interface to local mass storage systems. Resource Broker: a tool to analyse user job requests (input data sets, CPU time, data output requirements) and route them to sites according to data and CPU-time availability. Many implementations of the basic principles: Globus, VDT, EDG/EGEE, NorduGrid, OSG. 35
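
To make the Resource Broker role concrete, here is a toy matchmaking sketch: it routes a job to a site that holds the requested dataset and has free CPU slots. This is not the real Globus/EDG matchmaking; the Site and Job fields and the load-balancing rule are illustrative assumptions.

```python
# Toy resource broker: route a job to a site that holds the input dataset and has
# free CPU slots -- an illustration of the brokering idea only.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    datasets: set
    free_slots: int

@dataclass
class Job:
    input_dataset: str
    cpu_hours: float

def broker(job: Job, sites: list) -> str:
    candidates = [s for s in sites if job.input_dataset in s.datasets and s.free_slots > 0]
    if not candidates:
        raise RuntimeError("no site matches the job requirements")
    best = max(candidates, key=lambda s: s.free_slots)   # crude load balancing
    best.free_slots -= 1
    return best.name

sites = [Site("CERN", {"raw-2012"}, 10), Site("FNAL", {"raw-2012", "mc-2011"}, 3)]
print(broker(Job("raw-2012", 4.0), sites))               # -> CERN
```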

36 Job Scheduling in Practice. Issue: Grid sites generally want to maintain a high average CPU utilisation, which is easiest if there is a local queue of work to select from when another job ends; but users are generally interested in turnround times as well as job throughput, and turnround is reduced if jobs are held centrally until a processing slot is known to be free at a target site. Solution: pilot job frameworks. Per-experiment code submits a job which chooses a work unit to run from a per-experiment queue when it is allocated an execution slot at a site. Pilot job frameworks separate the site's responsibility for allocating CPU resources from the experiment's responsibility for allocating priority between its different research sub-groups.
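
A minimal sketch of the pilot-job idea: once the batch system starts the pilot in a slot, it pulls work units from the experiment's central queue until nothing is left. The in-memory queue and the task names are placeholders, not PanDA or DIRAC internals.

```python
# Pilot job sketch: pull work units from a central per-experiment queue inside an
# allocated batch slot, so the experiment (not the site) decides what runs next.
import queue

def run(work_unit: str) -> None:
    print(f"processing {work_unit}")

def pilot(central_queue: "queue.Queue[str]") -> None:
    """Executes inside one batch slot at a grid site."""
    while True:
        try:
            work_unit = central_queue.get_nowait()   # experiment-side prioritisation
        except queue.Empty:
            break                                    # queue drained: release the slot
        run(work_unit)

tasks: "queue.Queue[str]" = queue.Queue()
for name in ("higgs-skim-001", "higgs-skim-002", "minbias-007"):
    tasks.put(name)
pilot(tasks)
```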

37 Data Issues: reception and long-term storage; delivery for processing and export; distribution; metadata distribution. [Data-flow diagram with rates of 700MB/s, 700MB/s, 420MB/s, 2600MB/s (3600MB/s), (>4000MB/s) and 1430MB/s between the elements.] These figures cover scheduled work only, and we need the ability to support 2x for recovery! 37

38 (Mass) Storage Systems. After evaluation of commercial alternatives in the late 1990s, two tape-capable mass storage systems have been developed for HEP: CASTOR, an integrated mass storage system, and dCache, a disk pool manager that interfaces to multiple tape archives (FNAL's, IBM's TSM). dCache is also used as a basic disk storage manager at Tier2s, along with the simpler DPM. 38

39 A Word About Tape. Our data set may be massive, but it is made up of many small files, which is bad for tape speeds: the CERN archive file size distribution averages ~195MB, increasing only slowly after LHC startup, and the average write drive speed is < 40MB/s (well below native drive speeds), with only small increases from new drive generations. [Charts: CERN archive file size distribution in %; drive write performance (KB/s) vs file size (MB) for the CASTOR tape format (ANSI AUL) on IBM and SUN drives.] 39

40 Tape Drive Efficiency. So we have to change the tape writing policy: from the present CASTOR behaviour of 3 syncs/file, to 1 sync/file, and in future 1 sync per 4GB. [Charts: drive write performance with buffered vs non-buffered tape marks as a function of file size; average drive performance (MB/s) for CERN archive files under the three policies.] 40
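
The effect of the tape-mark policies can be illustrated with a toy throughput model: each sync stalls the drive for a fixed time during which no data streams. The 3-second sync cost and the 120 MB/s native speed below are assumptions for illustration; only the 195 MB average file size and the three policies come from the slides.

```python
# Toy model of why small files hurt tape throughput: every tape-mark sync stalls
# the drive for a fixed time, so more syncs per file means a lower effective rate.

NATIVE_SPEED_MB_S = 120.0   # assumed native streaming speed of the drive
SYNC_COST_S = 3.0           # assumed cost of flushing one tape mark

def effective_speed(file_size_mb: float, syncs_per_file: float) -> float:
    streaming_time = file_size_mb / NATIVE_SPEED_MB_S
    return file_size_mb / (streaming_time + syncs_per_file * SYNC_COST_S)

for label, syncs in (("3 syncs/file", 3), ("1 sync/file", 1), ("1 sync per 4GB", 195 / 4096)):
    print(f"{label:15s} -> ~{effective_speed(195.0, syncs):5.1f} MB/s for a 195 MB file")
```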

41 Users aren't the only writers! Bulk data storage requires space. Fortunately, tape capacity will continue to double every 2-3 years (cf. the 35TB cartridge demonstrations in 2010), and CERN has ~50K slots: ~0.25EB with new T10KC cartridges. Unfortunately, you have to copy data from old cartridges to new ones or you run out of space, and data rates for repack will soon exceed LHC rates: 2012: 55PB = 1.7GB/s sustained; 2015: 120PB = 3.8GB/s sustained, needing ~55 drives; c.f. pp LHC rates of ~0.7GB/s and a PbPb peak rate of 2.5GB/s. [Chart: time to migrate 55 PB (2012) in drive/days, by file size bucket (<10K up to >2G) and tape-mark policy (3 TM/file, 1 TM/file, 1 TM/4GB); repacking in 1 year needs ~28 drives at 63 MB/s and is dominated by small files (<500M).] And: all LEP data fits on ~150 cartridges, or 30 new T10KCs. Automatic data duplication becomes a necessity. 41
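
The repack figures above follow from simple arithmetic, sketched below: spreading the archive volume over one year fixes the sustained rate, and an assumed effective per-drive rate (40 MB/s here, since the slide's per-drive figure is not legible) gives the number of drives.

```python
# Repack arithmetic: copying the whole archive to new media within a year sets a
# sustained data rate that competes with LHC data taking.
SECONDS_PER_YEAR = 365 * 24 * 3600

def sustained_gb_per_s(volume_pb: float) -> float:
    return volume_pb * 1.0e6 / SECONDS_PER_YEAR        # PB -> GB, spread over a year

def drives_needed(volume_pb: float, drive_mb_per_s: float) -> float:
    return sustained_gb_per_s(volume_pb) * 1000 / drive_mb_per_s

for year, volume_pb in (("2012", 55.0), ("2015", 120.0)):
    print(f"{year}: {volume_pb:.0f} PB -> {sustained_gb_per_s(volume_pb):.1f} GB/s sustained, "
          f"~{drives_needed(volume_pb, 40.0):.0f} drives at an effective 40 MB/s each")
```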

42 Media Verification. Data in the archive cannot just be written and forgotten about. (Q: can you retrieve my file? A: let me check... err, sorry, we lost it.) Proactive and regular verification of archive data is required: ensure cartridges can be mounted; ensure data can be read and verified against metadata (checksum, size, ...); do not wait until media migration to detect problems; scan opportunistically when resources are available.
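
A sketch of the read-back check described above: re-read a file and compare its size and checksum against the catalogue metadata. The Adler-32 choice, the block size and the function signature are illustrative assumptions, not CASTOR internals.

```python
# Verify a file against its catalogue metadata (size and checksum) by reading it back.
import zlib
from pathlib import Path

def verify(path: Path, expected_size: int, expected_adler32: int) -> bool:
    """Return True if the file on disk/tape still matches the recorded metadata."""
    if path.stat().st_size != expected_size:
        return False
    checksum = 1                                          # Adler-32 initial value
    with path.open("rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB blocks
            checksum = zlib.adler32(block, checksum)
    return checksum == expected_adler32
```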

43 Storage vs Recall Efficiency. Efficient data acceptance: have lots of input streams spread across a number of storage servers, wait until the storage servers are ~full, and write the data from each storage server to tape. Result: data recorded at the same time is scattered over many tapes. How is the data read back? Generally, files are grouped by time of creation. How to optimise for this? Group files onto a small number of tapes. Ooops. 43
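
A toy sketch of the alternative write-time policy hinted at above: keep files with similar creation times together and pack them onto one cartridge at a time. The tuple layout and the 1 TB capacity are illustrative assumptions.

```python
# Pack files onto cartridges in creation-time order, so data recorded together
# stays together and later recalls touch few tapes.
def group_for_tape(files, tape_capacity_gb=1000.0):
    """files: iterable of (name, created_epoch_s, size_gb) -> list of per-cartridge batches."""
    batches, current, used = [], [], 0.0
    for name, _, size in sorted(files, key=lambda f: f[1]):   # preserve time locality
        if used + size > tape_capacity_gb and current:
            batches.append(current)
            current, used = [], 0.0
        current.append(name)
        used += size
    if current:
        batches.append(current)
    return batches

print(group_for_tape([("run1.raw", 100, 400.0), ("run2.raw", 200, 700.0), ("run3.raw", 300, 300.0)]))
```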

44 Keep users away from tape 44

45 CASTOR & EOS 45

46 Data Access Realism. Mass Storage systems work well for recording, export and retrieval of production data. Good: this is what they were designed for! But some features of the CASTOR system developed at CERN are unused or ill-adapted: experiments want to manage data availability themselves; file sizes, file-placement policies and access patterns interact badly (alleviated by experiment management of data transfer between tape and disk); and analysis use favours low latency over guaranteed data rates (aggravated by experiment management of data: automated replication of busy datasets is disabled). But we should not be too surprised: the storage systems were designed many years before analysis patterns were understood (if they are even today...). 46

47 Data Distribution. The LHC experiments need to distribute millions of files between the different sites. The File Transfer System (FTS) automates this: handling failures of the underlying distribution technology (gridftp); ensuring effective use of the bandwidth with multiple streams; and managing bandwidth use, ensuring that ATLAS, say, is guaranteed 50% of the available bandwidth between two sites if there is data to transfer. 47

48 Data Distribution. FTS uses the Storage Resource Manager (SRM) as an abstract interface to the different storage systems. A Good Idea, but this is not (IMHO) a complete storage abstraction layer and in any case cannot hide fundamental differences in approaches to MSS design. There is lots of interest in the Amazon S3 interface these days; it doesn't try to do as much as SRM, but HEP should try to adopt de facto standards. Once you have distributed the data, a file catalogue is needed to record which files are available where. LFC, the LCG File Catalogue, was designed for this role as a distributed catalogue to avoid a single point of failure, but other solutions are also used. And as many other services rely on CERN, the need for a distributed catalogue is no longer (seen as) so important. 48

49 Looking more widely I. Only a small subset of the data distributed is actually used: experiments don't know a priori which datasets will be popular, and CMS sees 8 orders of magnitude in access rate between the most and the least popular. Dynamic data replication: create copies of popular datasets at multiple sites.
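
A toy sketch of a popularity-driven replication policy: datasets whose access counts are far above the typical level get extra replicas. The threshold, the site list and the data layout are arbitrary illustrations, not the experiments' production algorithms.

```python
# Decide which datasets deserve extra replicas based on how often they are accessed.
from statistics import median

def replication_plan(access_counts: dict, sites: list, max_extra_replicas: int = 2) -> dict:
    cutoff = 10 * median(access_counts.values())   # "popular" = well above the typical dataset
    return {dataset: sites[:max_extra_replicas]
            for dataset, count in access_counts.items() if count > cutoff}

counts = {"data12_8TeV.skim": 90_000, "mc11_7TeV.sample": 12, "user.ntuple.42": 3}
print(replication_plan(counts, ["BNL", "IN2P3", "KIT"]))   # only the hot dataset is replicated
```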

50 Looking more widely II. Network capacity is readily available and it is reliable, so let's simply copy data from another site if it is not available locally, rather than recalling it from tape or failing the job. Inter-connectedness is increasing with the design of LHCOne to deliver (multi-)10Gb links between Tier2s. [Diagram: the MONARC (2000) model, linking CERN (n.10^7 MIPS, m Pbyte, robot), FNAL (110 Tbyte, robot) and universities (n.10^6 MIPS, m Tbyte, robot), each with desktops, over 622 Mbits/s and N x 622 Mbits/s links.] A fibre cut during tests in 2009 reduced capacity, but alternative links took over. 50

51 Metadata Distribution. Conditions data is needed to make sense of the raw data from the experiments: data on items such as temperatures, detector voltages and gas compositions is needed to turn the ~100M pixel image of the event into a meaningful description in terms of particles, tracks and momenta. This data is in an RDBMS, Oracle at CERN, and presents interesting distribution challenges: one cannot tightly couple databases across the loosely coupled WLCG sites, for example. Oracle Streams technology was improved to deliver the necessary performance, and http caching systems were developed to address the need for cross-DBMS distribution. [Chart: average Streams throughput (LCR/s) for row sizes of 100B, 500B and 1000B on Oracle 10g, 11gR2 and 11gR2 (optimized).] 51
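
The http-caching idea for conditions data can be sketched as below: clients fetch conditions payloads over plain HTTP so that intermediate caches can serve repeated requests instead of the central database. The URL, payload format and the in-process cache here are illustrative assumptions only.

```python
# Fetch conditions data over HTTP with a simple cache in front of the database service,
# so repeated requests for the same run/tag never reach the central RDBMS.
import json
import urllib.request

_cache: dict = {}

def get_conditions(run_number: int, tag: str,
                   base_url: str = "http://conditions.example.org") -> dict:
    key = (run_number, tag)
    if key not in _cache:                  # in production this role is played by shared caches
        url = f"{base_url}/conditions?run={run_number}&tag={tag}"
        with urllib.request.urlopen(url) as response:
            _cache[key] = json.loads(response.read())
    return _cache[key]
```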

52 Job Execution Environment. Jobs submitted to sites depend on large, rapidly changing libraries of experiment-specific code. Major problems ensue if updated code is not distributed to every server across the grid (remember, there are x0,000 servers), and shared filesystems can become a bottleneck if used as a distribution mechanism within a site. Approaches: the pilot job framework can check whether the execution host has the correct environment; a global caching file system, CernVM-FS. [Figure annotations: 2011; ATLAS today: 22/1.8M files, 921/115GB.]
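
A sketch of the environment check a pilot can perform before pulling a payload: confirm that the required software release is visible on the node, for example via a CernVM-FS mount. The mount point and release layout are illustrative assumptions, not the actual experiment repository paths.

```python
# Check that the required software release is visible (e.g. via a CernVM-FS mount)
# before accepting work; otherwise decline, rather than fail mid-job.
import os

def release_available(release: str, cvmfs_root: str = "/cvmfs/experiment.example.org") -> bool:
    return os.path.isdir(os.path.join(cvmfs_root, "releases", release))

if release_available("17.2.7"):
    print("environment OK -- pull a work unit")
else:
    print("release missing -- decline work and let the cache populate")
```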

53 Outline: HEP, CERN, LHC and LHC Experiments; LHC Computing Challenge; The Technique (in outline, in more detail); Towards the Future; Summary 53

54 Towards the Future: Learning from our mistakes. We have just completed a review of WLCG operations and services, based on 2+ years of operations, with the aim of simplifying and harmonising during the forthcoming long shutdown. Key areas to improve are data management & access and exploiting many/multi-core architectures, especially with the use of virtualisation (clouds), and identity management. 54

57 Integrating With The Cloud? [Diagram, slide courtesy of Ulrich Schwickerath: a central task queue fed by users; instance requests made via a VO service to Sites A, B and C, which pull payloads; a shared image repository (VMIC) maintained by an image maintainer; and cloud bursting to a commercial cloud.] 57

60 Grid Middleware Basics. Compute Element: standard interface to local workload management systems (batch schedulers). Storage Element: standard interface to local mass storage systems. Resource Broker: a tool to analyse user job requests (input data sets, CPU time, data output requirements) and route them to sites according to data and CPU-time availability. Many implementations of the basic principles: Globus, VDT, EDG/EGEE, NorduGrid, OSG. 60

61 Trust! 61

62 One step beyond? 62

63 Outline: HEP, CERN, LHC and LHC Experiments; LHC Computing Challenge; The Technique (in outline, in more detail); Towards the Future; Summary 63

64 Summary. WLCG has delivered the capability to manage and distribute the large volumes of data generated by the LHC experiments, and the excellent WLCG performance has enabled physicists to deliver results rapidly. HEP datasets may not be the most complex or (any longer) the most massive but, in addressing the LHC computing challenges, the community has delivered the world's largest computing Grid, practical solutions to requirements for large-scale data storage, distribution and access, and a global trust federation enabling world-wide collaboration.

65 Thank You! And thanks to Vlado Bahyl, German Cancio, Ian Bird, Jakob Blomer, Eva Dafonte Perez, Fabiola Gianotti, Frédéric Hemmer, Jan Iven, Alberto Pace and Romain Wartel of CERN, Elisa Lanciotti of PIC and K. De, T. Maeno, and S. Panitkin of ATLAS for various unattributed graphics and slides. 65
