Data handling and processing at the LHC experiments

1 Data handling and processing at the LHC experiments. Astronomy and Bio-informatics. Farida Fassi, CC-IN2P3/CNRS. EPAM 2011, Taza, Morocco

2 The presentation is LHC-centric, which is very relevant for the current phase we are in; less emphasis is given to Astronomy and Bio-informatics. The scope is narrowed to the perspective of the physicists, discussing the issues that affect them directly.
Outline:
- Motivation and requirements
- Data management: from the trigger up to offline, passing by the Conditions Database
- Reprocessing chain
- Distributed analysis: analysis data flow, end-user interfaces and monitoring
- Data handling and processing aspects in Astronomy
- Brief introduction to Bio-informatics and grid computing

4 The LHC aims to find the Higgs boson and new physics beyond the Standard Model. The slide shows the accelerator complex (PS, SPS, LHC) with the four main experiments: ALICE, ATLAS, CMS and LHCb; ALICE is dedicated to heavy-ion physics, studying QCD under extreme conditions.
Nominal working conditions:
- p-p beams: √s = 14 TeV; L = 10^34 cm^-2 s^-1; bunch crossing every 25 ns
- Pb-Pb beams: √s_NN = 5.5 TeV; L = 10^27 cm^-2 s^-1
2010 running: √s = 7 TeV (first collisions on March 30th); peak L ~ 10^32 cm^-2 s^-1 (November); recorded luminosity: … pb^-1; first ion collisions recorded in November.

5 LHC Data Challenge
- The LHC generates 40×10^6 collisions per second; combined, the 4 experiments record about 100 interesting collisions per second
- That corresponds to ~10 PB (10^16 B) per year (~10^10 recorded collisions per year)
- LHC data correspond to about 20×10^6 DVDs per year
- Storage space equivalent to … large PC disks; computing power equivalent to ~10^5 of today's PCs
- Using parallelism and a hierarchical architecture is the only way to analyse this amount of data in a reasonable amount of time
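A minimal back-of-the-envelope check of these figures, using only the round numbers quoted on the slide (treat the results as orders of magnitude, not official values):

```python
# Back-of-the-envelope check of the slide's storage figures.
# Inputs are the round numbers quoted above; results are orders of magnitude only.

events_per_year = 1e10        # recorded collisions per year (slide figure)
event_size_bytes = 1e6        # ~1 MB per stored event (order of magnitude)

volume_bytes = events_per_year * event_size_bytes
print(f"yearly volume ~ {volume_bytes:.1e} B = {volume_bytes / 1e15:.0f} PB")

# The same volume expressed as an average rate over ~10^7 s of running per year:
print(f"average rate  ~ {volume_bytes / 1e7 / 1e6:.0f} MB/s")
```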

6 How the LHC experiments use the Grid.
Tier-0:
- Store RAW data and serve RAW data to the Tier-1s
- Run first-pass calibration/alignment and first-pass reconstruction
- Distribute data to the Tier-1s
Tier-1s:
- Store RAW data (forever); run re-reconstruction
- Serve a copy of RECO; archive simulation
- Distribute data to the Tier-2s
Tier-2s:
- Primary resources for physics analysis and detector studies by users
- MC simulation, with distribution of the output to the Tier-1s

7 LHC Computing Model: the Grid interfaces and main elements. The LHC experiments' Grid tools interface to all middleware types and provide uniform access to the Grid environment:
- The VOMS (Virtual Organization Membership Service) database contains the privileges of all collaboration members; it is used to allow collaboration jobs to run on experiment resources and to store their output files on disk
- The Distributed Data Management system catalogues all collaboration data and manages the data transfers
- The Production system schedules all organized data-processing and simulation activities
- The tool interfaces allow analysis job submission: jobs go to the sites holding the input data, and the output data are stored locally or sent back to the submitting site
Such a complex system is very powerful but presents challenges for ensuring quality: failures are expected and must be managed.

9 Requirements for the reconstruction software. LHC collisions occur at 40 MHz, while the offline system can stream data to disk only at O(100) Hz. Offline operation workflows:
- Trigger strategy: a trigger sequence in which, after the L1 (hardware-based) response reduces the event rate from 40 MHz to 100 kHz, the reconstruction code runs to provide a further factor ~1000 reduction down to O(100) Hz
- Offline reconstruction must provide both prompt feedback on the detector status and data quality, and samples for physics analysis, with up-to-date alignment and calibration
- Calibration workflows with short latency provide samples for calibration purposes, data validation and certification for analysis, and data quality monitoring (DQM)
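A quick check of the rejection factors implied by these rates (a sketch using the round numbers above):

```python
# Reduction factors implied by the trigger rates quoted above (approximate).
collision_rate = 40e6      # 40 MHz bunch-crossing rate
l1_output = 100e3          # ~100 kHz after the hardware Level-1 trigger
hlt_output = 100.0         # O(100) Hz written to offline storage

print(f"L1 rejection factor  : {collision_rate / l1_output:.0f}")   # ~400
print(f"HLT rejection factor : {l1_output / hlt_output:.0f}")       # ~1000
print(f"overall reduction    : {collision_rate / hlt_output:.0e}")  # ~4e5
```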

10 Before the Tier-0. Data are organized into inclusive streams, based on trigger chains:
- ~200 Hz physics streams
- ~20 Hz express stream
- ~20 Hz calibration/monitoring streams
Several streams are designed to handle calibration and alignment data efficiently. Alignment and calibration payloads must be provided in a timely manner for the reconstruction chain to proceed. The luminosity is only known per luminosity section, and within a luminosity section the data are split across multiple streamer files.

11 Trigger system.
- LEVEL 1 reduces the rate from 40 MHz to 100 kHz; it is hardware based, with fast decision logic using only coarse reconstruction; if the trigger decision is positive, an L1-Accept is issued
- The High Level Trigger (HLT) reduces the rate from 100 kHz to O(100) Hz; it uses the full detector data (including tracker data), with event processing by programs running in a computer farm; it reconstructs μ, e/γ, jets, Et, etc., and subdivides the processed data into streams according to physics needs, calibration, alignment and data quality
Trigger chain shown on the slide: LVL1 trigger (<100 kHz; coarse-granularity calorimeter and muon data; identifies Regions of Interest), then the software-based High Level Triggers: LVL2 trigger (~3 kHz; partial event reconstruction in the Regions of Interest with full-granularity data; algorithms optimized for fast rejection) and Event Filter (~200 Hz; full event reconstruction seeded by LVL2; algorithms similar to offline).
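To make the two-stage structure concrete, here is a purely illustrative Python sketch of a fast, coarse first-level decision followed by a fuller high-level selection applied only to accepted events; the event fields, thresholds and sample sizes are invented and far simpler than a real trigger:

```python
import random

def level1_accept(event):
    """Fast, coarse decision: look only at the summed calorimeter energy."""
    return event["calo_et_sum"] > 20.0          # GeV, illustrative threshold

def hlt_accept(event):
    """Slower decision using 'full' detector information (here: track count)."""
    return event["n_tracks"] >= 2

# Toy event stream standing in for the collision rate.
events = [{"calo_et_sum": random.expovariate(1 / 10.0),
           "n_tracks": random.randint(0, 5)} for _ in range(100_000)]

after_l1 = [e for e in events if level1_accept(e)]      # only L1-accepted events
after_hlt = [e for e in after_l1 if hlt_accept(e)]      # reach the HLT stage
print(f"input {len(events)} -> L1 {len(after_l1)} -> HLT {len(after_hlt)}")
```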

12 What do we have to do with the data? The first pass of data reconstruction is done at the Tier-0; software and calibration constants are updated roughly daily.
- Express stream: a subset of the physics data used to check data quality and to calculate calibration constants
- Calibration streams: partial events, used by specific subdetectors
- Physics streams: based on the trigger
Express processing provides fully reconstructed events within about one hour, for monitoring and fast physics analysis. Prompt processing performs the first-pass reconstruction on the RAW data; the physics datasets can be held for up to 48 h to allow the prompt-calibration workflows to run and produce new conditions.

13 Distributed database: the Conditions DB. LHC data processing and analysis require access to large amounts of non-event data (detector conditions, calibrations, etc.) stored in relational databases. The Conditions DB is critical for data reconstruction at CERN, using the alignment and calibration constants produced within 24 hours for the first-pass processing.
- Conditions that need continuous updates: the beam-spot position, measured every 23 s; tracker problematic channels
- Conditions that need monitoring: calorimeter problematic channels (mask hot channels); tracker alignment (monitor movements of large structures)
The LHC experiments use different technologies to replicate the Conditions DB to all Tier-1 sites via continuous, real-time updates.
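Conditions are typically keyed by an interval of validity (IOV): a lookup returns the payload valid for a given run or time. A minimal sketch of that pattern, with invented run numbers and beam-spot values:

```python
import bisect

# Toy conditions "folder": beam-spot payloads keyed by the run at which each
# interval of validity (IOV) starts.  Runs and values are invented.
iov_starts = [100, 250, 400, 900]                                      # IOV lower bounds
payloads = [(0.05, 0.12), (0.06, 0.11), (0.04, 0.13), (0.05, 0.10)]    # (x, y) in mm

def lookup(run):
    """Return the payload whose IOV contains the given run."""
    i = bisect.bisect_right(iov_starts, run) - 1
    if i < 0:
        raise KeyError(f"no conditions for run {run}")
    return payloads[i]

print(lookup(300))   # -> (0.06, 0.11): the IOV starting at run 250
```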

14 CERN Analysis Facility. The CERN Analysis Facility (CAF) farm is dedicated to the LHC experiments' latency-critical activities: calibration and alignment, detector and/or trigger commissioning, and high-priority physics analysis. CAF access is restricted to users dedicated to these activities.
CAF supported workflow: the first workflow being supported is the beam-spot determination. The beam spot is the luminous region produced by the collisions of the LHC proton beams; it needs to be measured precisely for a correct offline data reconstruction. The data source for the beam-spot workflow is the Tier-0.

15 ALICE and CMS data types. The CMS hierarchy of data tiers:
- RAW data: as delivered by the detector (~1.5 MB/event)
- Full Event: contains RAW plus all the objects created by the reconstruction pass
- RECO: a subset of the Full Event, sufficient for reapplying calibrations after reprocessing (refitting but not re-tracking) (~500 kB/event)
- AOD: a subset of RECO, sufficient for the large majority of standard physics analyses; contains tracks, vertices, etc., and in general enough information to, for example, apply a different b-tagging; can contain very partial hit-level information (~100 kB/event)
ALICE has data types, content and formats very similar to CMS.

16 ATLAS data types:
- RAW: event data from the TDAQ
- ESD (Event Summary Data): output of reconstruction: calorimeter cells, track hits, vertices, particle ID, etc.
- AOD (Analysis Object Data): physics objects for analysis, such as e, µ, jets, etc.
- DPD (Derived Physics Data): equivalent of the old ntuples (format to be finalized)
- TAG: reduced set of information for event selection
The slide also distinguishes collaboration production from group/user activity in producing these formats.

17 LHCb data types. The processing chain is: distribution of RAW data to the Tier-1s, reconstruction (producing SDST), stripping and streaming (producing DST), and group-level production (producing µDST). Reconstruction produces calorimeter energy clusters, particle ID, tracks, etc.
- At reconstruction, only enough information is stored to allow a physics pre-selection (the stripping) to run at a later stage on the SDST
- User physics analysis is performed on the stripped data; the output of the stripping is self-contained, i.e. there is no need to navigate through files
- Analysis generates semi-private data: ntuples and/or personal DSTs
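The stripping step can be pictured as applying a set of named physics pre-selections to each reconstructed event and routing accepted events into the corresponding output streams. A minimal sketch of that idea; the line names and selection criteria are invented:

```python
# Toy "stripping": each line is a named pre-selection; accepted events are
# routed to the corresponding output stream.  Selections are invented.
stripping_lines = {
    "Dimuon":  lambda e: e["n_muons"] >= 2,
    "Charm":   lambda e: e["has_d_candidate"],
    "Bhadron": lambda e: e["has_b_candidate"] and e["pt_max"] > 2.0,
}

def strip(events):
    streams = {name: [] for name in stripping_lines}
    for event in events:
        for name, selection in stripping_lines.items():
            if selection(event):
                streams[name].append(event)     # an event may enter several streams
    return streams

events = [
    {"n_muons": 2, "has_d_candidate": False, "has_b_candidate": False, "pt_max": 1.0},
    {"n_muons": 0, "has_d_candidate": True,  "has_b_candidate": True,  "pt_max": 3.5},
]
print({name: len(evts) for name, evts in strip(events).items()})
```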

18 Data quality: aims. Knowledge of the quality of the data underpins all particle-physics results: only good data can be used to produce valid physics results, and careful monitoring is necessary to understand data conditions and to diagnose and eliminate detector problems. The Data Quality (DQ) system provides the means to:
- allow experts and shifters to investigate the data, in accessible formats, shortly after they are recorded
- derive calibrations and other necessary reconstruction parameters, and mask or fix any detector issues found
- provide a calibrated set of processed physics event streams rapidly
- determine the data quality for each DQ region (~100 in total) and the suitability of any run for physics analysis, using a flag (good, bad, etc.)
- record these flags and allow analysis teams to make selections on combinations of them conveniently
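Selecting runs on combinations of DQ flags amounts to requiring "good" for every region an analysis relies on. A minimal sketch, with invented run numbers, region names and flags:

```python
# Toy data-quality bookkeeping: one flag per DQ region per run.
# A run is usable for a given analysis if every region it relies on is "good".
dq_flags = {
    152166: {"pixel": "good", "calorimeter": "good", "muon": "bad"},
    152214: {"pixel": "good", "calorimeter": "good", "muon": "good"},
}

def good_runs(required_regions, flags=dq_flags):
    return [run for run, regions in flags.items()
            if all(regions.get(r) == "good" for r in required_regions)]

print(good_runs(["pixel", "calorimeter"]))          # both runs qualify
print(good_runs(["pixel", "calorimeter", "muon"]))  # only 152214 qualifies
```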

19 Data reprocessing (1). When the software and/or the calibration constants improve, the collaborations need to organize the data processing for the physics groups in the most efficient way. As the LHC experiments' computing resources are on the Grid, reprocessing is managed by the central production system, and dedicated effort is needed to ensure high-quality results. The reconstruction results are input to additional physics-specific treatment by the physics working groups; this step also requires massive data access and a lot of CPU, and it often needs a rapid software update. To reconstruct on the Grid and produce and distribute the bulk outputs to the collaboration for analysis, the following is required:

20 Data reprocessing (2).
- Efficient usage of the computing resources on the Grid, which needs a stable and flexible production system
- Full integration with the Data Management system, allowing automated data delivery to the final destination
- Prevention of bottlenecks in large-scale access to the conditions DB
- Exclusion of site-dependent failures, such as unavailable resources

21 Monte Carlo (MC) production. MC production is crucial for detector studies and physics analysis; it is mainly used for identifying backgrounds and evaluating acceptances and efficiencies. Event simulation and reconstruction are managed by the central production system. The production chain is:
- Generation: no input, small output (10 to 50 MB ntuples); pure CPU, a few minutes up to a few hours if hard filtering is present
- Simulation (hits): GEANT4; small input; CPU and memory intensive, 24 to 48 hours; large output (~500 MB, the smallest is ~100 KB!)
- Digitization: lower CPU/memory requirements, 5 to 10 hours; I/O intensive (persistent reading of pile-up through the LAN); large output, similar to simulation
- Reconstruction: even less CPU, ~5 hours; smaller output, ~200 MB
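The chain is strictly sequential: each step consumes the output of the previous one. A skeleton of that structure in Python; the step bodies are placeholders, not real simulation code:

```python
def generate(n_events):
    # Event generation: no input, CPU only, small ntuple output.
    return {"step": "GEN", "n": n_events}

def simulate(gen):
    # Detector simulation: CPU/memory intensive, large output (GEANT4 would run here).
    return {"step": "SIM", "n": gen["n"]}

def digitize(sim):
    # Digitization: I/O intensive (pile-up events read over the LAN).
    return {"step": "DIGI", "n": sim["n"]}

def reconstruct(digi):
    # Reconstruction: the same software that is run on real data.
    return {"step": "RECO", "n": digi["n"]}

output = reconstruct(digitize(simulate(generate(1000))))
print(output)    # {'step': 'RECO', 'n': 1000}
```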

23 Data analysis and the LHC analysis flow. The analysis flow is the full data-processing chain from reconstructed event data up to producing the final plots for publication. Data analysis is an iterative process:
- reduce the data samples to more interesting subsets (selection)
- compute higher-level information, redo some reconstruction, etc.
- calculate statistical quantities
For the LHC experiments, the data are generated at the experiments, then processed and arranged in geographically distributed Tiers (T1, T2, T3). The analysis processes, reduces, transforms and selects parts of the data iteratively until they fit in a single computer. How is this realized?

24 From the user's point of view. The LHC experiments developed a number of experiment-specific middleware layers on top of a small set of basic services (backends), e.g. DIRAC, PanDA, AliEn, Glide-In; these layers allow jobs to benefit from running in the Grid environment. They also developed user-friendly, intelligent interfaces, e.g. CRAB and GANGA, to hide the complexity and provide transparent usage of the distributed system, allowing large-scale data processing on distributed (Grid) resources. The layered picture is: user, then the [LHC-experiment-specific] front-end interface, then the experiment-specific software, then the Grid middleware and basic services, then the computing and storage resources, and finally the output back to the user.
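Conceptually, a front end such as GANGA or CRAB lets the user describe what to run and on which dataset, and hides where it runs. The sketch below illustrates that idea only; the JobDescription class and submit function are invented for this example and are not the real GANGA or CRAB API:

```python
from dataclasses import dataclass

@dataclass
class JobDescription:
    """What the user cares about; everything grid-specific stays hidden."""
    executable: str        # the user's analysis application
    dataset: str           # logical dataset name, resolved via the data catalogue
    output_files: list     # files to retrieve when the job finishes

def submit(job, backend="grid"):
    # A real front end would package the user code, query the dataset catalogue,
    # pick sites holding the data and hand the job to the experiment's
    # workload-management system.  Here we only print what would be submitted.
    print(f"submitting {job.executable} on {job.dataset} to backend '{backend}'")

submit(JobDescription(executable="myAnalysis.py",
                      dataset="/MinimumBias/Run2010A/AOD",
                      output_files=["histos.root"]))
```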

25 Experiment-specific frameworks. Each LHC experiment specializes its framework and data model for data analysis, to process ESD/AOD: the CMS Physics Analysis Toolkit (PAT), the ATLAS analysis framework, LHCb DaVinci/LoKi/Bender, and the ALICE analysis framework. In some cases this means selecting a subset of the framework libraries. These frameworks provide collaboration-approved analysis algorithms and tools; the user typically develops his or her own algorithm(s) on top of them, but may also want to replace parts of the official release.

26 Distributed data analysis flow. Distributed analysis complicates the life of the physicist: in addition to the analysis code, he or she has to worry about many other technical issues. The distributed analysis model is data-location driven: the user's analysis runs where the data are located.
- The user runs interactively on a small data sample while developing the analysis code
- The user then selects a large data sample on which to run the very same code
- The user's analysis code is shipped to the site where the sample is located
- The results are made available to the user for the final plot production
- The final analysis is performed locally, on a small cluster or a single computer
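"Jobs go to the data" boils down to consulting a replica catalogue and choosing one of the sites that host the dataset. A minimal sketch of that brokering step; the catalogue contents, site names and load figures are invented:

```python
# Toy replica catalogue: which sites hold a complete copy of each dataset.
replica_catalogue = {
    "/MinimumBias/Run2010A/AOD": ["IN2P3-CC", "FZK", "RAL"],
    "/Mu/Run2010B/AOD":          ["CNAF", "PIC"],
}

site_load = {"IN2P3-CC": 0.9, "FZK": 0.4, "RAL": 0.6, "CNAF": 0.2, "PIC": 0.7}

def broker(dataset):
    """Send the job where the data are: choose the least-loaded hosting site."""
    sites = replica_catalogue[dataset]
    return min(sites, key=lambda s: site_load.get(s, 1.0))

print(broker("/MinimumBias/Run2010A/AOD"))   # -> FZK in this toy example
```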

27 Front-end tools: pathena/GANGA for ATLAS, CRAB for CMS, and the corresponding ALICE tools. The goal is to ensure that users are able to efficiently access all available resources (local, batch, grid, etc.), with easy job management and application configuration.

28 Input data. The user specifies which data to run the analysis on using the experiment-specific dataset catalogues; the specification is based on a query, and the front-end interfaces provide functionality to facilitate the catalogue queries. Each experiment has developed event-TAG mechanisms for sparse input-data selection. An important goal of the TAG is to enable the storage of massive stores of raw data at central locations that have sufficiently capable storage, processing and network infrastructure to handle them, while also permitting remote scientists to work with the data: using the TAG metadata, they can select smaller-scale, higher-quality samples that can feasibly be downloaded and processed at locations with more modest resources.
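The TAG idea is essentially a small per-event metadata table that is queried to pick events (and the files holding them) before any bulky data are touched. A minimal sketch using an in-memory SQLite table; the schema, values and cuts are invented:

```python
import sqlite3

# Toy TAG database: one small row of metadata per event, pointing back to the
# bulky file that holds the full event.  Schema and values are invented.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tags (run INT, event INT, n_muons INT, met REAL, guid TEXT)")
db.executemany("INSERT INTO tags VALUES (?, ?, ?, ?, ?)", [
    (152166, 11, 2, 45.0, "file-A"),
    (152166, 12, 0, 12.0, "file-A"),
    (152214, 57, 1, 80.0, "file-B"),
])

# Select only the events worth downloading, and the files they live in.
rows = db.execute(
    "SELECT run, event, guid FROM tags WHERE n_muons >= 1 AND met > 40").fetchall()
print(rows)          # [(152166, 11, 'file-A'), (152214, 57, 'file-B')]
```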

30 Monitoring system. Web monitoring is a crucial feature, both for users and for administrators; the LHC experiments have developed powerful and flexible monitoring systems.
Activities:
- follow specific analysis jobs and tasks
- identify and investigate inefficiencies and failures
- commission sites and services
- identify trends and predict future requirements
Targets: data transfers, job and task processing, site and service availability.
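At its core, job monitoring aggregates streams of status records into per-site (or per-task) metrics such as success rates. A minimal sketch of that aggregation; the site names and statuses are invented:

```python
from collections import Counter, defaultdict

# Toy stream of job status records, as a monitoring system might collect them.
job_records = [
    {"site": "T2_FR_IN2P3", "status": "success"},
    {"site": "T2_FR_IN2P3", "status": "app-failure"},
    {"site": "T2_DE_DESY",  "status": "success"},
    {"site": "T2_DE_DESY",  "status": "success"},
    {"site": "T2_DE_DESY",  "status": "grid-failure"},
]

per_site = defaultdict(Counter)
for record in job_records:
    per_site[record["site"]][record["status"]] += 1

for site, counts in per_site.items():
    total = sum(counts.values())
    print(f"{site}: {counts['success'] / total:.0%} success over {total} jobs")
```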

31 Task monitoring: the Dashboard generates a wide selection of plots.

32 Positive impact of monitoring on infrastructure quality. The Dashboard generates weekly reports with monitoring metrics related to data analysis on the Grid, and the LHC experiments take action to improve the success rate of user analysis jobs. The job outcomes are broken down into successes, application failures, user configuration errors, remote stage-out issues, grid failures, and a few per cent of failures reading data at the site.

34 Astronomy with high-energy particles. This field aims to answer the following questions:
- What is the Universe made of?
- What are the properties of neutrinos? What is their role in cosmic evolution? What do neutrinos tell us about the interior of the Sun and the Earth, and about supernova explosions?
- What is the origin of high-energy cosmic rays? What does the sky look like at extreme energies?
- Can we detect gravitational waves? What will they tell us about violent cosmic processes and basic physics laws?

35 Astronomy with high-energy particles: astrophysics vs astroparticle physics.
- Sources: stars (evolution), galaxies, clusters, CMBR (astrophysics) vs supernova remnants, GRBs, AGNs, dark-matter annihilations, ... (astroparticle physics)
- Messengers: electromagnetic radiation (radio, IR, VIS-UV, X-ray) vs elementary particles (γ, ν, p, e)
- Datasets: image-based vs event-based
- Detectors: optical/radio telescopes vs particle telescopes

36 Astroparticle data flow. Signals from the detectors are digitized and packaged into events, which then undergo processing to reconstruct the physical meaning of each event. Typically: fast acquisition, a lot of storage needed, RAW plus calibration data, and post-processing of a selection of events (event by event). The typical steps in an experiment are:
1. Register the passage of a particle in a detector element
2. Digitize the signals
3. Trigger on interesting signals
4. Read out the detector elements and build an event, written to disk/tape
5. Perform higher-level triggering/filtering on events, perhaps long after they are recorded
6. Reconstruct the particle hypotheses, usually via non-linear fits
7. Statistical analysis of the extracted observations
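Step 6 in miniature: fitting a model parameter to measurements by minimising a chi-square. The sketch below does this with a simple parameter scan on invented toy data, purely to illustrate the pattern:

```python
import math

# Fit a single parameter (a decay constant) to toy measurements by scanning
# a chi-square.  Data, model and uncertainty are invented for illustration.
times    = [0.5, 1.0, 1.5, 2.0, 2.5]
measured = [61.0, 36.5, 22.0, 13.8, 8.1]
sigma    = 2.0                                     # assumed uncertainty per point

def model(t, tau):
    return 100.0 * math.exp(-t / tau)

def chi2(tau):
    return sum(((m - model(t, tau)) / sigma) ** 2 for t, m in zip(times, measured))

taus = [0.50 + 0.01 * i for i in range(100)]       # scan tau in [0.50, 1.49]
best = min(taus, key=chi2)
print(f"best-fit tau ~ {best:.2f}, chi2 = {chi2(best):.1f}")
```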

37 Astronomy and Grid computing. Astronomy experiments produce petabytes of data and have challenging goals for efficient access to these data: data reduction and analysis require a lot of computing resources, the data must be distributed to all collaborators across Europe, users need access to shared resources and standardized analysis tools, and better and easier data management is required. Many astronomy experiments have therefore adopted the Grid as a computing model and adapted the applications needed to extract a final result, such as: simulation; data processing and reconstruction; data transfer; storage; data analysis.

38 Bio-informatics and the Grid. Applications include the formal representation of biological knowledge, the maintenance of biological databases, and simulations (molecular dynamics, biochemical pathways). One of the major challenges for the bioinformatics community is to provide the means for biologists to analyse the sequences provided by the complete genome-sequencing projects. Grid technology is an opportunity to normalize access for an integrated exploitation: it allows software, servers and information systems to be presented in a homogeneous way.

39 Bio-informatics and the Grid. Gridification of the bio applications means:
- allowing the distribution of large datasets over different sites and avoiding single points of failure or bottlenecks;
- enforcing the use of common standards for data exchange and making exchanges between sites easier;
- enlarging the datasets available for large-scale studies by breaking the barriers between remote sites.
In addition, it means:
- allowing a distributed community to share its computational resources, so that a small laboratory can carry out large-scale experiments if needed;
- opening new application fields that were not even thinkable without a common Grid infrastructure.

40 Summary.
- The LHC provides access to conditions not seen since the early Universe, and the analysis of LHC data has the potential to change how we view the world; this brings substantial computing and sociological challenges
- The LHC generates data on a scale not seen anywhere before, and the LHC experiments critically depend on parallel solutions to analyse their enormous amounts of data
- A lot of sophisticated data-management tools have been developed
- Many scientific applications benefit from powerful Grid computing to share the resources used to obtain a scientific result

42 Major differences between the front ends.
- Both GANGA and the ALICE tools provide an interactive shell to configure and automate analysis jobs (Python, CINT); in addition, GANGA provides a GUI
- CRAB has a thin client: most of the work (automation, recovery, monitoring, etc.) is done in a server; in the other cases this functionality is delegated to the VO-specific workload-management system
- GANGA offers a convenient overview of all user jobs (the job repository), enabling automation
- Both CRAB and GANGA are able to pack local user libraries and the environment automatically, making use of their knowledge of the configuration tools; for ALICE, the user provides .par files with the sources
