PRP. Frank Würthwein (SDSC/UCSD), Jason Nielsen (UCSC), Owen Long (UCR), Chris West (UCSB), Anyes Taffard (UCI), Maria Spiropulu (Caltech)


1 PRP. Frank Würthwein (SDSC/UCSD), Jason Nielsen (UCSC), Owen Long (UCR), Chris West (UCSB), Anyes Taffard (UCI), Maria Spiropulu (Caltech). PRP Workshop, 10/14/15.

2 ATLAS & CMS. The ATLAS and CMS collaborations each span ~3000 scientists across ~200 institutions in ~40 countries. Each experiment comprises ~100M electronic channels recording proton-proton collisions every 25 ns.

3 The Path to Discovery. (Diagram: detector data and simulation pass through reconstruction into public data sets, from which each group derives private data that feeds publications.) Centrally organized production yields 10s of PB of data per collaboration, and all members have roughly equal access. Each group produces their own private data; more than one group may contribute to a paper, and a group may use their private data to contribute to more than one publication. Private data: each group produces ~4-40 TB per publication. Publications: ~1000 publications from Run 1 data. PRP makes public data accessible from home, and focuses on the last-mile problem from private data to publication.

4 (Map.) LHC scientists across nine West Coast universities (UW Seattle, UCSC, UCD, CSU Fresno, UCSB, Caltech, UCI, UCSD, UCR) benefit from petascale data & compute resources across the PRP: NERSC (compute), SLAC (data & compute), Caltech (data & compute), and UCSD & SDSC (data & compute).

5 The West Coast LHC community may use five major data & compute resources in CA: SLAC, NERSC, Caltech, UCSD, and SDSC, which together aggregate petabytes of disk space and petaflops of compute power. LHC scientists across the West Coast want to transparently compute on data at their home institutions and at these five major centers to accelerate science from idea to discovery:
- a uniform execution environment;
- Xrootd data federations for ATLAS & CMS, serving local disks outbound to remotely running jobs and caching remote data inbound for locally running jobs (a minimal read sketch follows below);
- HTCondor overflow of jobs from the local cluster to the major centers, to satisfy peak needs and accelerate the path from idea to publication.
This is a collaboration of PRP, SDSC, and the Open Science Grid; PRP builds on the SDSC LHC@UC project.
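
As a minimal, hypothetical sketch of what "transparently compute on data" means in practice, the PyROOT snippet below opens its input through an XRootD federation redirector URL rather than a site-specific path, so the same code runs unchanged whether the file lives on the home institution's disks or at one of the five centers. The redirector hostname, file path, and tree name are placeholders, not actual PRP endpoints.

```python
# Hypothetical sketch: read a file through an XRootD federation redirector.
# The hostname, path, and tree name below are placeholders.
import ROOT

# The same logical URL works wherever the job runs; the federation
# redirects the open to whichever site actually holds the file.
url = "root://xrootd-redirector.example.edu//store/user/jane/ntuple_001.root"

f = ROOT.TFile.Open(url)
if not f or f.IsZombie():
    raise RuntimeError("could not open %s through the federation" % url)

tree = f.Get("Events")  # assumed tree name
print("opened %s with %d events" % (url, tree.GetEntries()))
f.Close()
```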

6 The DTN we ship(ped): an HTCondor system with 40 batch slots, fully integrated into the campus cluster and the 5 major centers; a login node for researchers; 12 x 4 TB data disks; an apps & libs cache, a data cache, and an origin server. All services are remotely administered.

7 Xrootd Data Federation (example: UCI). Services on the new DTN: an XRootD data server, a local XRootD redirector, and an XRootD data cache, alongside the pre-existing XRootD data servers at UCI. The local services connect upward to the SDSC XRootD infrastructure and to the ATLAS FAX global XRootD federation redirector.

8 OSG Compute Federation (example: UCI). Services on the new DTN hardware at UCI: an HTCondor batch system that reaches OSG, Comet, and other compute resources via OSG glideinWMS and ssh, alongside the pre-existing SLURM batch system at UCI. (A user-side submission sketch follows below.)
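
As a rough illustration of the user side of this setup, here is a hedged sketch using the HTCondor Python bindings (assuming a recent htcondor API): the user submits to the local HTCondor pool on the DTN, and whether the jobs run locally, at UCSD/SDSC, on Comet, or elsewhere on OSG is decided by the glideinWMS/flocking configuration rather than by anything in the submit description. The executable, file names, and job count are placeholders.

```python
# Hedged sketch of a plain HTCondor submission (recent htcondor Python API
# assumed). Overflow to OSG/Comet is handled by the pool configuration on
# the DTN, not by the user's submit description.
import htcondor

submit = htcondor.Submit({
    "executable":     "run_analysis.sh",          # placeholder wrapper script
    "arguments":      "filelist_$(Process).txt",  # one input list per job
    "output":         "logs/job_$(Process).out",
    "error":          "logs/job_$(Process).err",
    "log":            "logs/analysis.log",
    "request_cpus":   "1",
    "request_memory": "2GB",
})

schedd = htcondor.Schedd()                 # local schedd on the brick/DTN
result = schedd.submit(submit, count=100)  # e.g. 100 jobs over the file lists
print("submitted cluster", result.cluster())
```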

9 Jason Nielsen - UCSC. (Figure: ATLAS data flow, R. Reece / K. Cranmer, shown by Ryan Reece, UCSC.) Collisions at the LHC pass through the ATLAS 3-level trigger (20 MHz, 60 kHz, 6 kHz, 500 Hz) and the trigger & DAQ system into raw data at the PB/year scale, which is processed on the ATLAS Worldwide Computing Grid (100k CPUs, over 100 PB, plus local resources). In parallel, Monte Carlo production in the Athena framework runs generators (HepMC) and detector simulation to produce generated, simulated, and reconstructed MC together with the MC truth record. Data and MC are reduced to GB-TB ntuples that feed the plots and tables behind the results.

10 Jason Nielsen - UCSC / Ryan Reece (UCSC). Data Reduction. ATLAS software spans Athena and the DerivationFramework, analysis tools such as QuickAna, SUSYTools, and CxAOD, and "wild-west" (Py)ROOT user code running under Condor. The chain goes from reconstruction (xAOD, ~PB) on the World-wide LHC Computing Grid, through derivation/skim (DxAOD, ~TB) and CxAOD production (~TB) on a Tier-3 cluster or the Grid, to the event loop with CP tools, merge/scale, and visualization on the Tier-3 cluster or a desktop/laptop, producing hists.root and hists.merged.root at the ~GB scale (a minimal event-loop sketch follows below).
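
The event-loop/merge stage at the end of this chain is plain user code; the sketch below shows its general shape in PyROOT, reducing a few TB-scale derivation files into a GB-scale hists.root. The tree name, branch names, and file names are placeholders, not the actual CxAOD schema.

```python
# Hedged sketch of the event-loop reduction step: ntuples in, histograms out.
# Tree name, branch names, and input files are placeholders.
import ROOT

chain = ROOT.TChain("susy")                          # assumed tree name
for fname in ["DxAOD_001.root", "DxAOD_002.root"]:   # placeholder inputs
    chain.Add(fname)

h_met = ROOT.TH1F("h_met", "Missing E_{T};MET [GeV];events", 50, 0.0, 500.0)

for event in chain:                  # ~TB of derivations in ...
    if event.nJets >= 2:             # assumed branch names and selection
        h_met.Fill(event.met)

out = ROOT.TFile("hists.root", "RECREATE")   # ... ~GB of histograms out
h_met.Write()
out.Close()
```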

11 Jason Nielsen - UCSC. Preparing for larger LHC datasets. LHC Run 2 (2015-2018): 5x the current dataset, at roughly double the energy (8 -> 14 TeV). Unique physics opportunities with the new data: measure Higgs boson properties; search for rare new particle production (supersymmetry, exotica). The challenge is scaling computing access to allow repeated filtering and analysis of the dataset. LHC Run 4 (2025-): 100x the current dataset!

12 UCR CMS Physics. Searches for supersymmetry (SUSY), which address big questions: dark matter, grand unification, and stabilization of the Higgs mass. The increase in center-of-mass energy from 8 TeV to 13 TeV means a significant enhancement in sensitivity. Possible outcomes from analyzing Run 2 data: we find SUSY, or no sign of SUSY, which won't kill it but will make it less relevant. Also: searches for heavy Majorana neutrinos; Higgs physics (H->γγ, µµ, 4τ); top quark physics (precision mass and cross section, rare processes such as 4t production). Owen Long, UCR.

13 The UCR CMS T3 Cluster. 512 computing cores total, half new, half old.
- Old cores: 256 (16 16-core boards), 2.4 GHz AMD Opteron, GB RAM / node, ~400 W / node.
- New cores: 256 (8 32-core boards), 2.8 GHz AMD Opteron, GB RAM / node, ~1000 W / node.
2 GridFTP servers connected to the Science DMZ at 10 Gb/s. HDFS and NFS interconnects at 10 Gb/s, management at 1 Gb/s. 240 TB of raw HDFS disk and ~30 TB of other disk. To be added: an xrootd cache appliance. Owen Long, UCR.

14 UCR CMS Analysis and PRP. Analysis workflow in the past (Run 1): submit 1000s of jobs running on reconstructed real and simulated data; the jobs run all over the world at various CMS computing sites, and results trickle in to the UCR T3. Bottleneck issues with file transfers; often a few resubmissions were needed to get the last few %. A long, tedious, painful process. Because that step is so painful, the output is large (everything you can think of wanting later) to avoid having to do it again and again and again. Further data reduction at the UCR T3 eventually gets down to 10s of GB (laptop size). Current situation and impact of PRP: a more compact analysis format ("MiniAOD") is centrally produced, so there is no need for giant analysis-specific ntuples, eliminating one significant intermediate step. If the MiniAOD for important datasets is stored on the PRP network, we expect vast improvements in speed and reliability: access MiniAOD through PRP network xrootd servers (sketched below), draw on the very large pool of local computing resources in the PRP network, and lower the barrier for analysis iterations significantly, for a faster pace of innovation. Owen Long, UCR.
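
As an illustration of accessing MiniAOD through PRP xrootd servers, the sketch below tries a hypothetical PRP cache endpoint first and falls back to a global redirector. The cache hostname and logical file name are placeholders; cms-xrd-global.cern.ch is shown only as the usual style of CMS global-redirector fallback, not as part of the PRP design described here.

```python
# Hedged sketch: open a MiniAOD file via a (hypothetical) PRP cache,
# falling back to a global redirector if the cache cannot serve it.
import ROOT

lfn = "/store/mc/SomeCampaign/Sample/MINIAODSIM/file.root"  # placeholder LFN
sources = [
    "root://prp-cache.example.edu/" + lfn,    # hypothetical PRP cache endpoint
    "root://cms-xrd-global.cern.ch/" + lfn,   # global redirector fallback
]

handle = None
for url in sources:
    handle = ROOT.TFile.Open(url)
    if handle and not handle.IsZombie():
        print("reading MiniAOD from", url)
        break
    handle = None

if handle is None:
    raise RuntimeError("no source could serve " + lfn)
```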

15 Overview of the UCSB CMS Tier-3 computing center. The CMS groups that use the computing resources at UCSB focus on SUSY searches, particularly for gluinos. ~200 cores, ~200 TB disk; 1 Gbps NICs on nodes in the data center, some bonded to provide 2 Gbps; 100 Gbps campus WAN connection via CENIC (recently upgraded from 10 Gbps). Usage: in Run I, primarily for processing bare ROOT ntuples generated at other sites; in Run II, the creation of a smaller CMS data tier, MINIAOD (~15-50 kB/event), makes it possible to run the same analysis on the CMS data itself. Also used by LUX/LZ colleagues who generate LUX MC and occasionally transfer ~1 TB data samples from SLAC/UCD/Brown/SURF. The small size of our group makes our needs somewhat different from those of other institutions. Chris West, UCSB.

16 Transfers to UCSB. Two main types of transfers: (1) transfers of MINIAOD to process at UCSB (CMS data), and (2) the output of jobs run on MINIAOD at other sites (processed data).
- CMS data: frequency: when CMS is taking data; rate: ~30-60 MB/s; tool: PhEDEx (srm-cp/gridftp); source: mainly US sites.
- Processed data: frequency: irregular; rate: up to 1 Gbps; tool: CRAB3/FTS (gfal-cp/gridftp); source: wherever the data is located/processed.
Chris West, UCSB.

17 Wish list.
- Minimal maintenance: manpower is an important limitation, particularly for a small group, and performance optimizations are sometimes not worth the effort.
- We are not currently connected to the LHCONE network due to the additional work needed to guarantee that only LHC data travels across this network; it would be nice to have the PRP simplify the connection to LHCONE.
- Improved performance in transfers from distant nodes.
- Ability to use resources (disk, and particularly CPU) at UCSD semi-transparently: all CMS groups at UCSB also use resources at UCSD. An example use case is compute-intensive jobs (such as systematics computations) on data stored at UCSB. We will have a node dedicated exclusively to CMS connections to UCSD (thanks to Frank W. et al.), but we expect LUX/LZ needs to grow, and PRP will be important for that connectivity.
Chris West, UCSB.

18 How UCI Works. UCI is active in searches for supersymmetry (SUSY) at ATLAS. Modus operandi: develop a lightweight analysis framework and custom data format (SusyNtuple); process the ATLAS-wide data format (xAOD) to produce SusyNtuple; download the output data to the local T3; develop the analysis and search for new physics. A typical submission is ~O(1000) jobs (see the splitting sketch below); failures of submission are not uncommon (faults on the side of the grid sites), and downloads can take ~days (unresponsive grid sites / unreliable grid-ware used to process the downloads), so there is constant babysitting of submission & download status. Several institutes involved in the ATLAS SUSY group use the UCI analysis framework and rely on smooth operation and production turnover. Full dataset in xAOD: O(10 TB); full dataset in SusyNtuple: O(50 GB). Anyes Taffard & Daniel Antrim (UC Irvine).
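
To make the O(1000)-job scale concrete, here is a toy sketch of the kind of splitting that sits behind such a submission: the dataset's xAOD file list is divided into per-job input lists. The file names and the files-per-job choice are made up for illustration and are not the UCI framework's actual bookkeeping.

```python
# Toy sketch: split a dataset's file list into per-job input lists.
# File names and the files-per-job choice are illustrative only.
def split_into_jobs(files, files_per_job=5):
    """Yield one list of input files per grid/Condor job."""
    for i in range(0, len(files), files_per_job):
        yield files[i:i + files_per_job]

dataset_files = ["xAOD.%05d.pool.root" % i for i in range(5000)]  # fake dataset

for job_id, job_files in enumerate(split_into_jobs(dataset_files)):
    with open("job_%04d.filelist" % job_id, "w") as out:
        out.write("\n".join(job_files) + "\n")

# 5000 files at 5 files per job -> 1000 input lists, i.e. the O(1000)-job
# submissions described above.
```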

19 Experience so Far. A brick was installed at the UCI T3; bottlenecks in setting up a complete work area are promptly addressed and fixed thanks to the support team at UCSD (thanks Edgar and Jeff!). We tested Condor + XRootD (FAX) jobs using the cached datasets: the painful download step is essentially removed from the user's point of view, and output datasets run on the grid are registered to FAX automatically. We can simultaneously begin analysis and caching of datasets in a time span shorter than the time needed to download the same datasets locally. The ability to distribute compute power over many sites removes the bottleneck of our T3's queue system: we can easily run CPU/IO-intensive Monte Carlo simulation processes simultaneously with processing large analysis n-tuples. The ability to use cached datasets, in addition to distributing cluster/batch resources, already looks like a game changer for our typical operations.

20 Ongoing Tests. Tests to ~remove user interaction with the grid are underway: cache and process ATLAS-wide datasets (typically processed on the grid) using Condor, and cache the output datasets for easy access later on. All steps of our data processing will then be more directly under our control, with the potential to avoid the layers of obfuscation and grid management that can disrupt smooth data flow. This means less downtime between when new data from ATLAS becomes available and when we can access it, and more time for thinking about and doing physics! The workflow (xAOD in the cache, Condor submission, SusyNtuple written to the local cache, with output files registered to the grid and accessible via FAX later on) can run in the background; here "local" means in the user's work area on the brick.

21 Maria Spiropulu (Caltech). The 126 GeV Higgs and other puzzles: is the Higgs the SM one? Are there more? Where is SUSY? Without SUSY we don't understand how the Higgs boson can exist without violating basic mechanisms of quantum physics. Is the Higgs connected with neutrinos? Dark matter? Dark energy? More data from many sources (particle, astro, cosmo) will guide us.

22 Maria Spiropulu (Caltech). Data hyperloops: the largest data- and network-intensive programs (LHC and HL-LHC, LSST, DESI, LCLS-II, the Joint Genome Institute, etc.) face unprecedented challenges in global data distribution, processing, access, and analysis, and in the coordinated use of CPU, storage, and network resources. High-performance networking is a key enabling technology for this research: global science collaborations depend on fast and reliable data transfers and access on regional, national, and international scales. Total traffic handled (in petabytes per month) is projected to reach the exabyte-per-month scale by around 2024, with the rate of increase following or exceeding the historical trend of 10x per 4 years. HEP traffic will compete with the BES, BER, and ASCR exascale CSN ecosystems; this is a great opportunity for HEP (e.g., CMS CPU needs will grow by a large factor by the HL-LHC).

23 Maria Spiropulu (Caltech). Intelligent CFN systems: allocate guaranteed bandwidth to high-priority flows (dynamic circuit networking: ESnet/FNAL, Internet2) and point-to-point circuits across the LHCONE multi-domain fabric. Deeply programmable, agile software-defined network (SDN) infrastructures are emerging as multi-service, multi-domain network operating systems interconnecting science teams across regional, national, and global distances. Worldwide distributed systems built by the data-intensive science programs harness the global workflow, scheduling, and data management systems they have developed, enabled by distributed operations and security infrastructures riding on high-capacity (but still passive) networks. New computing models: network-aware data operations and strategic data distribution/placement/management via dynamic network provisioning (more in H. Newman's presentation on Friday).

24 Size of data & frequency of transfers.
- Caching of experiment data is limited by local CPU power and is ad hoc; 5 Gbps is probably plenty initially (see the caching benchmark).
- Serving data out is limited by remote CPU power and is ad hoc; 10 Gbps is probably enough to feed 1-2k remote CPUs (see the read-only benchmark). More will most likely be needed later.
- Data is exchanged within PRP and with LHCONE; LHCONE is the most important connectivity external to PRP.
- Tools used: Xrootd, HTCondor, gridftp. Speed achieved: 10 Gbps read-only, 5 Gbps caching (see benchmarking).
What is screwed up? We don't know yet: we have exercised the infrastructure on the LAN, but not yet sufficiently on the WAN. We are concerned about CPU elasticity: can we grow fast enough to have a serious impact, and are there enough CPU resources on PRP to scale out? We are concerned about infrastructure operational cost, where we lack experience: what are the failure modes, how do we monitor against failure, how much human intervention is required to debug and fix things when they break, how do we deal with effort-limited operations, and how stable is the infrastructure against abuse (i.e., unexpected loads; see also the next point)? We are also concerned about detailed IO performance requirements, where we again lack experience.

25 Benchmarking

26 Read-only benchmark: 1000 to 2000 simultaneous clients. Aggregate throughput peaks close to the available 10 Gbps. Xrootd data server use case: many clients each read small amounts of data at a time, because the IO per client is limited by the CPU available; recall that all apps are single threaded! This test was run with synthetic workflows simulating realistic read patterns in a LAN environment before the server was shipped.
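
A quick back-of-the-envelope check, using only the numbers on this slide, of how little bandwidth each client gets when the 10 Gbps link is shared by that many single-threaded, CPU-limited readers:

```python
# Share of a 10 Gbps link among 1000-2000 simultaneous clients.
link_gbps = 10.0
for n_clients in (1000, 2000):
    per_client_mb_s = link_gbps * 1e9 / 8.0 / n_clients / 1e6
    print("%d clients -> ~%.2f MB/s per client" % (n_clients, per_client_mb_s))
# ~1.25 MB/s at 1000 clients and ~0.6 MB/s at 2000 clients, consistent with
# many clients each reading small amounts of data at a time.
```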

27 Caching behavior. A synthetic load simulates the typical cache use case: 200 jobs each read 2.4 MB every ~10 seconds. The cache loads up in parallel to the reads; once all requested files are cached, there are no more writes while the reads continue. Write performance is ~5 Gbps in parallel with the reads, and the reads are almost unaffected by the caching and writes. (Ignore the spikes at 20:30; additional unrelated tests were active at that time.)
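
A similar quick check on the cache test, again using only the numbers above: the 200 clients request only a few hundred Mbps in aggregate, well below the ~5 Gbps at which the cache fills, which is consistent with the cache fetching whole files well ahead of the slow client reads.

```python
# Aggregate client read rate implied by 200 jobs reading 2.4 MB every ~10 s.
n_jobs, read_mb, period_s = 200, 2.4, 10.0

client_gbps = n_jobs * read_mb * 8.0 / period_s / 1000.0
print("aggregate client read rate: ~%.2f Gbps" % client_gbps)  # ~0.38 Gbps
print("observed cache fill rate:   ~5 Gbps")                   # from the slide
```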
