CMS evolution on data access: xrootd remote access and data federation. Giacinto DONVITO INFN-BARI


1 CMS evolution on data access: xrootd remote access and data federation Giacinto DONVITO INFN-BARI

2 Outlook ò CMS computing model: first assumptions and results ò Technology trends and impact on HEP Computing models ò Xrootd concepts and features ò CMS framework enhancements ò Xrootd Redirector use cases and basic ideas ò Xrootd Redirectors actual deployment ò Xrootd first test and monitoring

3 Outlook ò CMS computing model: first assumptions and results ò Technology trends and impact on HEP Computing models ò Xrootd concepts and features ò CMS framework enhancements ò Xrootd Redirector use cases and basic ideas ò Xrootd Redirectors actual deployment ò Xrootd first test and monitoring

4 CMS computing model: first assumptions
ò The Computing Models of the LHC experiments were designed using MONARC (Models of Networked Analysis at Regional Centers for LHC Experiments). Model assumptions:
ò Resources are located in distributed centers with a hierarchical structure:
ò Tier 0 Master computing center, full resource set (disks and tapes)
ò Tier 1 National Centre, resource subset (disks and tapes)
ò Tier 2 Major Centre (only disks)
ò Tier 3 University Department
ò The network will be the bottleneck
ò We will need a hierarchical mass storage system because we cannot afford sufficient local disk space
ò The file catalogue will not scale
ò We will overload the source site if we ask for transfers to a large number of sites
ò We need to run jobs close to the data to achieve efficient CPU utilization
ò We need an a priori structure and predictable data utilization

5 CMS computing model: first assumptions (it applies to all LHC experiments, though)
[MONARC tier diagram, image courtesy of Harvey Newman, Caltech: the Online System (~PBytes/sec) feeds the Tier 0 CERN Computer Centre and Offline Processor Farm (~20 TIPS) at ~100 MBytes/sec; Tier 1 regional centres (France, Germany, Italy, FermiLab ~4 TIPS) connect at ~622 Mbits/sec (or air freight, deprecated); Tier 2 centres (Caltech and others, ~1 TIPS each) connect at ~622 Mbits/sec; institute physics data caches (~0.25 TIPS) and physicist workstations (Tier 4) connect at ~1 MBytes/sec. 1 TIPS is approximately 25,000 SpecInt95 equivalents. There is a bunch crossing every 25 ns, there are 100 triggers per second, and each triggered event is ~1 MByte in size. Physicists work on analysis channels; each institute will have ~10 physicists working on one or more channels, and data for these channels should be cached by the institute server.]

6 CMS computing model: first assumptions
Taking the CMS data flow as an example: T1→T2 transfers are likely to be bursty and driven by analysis demands (~10-100 MB/s for the worst/best connected T2s), while T2→T1 transfers are almost entirely continuous simulation transfers at ~10 MB/s (note: the aggregate input rate into the T1s is comparable to the rate from T0).
[Data-flow diagram: Tier-0 (0.4 PB disk, 4.9 PB tape, 5 Gbps WAN) writes 225 MB/s (RAW) to tape and sends 280 MB/s (RAW, RECO, AOD) to the Tier-1s; each Tier-1 (1.0 PB disk, 2.4 PB tape, ~10 Gbps WAN) receives 40 MB/s (RAW, RECO, AOD), exchanges AOD with the other Tier-1s, sends 240 MB/s (skimmed AOD, some RAW+RECO) to and receives 48 MB/s (MC) from its Tier-2s; each Tier-2 (0.2 PB disk, 1 Gbps WAN) receives 60 MB/s (skimmed AOD, some RAW+RECO) and sends 12 MB/s (MC); local access is ~800 MB/s (AOD skimming, data reprocessing) and up to 1 GB/s (AOD analysis, calibration). NB: average sustained throughputs. Courtesy: J. Hernandez (CMS); slide from D. Bonacorsi, CCR, Gran Sasso.]

7 CMS computing model: first results
ò Data transfers among sites are much more reliable than expected
ò WAN network performance is growing fast and the network infrastructure is reliable
ò Retrieving files from a remote site could be easier than retrieving them from a local hierarchical mass storage system
ò It looks like we need a clearer partition between disk and tape
ò Using tape only as an archive
ò Geographically distributed job submission and matchmaking are working well
ò The tools developed to help the final users are easy to use and make the usage of the grid transparent to the user

8 CMS computing model: first results
[Plots of WAN transfer rates from CERN and from the T1s]
ò It is easy to get, out of a single T0/T1, up to MB/s of sustained WAN transfer to all other CMS sites

9 CMS computing model: first results
ò A very small number of datasets are used very heavily

10 Outlook ò CMS computing model: first assumptions and results ò Technology trends and impact on HEP Computing models ò Xrootd concepts and features ò CMS framework enhancements ò Xrootd Redirector use cases and basic ideas ò Xrootd Redirectors actual deployment ò Xrootd first test and monitoring

11 Technology trends and impact on HEP Computing models

12 Technology trends and impact on HEP Computing models
ò The available WAN bandwidth is comparable with the backbone available at LAN level
ò Network flows are already larger than foreseen at this point in the LHC program, even with lower luminosity
ò Some T2s are very large
ò All US ATLAS and US CMS T2s have 10G capability
ò Some T1-T2 flows are quite large (several to 10 Gbps)
ò T2-T2 data flows are also starting to become significant
ò The concept of transferring files regionally is substantially broken!

13 Technology trends and impact on HEP Computing models
ò The vision progressively moves away from all-hierarchical models to peer-to-peer
ò True for both CMS and ATLAS
ò For reasons of reduced latency and increased working efficiency
ò The challenge in the near future will be the available IOPS on storage systems
ò It is clear we need to optimize the I/O at the application level, as disks will not increase their performance much in the future
ò The hierarchical mass storage system could not cope with dynamic data stage-in requests
ò The cost per TB of disk allows us to build huge disk-only facilities

14 Outlook ò CMS computing model: first assumptions and results ò Technology trends and impact on HEP Computing models ò Xrootd concepts and features ò CMS framework enhancements ò Xrootd Redirector use cases and basic ideas ò Xrootd Redirectors actual deployment ò Xrootd first test and monitoring

15 Xrootd concepts and features What is xrootd?
ò A file access and data transfer protocol
ò Defines POSIX-style byte-level random access for
ò Arbitrary data organized as files of any type
ò Identified by a hierarchical directory-like name
ò A reference software implementation
ò Embodied as the xrootd and cmsd daemons
ò The xrootd daemon provides access to data
ò The cmsd daemon clusters xrootd daemons together

16 Xrootd concepts and features What isn't xrootd?
ò It is not a POSIX file system
ò There is a FUSE implementation called xrootdfs
ò An xrootd client simulating a mountable file system
ò It does not provide full POSIX file system semantics
ò It is not a Storage Resource Manager (SRM)
ò Provides SRM functionality via BeStMan
ò It is not aware of any file internals (e.g., root files)
ò But it is distributed with the root and proof frameworks
ò As it provides unique & efficient file access primitives

17 Xrootd concepts and features Primary xrootd access modes
ò The root framework (see the example below)
ò Used by most HEP and many Astro experiments (MacOS, Unix and Windows)
ò POSIX preload library
ò Any POSIX-compliant application (Unix only, no recompilation needed)
ò File system in User SpacE (FUSE)
ò A mounted xrootd data access system via FUSE (Linux and MacOS only)
ò SRM, globus-url-copy, gridftp, etc.
ò General grid access (Unix only)
ò xrdcp
ò The parallel stream, multi-source copy command (MacOS, Unix and Windows)
ò xrd
ò The command line interface for meta-data operations (MacOS, Unix and Windows)
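
As a minimal illustration of the first access mode (the root framework), a file exposed by an xrootd server can be opened directly through a root:// URL; the host and path below are hypothetical placeholders, not real CMS endpoints.

    # Minimal PyROOT sketch: open a file served by xrootd and inspect it.
    # The URL is a hypothetical example; substitute a real server/redirector and path.
    import ROOT

    f = ROOT.TFile.Open("root://xrootd.example.org//store/user/test/file.root")
    if f and not f.IsZombie():
        f.ls()       # list the objects stored in the file
        f.Close()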

18 Xrootd concepts and features What makes xrootd unusual?
ò A comprehensive plug-in architecture
ò Security, storage back-ends (e.g., tape), proxies, etc.
ò Clusters widely disparate file systems
ò Practically any existing file system
ò Distributed (shared-everything) to JBODs (shared-nothing)
ò Unified view at local, regional, and global levels
ò Very low support requirements
ò Hardware and human administration

19 Xrootd concepts and features
[Plug-in stack diagram: Protocol Driver (XRD); Protocol, 1 of n (xroot, proof, etc.); Authentication (gsi, krb5, etc.); Authorization (dbms, voms, etc.); lfn2pfn prefix encoding; Clustering (cmsd); Logical File System (ofs, sfs, alice, etc.); Physical Storage System (ufs, hdfs, hpss, etc.). Replaceable plug-ins to accommodate any environment.]
Let's take a closer look at xrootd-style clustering

20 Xrootd concepts and features A simple xrootd cluster
[Diagram: a client sends open("/my/file") to the manager (a.k.a. redirector), which runs xrootd+cmsd (step 1); the manager asks its data servers "Who has /my/file?" (step 2); the servers holding the file answer (step 3); the client is told to retry the open() at data server A, which has /my/file (step 4).]

21 Xrootd concepts and features Exploiting stackability
[Diagram: three distinct sites (ANL, SLAC, UTA), each with its own manager (a.k.a. local redirector) and xrootd+cmsd data servers, are federated under a meta-manager (a.k.a. global redirector). The client sends open("/my/file") to the meta-manager (1), which asks the local managers "Who has /my/file?" (2); each local manager asks its own data servers (3); the client is told to try the open() at ANL (5), opens there (6), is told to try data server A (7), and finally opens the file on that server (8). Data is uniformly available by federating the three distinct sites, and the lookup is an exponentially parallel search, i.e. O(2^n). Federated distributed clusters.]
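
A toy sketch of the two-level redirection above (purely conceptual, not the actual cmsd protocol or code): the global redirector fans the location query out to the site redirectors, each of which queries its own data servers, and the client is then pointed at any server that reports the file.

    # Conceptual model of hierarchical xrootd redirection (not real xrootd code).
    # Each site redirector knows only its own data servers; the global redirector
    # knows only the site redirectors, so the search fans out level by level.

    sites = {
        "ANL":  {"serverA": {"/my/file"}, "serverB": set()},
        "SLAC": {"serverC": set(),        "serverD": {"/my/file"}},
        "UTA":  {"serverE": set(),        "serverF": set()},
    }

    def locate_at_site(site, path):
        # Site redirector: ask every local data server whether it has the file.
        return [srv for srv, files in sites[site].items() if path in files]

    def locate_globally(path):
        # Global redirector: ask every federated site and collect all holders.
        return [(site, srv) for site in sites for srv in locate_at_site(site, path)]

    print(locate_globally("/my/file"))   # -> [('ANL', 'serverA'), ('SLAC', 'serverD')]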

22 Xrootd concepts and features Federated distributed clusters
ò Unites multiple site-specific data repositories
ò Each site enforces its own access rules
ò Usable even in the presence of firewalls
ò Scalability increases as more sites join
ò Essentially a real-time bit-torrent social model
ò Federations are fluid and changeable in real time
ò Provide multiple data sources to achieve high transfer rates
ò Increase opportunities for data analysis
ò Based on what is actually available

23 Xrootd concepts and features Copy data access architecture
ò The built-in File Residency Manager drives
ò Copy on fault: demand-driven (fetch to restore a missing file)
ò Copy on request: pre-driven (fetch files to be used for analysis)
[Diagram: the client issues open("/my/file") or "xrdcp -x xroot://mm.org//my/file /my"; the meta-manager (a.k.a. global redirector) locates the file at the federated sites (ANL, SLAC, UTA), each with its own local redirector and data servers, and xrdcp copies the data using two sources.]

24 Xrootd concepts and features Direct data access architecture
ò Use the servers as if all of them were local
ò Normal and easiest way of doing this
ò Latency may be an issue (depends on the algorithms & the CPU-I/O ratio)
ò Requires a cost-benefit analysis to see if it is acceptable
[Diagram: the client's open("/my/file") goes to the meta-manager (a.k.a. global redirector), which redirects it through the local redirector of whichever federated site (ANL, SLAC, UTA) holds the file, and the data server is then read directly over the WAN.]

25 Xrootd concepts and features Cached data access architecture
ò Front the servers with a caching proxy server
ò The client accesses the proxy server for all data
ò The proxy server can be central or local to the client (i.e., a laptop)
ò Data comes from the proxy's cache or from the other servers
[Diagram: the client's open("/my/file") goes to a caching xrootd proxy, which fetches data via the meta-manager (a.k.a. global redirector) from whichever federated site (ANL, SLAC, UTA) holds the file, and subsequently serves it from its own cache.]

26 Outlook ò CMS computing model: first assumptions and results ò Technology trends and impact on HEP Computing models ò Xrootd concepts and features ò CMS framework enhancements ò Xrootd Redirector use cases and basic ideas ò Xrootd Redirectors actual deployment ò Xrootd first test and monitoring

27 CMS framework enhancements
ò CMSSW 4_X contains ROOT 5.27/06b, which has significant I/O improvements over 3_X
ò CMSSW 4_X also contains modest I/O improvements on top of out-of-the-box ROOT I/O
ò Newly added in ROOT:
ò Auto-flushing: all buffers are flushed to disk periodically, guaranteeing some level of locality
ò Buffer-resizing: buffers are resized so that approximately the same number of events is in each buffer
ò Read coalescing: nearby (but non-consecutive) reads are combined into one (see the sketch below)
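
To make the read-coalescing idea concrete, here is a small, purely illustrative Python sketch (not ROOT's actual implementation): byte-range requests that lie close together are merged into a single larger read whenever the gap between them is below a threshold.

    # Illustrative read-coalescing sketch (not ROOT's implementation).
    # Requests are (offset, length) pairs; neighbouring requests are merged into
    # one larger read if the gap between them is at most max_gap bytes.

    def coalesce(requests, max_gap=4096):
        merged = []
        for off, length in sorted(requests):
            if merged and off - (merged[-1][0] + merged[-1][1]) <= max_gap:
                prev_off, prev_len = merged[-1]
                merged[-1] = (prev_off, max(prev_len, off + length - prev_off))
            else:
                merged.append((off, length))
        return merged

    # Three nearby reads and one far away collapse into two vectored reads:
    print(coalesce([(0, 100), (150, 100), (300, 50), (1_000_000, 200)]))
    # -> [(0, 350), (1000000, 200)]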

28 CMS framework enhancements
ò Incremental improvements through 3_X and 4_X: only one event tree, improved event read ordering, TTreeCache became functional, caching of non-event trees
ò Improved startup: while TTreeCache is being trained, we now read all baskets for the first 10 events at once, so startup is typically one large read instead of many small ones (see the TTreeCache sketch below)
[Diagram: improved startup latencies (simplified). During the training phase, CMSSW 3_X issues individual reads per branch per event (7 read calls, 21 squares read), while CMSSW 4_2 issues vectored reads covering all branches (2 read calls, 27 squares read). We read all branches for the first 20 events; compared to the entire file size, the bandwidth cost is minimal. On high-latency links this can give a 5-minute reduction in the time to read the first 20 events.]
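
For reference, this is roughly what enabling and training ROOT's TTreeCache looks like from user code, as a hedged sketch with hypothetical file and tree names (CMSSW configures this internally, so an analysis job normally does not do it by hand).

    # Sketch: enabling TTreeCache by hand for remote reads (hypothetical names).
    import ROOT

    f = ROOT.TFile.Open("root://xrootd.example.org//store/user/test/file.root")
    tree = f.Get("Events")                 # hypothetical tree name

    tree.SetCacheSize(20 * 1024 * 1024)    # 20 MB cache for basket prefetching
    tree.SetCacheLearnEntries(10)          # train on the first 10 events
    tree.AddBranchToCache("*", True)       # then cache all branches

    for i in range(tree.GetEntries()):
        tree.GetEntry(i)                   # reads are served in large, vectored chunks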

29 CMS framework enhancements Upcoming enhancements
ò Real asynchronous prefetch (using threads and double-buffering). Brian's opinion: the current patch set is problematic and not usable by CMS. At least a year out.
ò Additional reduction in un-streaming time. Multi-threaded un-streaming.
ò ROOT implementation of things in CMSSW: multiple TTreeCaches per file. Improved startup latencies.

30 Outlook ò CMS computing model: first assumptions and results ò Technology trends and impact on HEP Computing models ò Xrootd concepts and features ò CMS framework enhancements ò Xrootd Redirector use cases and basic ideas ò Xrootd Redirectors actual deployment ò Xrootd first test and monitoring

31 Xrootd Redirector use cases and basic ideas
ò The goal of this activity is to decrease the effort required for the end-user to access CMS data.
ò We have built a global prototype, based on Xrootd, which allows a user to open (almost) any file in CMS, regardless of their location or the file's source.
ò Goals:
ò Reliability: seamless failover, even between sites.
ò Transparency: open the same filename regardless of source site.
ò Usability: must be native to ROOT.
ò Global: cover as many CMS files as possible.

32 Xrootd Redirector use cases and basic ideas
ò The targets of this project:
ò End-users: event viewers, running on a few CMS files, sharing files with a group.
ò T3s: running medium-scale ntuple analysis.
ò None of these users is well represented by CMS tools right now.
ò So, a prototype is better than nothing.

33 Xrootd Redirector use cases and basic ideas Xrootd prototype
ò Have a global redirector users can contact for all files (see the failover sketch below).
ò Can use any ROOT-based app to access the prototype infrastructure! Each file has only one possible URL.
ò Each participating site deploys at least one xrootd server that acts as a proxy/door to the external world.
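
The "one URL per file" idea means a user only needs the CMS logical file name plus a redirector hostname, and the seamless-failover goal can be approximated by trying the local door before the global redirector. A hedged PyROOT sketch, where the hostnames and the /store path are illustrative placeholders rather than the actual production endpoints:

    # Sketch of failover between a local xrootd door and the global redirector.
    import ROOT

    lfn = "/store/user/test/ntuple.root"                   # hypothetical LFN
    sources = [
        "root://xrootd-local.example.org/" + lfn,          # local site door
        "root://cms-xrootd-global.example.org/" + lfn,     # global redirector
    ]

    f = None
    for url in sources:
        f = ROOT.TFile.Open(url)
        if f and not f.IsZombie():
            print("opened", url)
            break
    else:
        raise RuntimeError("could not open %s from any source" % lfn)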

34 Xrootd Redirector use cases and basic ideas

35 Xrootd Redirector use cases and basic ideas Illustration
[Diagram: a remote user (1) attempts login at the site's xrootd door; xrootd (2) requests a mapping for the user's DN / VOMS attributes from LCMAPS; (3) the result is, e.g., local user "brian"; (4) login successful; (5) the user opens /store/foo; (6) xrootd asks the Authfile (site mapping policy) whether brian can open /store/foo; (7) permit; the open succeeds.]
ò The authentication is always delegated to the site hosting the file (a toy sketch of this flow follows below)
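
A toy sketch of that site-side flow (purely conceptual; real sites use LCMAPS and the xrootd Authfile, not Python): the grid identity is first mapped to a local user, and the local authorization policy then decides whether that user may open the requested path.

    # Conceptual sketch of the site-side authorization flow (not LCMAPS/xrootd code).

    dn_to_user = {                      # stand-in for the LCMAPS / grid-mapfile step
        "/DC=org/DC=cms/CN=Brian Example": "brian",
    }
    authfile = {                        # stand-in for the xrootd Authfile policy
        "brian": ["/store"],            # path prefixes this local user may read
    }

    def can_open(dn, path):
        user = dn_to_user.get(dn)                        # steps 2-3: map DN -> local user
        if user is None:
            return False                                 # login refused
        allowed = authfile.get(user, [])
        return any(path.startswith(p) for p in allowed)  # steps 6-7: check site policy

    print(can_open("/DC=org/DC=cms/CN=Brian Example", "/store/foo"))   # True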

36 Xrootd Redirector use cases and basic ideas

37 Xrootd Redirector use cases and basic ideas

38 Xrootd Redirector use cases and basic ideas

39 Outlook ò CMS computing model: first assumptions and results ò Technology trends and impact on HEP Computing models ò Xrootd concepts and features ò CMS framework enhancements ò Xrootd Redirector use cases and basic ideas ò Xrootd Redirectors actual deployment ò Xrootd first test and monitoring

40 Xrootd Redirectors actual deployment Site integration
ò We have yet to run into a storage system that cannot be integrated with Xrootd. Techniques:
ò POSIX: works out of the box. Example: Lustre.
ò C API: requires a new XrdOss plugin. Examples: HDFS, dcap, REDDNet (?)
ò Native implementation: the latest dCache has built-in xrootd support; combine with the xrootd.org cmsd to join the federation.
ò Direct access: let xrootd read the other system's files directly from disk (FNAL)

41 Xrootd over HDFS
[Diagram: Xrootd redirector ← native xrootd local door → local HDFS installation]
ò In this case we need a native xrootd door reading files using the HDFS standard C library
ò This is quite similar to the installation of a gridftp door

42 Xrootd over xrootd
[Diagram: Xrootd EU redirector ← native xrootd local door (in proxy mode) → dCache xrootd door → local dCache installation]
ò In this case we need a native xrootd door reading files using the xrootd library
ò This machine is configured in proxy mode
ò This requires that the dCache instance already provides a dCache xrootd endpoint

43 Xrootd over POSIX
[Diagram: Xrootd EU redirector ← native xrootd local door(s) → local GPFS/Lustre installation]
ò In this case we need one or more native xrootd doors that are clients of the local parallel file system (GPFS/Lustre)

44 Xrootd Redirectors actual deployment CMSSW performance
ò Note that the use cases all involve wide-area access to CMS data, whether interactively or through the batch system.
ò WAN performance of the software is key to this project's success.
ò Current monitoring shows 99% of the bytes CMS jobs request are via vectored reads; many jobs have about 300 reads per file in total. That is at least 30 s of I/O that a locally running job wouldn't have (see the estimate below).
ò We plan to invest our time in I/O to make sure these numbers improve, or at least don't regress.
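
The 30 s figure follows from simple latency arithmetic: each of the ~300 reads per file pays at least one WAN round trip, so at an assumed round-trip time of ~100 ms (an illustrative value, not a measurement from the talk) the per-file overhead is on the order of half a minute.

    # Back-of-the-envelope I/O overhead of remote access (illustrative numbers).
    reads_per_file = 300      # from the monitoring figure quoted above
    rtt_seconds = 0.100       # assumed WAN round-trip time (~100 ms)

    extra_io_time = reads_per_file * rtt_seconds
    print("extra latency per file: %.0f s" % extra_io_time)   # -> 30 s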

45 Xrootd Redirectors actual deployment Xrootd deployment
ò US Xrootd redirector:
ò Collecting data from US sites:
ò Nebraska, Purdue, Caltech, UCSD, FNAL
ò EU Xrootd redirector:
ò Collecting data from EU sites:
ò INFN-Bari, INFN-Pisa, DESY, Imperial College (UK)

46 Xrootd Redirectors actual deployment Xrootd deployment
ò Xrootd provides per-file and per-connection monitoring
ò I.e., a UDP packet per login, open, close and disconnect, plus a summary of every 5 s of activity
ò This is forwarded to a central server, where it is summarized and forwarded to several consumers (see the sketch below)
ò Xrootd also provides a per-server summary (connections, files open) that is fed directly to MonALISA
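
As a rough picture of what such a central collector does, here is a conceptual Python sketch only (the real xrootd monitoring stream is a binary UDP format decoded by dedicated collectors): it listens for UDP monitoring datagrams and counts them per sending host, the kind of summary that would then be handed on to downstream consumers.

    # Conceptual UDP monitoring collector (not the real xrootd packet decoder).
    import socket
    from collections import Counter

    PORT = 9930                      # hypothetical collector port
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", PORT))

    packets_per_host = Counter()
    for _ in range(1000):            # handle a bounded number of datagrams
        data, (host, _) = sock.recvfrom(65535)
        packets_per_host[host] += 1  # a real collector decodes login/open/close records here

    print(packets_per_host.most_common(10))   # summary to forward to consumers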

47 Xrootd Redirectors actual deployment Xrootd distributed performance
ò Using CMSSW 4_1_3 and an I/O-bound analysis
ò Jobs are running at T2_IT_LNL and the data are hosted at T2_IT_Bari
ò ping time: 12 ms
ò CPU efficiency drops from 68% to 50%
ò ~30% performance drop (see the estimate below)
ò This is expected to be close to a worst case
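
For reference, the quoted numbers are self-consistent: if CPU efficiency is CPU time over wall-clock time and the CPU work is unchanged, throughput scales with the efficiency, so going from 68% to 50% is a relative loss of (68 − 50)/68 ≈ 26%, of the order of the ~30% quoted above.

    # Relative performance drop implied by the CPU-efficiency change.
    eff_local, eff_remote = 0.68, 0.50
    drop = (eff_local - eff_remote) / eff_local
    print("relative drop: %.0f%%" % (100 * drop))   # -> ~26%, roughly the ~30% quoted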

48 Xrootd Redirectors actual deployment Xrootd distributed performance

49 Conclusion
ò The Xrootd redirector infrastructure is fulfilling a few use cases that are not covered by the official CMS tools
ò It is already being used:
ò by final users
ò Debugging code, visualization, etc.
ò and by computing centres
ò that do not have enough manpower to fully support CMS locally
ò Tuning CMSSW is very important in order to provide good performance when accessing data remotely
