Data transfer over the wide area network with a large round trip time

Journal of Physics: Conference Series
17th International Conference on Computing in High Energy and Nuclear Physics (CHEP09)
To cite this article: H Matsunaga et al 2010 J. Phys.: Conf. Ser. 219 062056, doi:10.1088/1742-6596/219/6/062056

H Matsunaga, T Isobe, T Mashimo, H Sakamoto and I Ueda
International Center for Elementary Particle Physics, the University of Tokyo, Tokyo, Japan
E-mail: matunaga@icepp.s.u-tokyo.ac.jp

Abstract. A Tier-2 regional center is running at the University of Tokyo in Japan. This center receives a large amount of data of the ATLAS experiment from the Tier-1 center in France. Although the link between the two centers has 10 Gbps bandwidth, it is not a dedicated link but is shared with other traffic, and the round trip time is 290 ms. It is not easy to exploit the available bandwidth for such a link, a so-called long fat network. We performed data transfer tests using GridFTP with various combinations of parameters, such as the number of parallel streams and the TCP window size. In addition, we have gained experience with the actual data transfer in our production system, where the Disk Pool Manager (DPM) is used as the Storage Element and the data transfer is controlled by the File Transfer Service (FTS). We report the results of the tests and the daily activity, and discuss how the data transfer throughput can be improved.

1. Introduction
In order to analyze the large amount of data produced by the experiments at the Large Hadron Collider (LHC) at CERN, the Worldwide LHC Computing Grid (WLCG) has been created, which allows the data analysis to be performed at distributed computing centers around the world based on the data Grid environment. The International Center for Elementary Particle Physics (ICEPP) of the University of Tokyo is one of the ATLAS collaborating institutes and operates a Tier-2 regional center with the aim of meeting the demands of the Japanese physicists in ATLAS. In the ATLAS computing model, a Tier-2 site should play a role in user analysis as well as in Monte Carlo simulation production. The data transfer activity at a Tier-2 site is dominated by the replication of (real or simulated) data from the Tier-1 for user analysis, while for the simulation production the transfer rate is much lower. It should be noted that fast data transfer is essential for efficient data analysis. Each ATLAS Tier-2 site is associated with only one Tier-1 site, which is usually geographically close (e.g. in the same country or region), but in the case of the ICEPP Tier-2 the associated Tier-1 site is CC-IN2P3 in Lyon, France, which is very far from Tokyo.

It is well known that it is difficult to achieve a high data transfer rate with the Transmission Control Protocol (TCP) over a long-distance network with a large bandwidth. The aim of this paper is to study the performance of the network with a long latency and a large bandwidth between Japan and Europe, and to understand the current limitations and bottlenecks. We present results of data transfer tests under varying conditions as well as real-life experience in the Tier-2 production system, and also discuss possible improvements.

2. Data Transfer over WAN
The standard tool for data transfer between Grid sites, or for downloading from them, is GridFTP [1]. GridFTP is included in most storage management systems, such as the Disk Pool Manager (DPM) [2] or dCache [3], and it uses TCP as the transport layer protocol. In TCP the data transfer rate is roughly given by window size / RTT, where RTT is the round trip time. Therefore, to maximize the data transfer rate, one should increase the window size or the number of parallel streams or files.

As for the network link, the ICEPP Tier-2 center connects to SINET3 [4], the Japanese National Research and Education Network (NREN), through the university router. SINET connects to GEANT2 [5], the European academic network, at MANLAN (Manhattan Landing) in New York City. Although the path is shared with other traffic, the bandwidth is 10 Gbps from the ICEPP site to the GEANT2 network. Furthermore, RENATER [6], the French NREN, provides a 10 Gbps link from GEANT2 to CC-IN2P3, hence 10 Gbps is available for the whole path between ICEPP and CC-IN2P3. The RTT of the path is 290 ms, and the Bandwidth-Delay Product (BDP) is 10 Gbps × 290 ms, or about 360 MB, which is the window size needed to fully use the 10 Gbps bandwidth with a single TCP stream.

3. Test setup
We have set up Linux PCs at CERN and at ICEPP. The CERN-ICEPP route differs slightly within Europe from the CC-IN2P3-ICEPP route, but the RTTs are almost the same. The CERN-ICEPP route is also important for us because many Japanese physicists stationed at CERN copy data between the ICEPP regional center and their local resources at CERN. At CERN, the traffic between the test PCs goes through the High Throughput Access Route (HTAR) to bypass the CERN firewall. The HTAR bandwidth is limited to 1 Gbps.

The Linux PC at CERN is a dual-CPU Xeon L54xx server with 32 GB of RAM and an Intel 10 GbE Network Interface Card (NIC), running SLC 4.7 x86_64. The data area is provided by a hardware RAID (3ware). At ICEPP, the Linux PC is a dual-CPU Xeon 51xx server with 8 GB of RAM and a Chelsio 10 GbE NIC, running SLC 4.7 x86_64. The data disk is an external RAID (Infortrend), attached via 4 Gb Fibre Channel. This server is the same as the disk servers used at the Tier-2 site. The Linux kernel is the 2.6.9 EL.cernsmp kernel included in SLC 4.7, but for the PC at CERN a newer vanilla kernel is also tried because it has improved network code, in particular the CUBIC TCP implementation, in addition to BIC TCP which is the default in kernel 2.6.9.

The following parameters are set for the Linux kernel and the NIC:

net.ipv4.tcp_sack = 0
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_rmem (maximum 16777216)
net.ipv4.tcp_wmem (maximum 16777216)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog (enlarged)
txqueuelen (enlarged)

The first three parameters correspond to TCP extensions (selective acknowledgements and timestamps); even if they are enabled, no performance improvement is expected in many cases, and we disable them in this test. tcp_no_metrics_save is enabled so that the parameters of previous TCP connections are not cached and reused. The maximum TCP window (socket buffer) size is enlarged up to 16 MiB. For the data area we use XFS as the filesystem. The disk access speed depends on the kernel version, but we observe more than 100 MB/s for both reading and writing on the CERN server, and about 200 MB/s for writing and 100 MB/s for reading on the ICEPP server.
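For reference, settings of this kind are typically applied through sysctl and ifconfig. The following is a minimal sketch, not the exact commands used in the tests: the interface name, the buffer minimum/default values and the queue lengths are assumptions; only the 16 MiB maxima and the on/off flags are taken from the text.

  # Enlarge the socket buffer limits to 16 MiB (minimum/default values are assumed)
  sysctl -w net.core.rmem_max=16777216
  sysctl -w net.core.wmem_max=16777216
  sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"
  # Disable the SACK, DSACK and timestamp extensions, and do not cache per-route metrics
  sysctl -w net.ipv4.tcp_sack=0
  sysctl -w net.ipv4.tcp_dsack=0
  sysctl -w net.ipv4.tcp_timestamps=0
  sysctl -w net.ipv4.tcp_no_metrics_save=1
  # Enlarge the device queues (values and interface name are assumed)
  sysctl -w net.core.netdev_max_backlog=10000
  ifconfig eth2 txqueuelen 10000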

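As an illustration of how sequential disk speeds of the kind quoted above can be measured on a reasonably recent Linux system, a simple dd test of the following form can be used; the file path and size are placeholders, direct I/O is used to bypass the page cache, and this is only a rough sketch rather than the procedure used by the authors.

  # Sequential write of 8 GiB to the XFS data area (path and size are examples)
  dd if=/dev/zero of=/data/ddtest.tmp bs=1M count=8192 oflag=direct
  # Sequential read of the same file, again bypassing the page cache
  dd if=/data/ddtest.tmp of=/dev/null bs=1M iflag=direct
  rm -f /data/ddtest.tmp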
In the following tests, the sender node is always at CERN, while the receiver node is at ICEPP.

4. Iperf test
Before performing the disk-to-disk transfer tests, we measure the network throughput in memory-to-memory mode by using the iperf [7] program with TCP, to check the pure network condition without disk I/O. Figure 1 shows the network throughput from CERN to ICEPP using iperf for various numbers of parallel streams (1, 2, 4 and 8) and window sizes (2, 4, 8 and 16 MiB). To obtain the results, we measure multiple times and then take an average, disregarding a small number (a fixed fraction) of the worst measurements, which are likely due to accidental congestion caused by other traffic. As can be seen, the measured throughputs are proportional to the number of streams or the window size below about 100 MB/s, and the 1 Gbps (125 MB/s) limit is reached when the window size and/or the number of streams are large enough. We see no difference between the kernel versions (2.6.9 and the newer vanilla kernel) in the iperf results.

In Figure 2, transfer rates with one stream are shown as a function of time. These are measured by running tcpdump on the receiver node, and t = 0 is defined as the arrival of the first data packet. We see the slow-start phase of TCP in the first several seconds and then a constant rate, with small fluctuations, in the congestion avoidance phase.

Figure 1. Network throughput with iperf from CERN to ICEPP, for varying window sizes and numbers of streams.
Figure 2. Single-stream throughput as a function of time. The sender node at CERN runs the newer vanilla kernel.
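The memory-to-memory measurements described in this section correspond to iperf invocations of roughly the following form; the host name, port, duration and reporting interval are placeholders, and the window size has to be requested on both ends.

  # Receiver node at ICEPP: listen with a 16 MiB TCP window
  iperf -s -p 5001 -w 16M
  # Sender node at CERN: 4 parallel streams, 8 MiB window, 60 s test, report every 5 s
  iperf -c receiver.icepp.example -p 5001 -w 8M -P 4 -t 60 -i 5

With an 8 MiB window the single-stream ceiling from window size / RTT is about 8 MiB / 0.29 s, i.e. roughly 29 MB/s, so several streams or a larger window are needed to approach the 1 Gbps limit of the test path.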

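A per-second rate curve such as the one in Figure 2 can be reconstructed from a tcpdump capture along the following lines. This is only a sketch: the interface and host names are placeholders, and the awk field positions assume the quiet (-q) output format, which can differ between tcpdump versions.

  # Capture the incoming transfer on the receiver node (stop with Ctrl-C)
  tcpdump -i eth2 -w transfer.pcap 'tcp and src host sender.cern.example'
  # Sum the TCP segment lengths per second of arrival; with -q the length is the last field
  tcpdump -tt -nn -q -r transfer.pcap | \
    awk '{ t = int($1); bytes[t] += $NF } END { for (s in bytes) print s, bytes[s]/1e6, "MB/s" }' | sort -n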
5. GridFTP test
Data transfer from disk to disk between the remote hosts is carried out, simulating the actual use case. We use two versions of GridFTP, included in the Globus Toolkit 3.2.1 and 4.2.1: a GridFTP server (version 1.17 or 3.15) runs on the receiver node at ICEPP, and the client, globus-url-copy (version 3.6 or 4.14), is run on the sender node at CERN. GSI authentication is used, but the transfer rate is measured only after the authentication phase. The file size is 4 GB in most cases and 1 GB in some slow cases.

Figure 3 shows the results of the data transfer from CERN to ICEPP for Globus Toolkit 3.2.1 or 4.2.1 and for Linux kernel 2.6.9 or the newer vanilla kernel at CERN. The numbers of parallel streams and the TCP window sizes, which are given as command-line options of globus-url-copy, are also varied as in the iperf tests. Compared with the iperf results, the throughputs are clearly worse for the higher-rate points, probably because the disk I/O speed is limited (in particular when reading from the slow disk of the CERN server) even with multiple streams, or because the speed is not very constant. The newer Linux kernel improves the transfer rates to some extent, and with this kernel there is little difference between the GridFTP versions. On the other hand, with kernel 2.6.9 better performance is seen with the newer GridFTP. Overall, the throughputs are limited to about 100 MB/s by the slower disk I/O on the sender node.

Figure 3. Results of data transfer with GridFTP from CERN to ICEPP, for varying window sizes and numbers of streams. The Linux kernel is 2.6.9 (left) or the newer vanilla kernel (right), and the Globus Toolkit version is 3.2.1 (top) or 4.2.1 (bottom).

Figure 4 shows the throughput of a file transfer with a single stream as a function of time. In this case one can see a drop in the rate in the middle of the transfer, which recovers quickly in the same way as at the start-up of the transfer. The rate fluctuation is larger than in the iperf result, even in the constant phase. Results for a multiple-stream transfer are shown in Figure 5. In this measurement most of the 8 streams are well balanced, but the aggregated rate is more unstable than in the one-stream case due to the heavier load on the disk I/O. Interestingly, we occasionally see a slow speed at about 100 seconds after the data transfer starts, which may be caused by the characteristics of the disk (RAID) system.

Figure 4. Throughput vs. time for a single-stream transfer (1 stream, 8 MiB window size, kernel 2.6.9, Globus Toolkit 4.2.1). A packet loss occurs during the data transfer.
Figure 5. Data transfer rates per stream in a file transfer (8 streams, 4 MiB window size, Globus Toolkit 4.2.1). The rate of each stream and their sum (scaled by 0.5) are shown.

Figure 6 shows the results for parallel file transfers with the same configuration. For this measurement, all files are in the same filesystem on both the sender and receiver nodes. In the case of 2 or 4 concurrent files, the total throughput is nearly 100 MB/s, which seems to be limited by the local disk read, and only a small performance difference is seen between the Linux kernel versions.

Figure 6. Results for multiple file transfers using Linux kernel 2.6.9 or the newer vanilla kernel on the sender node (32 MiB window size, 8 streams, Globus Toolkit 4.2.1).
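The disk-to-disk transfers in this section correspond to globus-url-copy commands of roughly the following form, where -p sets the number of parallel streams and -tcp-bs the TCP buffer (window) size in bytes; the host name and file paths are placeholders.

  # 8 parallel streams with an 8 MiB TCP buffer, from the local RAID at CERN
  # to the GridFTP server on the ICEPP test node
  globus-url-copy -vb -p 8 -tcp-bs 8388608 \
    file:///data/testfile.dat \
    gsiftp://receiver.icepp.example:2811/data/testfile.dat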

6. Production system
At the ICEPP Tier-2 site, DPM is used as the Storage Element. In the current configuration it consists of one head node and 13 disk servers. The head node runs several services which manage the name space and the disk pools with a MySQL database backend, while the actual data transfer is performed by the disk servers running the GridFTP server. The hardware and the operating system are the same as those of the ICEPP test server described above, but some parameters are different; in particular, the maximum window size is 2 MiB. The GridFTP software is provided by the DPM, and the version of the GridFTP server is 2.3 (originally included in Globus Toolkit 4.0.3). Five external RAID boxes are attached to each disk server: two RAIDs are attached to one 4 Gb Fibre Channel port and the other three to another port. One XFS filesystem (6 TB) is created for each RAID disk.

In ATLAS, data transfer is managed by the Distributed Data Management (DDM) [8] system, which uses the File Transfer Service (FTS) [9] for the bulk data transfer and registers the files in the catalogs. The FTS controls the file transfers between the Grid sites by using GridFTP third-party transfers. In our usual case of data transfer between the CC-IN2P3 and ICEPP sites, the DDM services are running at CERN, and the FTS and the LCG File Catalog (LFC) [10] are operated at CC-IN2P3. Therefore, compared with the test conditions in the previous sections, the efficiency of the data transfer is lowered by the overhead of the DDM, FTS and DPM services, and also by the GSI authentication in each file transfer. With FTS, a channel is established between the Storage Elements of the remote sites, and for each channel one can set the number of concurrent files and the number of GridFTP streams. Our current settings for the number of concurrent files and the number of streams per file (10) were determined by rough optimization.

Figure 7 shows a snapshot of the aggregated data traffic measured at the disk servers of the ICEPP Tier-2. At that time there were 6 disk servers at ICEPP and more than 30 dCache disk servers at CC-IN2P3. A peak rate of 500 MB/s was observed when large files (3.5 GB each) were transferred and other activities were low at both sites. The data transfer rate depends largely on the ATLAS and WLCG activities; it is bursty rather than constant, and a bulk transfer usually lasts from some minutes to several hours. As of this writing, we have observed 500 MB/s data transfers from CC-IN2P3 several times.

Figure 7. Data transfer rate from CC-IN2P3 to ICEPP in the production system. A peak rate of 500 MB/s was observed.
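To see why several concurrent files, each with multiple streams, are needed in production, the window size / RTT estimate of section 2 can be applied to the production settings (2 MiB maximum window, 290 ms RTT, 10 streams per file). The small calculation below gives a rough per-file ceiling and ignores slow start, packet loss and disk load.

  # Per-stream and per-file ceilings from rate ~ window / RTT
  awk 'BEGIN {
    w = 2 * 1024 * 1024;   # 2 MiB maximum window on the DPM disk servers
    rtt = 0.29;            # 290 ms round trip time to CC-IN2P3
    s = 10;                # GridFTP streams per file transfer
    printf "per stream: %.1f MB/s, per file with %d streams: %.1f MB/s\n", w/rtt/1e6, s, s*w/rtt/1e6
  }'

At roughly 70 MB/s per file, reaching the observed 500 MB/s peak requires of the order of seven or more files in flight, which is consistent with transferring many files concurrently across many disk servers.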

7. Conclusions
For the ICEPP Tier-2 site in Tokyo, data transfer is a critical issue because the site mostly receives data from the CC-IN2P3 Tier-1 site in Lyon, France, and also copies data from and to the local resources at CERN. The connection from ICEPP in Tokyo to Europe is not a private or dedicated network; the bandwidth is 10 Gbps with a large RTT (290 ms to CC-IN2P3 or CERN). We have tested the network performance between ICEPP and CERN (limited to 1 Gbps) with and without disk access using test PCs. In the memory-to-memory test, the throughput scales with the window size or the number of streams, and the 1 Gbps limit can easily be reached with modern hardware and software. In the disk-to-disk test, however, it is difficult to achieve a similar throughput even with multiple streams and multiple files. Judging from the results obtained, faster disk access and/or parallel streams on different disk hardware will be very important for good performance. In the comparison of the Linux kernel versions (2.6.9 and the newer vanilla kernel), the newer kernel leads to better performance; the reason may be improvements in the network code or in the local disk access implementation, or both. Concerning the GridFTP version, the difference is smaller than that between the kernel versions, but some improvement can be expected.

It has also been demonstrated that data transfer rates between ICEPP and CC-IN2P3 can exceed 500 MB/s with many servers in the production system. This could be increased further in the future with new software and hardware as well as system tuning, but it is still unclear whether the connection has a margin of bandwidth, especially in France, because CC-IN2P3 also sends data to other French Tier-2 sites simultaneously and a part of the path is shared with our traffic. The optimization of system parameters such as the window size should be performed carefully because the disk servers support data access from many LAN clients in addition to the WAN data transfer. We will study this LAN access in the future to find the best settings for our DPM disk servers.

Acknowledgments
We would like to thank the National Institute of Informatics (NII), the Information Technology Center of the University of Tokyo, and the Computing Research Center of the High Energy Accelerator Research Organization (KEK) for setting up and managing the network infrastructure. Thanks go to J. Tanaka (ICEPP) for his help in setting up the test PCs and the HTAR route at CERN. We are also grateful to the CC-IN2P3 staff for their cooperation and support for the data transfer in the production system.

References
[1] Globus GridFTP.
[2] Disk Pool Manager (DPM).
[3] dCache.
[4] SINET3.
[5] GEANT2.
[6] RENATER.
[7] iperf.
[8] Branco M et al 2008 J. Phys.: Conf. Ser. 119 062017
[9] File Transfer Service (FTS).
[10] LCG File Catalog (LFC).
