Experience of using Data Grid simulation packages


1 Experience of using Data Grid simulation packages. Nechaevskiy A.V. (SINP MSU), Korenkov V.V. (LIT JINR). Dubna, 2008

2 Content
Operation of the LCG DataGrid.
Errors of the FTS services of the Grid.
Primary goals of Grid simulation systems.
The OptorSim and GridSim simulators.
Results of the LCG DataGrid simulation with OptorSim.
[Diagram: the LCG tier structure, the Grid solution for the LHC experiments. Tier-2s and Tier-1s are inter-connected by the general purpose research networks; any Tier-2 may access data at any Tier-1 (BNL, Nordic, IN2P3, GridKa, TRIUMF, ASCC, FNAL, CNAF, SARA, PIC, RAL).]

3 LHC experiments support
The following error descriptors are used in FTS monitoring:
Scope: where the error originated (SOURCE for the source site, DESTINATION for the destination site, TRANSFER for errors during the transfer).
Category: the error class (FILE-EXIST, NO-SPACE-LEFT, TRANSFER-TIMEOUT, etc.).
Phase: the stage of the transfer life cycle at which the error occurred (ALLOCATION, TRANSFER-PREPARATION, TRANSFER, etc.).
Message: the detailed description of the error.
We maintain a list of more than 400 different error patterns, and this list changes over time. The main faults identified during the monitoring period are timeouts, software errors, application-specific errors and user errors.
Examples of detailed error descriptions:
SOURCE during PREPARATION phase: [REQUEST_TIMEOUT] failed to prepare source file in 180 seconds
TRANSFER during TRANSFER phase: [TRANSFER_TIMEOUT] gridftp_copy_wait: Connection timed out. The server sent an error response: Can't open data connection. timed out() failed
DESTINATION during PREPARATION phase: [CONNECTION] failed to contact on remote SRM [srm]. Givin' up after 3 tries
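Such records lend themselves to simple pattern-based classification by scope, phase and category. The sketch below is illustrative only: the record fields, category strings and matching rules are assumptions for this presentation, not the actual FTS monitoring code.

```java
import java.util.List;

/** Hypothetical FTS error record: scope, phase and raw message, as listed on the slide. */
record FtsError(String scope, String phase, String message) {}

public class FtsErrorClassifier {

    /** Maps a raw error message to a coarse category by keyword matching. */
    static String categorize(FtsError e) {
        String msg = e.message().toLowerCase();
        if (msg.contains("timed out") || msg.contains("timeout")) return "TRANSFER-TIMEOUT";
        if (msg.contains("no space left")) return "NO-SPACE-LEFT";
        if (msg.contains("file exists")) return "FILE-EXIST";
        if (msg.contains("failed to contact") && msg.contains("srm")) return "CONNECTION";
        return "OTHER";
    }

    public static void main(String[] args) {
        List<FtsError> samples = List.of(
            new FtsError("SOURCE", "PREPARATION",
                "[REQUEST_TIMEOUT] failed to prepare source file in 180 seconds"),
            new FtsError("TRANSFER", "TRANSFER",
                "[TRANSFER_TIMEOUT] gridftp_copy_wait: Connection timed out"),
            new FtsError("DESTINATION", "PREPARATION",
                "[CONNECTION] failed to contact on remote SRM [srm]. Givin' up after 3 tries"));
        for (FtsError e : samples) {
            System.out.printf("%s during %s phase -> %s%n", e.scope(), e.phase(), categorize(e));
        }
    }
}
```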

4 The primary goals addressed by DataGrid simulation tools
Grid simulators: SimGrid, OptorSim, GridSim.
Simulation allows various experiments to be carried out on the system under study;
Simulation allows a number of unexpected situations to be predicted and prevented;
Simulation makes it possible to choose the minimum set of equipment for data transfer and data storage that still satisfies the project requirements;
Simulation also makes it possible to examine how the system works, to identify its "bottlenecks", and more.

5 Requirements for a Grid simulator
A simulator clearly must provide:
simulation of the operation of the basic DataGrid elements (storage elements (SE), resource brokers (RB), replica catalogues (RC), the network, users, sites);
a simulation time much shorter than the real operating time of the DataGrid;
different kinds of statistics (for example, volume of data transferred, throughput, etc.);
simulation of equipment failures;
and results that are comparable to the real situation. A minimal skeleton illustrating these elements is sketched below.
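The following self-contained sketch is illustrative only (it is not taken from OptorSim or GridSim): it models just a storage element, a queue of transfer jobs, simulated time and a crude equipment-failure hook. A real simulator would add resource brokers, replica catalogues and a proper network model.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.Random;

/** Minimal skeleton of the elements a DataGrid simulator needs (illustrative only). */
public class MiniGridSim {

    /** A storage element with a fixed capacity in GB. */
    static class StorageElement {
        final String site;
        final double capacityGB;
        double usedGB = 0;
        StorageElement(String site, double capacityGB) {
            this.site = site;
            this.capacityGB = capacityGB;
        }
        boolean store(double sizeGB) {
            if (usedGB + sizeGB > capacityGB) return false;   // a NO-SPACE-LEFT situation
            usedGB += sizeGB;
            return true;
        }
    }

    /** A file-transfer job submitted by a simulated user. */
    record TransferJob(String lfn, double sizeGB, StorageElement destination) {}

    public static void main(String[] args) {
        Random rnd = new Random(42);
        StorageElement se = new StorageElement("JINR", 500);
        Queue<TransferJob> queue = new ArrayDeque<>();
        for (int i = 0; i < 10; i++) {
            queue.add(new TransferJob("lfn:test/file" + i, 50, se));
        }

        double simTimeHours = 0;
        final double linkMbps = 8;                            // fixed channel throughput
        while (!queue.isEmpty()) {
            TransferJob job = queue.poll();
            // Equipment-failure hook: with some probability the transfer fails and is retried.
            if (rnd.nextDouble() < 0.1) {
                queue.add(job);
                continue;
            }
            if (!job.destination().store(job.sizeGB())) continue;
            // Transfer time in hours: GB -> bits, divided by throughput in bit/s.
            simTimeHours += job.sizeGB() * 8e9 / (linkMbps * 1e6) / 3600.0;
        }
        System.out.printf("Simulated %.1f hours of transfers, SE used %.0f/%.0f GB%n",
                simTimeHours, se.usedGB, se.capacityGB);
    }
}
```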

6 OptorSim
OptorSim allows various optimisation algorithms and replication strategies to be evaluated.
Implemented in Java.
Configuration files are used to set the simulation parameters.
The source code is available: edg-wp2.web.cern.ch/edgwp2/optimization/optorsim.html

7 Implementation of the Replica Catalogue in the LCG and in OptorSim
LCG: The file catalogue LFC stores the information about all the files and their replicas in the LCG; it is one of the critical services.
Logical File Name (LFN): an alias created by a user to refer to some item of data, e.g. lfn:cms/ /run2/track1
Globally Unique Identifier (GUID): a non-human-readable unique identifier for an item of data, e.g. guid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6
Site URL (SURL) / Physical File Name (PFN) / Site File Name (SFN): the location of an actual piece of data on a storage system, e.g. srm://srm.cern.ch/castor/cern.ch/grid/cms/output10_1
OptorSim: File information is stored in the Replica Catalogue (as in the LCG).
The Replica Catalogue is a list of mappings from LFNs to their physical file names (LFN and PFN in the LCG).
The Replica Manager handles data replication and registers files in the Replica Catalogue (in the LCG the cataloguing of files is implemented in the LFC).
The "best" placement of a replica is determined before the transfer; this allows sites to copy files from different sources and avoid overloading any single resource. A sketch of this mapping is given below.
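The sketch below shows the idea of an LFN-to-PFN replica catalogue and a "best replica" choice made before the transfer. It is illustrative only: the class names, the selection criterion (widest free link) and all hostnames and paths are assumptions, not OptorSim's actual implementation.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Illustrative replica catalogue: LFN -> list of physical replica locations. */
public class ReplicaCatalogueSketch {

    /** One physical copy of a file, with the bandwidth (Mb/s) currently available from its site. */
    record Replica(String pfn, String site, double availableMbps) {}

    private final Map<String, List<Replica>> catalogue = new HashMap<>();

    /** Register a new replica under its logical file name. */
    void register(String lfn, Replica replica) {
        catalogue.computeIfAbsent(lfn, k -> new ArrayList<>()).add(replica);
    }

    /** Pick the "best" source replica before a transfer: here, the one behind the widest free link. */
    Replica bestReplica(String lfn) {
        return catalogue.getOrDefault(lfn, List.of()).stream()
                .max(Comparator.comparingDouble(Replica::availableMbps))
                .orElseThrow(() -> new IllegalArgumentException("No replica for " + lfn));
    }

    public static void main(String[] args) {
        ReplicaCatalogueSketch rc = new ReplicaCatalogueSketch();
        String lfn = "lfn:cms/run2/track1";   // hypothetical logical name
        rc.register(lfn, new Replica("srm://srm.cern.ch/castor/cern.ch/grid/cms/track1", "CERN", 6));
        rc.register(lfn, new Replica("srm://se.jinr.ru/dpm/jinr.ru/grid/cms/track1", "JINR", 12));
        System.out.println("Best source for " + lfn + ": " + rc.bestReplica(lfn).site());
    }
}
```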

8 OptorSim's graphical interface
The statistics are available as tables, graphs and diagrams.

9 GridSim
GridSim allows various classes of heterogeneous resources, users, applications and brokers to be simulated.
Implemented in Java.
Configuration files are used to set the simulation parameters.
The source code is available.
There are many examples of GridSim usage.

10 The simulation details
The CERN-RDIG segment is part of the global LCG structure.
The GEANT2 network is used for the large data traffic between CERN, the RDIG sites and the other participants.
The routers also carry foreign traffic, which is represented as background traffic in the simulation (see the sketch below).
Four RDIG sites were considered: JINR, SINP MSU (Moscow State University), IHEP and ITEP.
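One simple way to represent that background traffic is to subtract it from the nominal link capacity, leaving an effective bandwidth for Grid transfers. The sketch below is purely illustrative; the class, the link name and all numbers are assumptions rather than the parameters actually used in this study.

```java
/** Illustrative model of a network link whose capacity is partly consumed by
 *  background traffic from other users of the shared research network. */
public class SharedLink {
    final String name;
    final double capacityMbps;     // nominal link capacity
    double backgroundMbps;         // traffic generated outside the Grid

    SharedLink(String name, double capacityMbps, double backgroundMbps) {
        this.name = name;
        this.capacityMbps = capacityMbps;
        this.backgroundMbps = backgroundMbps;
    }

    /** Bandwidth left over for Grid transfers once background traffic is subtracted. */
    double effectiveMbps() {
        return Math.max(0, capacityMbps - backgroundMbps);
    }

    public static void main(String[] args) {
        // Numbers are purely illustrative, not measurements of the real CERN-JINR channel.
        SharedLink cernJinr = new SharedLink("CERN-JINR", 100, 90);
        System.out.printf("%s: %.0f Mb/s effective of %.0f Mb/s nominal%n",
                cernJinr.name, cernJinr.effectiveMbps(), cernJinr.capacityMbps);
    }
}
```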

11 Simulation results
It takes hours to transfer GB-scale volumes of data with throughputs of 6-12 Mb/s; this situation is close to reality.
The volume of data transferred can vary from several Gigabytes to hundreds of Gigabytes per hour, but the channel throughputs in OptorSim are fixed.
OptorSim has no possibility to simulate the various failures of the equipment and other errors.
[Plot: throughput of the CERN-JINR channel and quantity of the data passed.]
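As a sanity check of the "hours" scale (the exact data volumes on the slide were lost in transcription, so the numbers below are only illustrative), the transfer time follows directly from the volume and the throughput:

```latex
% Transfer time t from volume V and throughput B (illustrative numbers, not the study's results).
\[
  t = \frac{V}{B}, \qquad
  t = \frac{100\ \mathrm{GB} \times 8\times 10^{9}\ \mathrm{bit/GB}}{8\times 10^{6}\ \mathrm{bit/s}}
    = 10^{5}\ \mathrm{s} \approx 27.8\ \mathrm{h}.
\]
```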

12 Conclusion
The main errors of the LCG, including the FTS errors, were considered.
The existing simulation toolkits do not provide the possibility to simulate the various sorts of errors in the Grid.
Simulation of the various sorts of errors in Grid networks is necessary.

13 Questions?
