The DESY Grid Testbed


1 The DESY Grid Testbed
Andreas Gellrich, DESY IT Group
IT-Seminar, Andreas.Gellrich@desy.de

Overview:
- The Physics Case: LHC, DESY
- The Grid Idea: Concepts, Implementations
- DESY: The Grid Testbed, ILDG
- The Future: EGEE

2 The Physics Case

The HEP Data Chain (diagram):
- Monte Carlo Production: Event Generation -> Detector Simulation -> Digitization
- Online Data Taking: Trigger & DAQ
- Data Processing: Event Reconstruction
- Data Analysis (-> Nobel Prize)

3 Typical HEP Jobs

Monte Carlo Jobs:
- Event Generation: no I; small O; little CPU
- Detector Simulation: small I; large O; vast CPU
  (Digitization: usually part of simulation)
Event Reconstruction (first processing online or semi-online; reprocessing): full I; full O; large CPU
Selections: large I; large O; large CPU
Analysis: large I; small O; little CPU
General: performed by many users, many times!

Some Numbers (DESY experiment, e.g. HERA-B):
- Event Size: 200 kB
- Event Rate: 50 Hz
  => Event number: 500 M/y
  => Data Rate: 10 MB/sec
  => Data Volume: 100 TB/y (1 y = 10^7 sec)
Online Processing:
- Reconstruction Time: 4 sec/event
- Event Rate: 50 events/sec
  => Bandwidth: 10 MB/sec
  => Farm Nodes: 200 (= 50 * 4)
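The back-of-envelope arithmetic above is easy to verify; a short Python sketch using only the figures quoted on the slide:

```python
# Back-of-envelope check of the HERA-B numbers quoted on the slide.
event_size_kb = 200        # kB per event
event_rate_hz = 50         # events per second
seconds_per_year = 1e7     # the slide's convention: 1 y = 10^7 sec

data_rate_mb_s = event_size_kb * event_rate_hz / 1000       # 10 MB/sec
events_per_year = event_rate_hz * seconds_per_year          # 500 M/y
data_volume_tb_y = data_rate_mb_s * seconds_per_year / 1e6  # 100 TB/y

# Online farm size: each event needs 4 sec of CPU, arriving at 50 Hz.
reco_time_s = 4
farm_nodes = event_rate_hz * reco_time_s                    # 200 nodes

print(data_rate_mb_s, events_per_year, data_volume_tb_y, farm_nodes)
```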

4 Some Numbers cont'd

Offline Computing:
- Users: ~300 (power users: ~30)
- Active Data Volume: ~10 TB
- File Servers: ~10
- Offline Nodes: ~100
Typical Analysis Job:
- Event Selection: ~1 M events
- Data Selection: ~10 kB/event
- Processing Time: ~100 msec/event
  => Processing Rate: ~10 Hz
  => Bandwidth: 100 kB/sec

The LHC Computing Model (tier diagram):
- Online System -> Tier 0 (CERN Computer Centre, offline farm ~20 TIPS, total >20 TIPS) at ~100 MB/sec; raw rate off the detector is ~PB/sec
- Tier 1: regional centres (US, Italian, French, RAL, ...), connected at ~Gbit/sec or by air freight
- Tier 2: centres of ~1 TIPS each (e.g. ScotGRID++), ~Gbit/sec
- Tier 3: institute workstations (~0.25 TIPS), Mbit/sec, with a physics data cache at the institutes
- One bunch crossing per 25 ns; 100 triggers per second; each event is ~1 MB
- 1 TIPS = 25,000 SpecInt95; a PC (1999) = ~15 SpecInt95
- Physicists work on analysis channels; each institute has ~10 physicists working on one or more channels; data for these channels should be cached by the institute server
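The analysis-job figures follow the same arithmetic; a sketch (the total wall time per pass is derived here, not stated on the slide):

```python
# The "typical analysis job" figures from the slide.
n_events = 1_000_000    # ~1 M selected events
kb_per_event = 10       # ~10 kB read per event
ms_per_event = 100      # ~100 msec processing per event

rate_hz = 1000 / ms_per_event            # ~10 Hz
bandwidth_kb_s = rate_hz * kb_per_event  # ~100 kB/sec

# Derived (not on the slide): wall time for one full pass over the sample.
job_hours = n_events * ms_per_event / 1000 / 3600  # roughly a day of CPU

print(rate_hz, bandwidth_kb_s, round(job_hours, 1))
```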

5 The Grid Idea

Introduction:

"We will probably see the spread of computer utilities, which, like present electric and telephone utilities, will service individual homes and offices across the country." -- Len Kleinrock (1969)

"A computational grid is a hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities." -- I. Foster, C. Kesselman (1998)

"The sharing that we are concerned with is not primarily file exchange but rather direct access to computers, software, data, and other resources, as is required by a range of collaborative problem-solving and resource-brokering strategies emerging in industry, science, and engineering. The sharing is, necessarily, highly controlled, with resource providers and consumers defining clearly and carefully just what is shared, who is allowed to share, and the conditions under which sharing occurs. A set of individuals and/or institutions defined by such sharing rules is what we call a virtual organization." -- I. Foster, C. Kesselman, S. Tuecke (2000)

6 The Grid Dream (diagram): mobile access, desktops, and visualization connect through Grid middleware to supercomputers, PC clusters, data storage, sensors, and experiments over the Internet and other networks.

Grid Types:
- Data Grids: provide transparent access to data which can be physically distributed within Virtual Organizations (VOs)
- Computational Grids: allow for large-scale compute resource sharing within Virtual Organizations (VOs)
- Information Grids: provide information and data exchange, using well-defined standards and web services

7 The Grid Idea cont'd

I. Foster: "What is the Grid? A Three Point Checklist" (2002). A Grid is a system that:
1. coordinates resources which are not subject to centralized control
   (integration and coordination of resources and users of different domains, vs. local management systems such as batch systems)
2. uses standard, open, general-purpose protocols and interfaces
   (standard and open multi-purpose protocols, vs. application-specific systems)
3. delivers nontrivial qualities of service
   (coordinated use of resources, vs. the uncoordinated approach of the World Wide Web)

Grid Middleware:
- Globus: toolkit; Argonne, U Chicago
- EDG (EU DataGrid): project to develop Grid middleware; uses parts of Globus; funded for 3 years
- LCG (LHC Computing Grid): Grid infrastructure for LHC production; based on stable EDG versions plus VDT etc.; LCG-2 for Data Challenges in 2004 (EDG 2.0)
- EGEE (Enabling Grids for E-Science in Europe): starts in 2004

8 Grid Ingredients

- Authentication: provide a method to guarantee authenticated access only => let users log in and create a token-like proxy
- Information Service: provide a system which keeps track of the available resources => monitoring system
- Resource Management: manage and exploit the available computing resources => distribute jobs
- Storage Management: manage and exploit data => replicate data

Certification:
- Authorization and authentication are essential parts of Grids
- A certificate is an encrypted electronic document, digitally signed by a Certification Authority (CA)
- A Certificate Revocation List (CRL) is published by the CA
- For Germany, GridKa issues certificates on request (needs a copy of an ID); contacts at DESY: R. Mankel, A. Gellrich
- Users, hosts, and services must be certified
- The Globus Security Infrastructure (GSI) is part of the Globus Toolkit; GSI is based on the OpenSSL Public Key Infrastructure (PKI) and uses X.509 certificates
- The certificate is used via a token-like proxy
- In my case: /O=GermanGrid/OU=DESY/CN=Andreas Gellrich
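The subject string shown above is an X.509 distinguished name in OpenSSL's slash notation; a small illustrative parser for reading such strings (not part of GSI itself):

```python
def parse_dn(dn: str) -> dict:
    """Split an X.509 subject DN in OpenSSL's slash notation into fields.

    Illustration only: repeated RDNs (e.g. the extra /CN=proxy that GSI
    appends to a proxy subject) would overwrite earlier values here.
    """
    fields = {}
    for part in dn.strip("/").split("/"):
        key, _, value = part.partition("=")
        fields[key] = value
    return fields

subject = "/O=GermanGrid/OU=DESY/CN=Andreas Gellrich"
print(parse_dn(subject))  # {'O': 'GermanGrid', 'OU': 'DESY', 'CN': 'Andreas Gellrich'}
```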

9 Virtual Organization

- A Virtual Organization (VO) is a dynamic collection of individuals, institutions, and resources which is defined by certain sharing rules
- Technically, a user is represented by his/her certificate
- The collection of authorized users is defined on every machine in /etc/grid-security/grid-mapfile
- This file is regularly updated from a central server
- The server holds a list of all users belonging to a collection; it is this collection we call a VO!
- The VO a user belongs to is not part of the certificate
- A VO is defined in a central list, e.g. an LDAP tree
- In our case there is a VO "Testbed" on DESY's central LDAP server
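Each grid-mapfile line consists of a quoted subject DN followed by the local account it maps to; a minimal parser sketch (the account name testbed001 below is a made-up example, not from the slide):

```python
import shlex

def parse_gridmap(text: str) -> dict:
    """Map certificate subject DNs to local accounts, one entry per line."""
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        dn, account = shlex.split(line)  # shlex handles the quoted DN
        mapping[dn] = account
    return mapping

entry = '"/O=GermanGrid/OU=DESY/CN=Andreas Gellrich" testbed001'
print(parse_gridmap(entry))
```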

10 DESY

In remembrance of Ralf Gerhards

- The DESY Grid Testbed
- International Lattice Data Grid (ILDG)
- dCache
- D-GRID Initiative
- Enabling Grids for E-Science in Europe (EGEE)

11 The DESY Grid Testbed

- Initiated by IT and H1 (R. Gerhards, A. Gellrich)
- Open initiative to study Grids at DESY
- Main developers: M. Vorobiev (H1), J. Nowak (H1), D. Kant (QMUL), A. Gellrich (IT)
- Further members: A. Campbell (H1), U. Ensslin (IT), B. Lewendel (HERA-B), C. Wissing (H1 Do), S. Padhi (ZEUS), and others
- Tests basic functionalities of the Grid middleware
- Implemented in collaboration with Queen Mary University London, which also takes part in the UK GridPP and LCG projects
- An example HEP application could be Monte Carlo simulation

Grid Testbed Technicalities:
- Exploits 10 Linux PCs donated by H1 and HERA-B
- grid001 to grid010 run DESY Linux 4/5 (S.u.S.E.-based)
- Grid middleware of EDG 1.4 (EU DataGrid); EDG 1.4 is based on Globus 2.4; LCG-2 will use EDG 2
- EDG and Globus are built on/for RedHat Linux 6.2
- Normally, Grid nodes are set up from an installation server (lcfg)
- The EDG distribution allows downloading binaries; those binaries were modified and installed on the DESY machines

12 Grid Testbed Set-up

- Authentication: Grid Security Infrastructure (GSI) based on PKI (OpenSSL); Globus Gatekeeper; proxy renewal service
- Grid Information Service (GIS): Grid Resource Information Service (GRIS), Grid Information Index Service (GIIS)
- Resource Management: Resource Broker, Job Manager, Job Submission, batch system (PBS), Logging and Bookkeeping
- Storage Management: Replica Catalogue, GSI-enabled FTP, GDMP, Replica Location Service (RLS)

Grid Testbed Set-up cont'd:
Testbed hardware => mapping of services to logical and physical nodes. The basic nodes are:
- User Interface (UI)
- Computing Element (CE)
- Worker Node (WN)
- Resource Broker (RB)
- Storage Element (SE)
- Replica Catalogue (RC)
- Information Service (IS)

13 Grid Testbed Minimal Set-up

Per site:
- User Interface (UI) to submit jobs
- Computing Element (CE) to run jobs
- Worker Node (WN) to do the work
- Storage Element (SE) to provide data files
- [Grid Information Index Service (GIIS) as site MDS]
Per Grid:
- Resource Broker (RB)
- Replica Catalogue (RC)
- Information Service (IS)
- VO server (VO)

Grid Testbed Schema (diagram): the user logs in to the UI (grid003) via ssh and submits a JDL job to the RB (grid006, also running RLS, BDII, and GIIS); the RB dispatches the job to the CE (grid007, running PBS and a GRIS), which executes it on the worker nodes; data is served by the SE (grid002, GridFTP, with an NFS disk on grid008); the RC runs on grid009 and the VO is held on ldap.desy.de; user certificates live in $HOME/.globus/ and authorized users in /etc/grid-security/grid-mapfile.

14 Ganglia

(Screenshots of the Ganglia monitoring pages for the testbed nodes; not reproduced in this transcription.)

15 Ganglia cont'd

(Further Ganglia screenshots; not reproduced.)

16 Job Example

Job submission walk-through: the job delivers the hostname and date of the worker node. Steps:
- Set up the Grid environment
- Write the job script/binary which will be executed
- Describe the job in JDL
- Submit the job
- Request the job status
- Retrieve the job output

Job Environment Start:
grid003> export GLOBUS_LOCATION=/opt/globus
grid003> . $GLOBUS_LOCATION/etc/globus-user-env.sh
grid003> export EDG_LOCATION=/opt/edg
grid003> . $EDG_LOCATION/etc/edg-user-env.sh
grid003> export EDG_WL_LOCATION=$EDG_LOCATION
grid003> export RC_CONFIG_FILE=$EDG_LOCATION/etc/rc.conf
grid003> export GDMP_CONFIG_FILE=$EDG_LOCATION/etc/gdmp.conf

17 Job Authentication

grid003> grid-proxy-init
Your identity: /O=GermanGrid/OU=DESY/CN=Andreas Gellrich
Enter GRID pass phrase for this identity:
Creating proxy ... Done
Your proxy is valid until Thu Oct 23 01:52:

grid003> grid-proxy-info -all
subject  : /O=GermanGrid/OU=DESY/CN=Andreas Gellrich/CN=proxy
issuer   : /O=GermanGrid/OU=DESY/CN=Andreas Gellrich
type     : full
strength : 512 bits
timeleft : 11:59:48

Job Description:
grid003> less script.sh
#! /usr/bin/zsh
host=`/bin/hostname`
date=`/bin/date`
echo "$host: $date"

grid003> less hostname.jdl
Executable = "script.sh";
Arguments = " ";
StdOutput = "hostname.out";
StdError = "hostname.err";
InputSandbox = {"script.sh"};
OutputSandbox = {"hostname.out","hostname.err"};
Rank = other.MaxCpuTime;
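A JDL description like hostname.jdl is just a list of attribute = value; statements; a hypothetical helper that renders one from a Python dict (the quoting rules are simplified to the cases on the slide — this is an illustration, not an EDG tool):

```python
def to_jdl(attrs: dict) -> str:
    """Render a dict as JDL text: strings quoted, lists braced.

    Values starting with "other." are emitted unquoted, as in Rank
    expressions; real JDL has richer syntax than this sketch covers.
    """
    def fmt(value):
        if isinstance(value, list):
            return "{" + ",".join(f'"{v}"' for v in value) + "}"
        if isinstance(value, str) and not value.startswith("other."):
            return f'"{value}"'
        return str(value)
    return "\n".join(f"{key} = {fmt(value)};" for key, value in attrs.items())

print(to_jdl({
    "Executable": "script.sh",
    "StdOutput": "hostname.out",
    "StdError": "hostname.err",
    "InputSandbox": ["script.sh"],
    "OutputSandbox": ["hostname.out", "hostname.err"],
    "Rank": "other.MaxCpuTime",
}))
```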

18 Job Matching

grid003> dg-job-list-match hostname.jdl
Connecting to host grid006.desy.de, port 7771
***************************************************************************
COMPUTING ELEMENT IDs LIST
The following CE(s) matching your job requirements have been found:
- grid007.desy.de:2119/jobmanager-pbs-short
- grid007.desy.de:2119/jobmanager-pbs-long
- grid007.desy.de:2119/jobmanager-pbs-medium
- grid007.desy.de:2119/jobmanager-pbs-infinite
***************************************************************************

Job Submission:
grid003> dg-job-submit hostname.jdl
Connecting to host grid006.desy.de, port 7771
Logging to host grid006.desy.de
************************************************************************************
JOB SUBMIT OUTCOME
The job has been successfully submitted to the Resource Broker.
Use dg-job-status command to check job current status.
Your job identifier (dg_jobid) is: desy.de:7771
************************************************************************************

19 Job Status

grid003> dg-job-status 'desy.de:7771'
Retrieving Information from LB server
Please wait: this operation could take some seconds.
*************************************************************
BOOKKEEPING INFORMATION:
Printing status info for the Job: desy.de:7771

Job Status cont'd:
Status = Initial
Status = Scheduled
Status = Done
Status = OutputReady
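The four status values form a simple forward-only lifecycle; a toy checker (the transition set is an assumption distilled from the sequence shown above, not EDG's full state machine):

```python
# Allowed forward transitions, distilled from the sequence on the slide.
NEXT = {
    "Initial": {"Scheduled"},
    "Scheduled": {"Done"},
    "Done": {"OutputReady"},
    "OutputReady": set(),
}

def valid_history(states: list) -> bool:
    """True if each observed transition is an allowed forward step."""
    return all(b in NEXT.get(a, set()) for a, b in zip(states, states[1:]))

print(valid_history(["Initial", "Scheduled", "Done", "OutputReady"]))  # True
```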

20 Job History

grid003> dg-job-get-logging-info 'desy.de:7771'
Retrieving Information from LB server
Please wait: this operation could take some seconds.
**********************************************************************
LOGGING INFORMATION:
Printing info for the Job: desy.de:7771

Job History cont'd:
Event Type = JobAccept
Event Type = JobTransfer
Event Type = JobMatch
Event Type = JobScheduled
Event Type = JobRun
Event Type = JobDone

21 Job Output

grid003> dg-job-get-output 'desy.de:7771'
************************************************************************************
JOB GET OUTPUT OUTCOME
Output sandbox files for the job: desy.de:7771
have been successfully retrieved and stored in the directory: /tmp/
************************************************************************************

Job Output cont'd:
grid003> ls -l /tmp/
total 4
-rw-r--r-- 1 gellrich it  0 Oct 23 15:49 hostname.err
-rw-r--r-- 1 gellrich it 39 Oct 23 15:49 hostname.out
grid003> less /tmp/ /hostname.out
grid008: Thu Oct 23 15:47:41 MEST 2003
Done!

22 Data Management

- So far we have simply computed something: the RB has picked a CE with computing resources
- A typical job reads/writes data
- Data files shall not be transferred with the job
Possible access scenarios:
- The data file is directly accessible by the WN from the SE
- The data file needs to be replicated to the closest SE
- The WN copies the data file to its local disk from an SE

Data Management cont'd:
Replica Catalogue (RC):
- knows all files and their replicas
- maps a logical file name (LFN) to physical locations
- knows the access protocol (file, gridftp)

grid003> edg-replica-manager-listreplicas -l file.txt
configuration file: /opt/edg/etc/rc.conf
logical file name: file.txt
protocol: gsiftp
SASL/GSI-GSSAPI authentication started
SASL SSF: 56
SASL installing layers
For LFN file.txt the following replicas have been found:
location 1: grid002.desy.de/flatfiles/grid002/desy/file.txt
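Conceptually the Replica Catalogue is a mapping from one logical file name to its physical replicas; a minimal model of that mapping (illustrative only; the physical location is the one from the listing above):

```python
# Minimal model of a Replica Catalogue: LFN -> list of physical locations.
catalog = {
    "file.txt": ["grid002.desy.de/flatfiles/grid002/desy/file.txt"],
}

def list_replicas(lfn: str) -> list:
    """Return all known physical replicas of a logical file name."""
    return catalog.get(lfn, [])

def add_replica(lfn: str, pfn: str) -> None:
    """Register a further replica, e.g. after copying to a closer SE."""
    catalog.setdefault(lfn, []).append(pfn)

print(list_replicas("file.txt"))
```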

23 Data Management cont'd

grid002> less file.jdl
InputData = {"LF:file.txt"};
DataAccessProtocol = {"file", "gridftp"};
ReplicaCatalog = "ldap://grid001.desy.de:9011/lc=desycollection,rc=desyrc,dc=grid001,dc=desy,dc=de";

- The actual access to the file must be handled by the application or by a wrapper script
- In the case of an NFS-mounted file system that is trivial; in the case of a remote SE, GridFTP must be called explicitly
- All infos are provided by the RB in .BrokerInfo

dCache:
- The dCache project is a joint effort between the Fermi National Accelerator Laboratory and DESY, providing a fast and scalable disk-cache system with an optional Hierarchical Storage Manager (HSM) connection
- Besides various other access methods, dCache implements a Grid mass-storage fabric
- It supports gsiftp and http as transport protocols, and the Storage Resource Manager (SRM) protocol as the replica-management negotiation vehicle
- The system is in production for all DESY experiments, the CDF experiment at FNAL, and the CMS-US division at various locations, including CERN
- dCache is an R&D project for the Grid!

24 Grid Testbed Observations

System aspects:
- DESY Linux 4/5 installation is possible but clumsy
- Grid middleware deeply penetrates the system software
Information Service:
- weak points in Globus (LDAP not scalable w/ writing)
- rebuilt in EDG and LCG (R-GMA: RDBMS rather than LDAP)
Resource Broker:
- plays the central role
- needs to store all data coming with the job (resource intensive)
Computing / Storage Element:
- probably scalable w/ more worker nodes (farms)
- only a simple data management scheme implemented

Grid Testbed Plans:
Testbed:
- Connect a second CE
- Get more sites involved: DESY Zeuthen, U Dortmund, ...
- The Grid Testbed is not application- or experiment-specific!
- The Grid Testbed is not meant for production!
Prototyping:
- Not studied in detail yet; open for various HEP applications; Monte Carlo simulation is a good candidate
Production:
- Would need new or appropriate hardware
- The goal must be a DESY-wide Grid infrastructure

25 The Future: EGEE

Enabling Grids for E-Science in Europe (EGEE):
- EU 6th Framework Programme (FP6), IST
- 70 partners in 27 countries, organized in 10 federations
- Total volume: 32 M; DESY part: 148 k
- Headquarters: CERN
- Germany: DESY, DKRZ, FhG, FZK, GSI

26 EGEE cont'd

- Create a Europe-wide production-quality Grid infrastructure on top of the present and future EU research network infrastructure (GEANT and GN2)
- Provide distributed European research communities with round-the-clock access to major computing resources, independent of their geographical location
- Support many application domains with one large-scale infrastructure that will attract new resources over time
- Provide outreach, training, and support for new user communities
- Leverage national resources in a more effective way for broader European benefit
- 70 leading institutions in 27 countries, organized in 10 federations

27 EGEE cont'd

Roadmap (diagram): EDG/LCG-1 -> LCG-2 -> EGEE-1 -> LCG ...; DESY?

EGEE and LCG share middleware R&D:
- LCG-1/2 were based on EDG plus VDT (Globus 2 based)
- EGEE will start with LCG-2
- EGEE will be the basis for LCG-3 and beyond (OGSI based)

28 EGEE cont'd

Networking (Network Activities, NA):
- NA1: Management of the I3
- NA2: Dissemination and Outreach
- NA3: Training and Induction
- NA4: Application Identification and Support
- NA5: Policy and International Cooperation
Services (Specific Service Activities, SA):
- SA1: Grid Operations, Support and Management
- SA2: Network Resource Provision
Middleware (Joint Research Activities, JRA):
- JRA1: Middleware Engineering and Integration Research
- JRA2: Quality Assurance
- JRA3: Security
- JRA4: Network Service Development

Timeline:
- 6 May 2003: proposal submitted
- 6 Aug 2003: proposal accepted
- 10 Oct 2003: Technical Annex (TA) submitted
- 17 Oct 2003: final negotiation with EU
- 20 Oct 2003: DESY signs contract (CPF)
- 26 Jan 2004: completion of negotiation with EU
- Jan 2004: Consortium Agreement (CA) signing
- 1 Apr 2004: project start
- Apr 2004: 1st EGEE Meeting, Cork, Ireland
- Nov 2004: 2nd EGEE Meeting, The Netherlands

29 EGEE cont'd

- DESY is in SA1 (Specific Service Activities) => European Grid support, operation, and management
- Partner of the German ROC; Support Centre and Resource Centre
- User and administrator support; CPU and storage provision
- DESY will receive funding of ~148 k for 2 years
- DESY is committed to provide infrastructure
- Volker Gülzow (representative), Andreas Gellrich (deputy)
- This is the entrance into the Grid world!

Grid Impact on the Computer Centre:
System support:
- installing, testing, and configuring middleware
- system administration
Grid operations:
- running the local Grid components
User support:
- consultancy on making applications Grid-aware
- help desk and documentation
- training
Resource planning and scheduling

30 Web Links

Grid:
LCG:
DESY:

Literature:
Books:
- I. Foster, C. Kesselman: The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann Publishers Inc. (1999)
- F. Berman, G. Fox, T. Hey: Grid Computing: Making The Global Infrastructure a Reality, John Wiley & Sons (2003)
Articles:
- I. Foster, C. Kesselman, S. Tuecke: The Anatomy of the Grid (2000)
- I. Foster, C. Kesselman, J.M. Nick, S. Tuecke: The Physiology of the Grid (2002)
- I. Foster: What is the Grid? A Three Point Checklist (2002)

31 Summary

- The Grid has developed from a smart idea to reality
- Grid infrastructure will be standard in e-science in the future
- LHC cannot live w/o Grids
- DESY could live w/o Grids now, but would miss the future
- There are projects under way at DESY
- The EGEE project opens the Grid world for DESY!


More information

Austrian Federated WLCG Tier-2

Austrian Federated WLCG Tier-2 Austrian Federated WLCG Tier-2 Peter Oettl on behalf of Peter Oettl 1, Gregor Mair 1, Katharina Nimeth 1, Wolfgang Jais 1, Reinhard Bischof 2, Dietrich Liko 3, Gerhard Walzel 3 and Natascha Hörmann 3 1

More information

( PROPOSAL ) THE AGATA GRID COMPUTING MODEL FOR DATA MANAGEMENT AND DATA PROCESSING. version 0.6. July 2010 Revised January 2011

( PROPOSAL ) THE AGATA GRID COMPUTING MODEL FOR DATA MANAGEMENT AND DATA PROCESSING. version 0.6. July 2010 Revised January 2011 ( PROPOSAL ) THE AGATA GRID COMPUTING MODEL FOR DATA MANAGEMENT AND DATA PROCESSING version 0.6 July 2010 Revised January 2011 Mohammed Kaci 1 and Victor Méndez 1 For the AGATA collaboration 1 IFIC Grid

More information

ISTITUTO NAZIONALE DI FISICA NUCLEARE

ISTITUTO NAZIONALE DI FISICA NUCLEARE ISTITUTO NAZIONALE DI FISICA NUCLEARE Sezione di Perugia INFN/TC-05/10 July 4, 2005 DESIGN, IMPLEMENTATION AND CONFIGURATION OF A GRID SITE WITH A PRIVATE NETWORK ARCHITECTURE Leonello Servoli 1,2!, Mirko

More information

Grid Challenges and Experience

Grid Challenges and Experience Grid Challenges and Experience Heinz Stockinger Outreach & Education Manager EU DataGrid project CERN (European Organization for Nuclear Research) Grid Technology Workshop, Islamabad, Pakistan, 20 October

More information

The European DataGRID Production Testbed

The European DataGRID Production Testbed The European DataGRID Production Testbed Franck Bonnassieux CNRS/UREC ENS-Lyon France DataGrid Network Work Package Manager Franck.Bonnassieux@ens-lyon.fr Presentation outline General DataGrid project

More information

g-eclipse A Framework for Accessing Grid Infrastructures Nicholas Loulloudes Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac.

g-eclipse A Framework for Accessing Grid Infrastructures Nicholas Loulloudes Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac. g-eclipse A Framework for Accessing Grid Infrastructures Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac.cy) EGEE Training the Trainers May 6 th, 2009 Outline Grid Reality The Problem g-eclipse

More information

A Simulation Model for Large Scale Distributed Systems

A Simulation Model for Large Scale Distributed Systems A Simulation Model for Large Scale Distributed Systems Ciprian M. Dobre and Valentin Cristea Politechnica University ofbucharest, Romania, e-mail. **Politechnica University ofbucharest, Romania, e-mail.

More information

Implementing GRID interoperability

Implementing GRID interoperability AFS & Kerberos Best Practices Workshop University of Michigan, Ann Arbor June 12-16 2006 Implementing GRID interoperability G. Bracco, P. D'Angelo, L. Giammarino*, S.Migliori, A. Quintiliani, C. Scio**,

More information

The University of Oxford campus grid, expansion and integrating new partners. Dr. David Wallom Technical Manager

The University of Oxford campus grid, expansion and integrating new partners. Dr. David Wallom Technical Manager The University of Oxford campus grid, expansion and integrating new partners Dr. David Wallom Technical Manager Outline Overview of OxGrid Self designed components Users Resources, adding new local or

More information

Future Developments in the EU DataGrid

Future Developments in the EU DataGrid Future Developments in the EU DataGrid The European DataGrid Project Team http://www.eu-datagrid.org DataGrid is a project funded by the European Union Grid Tutorial 4/3/2004 n 1 Overview Where is the

More information

CrossGrid testbed status

CrossGrid testbed status Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft CrossGrid testbed status Ariel García The EU CrossGrid Project 1 March 2002 30 April 2005 Main focus on interactive and parallel applications People

More information

where the Web was born Experience of Adding New Architectures to the LCG Production Environment

where the Web was born Experience of Adding New Architectures to the LCG Production Environment where the Web was born Experience of Adding New Architectures to the LCG Production Environment Andreas Unterkircher, openlab fellow Sverre Jarp, CTO CERN openlab Industrializing the Grid openlab Workshop

More information

The LHC Computing Grid

The LHC Computing Grid The LHC Computing Grid Visit of Finnish IT Centre for Science CSC Board Members Finland Tuesday 19 th May 2009 Frédéric Hemmer IT Department Head The LHC and Detectors Outline Computing Challenges Current

More information

The GENIUS Grid Portal

The GENIUS Grid Portal The GENIUS Grid Portal Computing in High Energy and Nuclear Physics, 24-28 March 2003, La Jolla, California R. Barbera Dipartimento di Fisica e Astronomia, INFN, and ALICE Collaboration, via S. Sofia 64,

More information

Lessons Learned in the NorduGrid Federation

Lessons Learned in the NorduGrid Federation Lessons Learned in the NorduGrid Federation David Cameron University of Oslo With input from Gerd Behrmann, Oxana Smirnova and Mattias Wadenstein Creating Federated Data Stores For The LHC 14.9.12, Lyon,

More information

LHC COMPUTING GRID INSTALLING THE RELEASE. Document identifier: Date: April 6, Document status:

LHC COMPUTING GRID INSTALLING THE RELEASE. Document identifier: Date: April 6, Document status: LHC COMPUTING GRID INSTALLING THE RELEASE Document identifier: EDMS id: Version: n/a v2.4.0 Date: April 6, 2005 Section: Document status: gis final Author(s): GRID Deployment Group ()

More information

GRID COMPUTING APPLIED TO OFF-LINE AGATA DATA PROCESSING. 2nd EGAN School, December 2012, GSI Darmstadt, Germany

GRID COMPUTING APPLIED TO OFF-LINE AGATA DATA PROCESSING. 2nd EGAN School, December 2012, GSI Darmstadt, Germany GRID COMPUTING APPLIED TO OFF-LINE AGATA DATA PROCESSING M. KACI mohammed.kaci@ific.uv.es 2nd EGAN School, 03-07 December 2012, GSI Darmstadt, Germany GRID COMPUTING TECHNOLOGY THE EUROPEAN GRID: HISTORY

More information

First Experience with LCG. Board of Sponsors 3 rd April 2009

First Experience with LCG. Board of Sponsors 3 rd April 2009 First Experience with LCG Operation and the future... CERN openlab Board of Sponsors 3 rd April 2009 Ian Bird LCG Project Leader The LHC Computing Challenge Signal/Noise: 10-9 Data volume High rate * large

More information

Grid Interoperation and Regional Collaboration

Grid Interoperation and Regional Collaboration Grid Interoperation and Regional Collaboration Eric Yen ASGC Academia Sinica Taiwan 23 Jan. 2006 Dreams of Grid Computing Global collaboration across administrative domains by sharing of people, resources,

More information

Interconnect EGEE and CNGRID e-infrastructures

Interconnect EGEE and CNGRID e-infrastructures Interconnect EGEE and CNGRID e-infrastructures Giuseppe Andronico Interoperability and Interoperation between Europe, India and Asia Workshop Barcelona - Spain, June 2 2007 FP6 2004 Infrastructures 6-SSA-026634

More information

CernVM-FS beyond LHC computing

CernVM-FS beyond LHC computing CernVM-FS beyond LHC computing C Condurache, I Collier STFC Rutherford Appleton Laboratory, Harwell Oxford, Didcot, OX11 0QX, UK E-mail: catalin.condurache@stfc.ac.uk Abstract. In the last three years

More information

Bookkeeping and submission tools prototype. L. Tomassetti on behalf of distributed computing group

Bookkeeping and submission tools prototype. L. Tomassetti on behalf of distributed computing group Bookkeeping and submission tools prototype L. Tomassetti on behalf of distributed computing group Outline General Overview Bookkeeping database Submission tools (for simulation productions) Framework Design

More information

FREE SCIENTIFIC COMPUTING

FREE SCIENTIFIC COMPUTING Institute of Physics, Belgrade Scientific Computing Laboratory FREE SCIENTIFIC COMPUTING GRID COMPUTING Branimir Acković March 4, 2007 Petnica Science Center Overview 1/2 escience Brief History of UNIX

More information

DataGrid EDG-BROKERINFO USER GUIDE. Document identifier: Date: 06/08/2003. Work package: Document status: Deliverable identifier:

DataGrid EDG-BROKERINFO USER GUIDE. Document identifier: Date: 06/08/2003. Work package: Document status: Deliverable identifier: DataGrid Document identifier: Date: 06/08/2003 Work package: Document status: WP1-WP2 DRAFT Deliverable identifier: Abstract: This document presents what is the BrokerInfo file, the edg-brokerinfo command

More information

Programming Environment Oct 9, Grid Programming (1) Osamu Tatebe University of Tsukuba

Programming Environment Oct 9, Grid Programming (1) Osamu Tatebe University of Tsukuba Programming Environment Oct 9, 2014 Grid Programming (1) Osamu Tatebe University of Tsukuba Overview Grid Computing Computational Grid Data Grid Access Grid Grid Technology Security - Single Sign On Information

More information

An Introduction to the Grid

An Introduction to the Grid 1 An Introduction to the Grid 1.1 INTRODUCTION The Grid concepts and technologies are all very new, first expressed by Foster and Kesselman in 1998 [1]. Before this, efforts to orchestrate wide-area distributed

More information

Authentication for Virtual Organizations: From Passwords to X509, Identity Federation and GridShib BRIITE Meeting Salk Institute, La Jolla CA.

Authentication for Virtual Organizations: From Passwords to X509, Identity Federation and GridShib BRIITE Meeting Salk Institute, La Jolla CA. Authentication for Virtual Organizations: From Passwords to X509, Identity Federation and GridShib BRIITE Meeting Salk Institute, La Jolla CA. November 3th, 2005 Von Welch vwelch@ncsa.uiuc.edu Outline

More information

The National Analysis DESY

The National Analysis DESY The National Analysis Facility @ DESY Yves Kemp for the NAF team DESY IT Hamburg & DV Zeuthen 10.9.2008 GridKA School NAF: National Analysis Facility Why a talk about an Analysis Facility at a Grid School?

More information

Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft. Presented by Manfred Alef Contributions of Jos van Wezel, Andreas Heiss

Forschungszentrum Karlsruhe in der Helmholtz-Gemeinschaft. Presented by Manfred Alef Contributions of Jos van Wezel, Andreas Heiss Site Report Presented by Manfred Alef Contributions of Jos van Wezel, Andreas Heiss Grid Computing Centre Karlsruhe (GridKa) Forschungszentrum Karlsruhe Institute for Scientific Computing Hermann-von-Helmholtz-Platz

More information

Challenges of the LHC Computing Grid by the CMS experiment

Challenges of the LHC Computing Grid by the CMS experiment 2007 German e-science Available online at http://www.ges2007.de This document is under the terms of the CC-BY-NC-ND Creative Commons Attribution Challenges of the LHC Computing Grid by the CMS experiment

More information

CHIPP Phoenix Cluster Inauguration

CHIPP Phoenix Cluster Inauguration TheComputing Environment for LHC Data Analysis The LHC Computing Grid CHIPP Phoenix Cluster Inauguration Manno, Switzerland 30 May 2008 Les Robertson IT Department - CERN CH-1211 Genève 23 les.robertson@cern.ch

More information

International Collaboration to Extend and Advance Grid Education. glite WMS Workload Management System

International Collaboration to Extend and Advance Grid Education. glite WMS Workload Management System International Collaboration to Extend and Advance Grid Education glite WMS Workload Management System Marco Pappalardo Consorzio COMETA & INFN Catania, Italy ITIS Ferraris, Acireale, Tutorial GRID per

More information

GRIDS INTRODUCTION TO GRID INFRASTRUCTURES. Fabrizio Gagliardi

GRIDS INTRODUCTION TO GRID INFRASTRUCTURES. Fabrizio Gagliardi GRIDS INTRODUCTION TO GRID INFRASTRUCTURES Fabrizio Gagliardi Dr. Fabrizio Gagliardi is the leader of the EU DataGrid project and designated director of the proposed EGEE (Enabling Grids for E-science

More information

Status of KISTI Tier2 Center for ALICE

Status of KISTI Tier2 Center for ALICE APCTP 2009 LHC Physics Workshop at Korea Status of KISTI Tier2 Center for ALICE August 27, 2009 Soonwook Hwang KISTI e-science Division 1 Outline ALICE Computing Model KISTI ALICE Tier2 Center Future Plan

More information

The LHC Computing Grid. Slides mostly by: Dr Ian Bird LCG Project Leader 18 March 2008

The LHC Computing Grid. Slides mostly by: Dr Ian Bird LCG Project Leader 18 March 2008 The LHC Computing Grid Slides mostly by: Dr Ian Bird LCG Project Leader 18 March 2008 The LHC Computing Grid February 2008 Some precursors Computing for HEP means data handling Fixed-target experiments

More information

Distributed Computing Grid Experiences in CMS Data Challenge

Distributed Computing Grid Experiences in CMS Data Challenge Distributed Computing Grid Experiences in CMS Data Challenge A.Fanfani Dept. of Physics and INFN, Bologna Introduction about LHC and CMS CMS Production on Grid CMS Data challenge 2 nd GGF School on Grid

More information

HEP replica management

HEP replica management Primary actor Goal in context Scope Level Stakeholders and interests Precondition Minimal guarantees Success guarantees Trigger Technology and data variations Priority Releases Response time Frequency

More information

Summary of the LHC Computing Review

Summary of the LHC Computing Review Summary of the LHC Computing Review http://lhc-computing-review-public.web.cern.ch John Harvey CERN/EP May 10 th, 2001 LHCb Collaboration Meeting The Scale Data taking rate : 50,100, 200 Hz (ALICE, ATLAS-CMS,

More information

AGATA Analysis on the GRID

AGATA Analysis on the GRID AGATA Analysis on the GRID R.M. Pérez-Vidal IFIC-CSIC For the e682 collaboration What is GRID? Grid technologies allow that computers share trough Internet or other telecommunication networks not only

More information

Day 1 : August (Thursday) An overview of Globus Toolkit 2.4

Day 1 : August (Thursday) An overview of Globus Toolkit 2.4 An Overview of Grid Computing Workshop Day 1 : August 05 2004 (Thursday) An overview of Globus Toolkit 2.4 By CDAC Experts Contact :vcvrao@cdacindia.com; betatest@cdacindia.com URL : http://www.cs.umn.edu/~vcvrao

More information

The German National Analysis Facility What it is and how to use it efficiently

The German National Analysis Facility What it is and how to use it efficiently The German National Analysis Facility What it is and how to use it efficiently Andreas Haupt, Stephan Wiesand, Yves Kemp GridKa School 2010 Karlsruhe, 8 th September 2010 Outline > NAF? What's that? >

More information

E UFORIA G RID I NFRASTRUCTURE S TATUS R EPORT

E UFORIA G RID I NFRASTRUCTURE S TATUS R EPORT E UFORIA G RID I NFRASTRUCTURE S TATUS R EPORT DSA1.1 Document Filename: Activity: Partner(s): Lead Partner: Document classification: EUFORIA-DSA1.1-v1.0-CSIC SA1 CSIC, FZK, PSNC, CHALMERS CSIC PUBLIC

More information

Site Report. Stephan Wiesand DESY -DV

Site Report. Stephan Wiesand DESY -DV Site Report Stephan Wiesand DESY -DV 2005-10-12 Where we're headed HERA (H1, HERMES, ZEUS) HASYLAB -> PETRA III PITZ VUV-FEL: first experiments, X-FEL: in planning stage ILC: R & D LQCD: parallel computing

More information

Experience with LCG-2 and Storage Resource Management Middleware

Experience with LCG-2 and Storage Resource Management Middleware Experience with LCG-2 and Storage Resource Management Middleware Dimitrios Tsirigkas September 10th, 2004 MSc in High Performance Computing The University of Edinburgh Year of Presentation: 2004 Authorship

More information

Storage on the Lunatic Fringe. Thomas M. Ruwart University of Minnesota Digital Technology Center Intelligent Storage Consortium

Storage on the Lunatic Fringe. Thomas M. Ruwart University of Minnesota Digital Technology Center Intelligent Storage Consortium Storage on the Lunatic Fringe Thomas M. Ruwart University of Minnesota Digital Technology Center Intelligent Storage Consortium tmruwart@dtc.umn.edu Orientation Who are the lunatics? What are their requirements?

More information

Architecture of the WMS

Architecture of the WMS Architecture of the WMS Dr. Giuliano Taffoni INFORMATION SYSTEMS UNIT Outline This presentation will cover the following arguments: Overview of WMS Architecture Job Description Language Overview WMProxy

More information

Hardware Tokens in META Centre

Hardware Tokens in META Centre MWSG meeting, CERN, September 15, 2005 Hardware Tokens in META Centre Daniel Kouřil kouril@ics.muni.cz CESNET Project META Centre One of the basic activities of CESNET (Czech NREN operator); started in

More information

DESY at the LHC. Klaus Mőnig. On behalf of the ATLAS, CMS and the Grid/Tier2 communities

DESY at the LHC. Klaus Mőnig. On behalf of the ATLAS, CMS and the Grid/Tier2 communities DESY at the LHC Klaus Mőnig On behalf of the ATLAS, CMS and the Grid/Tier2 communities A bit of History In Spring 2005 DESY decided to participate in the LHC experimental program During summer 2005 a group

More information

Philippe Charpentier PH Department CERN, Geneva

Philippe Charpentier PH Department CERN, Geneva Philippe Charpentier PH Department CERN, Geneva Outline Disclaimer: These lectures are not meant at teaching you how to compute on the Grid! I hope it will give you a flavor on what Grid Computing is about

More information

Monte Carlo Production on the Grid by the H1 Collaboration

Monte Carlo Production on the Grid by the H1 Collaboration Journal of Physics: Conference Series Monte Carlo Production on the Grid by the H1 Collaboration To cite this article: E Bystritskaya et al 2012 J. Phys.: Conf. Ser. 396 032067 Recent citations - Monitoring

More information

ARC integration for CMS

ARC integration for CMS ARC integration for CMS ARC integration for CMS Erik Edelmann 2, Laurence Field 3, Jaime Frey 4, Michael Grønager 2, Kalle Happonen 1, Daniel Johansson 2, Josva Kleist 2, Jukka Klem 1, Jesper Koivumäki

More information

Troubleshooting Grid authentication from the client side

Troubleshooting Grid authentication from the client side System and Network Engineering RP1 Troubleshooting Grid authentication from the client side Adriaan van der Zee 2009-02-05 Abstract This report, the result of a four-week research project, discusses the

More information

EUROPEAN MIDDLEWARE INITIATIVE

EUROPEAN MIDDLEWARE INITIATIVE EUROPEAN MIDDLEWARE INITIATIVE VOMS CORE AND WMS SECURITY ASSESSMENT EMI DOCUMENT Document identifier: EMI-DOC-SA2- VOMS_WMS_Security_Assessment_v1.0.doc Activity: Lead Partner: Document status: Document

More information

Globus Toolkit Firewall Requirements. Abstract

Globus Toolkit Firewall Requirements. Abstract Globus Toolkit Firewall Requirements v0.3 8/30/2002 Von Welch Software Architect, Globus Project welch@mcs.anl.gov Abstract This document provides requirements and guidance to firewall administrators at

More information

Introduction to GT3. Introduction to GT3. What is a Grid? A Story of Evolution. The Globus Project

Introduction to GT3. Introduction to GT3. What is a Grid? A Story of Evolution. The Globus Project Introduction to GT3 The Globus Project Argonne National Laboratory USC Information Sciences Institute Copyright (C) 2003 University of Chicago and The University of Southern California. All Rights Reserved.

More information

ARDA status. Massimo Lamanna / CERN. LHCC referees meeting, 28 June

ARDA status. Massimo Lamanna / CERN. LHCC referees meeting, 28 June LHCC referees meeting, 28 June 2004 ARDA status http://cern.ch/arda Massimo Lamanna / CERN www.eu-egee.org cern.ch/lcg EGEE is a project funded by the European Union under contract IST-2003-508833 LHCC

More information

Monitoring tools in EGEE

Monitoring tools in EGEE Monitoring tools in EGEE Piotr Nyczyk CERN IT/GD Joint OSG and EGEE Operations Workshop - 3 Abingdon, 27-29 September 2005 www.eu-egee.org Kaleidoscope of monitoring tools Monitoring for operations Covered

More information

RUSSIAN DATA INTENSIVE GRID (RDIG): CURRENT STATUS AND PERSPECTIVES TOWARD NATIONAL GRID INITIATIVE

RUSSIAN DATA INTENSIVE GRID (RDIG): CURRENT STATUS AND PERSPECTIVES TOWARD NATIONAL GRID INITIATIVE RUSSIAN DATA INTENSIVE GRID (RDIG): CURRENT STATUS AND PERSPECTIVES TOWARD NATIONAL GRID INITIATIVE Viacheslav Ilyin Alexander Kryukov Vladimir Korenkov Yuri Ryabov Aleksey Soldatov (SINP, MSU), (SINP,

More information

Computing / The DESY Grid Center

Computing / The DESY Grid Center Computing / The DESY Grid Center Developing software for HEP - dcache - ILC software development The DESY Grid Center - NAF, DESY-HH and DESY-ZN Grid overview - Usage and outcome Yves Kemp for DESY IT

More information

ELFms industrialisation plans

ELFms industrialisation plans ELFms industrialisation plans CERN openlab workshop 13 June 2005 German Cancio CERN IT/FIO http://cern.ch/elfms ELFms industrialisation plans, 13/6/05 Outline Background What is ELFms Collaboration with

More information

dcache Introduction Course

dcache Introduction Course GRIDKA SCHOOL 2013 KARLSRUHER INSTITUT FÜR TECHNOLOGIE KARLSRUHE August 29, 2013 dcache Introduction Course Overview Chapters I, II and Ⅴ christoph.anton.mitterer@lmu.de I. Introduction To dcache Slide

More information

Gergely Sipos MTA SZTAKI

Gergely Sipos MTA SZTAKI Application development on EGEE with P-GRADE Portal Gergely Sipos MTA SZTAKI sipos@sztaki.hu EGEE Training and Induction EGEE Application Porting Support www.lpds.sztaki.hu/gasuc www.portal.p-grade.hu

More information

Based on: Grid Intro and Fundamentals Review Talk by Gabrielle Allen Talk by Laura Bright / Bill Howe

Based on: Grid Intro and Fundamentals Review Talk by Gabrielle Allen Talk by Laura Bright / Bill Howe Introduction to Grid Computing 1 Based on: Grid Intro and Fundamentals Review Talk by Gabrielle Allen Talk by Laura Bright / Bill Howe 2 Overview Background: What is the Grid? Related technologies Grid

More information