Laboratory of Information Technologies. Distributed computing and Grid technologies. Grid infrastructure at JINR


1 Laboratory of Information Technologies. Distributed computing and Grid technologies. Grid infrastructure at JINR. Korenkov Vladimir (LIT, JINR), Dubna

2 Laboratory of Information Technologies

3 Specific task: Provision of theoretical and experimental studies conducted by the JINR Member State institutes at JINR and other scientific centres with modern telecommunication, network and information resources. In order to fulfill the task, it is necessary to provide: development of telecommunication channels of JINR with the JINR Member States on the basis of national and regional telecommunication networks; fault-tolerant operation and further development of the high-speed and protected local area network of JINR; development and maintenance of the distributed high-performance computing infrastructure and mass storage resources; information, algorithmic and software support of the research-and-production activity of JINR; reliable operation and development of the JINR Grid segment as a component of the global Grid infrastructure.

4 Cooperation. Collaborators from the JINR Member States (protocols of cooperation with INRNE (Bulgaria), ArmeSFo (Armenia), FZK Karlsruhe GmbH (Germany), IHEPI TSU (Georgia), NC PHEP BSU (Belarus), KFTI NASU (Ukraine), etc.). Grants of Plenipotentiaries of the JINR Member States: Poland, Bulgaria, Slovakia, Romania. Agreement on the Joint Research Programme Meshcheryakov-Hulubei (LIT-Romania). BMBF grant "JINR Network Security, Failover Measures, Network Monitoring". CERN-JINR Cooperation Agreement on several topics, including participation of JINR in the LCG project. JINR-South Africa Cooperation Agreement: project "Development of a Grid segment for the LHC experiments". Participation in common projects: Enabling Grids for E-sciencE (EGEE, a project co-funded by the European Commission through the Sixth Framework Programme); the CERN-INTAS project; the "SKIF-GRID" project of the Programme of the Belarusian-Russian Union State, as part of the joint propositions of NASB and the Federal Agency of Science and Innovations of the Russian Federation on the development and use of hardware and software for GRID technologies and the advanced supercomputer systems SKIF; GridNNN (National Nanotechnological Network). Joint research grants from RFBR and the Ministry of Science and Education with MSU and SINP MSU, Saratov University, IM SB RAS, IMM RAS, and TU Gomel (Belarus).

5 Core e-science areas: Numerical Analysis; Visualization and Image Science; Mathematical Modeling; Distributed Resources and Database Technology; Parallel Algorithms and Performance Optimization.

6 LIT JINR: two important projects completed. Two large-scale, very important projects have been completed. A grand presentation was held simultaneously with the meeting of the Programme Advisory Committee for Particle Physics on June 10. All organizations involved in the realisation of the projects, together with representatives of the Ministry of Communications and the Federal Agency for Science and Innovation, attended the presentation. Our community colleagues and prospective users, residents of the special economic zone, were also invited.

7 JINR - Moscow 20 Gbps telecommunication channel Collaboration with RSCC, RIPN, MSK-IX, JET Infosystems, Nortel

8 JINR Central Information and Computing Complex. In 2009, the total CICC performance was 2400 kSI2k and the disk storage capacity 500 TB. At present, the CICC performance equals 2800 kSI2k and the disk storage capacity 1068 TB (>1 PB), with 900 TB available to users from dCache and Xrootd for data storage. Scheme of the CICC network connections.

9 JINR Local Area Network Backbone (LAN). Plans: step-by-step modernization of the JINR backbone with transfer to 10 Gbps; development and modernization of the control system of the JINR highway network. The network comprises 7250 computers and nodes; 4200 users; 8176 IP addresses; remote VPN users (Lanpolis, Contact, TelecomMPK); high-speed transport (1 Gbps/10 Gbps); controlled access; partially isolated local traffic (8 divisions have their own subnetworks with Cisco Catalyst 3550 switches as gateways); a general network authorization system involving many services (AFS, batch, Grid, remote access, etc.).

10 Mathematical Support of JINR Experiments: mathematical support of particle and relativistic nuclear physics experiments performed in JINR and ones with JINR participation (LHC-CERN, GSI, RHIC, DESY, etc); design of mathematical methods and means of modeling physical processes and experimental data analysis; development and study of the new methods of 2D and 3D approximations and smoothing for increasing the efficiency of experimental data processing; development of software and computer complexes for experimental data processing; involvement of new workforce (young people) in data processing. 10

11 Numerical Investigation of Problems arising in the modeling of complex physical systems: creation of numerical algorithms and software for modeling complex physical systems; implementation of parallel algorithms proposed for calculations at multi-processor computer clusters using the MPI technology; development of new-generation computing techniques, including the gridification of the developed algorithms. 11

12 lit.jinr.ru 12

13 13

14 Challenges of scientific computing: distributed computing and Grid

15 Industry Journey. Old world: static, solo, physical, manual, application. New world: dynamic, shared, virtual, automated, service.

16 We do e-science ("e" as in digital, distributed): science that is computationally intensive, operates on massive digital data sets, and is carried out in a distributed network environment. High-Throughput vs High-Performance Computing: HTC is distributed (serial tasks), uses free cycles, and is cheap; HPC is compact (parallel tasks), booked years ahead, and expensive. High-Energy Physics is a textbook example of e-science.

17 New instruments, more data, more scientists, more computers. The WWW was born at CERN.

18 Why Grid? The changing nature of work: collaborative & dynamic; distributed & heterogeneous; data & computation intensive; concurrent innovation cycles. Project-focused, globally distributed teams span organizations within and beyond company boundaries. Each team member or group brings its own data, compute and other resources into the project. Access to computing and data resources must be coordinated across the collaboration. Resources must be available to projects with strong QoS, and must also reflect enterprise-wide business priorities. IT must adapt to this new reality.

19 Five Emerging Models of Networked Computing (from "The Grid"): Distributed Computing (synchronous processing); High-Throughput Computing (asynchronous processing); On-Demand Computing (dynamic resources); Data-Intensive Computing (databases); Collaborative Computing (scientists). Ian Foster and Carl Kesselman, editors, "The Grid: Blueprint for a New Computing Infrastructure", Morgan Kaufmann, 1999.

20 The Grid: enable coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations (source: "The Anatomy of the Grid"). Access to shared resources: virtualization, allocation, management. With predictable behaviors: provisioning, quality of service. In dynamic, heterogeneous environments: standards-based interfaces and protocols.

21 Grid computing Today there are many definitions of Grid computing: The definitive definition of a Grid is provided by Ian Foster in his article "What is the Grid? A Three Point Checklist" The three points of this checklist are: Computing resources are not administered centrally; Open standards are used; Non-trivial quality of service is achieved. 21

22 GÉANT2 Pan-European Backbone: 34 NRENs, ~30M users; 50k km leased lines, 12k km dark fiber; point-to-point services. GN3 next-generation network, projected start Q. Dark fiber core among 19 countries: Austria, Belgium, Croatia, Czech Republic, Denmark, Finland, France, Germany, Hungary, Ireland, Italy, Netherlands, Norway, Slovakia, Slovenia, Spain, Sweden, Switzerland, United Kingdom.

23 GÉANT2 International Connectivity. New projects: ALICE2 in and to Latin America; CAREN for Central Asia; FEAST, a feasibility study for Sub-Saharan Africa. Inter-regional connections: GÉANT2/ESnet over US LHCNet for US Tier2/EU Tier1 connections; 4 x 10G links to US R&E networks; a 1 GbE connection to the UbuntuNet Alliance (Africa), plus TENET in South Africa; ORIENT: CERNET + CSTNET (China) to Europe at 2.5G; TEIN2: 9 Asian countries + Australia to Europe, TEIN3 approved; EUMEDConnect: 11 Mediterranean and North African countries; ALICE: 12 Latin American countries to GÉANT2 at 622 Mbps; SuperSINET (Japan) at New York: 10G; India: 100 Mbps.

24 Connectivity of JINR Member States. Increase of the JINR-Moscow data link: 20 Gbps now, with a further increase planned for 2016. GÉANT3 Pan-European Backbone: a consortium of 34 NRENs. TEIN3: the Research and Education Network for Asia-Pacific (Vietnam). Bulgaria, Czech Republic, Poland, Romania, Russia, Slovakia, Ukraine, Belarus, Azerbaijan, Armenia, Georgia, Moldova.

25 Grid is a result of IT progress. Graph from "The Triumph of the Light", G. Stix, Scientific American, January 2001.

26 From conventional HPC to the Grid

27 27

28 Job submission to the WLCG. Components: User Interface, Resource Broker, Replica Catalogue (RC), Information Service (IS), Job Submission Service (JSS), Logging and Bookkeeping, Computing Element (CE), Storage Element (SE). Job states: submitted, waiting, ready, scheduled, running, done, output ready, cleared.
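To make the flow above concrete, here is a minimal sketch of a gLite-era job description (JDL) together with the submission and status commands of the gLite WMS; the file name, the requirement expression and the VO are illustrative assumptions, not taken from the talk.

```
// hello.jdl -- a minimal, hypothetical job description (illustrative only)
Executable          = "/bin/hostname";
Arguments           = "-f";
StdOutput           = "hello.out";
StdError            = "hello.err";
OutputSandbox       = {"hello.out", "hello.err"};
VirtualOrganisation = "dteam";
// steer the broker towards CEs advertised as being in production
Requirements        = other.GlueCEStateStatus == "Production";
```

Submitted with `glite-wms-job-submit -a hello.jdl`, the returned job identifier can then be polled with `glite-wms-job-status`, which reports essentially the state sequence listed above (submitted, waiting, ready, scheduled, running, done, cleared).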

29 Global Community 29

30 Some history. 1999: MONARC project, early discussions on how to organise distributed computing for the LHC. EU DataGrid project: middleware and a testbed for an operational grid. LHC Computing Grid (LCG): deploying the results of DataGrid to provide a production facility for the LHC experiments. EU EGEE project, phase 1: starts from the LCG grid; a shared production infrastructure expanding to other communities and sciences. EU EGEE-II: building on phase 1, expanding applications and communities. EU EGEE-III.

31 CERN - European Organization for Nuclear Research: research/discovery, technology, training, collaborating.

32 Large Hadron Collider. Start-up of the Large Hadron Collider (LHC), one of the largest and truly global scientific projects ever, is the most exciting turning point in particle physics. LHC ring: 27 km circumference. Experiments: ATLAS, CMS, ALICE, LHCb.

33 ATLAS Number of scientists: 1800 Number of institutes: 164 Number of countries: 35 33

34 Tier 0 at CERN: acquisition, first-pass reconstruction, storage and distribution. 1.25 GB/sec (ions). (Ian.Bird@cern.ch)

35 Tier 0 / Tier 1 / Tier 2. Tier-0 (CERN): data recording, initial data reconstruction, data distribution. Tier-1 (11 centres): permanent storage, re-processing, analysis. Tier-2 (>200 centres): simulation, end-user analysis. (Ian.Bird@cern.ch)

36 DataGrid Architecture. Local computing: local application, local database. Grid application layer: job management, data management, metadata management, object-to-file mapping. Collective services: information & monitoring, replica manager, grid scheduler. Underlying Grid services: database services, computing element services, storage element services, replica catalog, authorization, authentication & accounting, logging & bookkeeping. Fabric services: resource management, configuration management, monitoring and fault tolerance, node installation & management, fabric storage management.

37 EGEE (Enabling Grids for E-sciencE). The aim of the project is to create a global pan-European computing infrastructure of a Grid type: integrate regional Grid efforts; represent leading grid activities in Europe. 10 federations, 27 countries, 70 organizations.

38 350 sites, 55 countries, 150,000 CPUs, 26 PetaBytes of disk, 40 PetaBytes of tape, >15,000 users, >300 VOs, 12 million jobs/month. Application areas: astronomy and astrophysics, public safety, computational chemistry, computational science/programming, condensed matter physics, Earth sciences, fusion, high energy physics, life sciences.

39 39

40 gLite Middleware. Information services: BDII, MON. General services: Logging & Bookkeeping Service, Workload Management Service, File Transfer Service. Compute Element: CREAM, glexec, BLAH, LCG-CE, Worker Node. User access: User Interface. Data catalogues and related services: LHC File Catalogue, Hydra, AMGA. Storage Element: Disk Pool Manager, dCache. Security services: Virtual Organisation Membership Service, Proxy Server, SCAS authorization service, LCAS & LCMAPS. Physical resources. The diagram distinguishes EGEE-maintained components from external components. (Bob Jones, EGEE09)

41 glite 41

42 Bioinformatics and Grid. Many large clusters are utilized for services, such as sequence similarity (BLAST queues), and for research, such as molecular modeling (folding, docking) and the training of novel predictors. Jobs are typically short (3 minutes) but plentiful (all against all). There is considerable preparation for a single job (a couple of gigabytes of data to transfer).
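To get a feel for why "short but plentiful" jobs dominate here, the following back-of-the-envelope sketch estimates the job count for an all-against-all comparison; the sequence-set size and the number of comparisons bundled per job are assumptions for illustration, not figures from the talk.

```python
# Rough estimate of an all-against-all BLAST-style workload.
# All numbers are illustrative assumptions, not figures from the talk.
n_sequences = 100_000                         # size of the sequence set (assumed)
pairs = n_sequences * (n_sequences - 1) // 2  # unordered pairs to compare
pairs_per_job = 500_000                       # comparisons bundled into one ~3-minute job (assumed)
jobs = -(-pairs // pairs_per_job)             # ceiling division

minutes_per_job = 3
total_cpu_days = jobs * minutes_per_job / (60 * 24)

print(f"{pairs:.2e} pairwise comparisons -> {jobs} grid jobs")
print(f"~{total_cpu_days:.0f} CPU-days of work at {minutes_per_job} minutes per job")
```

Even with modest per-job runtimes, the sheer number of jobs and the data staged for each of them (the gigabytes mentioned above) are what make a grid, rather than a single cluster, attractive.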

43 Biomedical applications. Biomedicine is also a pilot application area. More than 20 applications have been deployed and are being ported. Three sub-domains: medical image processing, biomedicine, drug discovery. The Grid is used as a platform for collaboration (it does not need the same massive processing power or storage as HEP).

44 Applications example: WISDOM, a Grid-enabled drug discovery process for neglected diseases. In silico docking: compute the probability that potential drugs dock with a target protein, in order to speed up and reduce the cost of developing new drugs. WISDOM (World-wide In Silico Docking On Malaria): three large-scale deployments with more than 6 centuries of computation achieved in 190 days; 3.5 TB of data produced; up to 5000 computers in 50 countries; some promising in-vitro tests with relevant biological results.
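The headline figures on this slide translate directly into an average degree of parallelism; the short calculation below only rearranges the numbers quoted above (6 centuries of CPU time, 190 elapsed days, up to 5000 machines).

```python
# Average sustained parallelism implied by the WISDOM figures quoted on the slide.
cpu_years = 6 * 100          # "more than 6 centuries of computation"
wall_days = 190              # elapsed (wall-clock) duration of the deployments
peak_cpus = 5000             # "up to 5000 computers"

cpu_days = cpu_years * 365.25
avg_parallel_cpus = cpu_days / wall_days

print(f"on average ~{avg_parallel_cpus:.0f} CPUs were busy at any moment")
print(f"i.e. roughly {avg_parallel_cpus / peak_cpus:.0%} of the 5000-machine peak")
```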

45 Radio astronomy needs the Grid, too: enormous datasets, massive computing, innovative instrumentation. Dozens of new surveys have been launched recently; many (10-100) terabytes per survey; high data rates; many researchers per survey; international collaborations (almost always); data is non-proprietary (usually).

46 Computational Chemistry. GEMS (Grid Enabled Molecular Simulator) application: calculation and fitting of electronic energies of atomic and molecular aggregates (using high-level ab initio methods); the use of statistical kinetics and dynamics to study chemical processes. Virtual monitors: angular distributions, vibrational distributions, rotational distributions, many-body systems. End-user applications: nanotubes, life sciences, statistical thermodynamics, molecular virtual reality.

47 Fusion. Large nuclear fusion installations, e.g. the International Thermonuclear Experimental Reactor (ITER). Distributed data storage and handling are needed. Computing power is needed for: making decisions in real time; solving kinetic transport (particle orbits); stellarator optimization (the magnetic field to contain the plasma).

48 Earth Science Applications. Community: many small groups that aggregate for projects (and separate afterwards). The Earth is a complex system: independent domains with interfaces (solid Earth, ocean, atmosphere), involving physics, chemistry and/or biology. Applications: Earth observation by satellite, seismology, hydrology, climate, meteorology and space weather, geosciences, the Mars atmosphere, pollution database collection.

49 Earth Sciences: earthquake analysis. Seismic software determines the epicentre, magnitude and mechanism, which may make it possible to predict future earthquakes and assess the potential impact on specific regions. Analysis of the Indonesian earthquake (28 March 2005): data from the French seismic sensor network GEOSCOPE was transmitted to IPGP within 12 hours after the earthquake; a solution was found within 30 hours after the earthquake occurred, 10 times faster on the Grid than on local computers. Results: not an aftershock of the December 2004 earthquake; a different location (a different part of the fault line, further south); a different mechanism. Rapid analysis of earthquakes is important for relief efforts.

50 How to set up your own Grid 50 Slide from O.Smirnova (NDGF)

51 What next? 51

52 European e-infrastructure. Need to prepare a permanent, common Grid infrastructure. Ensure the long-term sustainability of the European e-infrastructure independent of short project funding cycles. Coordinate the integration and interaction between National Grid Infrastructures (NGIs). Operate the European level of the production Grid infrastructure for a wide range of scientific disciplines to link the NGIs. (The EGEE project, Bob Jones, EGEE'08, 22 September)

53 Grid establishes itself as a scientific tool. The EU invests in Grids: over 450 million euro invested in the last 8 years to support Grid research and e-infrastructure deployments. One call towards the end of 2009 (indicative budget 95 million euro): create the European Grid Initiative (EGI); software and services in support of EGI and HPC; user communities, with a focus on the e-infrastructure needs of important user communities; support measures, including international cooperation and prospective and strategic studies.

54 What the future holds. The ARC consortium (NorduGrid, NDGF, KnowARC et al.), together with gLite and UNICORE, contributes to the creation of the Universal Middleware Distribution (UMD) for the European Grid Initiative (EGI). Sites and VOs that use ARC will get access to the European e-science infrastructure, just like those that use gLite or UNICORE. What about Clouds? Technically, they are very similar to Grids: distributed, service-oriented. However, the Cloud business model is closer to that of HPC: a single administrative domain and carefully selected resources.

55 The Future of Grids: from e-infrastructures to knowledge infrastructures. The network infrastructure connects computing and data resources and allows their seamless usage via Grid infrastructures. Federated resources and new technologies enable new application fields: distributed digital libraries, distributed data mining, digital preservation of cultural heritage, data curation. A knowledge infrastructure is a major opportunity for academia and business alike. (Layers: network infrastructure, Grid infrastructure, knowledge infrastructure.)

56 Grids, clouds, supercomputers, etc. (Mirco Mazzucato, Dubna; slide by Ian Bird). Grids: collaborative environment; distributed resources (political/sociological); commodity hardware (also supercomputers); (HEP) data management; complex interfaces (bug, not feature). Supercomputers: expensive; low-latency interconnects; applications peer reviewed; parallel/coupled applications; traditional interfaces (login); also SC grids (DEISA, TeraGrid). Clouds: proprietary (implementation); economies of scale in management; commodity hardware; virtualisation for service provision and for encapsulating the application environment; details of physical resources hidden; simple interfaces (too simple?). Volunteer computing: a simple mechanism to access millions of CPUs; difficult if (much) data is involved; control of environment: check; community building, people involved in science; potential for huge amounts of real work. Many different problems, amenable to different solutions; no right answer.

57 Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), Infrastructure-as-a-Service (IaaS), Everything-as-a-Service (XaaS).

58 MACS Lab. DNA-array application, radiology application; application layer, Grid layer, virtual laboratory layer.

59 Some Desktop Grids (each listing its number of participating PCs): World Community Grid (IBM); Leiden Classical Grid, education on the Grid (boinc.gorlaeus.net); SZTAKI, a Hungarian initiative; AlmereGrid (almeregrid.nl); PS3GRID (based on PlayStations).

60 Real World Problems Taking Us BEYOND PETASCALE (chart scale: 100 MFlops to 1 ZFlops; what we can just model today needs <100 TF; markers for the sum of the Top500 and the #1 system). Example analyses and their computing needs: aerodynamic analysis, 1 Petaflops; laser optics, 10 Petaflops; molecular dynamics in biology, 20 Petaflops; aerodynamic design, 1 Exaflops; computational cosmology, 10 Exaflops; turbulence in physics, 100 Exaflops; computational chemistry, 1 Zettaflops. Real-world challenges: full modeling of an aircraft in all conditions, green airplanes, genetically tailored medicine, understanding the origin of the universe, synthetic fuels everywhere, accurate extreme weather prediction. Source: Dr. Steve Chen, "The Growing HPC Momentum in China", June 30th, 2006, Dresden, Germany.

61 Reach Exascale by 2018. From GigaFlops to ExaFlops: ~1987, sustained GigaFlop; ~1997, sustained TeraFlop; 2008, sustained PetaFlop; ~2018, sustained ExaFlop. Note: numbers are based on the Linpack benchmark; dates are approximate. The pursuit of each milestone has led to important breakthroughs in science and engineering. Source: IDC, "In Pursuit of Petascale Computing: Initiatives Around the World", 2007.
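The milestones above amount to a thousand-fold increase roughly every decade; the small calculation below shows the year-on-year growth factor this implies (a simple derivation from the dates listed, which the source itself marks as approximate).

```python
# Implied annual growth from the Giga -> Tera -> Peta -> Exa milestones,
# each step being a factor of 1000 over roughly a decade.
step_factor = 1000
years_per_step = 10          # ~1987 -> ~1997 -> ~2008 -> ~2018 (approximate dates)

annual_factor = step_factor ** (1 / years_per_step)
print(f"about {annual_factor:.2f}x per year")   # ~2.0, i.e. sustained performance roughly doubles yearly
```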

62

63 The "Lomonosov" supercomputer at Moscow State University (MSU)

64 Japan Courtesy of Satoshi Matsuoka, Tokyo Institute of Technology, Japan, ISC-2010

65 Grid infrastructure at JINR

66 Development of the JINR Grid environment

67 JINR in the Russian Data Intensive Grid infrastructure (RDIG). The Russian consortium RDIG (Russian Data Intensive Grid) was set up in September 2003 as a national federation in the EGEE project. Now the RDIG infrastructure comprises 17 Resource Centres with >5000 CPUs (12000 kSI2k) and >3200 TB of disk storage. RDIG Resource Centres: ITEP, JINR-LCG2, Kharkov-KIPT, RRC-KI, RU-Moscow-KIAM, RU-Phys-SPbSU, RU-Protvino-IHEP, RU-SPbSU, Ru-Troitsk-INR, ru-impb-lcg2, ru-moscow-fian, ru-moscow-gcras, ru-moscow-mephi, ru-pnpi-lcg2, ru-moscow-sinp, BY-NCPHEP.

68 Development and maintenance of the RDIG e-infrastructure: support of basic grid services; support of the Regional Operations Center (ROC); support of Resource Centers (RC) in Russia; the RDIG Certification Authority; RDIG monitoring and accounting; participation in the integration, testing and certification of grid software; support of users, virtual organizations (VO) and applications; user and administrator training and education; dissemination, outreach and communication activities.

69 VOs. Infrastructure VOs (all RCs): dteam, ops. Most RCs support the WLCG/EGEE VOs: ALICE, ATLAS, CMS, LHCb. Supported by some RCs: gear, biomed, fusion. Regional VOs: ams, eearth, photon, rdteam, rgstest. Flagship applications: LHC, fusion (towards ITER), nanotechnology. Current interest comes from medicine and engineering.

70 LHC Computing Grid Project (LCG). The protocol between CERN, Russia and JINR on participation in the LCG Project has been approved. The tasks of the Russian institutes in the LCG have been defined as: LCG software testing; evaluation of new Grid technologies (e.g. Globus Toolkit 3) in the context of their use in the LCG; an event generators repository and a database of physical events: support and development.

71 Worldwide LHC Computing Grid Project (WLCG). The tasks of the Russian institutes and JINR in the LCG (2009): Task 1, MW (gLite) test suite (supervisor O. Keeble); Task 2, LCG vs Experiments (supervisor I. Bird); Task 3, LCG monitoring (supervisor J. Andreeva); Task 4/5, GENSER/MCDB (supervisor A. Ribon).

72 Worldwide LHC Computing Grid Project (WLCG). The protocol between CERN, Russia and JINR on participation in the LCG Project was approved; the MoU on the Worldwide LHC Computing Grid (WLCG) was signed by JINR in October 2007. The tasks of JINR in the WLCG: WLCG-infrastructure support and development at JINR; participation in WLCG middleware testing/evaluation; participation in Service/Data Challenges; development of grid monitoring and accounting tools; FTS monitoring and testing; participation in ARDA activities in coordination with the experiments; JINR LCG portal support and development; HEP applications; MCDB development; support of the JINR Member States in WLCG activities; user and administrator training and education.

73 Integration, testing and certification of new MW. Testing gLite components: metadata catalog, Fireman catalog, GridFTP, the gLite-AMGA metadata service, dCache; testing gLite for ATLAS and CMS; collaboration with the Condor-G team and development of monitoring for Condor; development of the Dashboard. Evaluation of new MW (JINR, KIAM, SINP MSU): OMII (Open Middleware Infrastructure Institute); Globus Toolkit 3 & 4.

74 Participation in WLCG MCDB development. The MCDB access libraries have been integrated into the CMSSW software. Several improvements have been made in the MCDB software. A new, improved XML schema has been developed for the High Energy Physics Markup Language (HepML). Program libraries have been developed to work with the new HepML schema.

75 HepWeb overview. Provides web access to the computing resources of LIT for Monte Carlo simulations of hadron-hadron, hadron-nucleus, and nucleus-nucleus interactions by means of the most popular generators. Realization: a service-oriented architecture. Goals: Monte Carlo simulations at the server; provide physicists with new calculation/simulation tools; a mirror site of GENSER of the LHC Computing Grid project; provide physicists with informational and mathematical support; introduce young physicists to the HEP world.

76 RDIG monitoring and accounting. Monitoring allows one to keep an eye on the operational parameters of Grid sites in real time; accounting tracks resource utilization on Grid sites by virtual organizations and individual users. Monitored values: CPUs (total / working / down / free / busy); jobs (running / waiting); storage space (used / available); network (available bandwidth). Accounting values: number of submitted jobs; used CPU time (total in seconds, normalized by worker-node productivity, average time per job); waiting time (total in seconds, average ratio of waiting to used CPU time per job); physical memory (average per job). (JINR CICC)
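A minimal sketch of the kind of normalisation implied by the accounting values above: raw CPU seconds are scaled by the benchmarked power of the worker node so that sites with different hardware can be compared. The record layout and the 1 kSI2k reference are assumptions for illustration, not RDIG's actual schema.

```python
# Hedged sketch: normalising consumed CPU time by worker-node power,
# in the spirit of the kSI2k-based accounting described above.
# Record fields and the reference value are illustrative assumptions.

job_records = [
    # (VO, raw CPU seconds, worker-node power in kSI2k per core)
    ("cms",   7200, 1.8),
    ("atlas", 3600, 2.4),
    ("cms",   1800, 1.0),
]

REFERENCE_KSI2K = 1.0   # normalise to a nominal 1 kSI2k core

totals = {}
for vo, cpu_seconds, node_ksi2k in job_records:
    normalised = cpu_seconds * node_ksi2k / REFERENCE_KSI2K
    totals[vo] = totals.get(vo, 0.0) + normalised

for vo, seconds in sorted(totals.items()):
    print(f"{vo}: {seconds:.0f} normalised CPU seconds")
```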

77 FTS (File Transfer Service) monitoring for the Worldwide LHC Computing Grid (WLCG) project. A monitoring system has been developed which provides a convenient and reliable tool for receiving detailed information about the current FTS state and for the analysis of errors on data transfer channels, maintaining FTS functionality and optimizing the technical support process. The system could seriously improve FTS reliability and performance.
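A hedged sketch of the per-channel error analysis such a monitoring system performs: failed transfers are grouped by channel and by a coarse error category so that problematic links stand out. The record layout and the error keywords are illustrative assumptions, not the actual FTS log format.

```python
# Hedged sketch: summarising FTS transfer failures per channel and error class.
# The record fields and error keywords are illustrative assumptions only.
from collections import Counter

transfers = [
    # (source site, destination site, state, error message)
    ("CERN-PROD", "JINR-LCG2", "Failed", "SRM_ABORTED: destination timeout"),
    ("CERN-PROD", "JINR-LCG2", "Done",   ""),
    ("FZK-LCG2",  "JINR-LCG2", "Failed", "gridftp: connection refused"),
    ("CERN-PROD", "RRC-KI",    "Failed", "SRM_ABORTED: destination timeout"),
]

def classify(message: str) -> str:
    """Map a raw error message onto a coarse category (keywords assumed)."""
    msg = message.lower()
    if "timeout" in msg:
        return "timeout"
    if "connection" in msg:
        return "network"
    return "other"

failures_per_channel = Counter()
for src, dst, state, error in transfers:
    if state == "Failed":
        failures_per_channel[(f"{src} -> {dst}", classify(error))] += 1

for (channel, category), count in failures_per_channel.most_common():
    print(f"{channel:25s} {category:8s} {count}")
```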

78 User interface and visualization service development for virtual organization support in high energy physics. S. Mitsyn (LIT), LHC project support. Grid monitoring deals with decentralized structures involving a large amount of data, and its proper representation is an essential part of the monitoring process. Google Earth offers a quite informative and visually attractive representation, mapping Grid infrastructure objects, processes and events onto a geographic map.

79 Portal egee-rdig.ru

80 Deletion service for ATLAS Distributed Data Management (DDM). The ATLAS Distributed Data Management project (DQ2) is responsible for the replication, access and bookkeeping of ATLAS data across more than 100 distributed grid sites. One of the most important services of DQ2 is the deletion service. This distributed service interacts with third-party grid middleware and the DQ2 catalogs to serve data deletion requests on the grid. Furthermore, it also takes care of retrial strategies, check-pointing transactions and performance throttling, ensuring DQ2's scalability and fault tolerance. Work on the deletion service includes support of the current version of the software and step-by-step development of a new version (more stable, with an extended monitoring system). Development work includes: building new interfaces between parts of the deletion service (based on web service technology), creating a new database schema, rebuilding the deletion service core, building extended interfaces with mass storage systems, and extending the deletion monitoring system.
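A minimal sketch of the retry-and-throttle pattern described above: each deletion request is retried a bounded number of times against the storage back end, and the loop pauses between bursts so the storage systems are not overloaded. The function, the limits and the replica names are hypothetical placeholders, not the DQ2 implementation.

```python
# Hedged sketch of a deletion worker with bounded retries and simple throttling.
# delete_replica() and the numeric limits are hypothetical placeholders,
# not the actual DQ2 deletion-service code.
import time

MAX_ATTEMPTS = 3       # retry each request a bounded number of times
BURST_SIZE = 100       # requests processed before pausing
PAUSE_SECONDS = 5      # throttle between bursts

def delete_replica(replica: str) -> bool:
    """Placeholder for the call into the grid storage middleware."""
    print(f"deleting {replica}")
    return True

def process(requests: list[str]) -> list[str]:
    failed = []
    for i, replica in enumerate(requests, start=1):
        for attempt in range(1, MAX_ATTEMPTS + 1):
            if delete_replica(replica):
                break
            time.sleep(attempt)        # back off a little more on each retry
        else:
            failed.append(replica)     # give up; leave it for the next cycle
        if i % BURST_SIZE == 0:
            time.sleep(PAUSE_SECONDS)  # throttle to protect the storage system
    return failed

leftovers = process(["lfn:user/file1", "lfn:user/file2"])
print("to retry later:", leftovers)
```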

81 Deletion service for ATLAS DDM: current progress. Created HTTP client-server components for the deletion service (in pre-production testing). Created the monitoring web site backend using AJAX technology and the Django framework (in pre-production testing), with data export in JSON format. Created a new database schema (including triggers, sequences, etc.). In progress: a migration scenario for the new DB schema and an extension of the monitoring system. In the nearest future: rebuilding the deletion service core. Completion of the main development work for the new version of the deletion service is expected, in the current roadmap, at the end of summer.

82 ATLAS Tier-3 status in Dubna. We plan to try PROOF/Xrootd to run ATLAS analysis. LNP analysis cluster; LIT analysis facility; the central JINR SE is used as data storage.

83 Remote ATLAS Control Room in Dubna. Motivation: monitoring of the detector at any time; remote participation of subsystem experts from Dubna in shifts and data quality checks; training the shifters before they come to CERN.

84 Production Normalised CPU time per EGEE Region (June 2009-May 2010) 84

85 Russia and JINR: normalized CPU time per site (June 2009 - May 2010).

86 Production: normalised CPU time per EGEE site for the LHC VOs (June 2009 - May 2010). Grid site and CPU time: FZK-LCG 41,936,...; RAL-LCG2 27,043,...; IN2P3-CC 24,462,...; GRIF 20,083,...; NIKHEF-ELPROD 17,162,...; DESY-HH 16,413,...; UKI-GLASGOW 16,404,...; INFN-T1 15,683,...; PIC 14,520,...; SARA-MATRIX 14,492,...; TRIUMF-LCG2 14,490,...; IN2P3-CC-T2 14,059,...; JINR 13,503,...; NDGF-T1 12,519,...

87 Development of the JINR Grid-environment Network level: links between Moscow and Dubna on the basis of state-of-the-art technologies DWDM and 10Gb Ethernet. JINR Local area network : JINR High-speed backbone construction 10Gbps Resource level: requirements of the LHC experiments stimulate the development of a global Grid-infrastructure, together with the resource centers of all the cooperating organizations. First of all, this is of primary concern for such large research centers as the JINR. To reach effective processing and analysis of the experimental data, further increase in the JINR CICC performance and disk space is needed. 87

88 Grid training and education: a distributed training infrastructure; gLite user training for students of Dubna University and the University Centre of JINR; grid site administrator training for the JINR Member States; a testbed for grid developers; a testbed for middleware evaluation; cooperation with GILDA.

89 Participation in the GridNNN project: Grid support for the Russian national nanotechnology network, to provide science and industry with effective access to distributed computational, informational and networking facilities. A breakthrough in nanotechnologies is expected; the project is supported by a special federal program. Main points: based on a network of supercomputers (about 15-30); has two grid operations centers (main and backup); is a set of grid services with a unified interface; partially based on Globus Toolkit 4.

90 GridNNN infrastructure 10 resource centers at the moment in different regions of Russia RRC KI, «Chebyshev» (MSU), IPCP RAS, CC FEB RAS, ICMM RAS, JINR, SINP MSU, PNPI, KNC RAS, SPbSU 90

91 JINR in GridNNN projects. Development activities: a monitoring and accounting system; a registration system for grid services and sites; a user support service. Application area: a virtual organization for molecular dynamics calculations; adaptation of some JINR applications to parallel execution in the GridNNN environment; infrastructure for application development and testing.

92 Grid cooperation with JINR Member States. Common projects and grants (Czech Republic, Slovakia, Germany, South Africa, Belarus, Bulgaria, Ukraine, Romania). Protocols and agreements (Armenia, Belarus, Bulgaria, Moldova, Poland, Czech Republic, Slovakia). Grid site administrator training (Ukraine, Romania, Uzbekistan, Azerbaijan). Courses, lectures and practical training for students and users (Egypt, Bulgaria). Consulting (Cuba, Georgia, Kazakhstan, Mongolia, Vietnam, Korea).

93 Frameworks for Grid cooperation of JINR: Worldwide LHC Computing Grid (WLCG); Enabling Grids for E-sciencE (EGEE); RDIG development (project of FASI); CERN-RFBR project "Grid Monitoring from VO perspective"; BMBF grant "Development of the Grid-infrastructure and tools to provide joint investigations performed with participation of JINR and German research centers"; "Development of Grid segment for the LHC experiments", supported in the framework of the JINR-South Africa cooperation agreement; NATO project "DREAMS-ASIA" (Development of grid EnAbling technology in Medicine & Science for Central ASIA); JINR - FZU AS Czech Republic project "The GRID infrastructure for the physics experiments"; NASU-RFBR project "Development and support of LIT JINR and NSC KIPT grid-infrastructures for distributed CMS (CERN) data processing during the first two years of the Large Hadron Collider operation"; project "Elaboration of distributed computing JINR-Armenia grid-infrastructure for carrying out mutual scientific investigations"; JINR-Romania cooperation within the Hulubei-Meshcheryakov programme; project "SKIF-GRID" (Programme of the Belarusian-Russian Union State); project GridNNN (National Nanotechnological Network).

94 Applied-level developments of the JINR Grid environment (I). 1. The applied level covers the user applications working in a virtual organization (VO) environment, which comprises both users and owners of computing resources. 2. In the existing Grid systems, a VO defines a collaboration of specialists in some area who combine their efforts to achieve a common aim. 3. The virtual organization is a flexible structure that can be formed dynamically and may have a limited life-time. The VOs working within the WLCG project are the VOs of the LHC experiments (ATLAS, CMS, ALICE, LHCb), the first three being carried out with noticeable and direct participation of JINR. Nowadays, as a Grid segment of the EGEE/RDIG, the JINR CICC supports computations of the virtual organizations registered in RDIG (the LHC experiments, BioMed, PHOTON, eEarth, Fusion, HONE, PANDA). In the future, as interest arises at a large-scale level, VOs can be organized at JINR in the fields of nuclear physics and condensed matter physics and, most probably, in the new promising direction related to research on nanostructure properties.

95 Applied-level developments of the JINR Grid environment (II). The creation of new VOs becomes possible and necessary with the maturation of the algorithmic approaches to problem solution and the development of the corresponding mathematical methods and tools: methods and tools for the simulation of physical processes and the analysis of experimental data; software and computer complexes for experimental data processing; numerical methods, algorithms and software for modeling complex physical systems; methods, algorithms and software of computer algebra; new computing paradigms; adaptation of specialized software for solving problems within the Grid environment.

96 4th International Conference "Distributed Computing and Grid-technologies in Science and Education", 28 June - 3 July.

97 Web portal "GRID AT JINR" (ГРИД в ОИЯИ): a new informational resource has been created at JINR, the web portal "GRID AT JINR". The content includes detailed information on the JINR grid site and JINR's participation in grid projects: GRID CONCEPT (Grid technologies, Grid projects, the RDIG consortium); JINR GRID SITE (infrastructure and services, scheme, statistics, support of VOs and experiments: ATLAS, CMS, CBM and PANDA, HONE; how to become a user); JINR IN GRID PROJECTS (WLCG, GridNNN, EGEE, RFBR projects, INTAS projects, SKIF-GRID); GRID SOFTWARE TESTING; JINR MEMBER STATES; MONITORING AND ACCOUNTING (RDIG monitoring, dCache monitoring, Dashboard, FTS monitoring, H1 MC monitoring); GRID CONFERENCES (GRID, NEC); TRAINING (training grid infrastructure, courses and lectures, training materials); DOCUMENTATION (articles, training materials); NEWS; CONTACTS.

98 Web portal "GRID AT JINR" (ГРИД в ОИЯИ)

99 Useful references: Grid Café; Open Grid Forum; Globus; TeraGrid; Open Science Grid: opensciencegrid.org; LCG: lcg.web.cern.ch/lcg/; EGEE; EGEE-RDIG: egee-rdig.ru; EGI: egi.eu; International Science Grid This Week.

100 The blind men and the elephant in the room: cyberinfrastructure, SaaS, SOA, shared infrastructure / shared services, Web 2.0, Grids, automation, virtualization.

101 Thank you for your attention! 101


Monte Carlo Production on the Grid by the H1 Collaboration Journal of Physics: Conference Series Monte Carlo Production on the Grid by the H1 Collaboration To cite this article: E Bystritskaya et al 2012 J. Phys.: Conf. Ser. 396 032067 Recent citations - Monitoring

More information

Federated Data Storage System Prototype based on dcache

Federated Data Storage System Prototype based on dcache Federated Data Storage System Prototype based on dcache Andrey Kiryanov, Alexei Klimentov, Artem Petrosyan, Andrey Zarochentsev on behalf of BigData lab @ NRC KI and Russian Federated Data Storage Project

More information

EUMETSAT EXPERIENCE WITH MULTICAST ACROSS GÉANT

EUMETSAT EXPERIENCE WITH MULTICAST ACROSS GÉANT 1 EUMETSAT EXPERIENCE WITH MULTICAST ACROSS GÉANT Lothar.Wolf@eumetsat.int Competence Area Manager for Data Services OVERVIEW EUMETSAT Background WAN links Multicast accross GÉANT infrastructure Summary

More information

The EU DataGrid Testbed

The EU DataGrid Testbed The EU DataGrid Testbed The European DataGrid Project Team http://www.eudatagrid.org DataGrid is a project funded by the European Union Grid Tutorial 4/3/2004 n 1 Contents User s Perspective of the Grid

More information

GÉANT Mission and Services

GÉANT Mission and Services GÉANT Mission and Services Vincenzo Capone Senior Technical Business Development Officer CREMLIN WP2 Workshop on Big Data Management 15 February 2017, Moscow GÉANT - Networks Manages research & education

More information

Easy Access to Grid Infrastructures

Easy Access to Grid Infrastructures Easy Access to Grid Infrastructures Dr. Harald Kornmayer (NEC Laboratories Europe) On behalf of the g-eclipse consortium WP11 Grid Workshop Grenoble, France 09 th of December 2008 Background in astro particle

More information

EuroHPC Bologna 23 Marzo Gabriella Scipione

EuroHPC Bologna 23 Marzo Gabriella Scipione EuroHPC Bologna 23 Marzo 2018 Gabriella Scipione g.scipione@cineca.it EuroHPC - Europe's journey to exascale HPC http://eurohpc.eu/ What EuroHPC is a joint collaboration between European countries and

More information

150 million sensors deliver data. 40 million times per second

150 million sensors deliver data. 40 million times per second CERN June 2007 View of the ATLAS detector (under construction) 150 million sensors deliver data 40 million times per second ATLAS distributed data management software, Don Quijote 2 (DQ2) ATLAS full trigger

More information

Computing for LHC in Germany

Computing for LHC in Germany 1 Computing for LHC in Germany Günter Quast Universität Karlsruhe (TH) Meeting with RECFA Berlin, October 5th 2007 WLCG Tier1 & Tier2 Additional resources for data analysis - HGF ''Physics at the Terascale''

More information

N. Marusov, I. Semenov

N. Marusov, I. Semenov GRID TECHNOLOGY FOR CONTROLLED FUSION: CONCEPTION OF THE UNIFIED CYBERSPACE AND ITER DATA MANAGEMENT N. Marusov, I. Semenov Project Center ITER (ITER Russian Domestic Agency N.Marusov@ITERRF.RU) Challenges

More information

Grid Computing at the IIHE

Grid Computing at the IIHE BNC 2016 Grid Computing at the IIHE The Interuniversity Institute for High Energies S. Amary, F. Blekman, A. Boukil, O. Devroede, S. Gérard, A. Ouchene, R. Rougny, S. Rugovac, P. Vanlaer, R. Vandenbroucke

More information

Challenges of Big Data Movement in support of the ESA Copernicus program and global research collaborations

Challenges of Big Data Movement in support of the ESA Copernicus program and global research collaborations APAN Cloud WG Challenges of Big Data Movement in support of the ESA Copernicus program and global research collaborations Lift off NCI and Copernicus The National Computational Infrastructure (NCI) in

More information

Introduction to FREE National Resources for Scientific Computing. Dana Brunson. Jeff Pummill

Introduction to FREE National Resources for Scientific Computing. Dana Brunson. Jeff Pummill Introduction to FREE National Resources for Scientific Computing Dana Brunson Oklahoma State University High Performance Computing Center Jeff Pummill University of Arkansas High Peformance Computing Center

More information

The creation of a Tier-1 Data Center for the ALICE experiment in the UNAM. Lukas Nellen ICN-UNAM

The creation of a Tier-1 Data Center for the ALICE experiment in the UNAM. Lukas Nellen ICN-UNAM The creation of a Tier-1 Data Center for the ALICE experiment in the UNAM Lukas Nellen ICN-UNAM lukas@nucleares.unam.mx 3rd BigData BigNetworks Conference Puerto Vallarta April 23, 2015 Who Am I? ALICE

More information

The Virtual Observatory and the IVOA

The Virtual Observatory and the IVOA The Virtual Observatory and the IVOA The Virtual Observatory Emergence of the Virtual Observatory concept by 2000 Concerns about the data avalanche, with in mind in particular very large surveys such as

More information

ISTITUTO NAZIONALE DI FISICA NUCLEARE

ISTITUTO NAZIONALE DI FISICA NUCLEARE ISTITUTO NAZIONALE DI FISICA NUCLEARE Sezione di Perugia INFN/TC-05/10 July 4, 2005 DESIGN, IMPLEMENTATION AND CONFIGURATION OF A GRID SITE WITH A PRIVATE NETWORK ARCHITECTURE Leonello Servoli 1,2!, Mirko

More information

EUMEDCONNECT3 and European R&E Developments

EUMEDCONNECT3 and European R&E Developments EUMEDCONNECT3 and European R&E Developments David West DANTE 17 September 2012 INTERNET2 Middle SIG, Abu Dhabi The Research and Education Network for the Mediterranean Covering GEANT Other regional network

More information

FREE SCIENTIFIC COMPUTING

FREE SCIENTIFIC COMPUTING Institute of Physics, Belgrade Scientific Computing Laboratory FREE SCIENTIFIC COMPUTING GRID COMPUTING Branimir Acković March 4, 2007 Petnica Science Center Overview 1/2 escience Brief History of UNIX

More information

EGI federated e-infrastructure, a building block for the Open Science Commons

EGI federated e-infrastructure, a building block for the Open Science Commons EGI federated e-infrastructure, a building block for the Open Science Commons Yannick LEGRÉ Director, EGI.eu www.egi.eu EGI-Engage is co-funded by the Horizon 2020 Framework Programme of the European Union

More information

The EPIKH, GILDA and GISELA Projects

The EPIKH, GILDA and GISELA Projects The EPIKH Project (Exchange Programme to advance e-infrastructure Know-How) The EPIKH, GILDA and GISELA Projects Antonio Calanducci INFN Catania (Consorzio COMETA) - UniCT Joint GISELA/EPIKH School for

More information

Grid Computing. MCSN - N. Tonellotto - Distributed Enabling Platforms

Grid Computing. MCSN - N. Tonellotto - Distributed Enabling Platforms Grid Computing 1 Resource sharing Elements of Grid Computing - Computers, data, storage, sensors, networks, - Sharing always conditional: issues of trust, policy, negotiation, payment, Coordinated problem

More information

Grid Scheduling Architectures with Globus

Grid Scheduling Architectures with Globus Grid Scheduling Architectures with Workshop on Scheduling WS 07 Cetraro, Italy July 28, 2007 Ignacio Martin Llorente Distributed Systems Architecture Group Universidad Complutense de Madrid 1/38 Contents

More information

The National Research and Education Network. Problems and Solutions

The National Research and Education Network. Problems and Solutions The National Research and Education Network. Problems and Solutions Vladimir Sahakyan Director of the Institute for Informatics and Automation Problems of the National Academy of Sciences of the Republic

More information

Grid Computing Activities at KIT

Grid Computing Activities at KIT Grid Computing Activities at KIT Meeting between NCP and KIT, 21.09.2015 Manuel Giffels Karlsruhe Institute of Technology Institute of Experimental Nuclear Physics & Steinbuch Center for Computing Courtesy

More information

High Performance Computing Course Notes Grid Computing I

High Performance Computing Course Notes Grid Computing I High Performance Computing Course Notes 2008-2009 2009 Grid Computing I Resource Demands Even as computer power, data storage, and communication continue to improve exponentially, resource capacities are

More information

Grid Interoperation and Regional Collaboration

Grid Interoperation and Regional Collaboration Grid Interoperation and Regional Collaboration Eric Yen ASGC Academia Sinica Taiwan 23 Jan. 2006 Dreams of Grid Computing Global collaboration across administrative domains by sharing of people, resources,

More information

JINR cloud infrastructure

JINR cloud infrastructure Procedia Computer Science Volume 66, 2015, Pages 574 583 YSC 2015. 4th International Young Scientists Conference on Computational Science V.V. Korenkov 1,2, N.A. Kutovskiy 1,2, N.A. Balashov 2, A.V. Baranov

More information

g-eclipse A Framework for Accessing Grid Infrastructures Nicholas Loulloudes Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac.

g-eclipse A Framework for Accessing Grid Infrastructures Nicholas Loulloudes Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac. g-eclipse A Framework for Accessing Grid Infrastructures Trainer, University of Cyprus (loulloudes.n_at_cs.ucy.ac.cy) EGEE Training the Trainers May 6 th, 2009 Outline Grid Reality The Problem g-eclipse

More information

e-infrastructures in FP7: Call 7 (WP 2010)

e-infrastructures in FP7: Call 7 (WP 2010) e-infrastructures in FP7: Call 7 (WP 2010) Call 7 Preliminary information on the call for proposals FP7-INFRASTRUCTURES-2010-2 (Call 7) subject to approval of the Research Infrastructures WP 2010 About

More information

National R&E Networks: Engines for innovation in research

National R&E Networks: Engines for innovation in research National R&E Networks: Engines for innovation in research Erik-Jan Bos EGI Technical Forum 2010 Amsterdam, The Netherlands September 15, 2010 Erik-Jan Bos - Chief Technology Officer at Dutch NREN SURFnet

More information

The glite middleware. Ariel Garcia KIT

The glite middleware. Ariel Garcia KIT The glite middleware Ariel Garcia KIT Overview Background The glite subsystems overview Security Information system Job management Data management Some (my) answers to your questions and random rumblings

More information