Grid Computing: dealing with GB/s dataflows
Jan Just Keijser, Nikhef (janjust@nikhef.nl)
David Groep, Nikhef
3 May 2012
Graphics: Real Time Monitor, Gidon Moont, Imperial College London, see http://gridportal.hep.ph.ic.ac.uk/rtm/
LHC Computing
The Large Hadron Collider: the world's largest microscope, looking at the fundamental forces of nature at scales down to 10^-15 m (from atoms to nuclei to quarks).
27 km circumference, CERN, Genève.
~20 PByte of data per year, ~60,000 modern PC-style computers.
ATLAS Trigger Design
Level 1: hardware based, online; accepts 75 kHz, latency 2.5 μs; output 160 GB/s.
Level 2: 500-processor farm; accepts 2 kHz, latency 10 ms; output 5 GB/s.
Event Filter: 1600-processor farm; accepts 200 Hz, ~1 s per event; incorporates alignment and calibration; output 300 MB/s.
From: The ATLAS trigger system, Srivas Prasad
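As a quick sanity check on the cascade above (simple arithmetic from the figures on this slide, not a quoted result):

\[
\frac{75\ \mathrm{kHz}}{200\ \mathrm{Hz}} \approx 375,
\qquad
\frac{160\ \mathrm{GB/s}}{300\ \mathrm{MB/s}} \approx 530,
\]

i.e. roughly one in every 375 Level-1-accepted events, and one in every 530 bytes, survives to permanent storage.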
Data volume: (high rate) × (large number of channels) × (4 experiments) = 20 PetaBytes of new data per year.
Signal/background ratio: 10^-9.
Compute power: (event complexity) × (number of events) × (thousands of users) = 60,000 processors.
[Figure: a stack of CDs holding one year of LHC data would be ~20 km tall, compared with a balloon at 30 km, Concorde at 15 km, and Mt. Blanc at 4.8 km]
Scientific Compute e-infrastructure
Task parallelism (also known as function parallelism or control parallelism) is a form of parallelization of computer code across multiple processors. It focuses on distributing execution processes (threads) across different parallel computing nodes.
Data parallelism (also known as loop-level parallelism) is the complementary form: it focuses on distributing the data across different parallel computing nodes.
From: Key characteristics of SARA and BiG Grid Compute services
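A minimal sketch contrasting the two styles, using only Python's standard library. The functions (simulate_detector, fit_tracks, reconstruct) and their inputs are hypothetical placeholders, not part of any grid middleware:

```python
from multiprocessing import Pool

def simulate_detector(seed):      # one of several *different* tasks
    return sum((seed * i) % 7 for i in range(1000))

def fit_tracks(seed):             # another, independent task
    return max((seed + i) % 13 for i in range(1000))

def reconstruct(event):           # the *same* task applied to each datum
    return (event * event) % 997

if __name__ == "__main__":
    with Pool(4) as pool:
        # Task parallelism: different functions run concurrently.
        r1 = pool.apply_async(simulate_detector, (42,))
        r2 = pool.apply_async(fit_tracks, (42,))
        print(r1.get(), r2.get())

        # Data parallelism: one function mapped over many data items.
        print(pool.map(reconstruct, range(10)))
```

Grid jobs are usually of the second kind: the same executable mapped over many independent chunks of data.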
What is BiG Grid?
A collaborative effort of NBIC, NCF and Nikhef that aims to set up a grid infrastructure for scientific research.
This research infrastructure combines compute clusters and data storage with specific middleware and software, to enable research that needs more than just raw computing power or data storage.
We aim to assist scientists from all backgrounds in exploring and using the opportunities offered by the Dutch e-science grid.
http://www.biggrid.nl
Site                  Processor cores  Disk        Tape        Network
Nikhef (NDPF)         3336             1600 TByte  -           160 Gbps
SARA (GINA+LISA)      3000             1800 TByte  2000 TByte  160 Gbps
RUG-CIT (Grid)        1600             100 TByte   -           1 Gbps
Philips Research Ehv  400              8800 GByte  -           10 Gbps
Virtual Laboratory for e-Science
Data integration for genomics, proteomics, etc. analysis: Timo Breit et al., Swammerdam Institute for Life Sciences
Medical Imaging & fMRI: Silvia Olabarriaga et al., AMC and UvA IvI
Avian Alert & FlySafe: Willem Bouten et al., UvA Institute for Biodiversity and Ecosystem Dynamics (IBED)
Molecular Cell Biology & 3D Electron Microscopy: Bram Koster et al., LUMC Microscopic Imaging group
Image sources: VL-e Consortium Partners
BiG Grid
SCIAMACHY: Wim Som de Cerff et al., KNMI
Psycholinguistics: MPI Nijmegen
Image sources: BiG Grid Consortium Partners
BiG Grid
Computational Chemistry: Leiden Grid Initiative
LOFAR: LOw Frequency ARray radio telescope
Image sources: BiG Grid Consortium Partners
Grid organisation: National Grid Initiatives & the European Grid Initiative
At the national level, a grid infrastructure is offered to national and international users by the NGIs; BiG Grid is (de facto) the Dutch NGI.
The European Grid Initiative coordinates the efforts of the different NGIs and ensures interoperability.
Circa 40 European NGIs, with links to South America and Taiwan.
The headquarters of EGI is at the Science Park in Amsterdam.
Cross-domain and global e-science grids
The communities that make up the grid are: not under a single hierarchical control; temporarily joining forces to solve a particular problem at hand; bringing to the collaboration a subset of their resources; sharing those at their discretion, each under their own conditions.
Challenges: scaling up
Grid especially means scaling up:
- distributed computing on many, different computers,
- distributed storage of data,
- large amounts of data (giga-, tera-, petabytes),
- large numbers of files (millions).
This gives rise to interesting problems:
- remote logins are not always possible on the grid,
- debugging a program is a challenge,
- regular filesystems tend to choke on millions of files (one common workaround is sketched below),
- storing data is one thing; searching and retrieving turn out to be even bigger challenges.
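A minimal sketch (not BiG Grid code) of the usual workaround for the millions-of-files problem: spread files over hashed subdirectories so that no single directory grows unboundedly. The root path and file name are hypothetical:

```python
import hashlib
from pathlib import Path

def sharded_path(root: str, filename: str, levels: int = 2) -> Path:
    """Map a flat filename to root/ab/cd/filename using its hash."""
    digest = hashlib.sha1(filename.encode()).hexdigest()
    parts = [digest[2 * i : 2 * i + 2] for i in range(levels)]
    return Path(root, *parts, filename)

# Two hash levels of 256 entries each spread files over 65,536
# directories; call .parent.mkdir(parents=True, exist_ok=True)
# before actually writing the file.
print(sharded_path("/data/events", "run00042_event7.root"))
```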
Challenges: security
Why is security so important for an e-science infrastructure?
- e-science communities are not under a single hierarchical control;
- as a grid site administrator, you are allowing relatively unknown persons to run programs on your computers;
- all of these computers are connected to the internet by an incredibly fast network.
This makes the grid a potentially very dangerous service on the internet.
Lessons Learned: Data Management
Storing petabytes of data is possible, but...
- retrieving data is harder than you would expect;
- organising such amounts of data is non-trivial;
- applications are much smaller than the data they need to process: always bring your application to the data, if possible;
- the data about the data (metadata) becomes crucial: location, experimental conditions, date and time.
Storing the metadata in a database can be a life-saver (see the sketch below).
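A minimal sketch of such a metadata catalogue, with a hypothetical schema (this is not the actual BiG Grid catalogue): query the small metadata database first, then fetch only the matching files from bulk storage.

```python
import sqlite3

con = sqlite3.connect("metadata.db")
con.execute("""CREATE TABLE IF NOT EXISTS files (
    lfn TEXT PRIMARY KEY,   -- logical file name
    replica_url TEXT,       -- where a copy actually lives
    run INTEGER,
    taken_at TEXT,          -- date/time of data taking
    conditions TEXT)""")
con.execute("INSERT OR REPLACE INTO files VALUES (?,?,?,?,?)",
            ("run00042.root", "srm://se.example.org/run00042.root",
             42, "2012-05-03T12:00:00", "magnet=on"))
con.commit()

# Find data by experimental conditions instead of scanning storage.
for lfn, url in con.execute(
        "SELECT lfn, replica_url FROM files WHERE run=? AND conditions=?",
        (42, "magnet=on")):
    print(lfn, "->", url)
```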
Lessons Learned: Job efficiency
A recurring complaint about grid computing is its low job efficiency (~94%). It is important to know that:
- failed jobs almost always fail due to data access issues;
- if you remove the data access issues, job efficiency jumps to ~99%, which is on par with cluster and cloud computing.
Mitigation strategies:
- replicate files to multiple storage systems;
- pre-stage data to specific compute sites;
- program for failure (see the sketch below).
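A minimal "program for failure" sketch: try each replica of an input file in turn instead of aborting on the first storage error. The replica URLs and the fetch() function are hypothetical placeholders for whatever data-access tool a job actually uses.

```python
import random
import time

REPLICAS = ["https://se1.example.org/run00042.root",
            "https://se2.example.org/run00042.root",
            "https://se3.example.org/run00042.root"]

def fetch(url: str) -> bytes:
    # Stand-in for a real transfer; randomly fails to simulate a
    # flaky storage element.
    if random.random() < 0.3:
        raise IOError(f"timeout reading {url}")
    return b"event data"

def fetch_any(replicas, attempts_per_replica=2):
    """Return the first replica that can be read, retrying each one."""
    for url in replicas:
        for _ in range(attempts_per_replica):
            try:
                return fetch(url)
            except IOError as err:
                print("retrying after:", err)
                time.sleep(1)   # back off before the next attempt
    raise IOError("all replicas failed")

print(len(fetch_any(REPLICAS)), "bytes read")
```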
Lessons Learned: Network bandwidth
All data taken by the LHC at CERN is replicated out to 11 Tier-1 centres around the world; BiG Grid serves as one of those Tier-1s.
We always knew we had a good network, but:
- having a dedicated optical private network (OPN) from CERN to the data storage centres (Tier-1s) turned out to be crucial;
- it turns out that the network bandwidth between the storage and compute clusters is equally important.
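A back-of-the-envelope check, using the ATLAS Event Filter figure from earlier (the roughly 10 Gbit/s per-link capacity is the LHCOPN design figure, not from this slide):

\[
300\ \mathrm{MB/s} \times 8\ \mathrm{bit/byte} = 2.4\ \mathrm{Gbit/s}
\]

sustained, from one experiment's event filter alone. Add the other experiments, reprocessing passes and replica traffic, and a shared academic network is quickly saturated, hence dedicated ~10 Gbit/s optical paths to each Tier-1.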
Questions? http://www.nikhef.nl