Data Processing in HEP: ROOT

1 Data Processing in HEP: ROOT. Future Trends in Nuclear Physics Computing, March, Jefferson Lab. Pere Mato/CERN

2 Outline: HEP Data Processing Software and Computing; LHC Computing Grid and the LHC software stack; ROOT, a modular scientific software framework; main functionalities; what's new in ROOT 6; development beyond ROOT 6; collaborations and the HEP Software Foundation; conclusions.

3 Offline Data Processing. [Data-flow diagram: raw data from the detector passes the event filter (selection & reconstruction) and is stored permanently (>15 PB/year); event reconstruction produces the Event Summary Data (ESD, the processed data); batch physics analysis extracts Analysis Object Data (AOD) by physics topic, which feed individual physics analysis; event simulation provides the corresponding simulated samples.]

4 Big Data requires Big Computing. The LHC experiments rely on distributed computing resources: WLCG, a global solution based on Grid technologies/middleware, distributing the data for processing, user access, local analysis facilities etc.; at the time of its inception it was envisaged as the seed for global adoption of these technologies. Tiered structure: Tier-0 at CERN is the central facility for data processing and archival; 11 Tier-1s are big computing centers with a high quality of service, used for the most complex/intensive processing operations and for archival; ~140 Tier-2s are computing centers across the world, used primarily for data analysis and simulation. Capacity: ~350,000 CPU cores, ~200 PB of disk space, ~200 PB of tape space.

5 A Success Story. [Slide showing an H→γγ candidate event and the S/B-weighted mass distribution: the sum of the mass distributions for each event class, weighted by S/B, where B is the integral of the background model over a constant signal-fraction interval. From J. Incandela for the CMS Collaboration, "The Status of the Higgs Search", July 4th 2012.]

6 The LHC Software Stack. Each experiment maintains its own application software: a total of ~50M lines of code, mainly written in C++, by a few hundred part-time developers, mostly PhD students; more a bazaar than a cathedral. Software structure: common packages and infrastructure, i.e. fundamental common software products and libraries (data storage, histograms, simulation toolkits etc.); a consistent stack of external software packages (GSL, Minuit, Qt, Python, ~100 in total); development infrastructure, procedures and tools. [Diagram: experiment applications built on an experiment framework (event model, detector description, calibration, simulation, data management, distributed analysis), resting on core libraries and non-HEP-specific software packages.]

7 Software Components: Foundation Libraries (basic types, utility libraries, system isolation libraries); Mathematical Libraries (special functions, minimization, random numbers); Data Organization (event data, event metadata/event collections, detector conditions data); Data Management Tools (object persistency, data distribution and replication); Simulation Toolkits (event generators, detector simulation); Statistical Analysis Tools (histograms, N-tuples, fitting); Interactivity and User Interfaces (GUI, scripting, interactive analysis); Data Visualization and Graphics (event and geometry displays); Distributed Applications (parallel processing, grid computing).

8 HEP Software Frameworks. HEP experiments develop software frameworks: the general architecture of the event-processing applications, used to achieve coherency, to facilitate software re-use, and to hide technical details from the end-user physicists (the providers of the algorithms). Applications are developed by customizing the framework: by composing elemental algorithms into complete applications, and by using third-party components wherever possible and configuring them. An example is the Gaudi framework, used by ATLAS and LHCb among others. [Architecture diagram: an Application Manager steering Algorithms together with services (Message Service, JobOptions Service, Particle Properties Service, Event Selector, Event/Detector/Histogram Data Services with their transient stores) and Converters plus Persistency Services reading and writing the data files.]
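To make the composition idea concrete, here is a minimal, purely illustrative C++ sketch of a framework that runs a configurable sequence of algorithms over events. This is not the Gaudi API; all class and method names (AppManager, Algorithm, TrackFinder, VertexFitter) are invented for the illustration.

   #include <iostream>
   #include <memory>
   #include <vector>

   // Hypothetical event type standing in for the experiment's transient event store.
   struct Event { int number; };

   // Elemental unit of work; concrete algorithms override execute().
   class Algorithm {
   public:
      virtual ~Algorithm() = default;
      virtual void execute(Event& evt) = 0;
   };

   class TrackFinder : public Algorithm {
   public:
      void execute(Event& evt) override { std::cout << "tracking event " << evt.number << "\n"; }
   };

   class VertexFitter : public Algorithm {
   public:
      void execute(Event& evt) override { std::cout << "vertexing event " << evt.number << "\n"; }
   };

   // The "framework": owns the configured sequence and drives the event loop.
   class AppManager {
      std::vector<std::unique_ptr<Algorithm>> sequence_;
   public:
      void add(std::unique_ptr<Algorithm> alg) { sequence_.push_back(std::move(alg)); }
      void run(int nEvents) {
         for (int i = 0; i < nEvents; ++i) {
            Event evt{i};
            for (auto& alg : sequence_) alg->execute(evt);   // composition of algorithms
         }
      }
   };

   int main() {
      AppManager app;
      app.add(std::make_unique<TrackFinder>());
      app.add(std::make_unique<VertexFitter>());
      app.run(3);   // the "application" is just the configured sequence run over events
   }

The point of the sketch is only the design choice: the application is assembled by configuring a sequence of small algorithms rather than by writing a monolithic program.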

9 ROOT. At the root of the experiments; the project started in 1995. Open Source project (LGPL), mainly written in C++; 4 MLOC. ROOT provides (amongst other things): a C++ interpreter and Python bindings; an efficient data storage mechanism; advanced statistical analysis algorithms (histogramming, fitting, minimization, statistical methods); multivariate analysis and machine learning methods; scientific visualization (2D/3D graphics, PDF, LaTeX); a geometrical modeller; the PROOF parallel query engine.

10 ROOT in Numbers. An ever-increasing number of users: 9336 forum members, 1300 on the mailing list. Used by basically all HEP experiments and beyond. As of today 177 PB of LHC data are stored in ROOT format (ALICE: 30 PB, ATLAS: 55 PB, CMS: 85 PB, LHCb: 7 PB).

11 ROOT in Plots

12 ROOT Object Persistency: scalable, efficient, machine independent, orthogonal to the data model. Based on object serialization to a buffer. Object versioning; automatic schema evolution (backward and forward compatibility); compression; easily tunable granularity and clustering. Remote access: XROOTD, HTTP, HDFS, Amazon S3, ... Self-describing file format (stores reflection information). Used to store all LHC data (basically all HEP data).
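As a minimal illustration of the persistency layer (a sketch using standard ROOT classes; the file and object names are invented for the example), an object can be written to and read back from a ROOT file like this:

   // write_read.C -- run with: root -l -q write_read.C
   void write_read() {
      // Write: any object with a dictionary (here a histogram) can be serialized.
      TFile fout("example.root", "RECREATE");
      TH1F h("h", "example histogram;x;entries", 100, -4, 4);
      h.FillRandom("gaus", 10000);
      h.Write();              // serialize the histogram into the file
      fout.Close();

      // Read back: the file is self-describing, so the object is retrieved by name.
      TFile fin("example.root");
      TH1F *hback = nullptr;
      fin.GetObject("h", hback);
      if (hback) printf("read back %d entries\n", (int)hback->GetEntries());
   }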

13 Object Containers - TTree. A column-wise container for a very large number of objects of the same type (events), with a minimum amount of overhead per entry; typically not fitting in memory! Objects can be clustered per sub-object or even per single attribute (the clusters are called branches). Each branch can be read individually; a branch is a column. Physicists perform their final data analysis by processing large TTrees.
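A minimal sketch of filling a TTree and then reading back a single branch (standard ROOT calls; the tree, branch and file names are made up for the example):

   // tree_example.C -- run with: root -l -q tree_example.C
   void tree_example() {
      // Fill a tree with two branches (two "columns").
      TFile f("events.root", "RECREATE");
      TTree t("Events", "toy events");
      float px = 0, py = 0;
      t.Branch("px", &px);
      t.Branch("py", &py);
      for (int i = 0; i < 100000; ++i) {
         px = gRandom->Gaus();
         py = gRandom->Gaus();
         t.Fill();
      }
      t.Write();
      f.Close();

      // Read back only the "px" branch: the other column is never touched on disk.
      TFile fin("events.root");
      TTree *tin = nullptr;
      fin.GetObject("Events", tin);
      tin->SetBranchStatus("*", 0);    // disable all branches ...
      tin->SetBranchStatus("px", 1);   // ... and re-enable only the one we need
      float pxin = 0;
      tin->SetBranchAddress("px", &pxin);
      double sum = 0;
      for (Long64_t i = 0; i < tin->GetEntries(); ++i) { tin->GetEntry(i); sum += pxin; }
      printf("mean px = %g\n", sum / tin->GetEntries());
   }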

14 EVE Event Display

15 PROOF, the Parallel Query Engine. A system for running ROOT queries in parallel on a large number of distributed computers or many-core machines. PROOF is designed to be a transparent, scalable and adaptable extension of the local interactive ROOT analysis session. For optimal CPU load it needs fast data access (SSD, disk, network), as queries are often I/O bound. The packetizer is the heart of the system: it runs on the client/master and hands out work to the workers; it takes data locality and storage type into account, tries to avoid storage-device overload, and makes sure all workers end at the same time.

16 Various Flavors of PROOF. PROOF-Lite (optimized for single many-core machines): zero-configuration setup (no config files and no daemons); workers are processes rather than threads, for added robustness; once your analysis runs on PROOF-Lite it will also run on PROOF, since it works with exactly the same user code (see the sketch below). Dedicated PROOF Analysis Facilities (multi-user): a cluster of dedicated physical nodes, e.g. a department cluster or an experiment-dedicated farm, with some local storage, sandboxing, basic scheduling and basic monitoring. PROOF on Demand (single-user): create a temporary dedicated PROOF cluster on batch resources (Grid or Cloud); uses a resource management system to start the daemons; each user gets a private cluster; sandboxing, daemons in single-user mode (robustness).
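For illustration, a minimal sketch of submitting a query to PROOF-Lite from a ROOT session (standard TProof/TChain calls; the dataset and selector names are placeholders):

   // Open a PROOF-Lite session on the local machine (one worker per core by default).
   TProof *proof = TProof::Open("lite://");

   // Build a chain of files containing the "Events" tree and attach it to PROOF.
   TChain chain("Events");
   chain.Add("data/run*.root");
   chain.SetProof();

   // Process the chain with a TSelector; the packetizer distributes the entries
   // among the workers and the outputs are merged at the end.
   chain.Process("MySelector.C+");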

17 Current Version: ROOT 6. An evolution of ROOT 5: like before, but better; old functionalities preserved, new ones added. ROOT 6 and ROOT 5 are compatible: old ROOT files are readable with version 6, new ROOT files are readable with version 5, and old macros can be executed with ROOT 6 (if written in proper C++, see the next slides).

18 CLING replaces CINT: a radical change at the core of ROOT. Based on the LLVM and Clang libraries: piggy-back on a production-quality compiler rather than using an old C parser; future-safe, since Clang is an actively developed C++ compiler. Full support for C++11/14 with carefully selected extensions; the script syntax is much stricter (proper C++). Uses a C++ just-in-time compiler (JIT). It is itself a C++11 package (e.g. it needs at least gcc 4.8 to build). Support for more architectures (ARM64, PowerPC64).

19 A Proper C++ Interpreter (C++11 at the ROOT prompt):

   root [0] auto vars = gROOT->GetListOfGlobals()
   (class TCollection *) 0x7f8df2b36690
   root [1] for (auto && var : *vars) cout << var->GetName() << endl;
   gROOT
   gPad
   gInterpreter
   ...

   root [0] auto f = [](int a){ return a*2; }
   (class (lambda at ROOT_prompt_0:1:8))
   root [1] f(2)
   (int) 4

   root [2] #include <random>
   root [3] #include <functional>
   root [4] std::mt19937_64 myengine;
   root [5] std::normal_distribution<float> mydistr(125., 12.);
   root [6] auto mygaussian = std::bind(mydistr, myengine);
   root [7] mygaussian()
   (typename __bind_return<_Fd, _Td, tuple<> >::type) ...

20 PyROOT. The ROOT Python extension module (PyROOT) allows users to interact with any C++ class from Python, generically and without the need to develop specific bindings: it maps C++ constructs to their Python equivalents and gives access to the whole Python ecosystem.

   # Example: displaying a ROOT histogram from Python
   from ROOT import gRandom, TCanvas, TH1F
   c1 = TCanvas('c1', 'Example', 200, 10, 700, 500)
   hpx = TH1F('hpx', 'px', 100, -4, 4)
   for i in xrange(25000):
       px = gRandom.Gaus()
       hpx.Fill(px)
   hpx.Draw()

21 Python without Dictionaries.

   A.h:

   #include <iostream>
   #include <string>

   class A {
   public:
      A(const char* n) : m_name(n) {}
      void printname() { std::cout << m_name << std::endl; }
   private:
      const std::string m_name;
   };

   int dummy() { return 42; }

   Python session:

   >>> import ROOT
   >>> ROOT.gInterpreter.ProcessLine('#include "A.h"')
   0L
   >>> a = ROOT.A('my name')
   >>> a.printname()
   my name
   >>> ROOT.dummy()
   42

Great potential with many 3rd-party libraries!

22 Exploiting the JIT: the new TFormula. An example of using the new JIT capability of LLVM/Clang: expressions are pre-parsed (as before) but now compiled with a real compiler; I/O backward compatibility has been preserved.

   auto f = new TFormula("F", "[0]+[1]*x");
   auto f = new TFormula("F", "[](double *x, double *p){ return p[0]+p[1]*x[0]; }", 1, 2);
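Once constructed, either form of the formula can be parameterized and evaluated; a minimal usage sketch with arbitrary parameter values (standard TFormula calls):

   f->SetParameters(1.0, 2.0);   // set [0]/p[0] and [1]/p[1]
   double y = f->Eval(3.0);      // 1.0 + 2.0*3.0 = 7.0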

23 ROOT-R: an interface to R functions from within ROOT. Wrapper classes for R data objects (e.g. TRDataFrame); R plugins for minimization; R plugins for TMVA, wrapping decision-tree libraries (xgboost), neural network and support vector machine packages.

   template<class T> T fun(T x) { return x + x; }

   auto &r = TRInterface::Instance();
   r["fun"] << fun<TVectorD>;
   r << "print(fun(c(0.5, 1, 1.5, 2, 2.5)))";

24 MultiProc package. A new lightweight framework for multi-process applications, inspired by the Python multiprocessing module; the idea is to re-implement PROOF-Lite using it. Work is distributed to a number of fork()'d workers (defaulting to the number of cores) and the results are collected into an std::vector or a TObjArray. The main advantage: the workers have access to the complete master state. The function can be a C/C++ function, a loaded macro, an std::function or a lambda; the arguments can be an std::container, an initializer list, a TCollection& or an unsigned N:

   TPool pool(8);
   auto result = pool.Map(fun, args);
   auto result = pool.MapReduce(fun, args, redfun);   // redfun: the reduce function

25 Build and Testing System. ROOT uses the CMake cross-platform build-generator tool as its primary build system: native Windows builds and support for many build tools (GNU make, Ninja, Visual Studio, Xcode, etc.); see the instructions on the ROOT web site. The classic configure/make will still be maintained, but it will not be upgraded with new functionality, platforms or modules. The unit and integration tests (~1400) have been migrated to CTest, and binary installations are packaged with CPack. Nightly and continuous-integration builds are automated and scheduled with Jenkins, as are all the release procedures.

26 ROOTbooks. The integration of ROOT with the Jupyter notebook technology (ROOTbooks) is well advanced: ideal for training material and a possible way to document and share analyses. There is still some work to do in the following areas: disseminate the use of ROOTbooks, make the JS visualization the default, add R. Take a tour with Binder (no account needed).
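As an illustration, a cell in a ROOTbook using the ROOT C++ kernel could look like the sketch below (the object names are invented; the canvas is rendered inline when drawn):

   // a single notebook cell (ROOT C++ kernel)
   TCanvas c("c", "demo");
   TH1F h("h", "A histogram;x;entries", 64, -4, 4);
   h.FillRandom("gaus", 10000);
   h.Draw();
   c.Draw();   // produces the inline (static or JSROOT) rendering of the canvas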

27 Development Beyond ROOT 6

28 Main Challenges. ROOT is 20 years old, and some parts require re-engineering and modernization: the need to exploit modern hardware (many-core, GPU, etc.) to boost performance; modernize the implementations (C++11 constructs, use of existing libraries, etc.); modernize the C++ API to improve robustness, eventually giving up on backward/forward compatibility. This requires the collaboration of the community to ensure evolution and sustainability: facilitate contributions to ROOT without committing the core team to their maintenance and user support, via layered software modules or plugins that can bring new functionality to the end-users.

29 Development Main Directions. The Cling interpreter and its full exploitation: C++11/14 and JIT compilation open many possibilities (automatic differentiation, improved interactivity, etc.). Modern C++ and Python interfaces: make ROOT simpler and more robust; explore better C++ interfaces making use of the new standards (C++14, C++17). Machine learning: modernize and improve TMVA with a new and more flexible design and the integration of new methods. Parallelization: seek any opportunity in ROOT to do things in parallel, to better exploit the new hardware (e.g. ntuple processing, I/O, fitting, etc.).

30 Development Main Directions (2). Packaging and modularization: incorporate third-party packages easily (e.g. VecGeom in TGeo); build/install modules and plugins on demand; make it easier for contributors to provide new functionality. Re-thinking the user interface: new ways to provide thin-client, web-based user interfaces; JavaScript and ROOTbooks (Jupyter notebooks). ROOT as-a-service: a thin client plugged directly into a ROOT supercomputing cloud, computing answers quickly, efficiently, and without scalding your lap.

31 ROOT as-a-service naturally combines the work on parallelization, to exploit many cores and nodes, with the new web-based interface, to provide a modern and satisfying user experience. Built on top of cloud security, storage and compute services. Working towards a pilot service able to serve requests from a medium number of users (O(100)), implemented using the CERN palette of IT services. See for example the presentation at CS3, ETH Zurich.

32 ROOT Versions. ROOT 5 remains the legacy production version and is frozen except for critical bug fixes. 6.02 was the first functionally complete ROOT 6 version, since superseded by the releases used by the LHC experiments in production. The current production release came out at the beginning of December, with a /02 patch as the latest patch release, and 6.07/04 is the development release for testing the latest features.

33 Collaborations. ROOT currently has many external contributors: in 2015 we had commits from 57 different authors, and the "Collaborate with us" and "Ideas" web pages are active. Other projects are interested in collaborating with ROOT, for example DIANA (Data Intensive ANAlysis), a 4-year NSF-funded project focused on analysis software, including ROOT and its ecosystem, with three primary goals: performance, interoperability and support for collaborative analysis. Other initiatives are in the pipeline; these collaborations need to be integrated into a coherent program of work.

34 HEP Software Foundation Goals: share expertise and raise awareness of existing software and solutions; catalyze new common projects and promote commonality; aid in creating, discovering, using and sustaining common software; support training and career development for software and computing; provide a framework for attracting effort and support to common projects; provide a structure to set priorities and goals for the work; facilitate wider connections: while it is a HEP community effort, it should be open enough to form the basis for collaboration with other sciences.

35 Activities and Working Groups (working group: objectives; mailing list):
Forum: communication and information exchange; address communication issues and build the knowledge base; technical notes (hep-sf-tech-forum).
Training: organization of training and education, learning from similar initiatives (hep-sf-training-wg).
Software Packaging: package building and deployment, runtime and virtual environments (hep-sf-packaging-wg).
Software Licensing: recommendation for HSF licence(s) (hep-sf-tech-forum).
Software Projects: define incubator and other project membership or association levels; developing templates (hep-sf-tech-forum).
Development tools and services: access to build, test and integration services and development tools (hep-sf-tech-forum).
HSF Workshop, Paris, May 2-4.

36 Conclusion. ROOT is one of the main ingredients in the software of each HEP experiment; development started in 1995 and it is being improved continuously. Data storage, statistical data analysis and scientific graphics are the main features. It is used in many other scientific fields, like astrophysics and biology, but also in finance. The development ideas follow these axes: modernization of the C++ & Python APIs, machine learning, parallelization, packaging, web interfaces and ROOT as a service. Collaborations and contributions are most welcome. ROOT is a major player in the HEP Software Foundation.
