Running Jobs in the Vacuum


A. McNab 1, F. Stagni 2 and M. Ubeda Garcia 2

1 School of Physics and Astronomy, University of Manchester, UK
2 CERN, Switzerland

E-mail: andrew.mcnab@cern.ch, fstagni@cern.ch, mario.ubeda.garcia@cern.ch

Abstract. We present a model for the operation of computing nodes at a site using Virtual Machines (VMs), in which VMs are created and contextualized for experiments by the site itself. To the experiment, these VMs appear to be produced spontaneously "in the vacuum", rather than having to be requested individually from the site. This model takes advantage of the pilot job frameworks already adopted by many experiments. In the Vacuum model, the contextualization process starts a job agent within the VM and real jobs are fetched from the central task queue as normal. An implementation of the Vacuum scheme, Vac, is presented, in which a VM factory runs on each physical worker node to create and contextualize its set of VMs. With this system, each node's VM factory decides which experiments' VMs to run, based on site-wide target shares and on a peer-to-peer protocol in which the site's VM factories query each other to discover which VM types they are running. A property of this system is that there is no gatekeeper service, head node, or batch system accepting and then directing jobs to particular worker nodes, which avoids several central points of failure. Finally, we describe tests of the Vac system using jobs from the central LHCb task queue, using the same contextualization procedure for VMs that LHCb developed for Clouds.

1. Introduction
There is now considerable interest within the scientific grid and HEP communities in running jobs in virtualized environments on Cloud services [1, 2, 3]. Virtualized environments provide several attractive features for sites and experiments, particularly by decoupling the operating system versions of the bare metal run by the site from the environment seen by the experiment software. This shifts responsibility for supporting the experiments' operating system requirements from the site to the experiments' own operations teams, in return for the experiments having greater control within the VM. Cloud environments largely depend on virtualization technologies to decouple the specific requirements of customers, and are also the most widely discussed way of providing virtualized environments to experiments that want to use this technology.

In this paper we present the Vacuum model as an alternative way of providing virtualized environments, and locate its place within the evolution of Grid and Cloud models since the turn of this century. Whereas Grid and Cloud models require that experiments push each job or VM request into each site, in the Vacuum model experiments see their VMs apparently being produced spontaneously by sites "in the vacuum", by analogy with the quantum mechanical production of virtual particles in nature. These virtual machines then request work from the experiments' central services, and so the Vacuum model is an inversion of the more usual Cloud model of Infrastructure-as-a-Service; such sites can be said to operate as Infrastructure-as-a-Client (IaaC).

2. Grid, Cloud, and Vacuum
Figures 1, 2, and 3 compare the three models, illustrating their shared aspects and the progressive simplification that has occurred. In all of the models, LHC experiments such as LHCb, along with some of the larger experiments elsewhere, have developed pilot job systems [4, 5, 6, 7], which operate as private overlay grids in which tasks are pulled from central task queues. The Grid was originally conceived as a push system, but during the 2000s the experiments added pilot jobs to the payloads they pushed to sites, and these pilots in turn pull tasks once they arrive at the sites. These overlay grids, such as LHCb's DIRAC, are also readily adaptable for use with Cloud sites, in which case the VM runs the client part of the pilot job and requests tasks from the central task queue as before.

As with Grids, Cloud interfaces provide a general way for users (or customers) to run arbitrary work at a site. However, we have observed that many HEP sites run jobs for a comparatively small number of experiments or other virtual organisations. For example, at the University of Manchester Tier-2 centre, in excess of 95% of the resources are used by the ATLAS and LHCb experiments. Consequently, the flexibility provided by Cloud and Grid interfaces is largely unneeded for the vast majority of the site's work. This observation prompted an examination of simpler ways of providing virtualized environments to experiments, with the aim of reducing the local support burden and benefitting from the robustness associated with simplicity.

The resulting Vacuum model is a third way of making use of nodes at a site. The Vacuum model can be defined as a scenario in which VMs are created and contextualized for experiments by the site itself. The contextualization procedures are supplied in advance by the experiments and launch clients within the virtual machines which obtain tasks to work on from the experiments' central task queues.

Figure 1. The Grid model: originally conceived with central brokers forwarding users' jobs to sites, the Grid has evolved towards direct submission of jobs to sites. Large experiments have created their own overlay infrastructures, such as LHCb's DIRAC, with users' jobs fetched from a central task queue by pilot jobs once they arrive at the site.

Figure 2. The Cloud model: an evolution of the demand for server rental in the commercial world, which has led to highly elastic VM hosting services. Some experiments are now looking at using either commercial Cloud providers or sites running software developed for Clouds, and are adding support for VMs to their private infrastructures.

As described below, LHCb has demonstrated using a common contextualization procedure for Cloud and Vacuum sites, using a client or JobAgent adapted from their existing grid pilot jobs. In this way, these two models are complementary, and the Vacuum can be seen as a way of operating VMs at a site for a specific set of experiments, but using a considerably simpler software stack than at Cloud or Grid sites.

Figure 3. The Vacuum model: provides a third alternative, in which VMs are created for experiments by sites using a contextualization provided in advance by the experiment. As before, user jobs are fetched from the experiment's central task queues. This leverages the existing work on supporting VMs for Clouds and the experiments' private overlay infrastructures such as DIRAC.

3. Vac implementation
An implementation of the Vacuum scheme, Vac, has been developed at Manchester. A VM Factory daemon runs on each physical worker node to create and contextualise that node's set of VMs. Vac uses the libvirt [8] and Linux KVM [9] frameworks for the lower-level VM management functions. Each node's VM Factory decides which experiments' VMs to run, and this choice is based on site-wide target shares and on information gathered by a peer-to-peer protocol between the factory nodes in that site or "Vac space". This decision is made whenever a VM slot becomes available. Local copies of common configuration files are updated when necessary by the site's preferred operating system management tool, such as Puppet [10].

This scheme allows sites to provide virtual environments to multiple experiments and still maintain the fair-share allocation of capacity between experiments that is currently achieved with grid interfaces to conventional batch queuing systems. However, this is achieved without a gatekeeper service, head node, or batch system accepting and then directing jobs to particular worker nodes. Instead, the Vac factories co-ordinate the site's behaviour amongst themselves using the peer-to-peer protocol and their copies of the configuration files. This avoids several central points of failure, and considerably simplifies the installation and maintenance of these sites. The Vac software and documentation are made available under the BSD licence from the Vac website [11].

3.1. UDP protocol
Each factory has a list of all the factories in the same Vac space and queries the other factory nodes when deciding which VM flavours to create. These queries are sent as UDP packets containing a JSON-encoded Python dictionary. Each factory runs a responder process which listens on UDP port 995 and replies to these queries when they are received. The responses are UDP packets containing a JSON-encoded dictionary for each VM assigned to that factory. These replies include the VM's state (shutdown / starting / running), the VM type (roughly corresponding to the experiment), the VM's start time, and the outcomes of the last VM instances run for each VM type. The UDP messages also include randomly chosen cookies to prevent some simpler denial-of-service attacks.
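To make the peer-to-peer exchange concrete, the following minimal sketch shows a factory querying a peer and the peer's responder replying over UDP with JSON-encoded dictionaries, as described above. The field names (method, cookie, vm_state, vm_type, started, last_outcome) and the message layout are illustrative assumptions; only the use of UDP port 995, the JSON encoding, the random cookie, and the state/type/start-time/outcome content are taken from the text.

    import json
    import socket
    import uuid

    VAC_PORT = 995  # UDP port used by the Vac responder (privileged port, as in the paper)

    def query_factory(peer_host, timeout=2.0):
        """Query a peer factory; return its list of VM dictionaries, or None if it does not answer.

        The cookie is echoed back by the responder so that unsolicited replies
        can be ignored (a simple defence against spoofed packets).
        """
        cookie = str(uuid.uuid4())
        query = {"method": "query", "cookie": cookie}          # illustrative field names
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        sock.sendto(json.dumps(query).encode(), (peer_host, VAC_PORT))
        try:
            data, _addr = sock.recvfrom(65535)
        except socket.timeout:
            return None                                        # peer down: carry on without it
        reply = json.loads(data.decode())
        if reply.get("cookie") != cookie:                      # ignore replies we did not ask for
            return None
        return reply.get("vms", [])

    def run_responder(my_vms):
        """Answer queries with one dictionary per VM slot on this factory."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", VAC_PORT))
        while True:
            data, addr = sock.recvfrom(65535)
            query = json.loads(data.decode())
            reply = {
                "cookie": query.get("cookie"),
                "vms": [
                    {
                        "vm_state": vm["state"],        # shutdown / starting / running
                        "vm_type": vm["type"],          # roughly corresponds to the experiment
                        "started": vm["start_time"],    # time the VM was created
                        "last_outcome": vm["outcome"],  # outcome of the previous instance of this type
                    }
                    for vm in my_vms
                ],
            }
            sock.sendto(json.dumps(reply).encode(), addr)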

3.2. Backoff procedure
To avoid loading the experiment's central Matcher/TaskQueue service with requests when no work is available for the site, Vac implements a backoff procedure. If a VM finishes and returns a "no more work", "banned", or "site misconfigured" outcome to the factory, it is classed as aborted. If no outcome is returned and the VM finishes after less than the parameter fizzle seconds (600 seconds by default), then it is also classed as an abort.

For each VM type, if an abort has happened on any factory in the space within the time range given by the parameter backoff seconds (again 600 seconds by default), then no more VMs of that type will be started. After this initial period has passed, a VM may be created for the VM type in question to test the current situation. To avoid a flood of unnecessary VMs, if an abort happened within the last backoff seconds + fizzle seconds and any new VMs have run for less than fizzle seconds, then no more VMs of that type will be started. This gives time for one or two trial VMs to run and potentially abort if no work is available. Once backoff seconds + fizzle seconds have passed without any aborts, VMs can start again as fast as slots become available.

In the worst-case scenario of an experiment having no work for the site for an extended period of time, the backoff procedure typically results in one or two VMs being started for the experiment every backoff seconds, which run for one to five minutes before failing to find work and aborting. Although this does represent wasted resources, it only amounts to one or two cores per Vac space being used occasionally when no work is available from the experiment. This cost of treating the factories as loosely coupled peers which follow the backoff procedure has to be compared against the saving of not requiring a dedicated multicore head node machine, which would centrally orchestrate the operation of the VMs or jobs across the space at a traditional grid or cloud site.
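The backoff rules above can be summarised as a small decision function. The sketch below is an illustrative reconstruction of that logic, assuming the factory knows, for each VM type, the time of the most recent abort anywhere in the Vac space and the start times of VMs launched since then; the names fizzle_seconds/backoff_seconds as Python constants and the function itself are hypothetical, not taken from the Vac source.

    import time

    FIZZLE_SECONDS = 600   # default from the text: minimum "useful" VM lifetime
    BACKOFF_SECONDS = 600  # default from the text: hold-off after an abort

    def can_start_vm(last_abort_time, recent_start_times, now=None):
        """Decide whether a new VM of a given type may be started.

        last_abort_time    -- time of the most recent abort of this type anywhere
                              in the Vac space, or None if there has been none
        recent_start_times -- start times of VMs of this type launched since that abort
        """
        now = now or time.time()
        if last_abort_time is None:
            return True                                   # no recent problems: start freely

        since_abort = now - last_abort_time
        if since_abort < BACKOFF_SECONDS:
            return False                                  # initial backoff period: start nothing

        if since_abort < BACKOFF_SECONDS + FIZZLE_SECONDS:
            # Trial period: allow a probe VM only if no VM of this type younger than
            # fizzle seconds is already testing whether work is now available.
            if any(now - t < FIZZLE_SECONDS for t in recent_start_times):
                return False
            return True

        return True                                       # quiet for long enough: back to normal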
3.3. Target shares
To attempt to achieve the allocation of resources between experiments that the site wants, Vac uses the concept of target shares. These are simply the desired fractions of the resources to be delivered to each experiment. When deciding which type of VM to create, each Vac factory node tries to start types which are currently under-represented in the Vac space relative to their target shares. The backoff procedure described above prevents the excessive creation of VMs for experiments which currently have no work.

This approach is designed to be highly robust and to allow nodes to carry on working even when their neighbours have problems. To avoid relying on a central head node which decides which VM type to create, the target shares are recorded in a configuration file which exists on each factory node. It is assumed that the site will use its existing node configuration mechanism (such as Puppet) to copy these files to the nodes when required. The overall design of the Vac system envisages that these target shares may be modified over time to achieve particular shares over the timescale of months, quarters, or years. As Vac makes accounting records available in the standard log file formats expected by batch and grid accounting systems, a site could use its local accounting database, such as APEL, to automate the regeneration of the target shares file, favouring experiments which have not yet received their desired share in the current accounting period. Again, the advantage of this retrospective rather than dynamic approach is that if the automatic regeneration of the target shares file fails, the factory nodes are still able to carry on creating and running VMs for the experiments, as all of their direct dependencies are internal to each node.
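As an illustration of the target-share decision, the sketch below picks the VM type that is furthest below its target share, given the counts of running VMs of each type gathered via the UDP protocol. The function and its inputs are an assumed simplification; the real Vac factory also folds in the backoff state described in section 3.2.

    def choose_vm_type(target_shares, running_counts):
        """Pick the VM type most under-represented relative to its target share.

        target_shares  -- e.g. {"lhcb": 0.7, "atlas": 0.3}, from the per-node config file
        running_counts -- e.g. {"lhcb": 12, "atlas": 2}, gathered from the Vac space peers
        """
        total_running = sum(running_counts.get(t, 0) for t in target_shares) or 1
        best_type, best_deficit = None, float("-inf")
        for vm_type, share in target_shares.items():
            current_fraction = running_counts.get(vm_type, 0) / total_running
            deficit = share - current_fraction           # positive = under-represented
            if deficit > best_deficit:
                best_type, best_deficit = vm_type, deficit
        return best_type                                 # the type furthest below its target share

For example, with target shares of 0.7 for LHCb and 0.3 for ATLAS, and 12 LHCb VMs against 2 ATLAS VMs currently running, ATLAS has the largest deficit, so the next free slot would go to an ATLAS VM.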

3.4. Graceful termination and shutdown codes
Vac's strategy for graceful termination is based around the proposed HEPiX machinefeatures mechanism [12]. When starting a VM, the Vac factory creates directories containing the machinefeatures files and exports them into the VM using NFS. The VM contextualization then mounts these directories within the VM. The shutdowntime value is used by Vac to notify the VM of the absolute time at which it will be destroyed if it is still running. In practice, if the contextualization imposes limits on the length of tasks requested from the experiment's central task queue, the VM will normally be able to terminate before the enforced shutdown time is reached. Nevertheless, this mechanism provides protection in cases where the VM fails to terminate itself and would otherwise continue occupying resources indefinitely.

In addition, Vac provides the VM with a shutdown command option, which is to be run by the VM when it terminates. We have extended this command to take command-line arguments, which are recorded in a writeable NFS directory hosted by the Vac factory. The arguments allow the VM to report why it terminated. They are of particular use in the backoff procedure, which otherwise relies on the fizzle seconds parameter to determine whether a VM failed or ran successfully. The arguments consist of three-digit codes followed by human-readable messages, in a similar way to status messages in internet protocols such as HTTP [13] and SMTP [14]. The values are listed in Table 1.

Table 1. Shutdown codes and messages.
  100  Shutdown as requested by the VM's host/hypervisor
  200  Intended work completed ok
  300  No more work available from task queue
  400  Site/host/VM is currently banned/disabled from receiving more work
  500  Problem detected with environment/VM provided by the site
  600  Error related to job agent or application within VM

As with HTTP codes, this scheme provides room to insert more numbers for finer-grained information in the future. Vac uses this information programmatically, but it may also be useful to site administrators to help identify where responsibility for problems lies, especially given the relatively opaque nature of experiments' VMs from a site's point of view.
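The VM-side use of these mechanisms can be sketched as follows: read the absolute shutdown time from the NFS-mounted machinefeatures directory, and report a three-digit code when terminating. The path /etc/machinefeatures/shutdowntime and the way the shutdown command is invoked are assumptions for illustration; only the shutdowntime value, the shutdown command taking a code-plus-message argument, and the codes of Table 1 come from the text.

    import os
    import subprocess
    import time

    MACHINEFEATURES = "/etc/machinefeatures"   # assumed mount point inside the VM

    def seconds_until_shutdown():
        """Return how long this VM has before Vac destroys it."""
        with open(os.path.join(MACHINEFEATURES, "shutdowntime")) as f:
            shutdown_time = int(f.read().strip())  # absolute time set by the factory
        return shutdown_time - time.time()

    def report_and_shutdown(code, message, shutdown_command):
        """Run the Vac-provided shutdown command with a Table 1 style argument."""
        subprocess.call([shutdown_command, "%d %s" % (code, message)])

    # Example: terminate cleanly when the task queue has nothing for us.
    # report_and_shutdown(300, "No more work available from task queue",
    #                     "/path/to/vac-shutdown")  # hypothetical path to the provided command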

4. Tests with LHCb production jobs
To create a VM for an experiment, the experiment must supply a contextualization procedure to the site, which the Vac factory daemon applies. A suitable contextualization was developed by the LHCb collaboration at CERN for use with Clouds [15], and has been extended to work with Vac. During the summer of 2013, we successfully demonstrated running production jobs from the main LHCb central task queues without any modification to the central DIRAC services or operations procedures. During these production runs, Vac was in operation at three sites in the UK (Manchester, Lancaster, Imperial College) and successfully ran 3300 normal production jobs from the central task queue. Figure 4 shows sustained job execution during a 30-day period of these tests.

Figure 4. LHCb job execution rate at Vac sites.

The contextualization procedure causes the LHCb JobAgent previously used within pilot jobs to be started in the VM. In the simplest mode, the JobAgent requests one task from the LHCb central task queue, runs it as a job for up to 24 hours, and then terminates the virtual machine. The JobAgent's existing mechanism for discovering the time left in a batch queue slot is being extended to use the equivalent HEPiX machinefeatures information, and this could be used to run several tasks consecutively within the same VM instance before the hard shutdown time imposed by Vac is reached.
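A multi-task mode of the kind described above could look like the loop sketched below, which keeps fetching tasks while enough of the machinefeatures-advertised lifetime remains. The fetch_task and run_task callables, the shutdown command path, and the use of the 24-hour per-task limit as the safety margin are hypothetical placeholders rather than the actual DIRAC JobAgent interface.

    import os
    import subprocess
    import time

    MACHINEFEATURES = "/etc/machinefeatures"     # assumed mount point, as in the previous sketch
    SHUTDOWN_COMMAND = "/path/to/vac-shutdown"   # hypothetical path to the Vac-provided command
    MAX_TASK_SECONDS = 24 * 3600                 # per-task limit used in the simplest JobAgent mode

    def job_agent_loop(fetch_task, run_task):
        """Fetch and run tasks consecutively until the next task would not fit
        before the shutdown time that Vac advertises via machinefeatures."""
        with open(os.path.join(MACHINEFEATURES, "shutdowntime")) as f:
            shutdown_time = int(f.read().strip())

        while shutdown_time - time.time() > MAX_TASK_SECONDS:
            task = fetch_task()                  # hypothetical call to the central task queue
            if task is None:
                subprocess.call([SHUTDOWN_COMMAND, "300 No more work available from task queue"])
                return
            run_task(task)                       # hypothetical: execute the payload job

        subprocess.call([SHUTDOWN_COMMAND, "200 Intended work completed ok"])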

5. Conclusion
We have presented an alternative model for operating VMs at sites and have successfully demonstrated running LHCb production jobs at several such sites in the UK as part of the standard LHCb distributed computing infrastructure based on DIRAC. This is a general approach which could be used by any site and by any experiment that prepares a contextualization procedure allowing its jobs to run inside VMs. We next aim to increase the number of sites evaluating the Vac software and to demonstrate creating and contextualising VMs for other experiments with Vac.

References
[1] Victor Mendez Munoz et al 2012 J. Phys.: Conf. Ser.
[2] Fernando Harald Barreiro Megino et al 2012 J. Phys.: Conf. Ser.
[3] Tony Cass 2012 J. Phys.: Conf. Ser.
[4] T Maeno et al 2012 J. Phys.: Conf. Ser.
[5] F Stagni et al 2012 J. Phys.: Conf. Ser.
[6] M Cinquilli et al 2012 J. Phys.: Conf. Ser.
[7] S Bagnasco et al 2008 J. Phys.: Conf. Ser.
[8] libvirt, the virtualization API
[9] Linux KVM, Kernel-based Virtual Machine
[10] Puppet from PuppetLabs
[11] The Vac website
[12] The proposed HEPiX machinefeatures mechanism
[13] Berners-Lee T, Fielding R, Frystyk H, Gettys J, Leach P, Mogul J and Masinter L, RFC 2616 Hypertext Transfer Protocol - HTTP/1.1 (Internet Engineering Task Force) Section 10
[14] Postel J, RFC 821 Simple Mail Transfer Protocol (Internet Engineering Task Force) Section 4.2
[15] Integration of Cloud Resources in the LHCb Distributed Computing, presented at Computing in High Energy and Nuclear Physics
