Phronesis, a diagnosis and recovery tool for system administrators

C Haen 1, V Barra 2, E Bonaccorsi 3 and N Neufeld 3

1 Univ. Blaise Pascal, 63006 Clermont-Ferrand cedex, France
2 LIMOS, UMR 6158 CNRS, Univ. Blaise Pascal, 63006 Clermont-Ferrand cedex, France
3 European Organization for Nuclear Research, CERN, CH-1211 Genève 23, Switzerland

E-mail: christophe.haen@cern.ch

Abstract. The LHCb experiment relies on the Online system, which includes a very large and heterogeneous computing cluster. Ensuring the proper behavior of the different tasks running on more than 2000 servers represents a huge workload for the small operator team and is a 24/7 task. At CHEP 2012, we presented a prototype of a framework designed to support the experts. Its main objective is to provide them with steadily improving diagnosis and recovery solutions in case of misbehavior of a service, without having to modify the original applications. Our framework is based on adapted principles of the Autonomic Computing model, on Reinforcement Learning algorithms, and on innovative concepts such as Shared Experience. While the CHEP 2012 submission showed the validity of our prototype on simulations, we here present an implementation with improved algorithms and manipulation tools, and report on the experience gained by running it in the LHCb Online system.

1. Introduction
LHCb [1] is one of the four large experiments at the Large Hadron Collider at CERN. The experiment relies on a large computing infrastructure [2] to (i) control the data acquisition system and the detector, and (ii) manage the data it produces. The team in charge of the installation and administration of this system comprises fewer than 10 people, only three of whom work on it full time. To help the system administrators reach their goal of high availability, we set out to provide them with software that proposes a diagnosis and a recovery solution in case of problems, improves with experience, and acts as a knowledge and problem-history database. The paper we published at CHEP 2012 [3] introduced the concepts used in our software, and their validity was demonstrated on several simulations. Since then, the algorithms have been improved, the software code consolidated, and manipulation tools developed. Further simulations were run to probe the abilities of the software more deeply, and it has now been deployed on a much larger scale in the LHCb Online environment.

2. LISA: LearnIng approach for System Administration
In [3], we presented methods that address problems similar to ours, namely expert systems [4] and autonomic computing principles such as the MAPE-K loop [5]. Building on these historical approaches and adding innovative concepts such as the Shared Experience principle, we define the methodology of our framework as follows:

- Linux systems represent the greatest share of the Online environment, so we focus exclusively on them; diagnoses of network equipment or Windows-based machines are not addressed.
- Because of the great variety of software running on the LHCb Online HLT farm, our solution needs to be as generic as possible. Since files and processes are the components of any application, we use them as the basic blocks of our diagnoses. Each type of problem that can affect such entities (wrong file permission, wrong process user, etc.) is associated with a default recovery solution, as sketched after this list. Note that this method would be equally valid on Windows servers, as it is generic enough.
- The framework performs no monitoring itself, but rather waits to be informed of problems by external sources.
- Existing implementations associate one MAPE-K loop instance with each system and rely on multi-agent theory for synchronization and cooperation. Our approach is to have a single loop for all the systems, which allows the software to spot the dependencies between them.
- By using Reinforcement Learning algorithms, we improve diagnostic speed and scalability by reducing the number of components that are checked before the faulty one is found.
- The Shared Experience principle consists of sharing experience between similar systems (such as two websites). It shortens the learning phase of the learning algorithms and reduces the description workload of the users.
- Using Convention over Configuration [6] further reduces the configuration work required by the software.
- Our software offers a default recovery solution, together with the full procedure needed for the fix to be taken into account, as well as information on previously encountered situations on the same problematic entity. However, the user has to perform the correction himself.
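To make the basic-blocks idea concrete, here is a minimal Python sketch of how problem types on files and processes could map to default recovery solutions. The entity kinds, problem names and recovery texts are invented for illustration; they are not the actual Phronesis catalogue.

```python
# Hypothetical mapping from (entity kind, problem type) to the default
# recovery solution proposed to the administrator.
DEFAULT_RECOVERY = {
    ("file", "wrong_permission"): "chmod the file to its expected mode",
    ("file", "wrong_owner"):      "chown the file to its expected user",
    ("file", "missing"):          "restore the file from its reference copy",
    ("process", "not_running"):   "start the process",
    ("process", "wrong_user"):    "restart the process under the expected user",
}

def default_recovery(entity_kind, problem):
    """Return the generic fix proposed when `problem` is diagnosed on an
    entity of the given kind; diagnosis stays application-agnostic."""
    return DEFAULT_RECOVERY.get((entity_kind, problem),
                                "no default recovery known")

print(default_recovery("file", "wrong_permission"))
```

Because the catalogue is keyed only on generic entity kinds and problem types, the same table serves every application on the farm without per-application code.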

3. Phronesis
Our implementation of the above methodology is called Phronesis. It is divided into the several modules described in this section.

3.1. Compiler
We defined a new configuration grammar that allows us to describe services as compositions of files, processes and other services. The grammar is inspired by the object model: the objects are mainly files, processes or services, and the inheritance concept is used to express the Shared Experience principle. The user can also define two types of rules:

- Dependency rule: states that one service needs another one to be fully functional.
- Recovery rule, or Trigger: lists what a given recovery action involves. For example, if the recovery action consists of changing the content of a file, one recovery rule could state that a process must be stopped before the file is changed, and another that it must be started again after the modification.

The compiler was developed in Python using the pyparsing library [7]. Python was chosen for its dynamic characteristics, such as its introspection mechanism and weak typing. The compiler reads the configuration files and produces an SQL script as output. One critical aspect of the compilation is not to lose the experience previously gained by the reinforcement algorithm; this is achieved with custom graph-matching algorithms run between the configuration files and the current content of the database.
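The paper does not reproduce the grammar itself, so the following is only a toy pyparsing sketch in the same spirit: a service composed of files, processes and dependencies, with optional inheritance. The keywords, the brace syntax and the sample configuration are invented for this example.

```python
# Toy grammar (hypothetical syntax, not the real Phronesis grammar).
from pyparsing import (Group, Keyword, Optional, Suppress, Word,
                       ZeroOrMore, alphanums, quotedString)

ident = Word(alphanums + "_-")
file_decl = Group(Keyword("file") + quotedString("path"))
proc_decl = Group(Keyword("process") + ident("cmd"))
dep_rule = Group(Keyword("depends_on") + ident("target"))
member = file_decl | proc_decl | dep_rule

service = Group(
    Keyword("service") + ident("name")
    + Optional(Suppress(":") + ident("parent"))  # inheritance hook
    + Suppress("{") + Group(ZeroOrMore(member))("members") + Suppress("}")
)
config = ZeroOrMore(service)

sample = '''
service webserver {
    file "/etc/httpd/httpd.conf"
    process httpd
    depends_on mysql
}
'''

for svc in config.parseString(sample, parseAll=True):
    print(svc["name"], svc["members"].asList())
```

The inheritance hook is where the Shared Experience principle would attach: a child service starts from its parent's description, and the experience learned on one can benefit the other.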

3.2. Remote Agent
The remote agent is a program that runs on every machine the user wants to supervise. Its only purpose is to answer queries from the Core (see section 3.3) about the attributes of files, of processes, or of the general environment; its complexities are purely technical implementation details. The agent is developed in C++, using several Boost libraries [8].

3.3. Core
The Core module is the central part of the software and contains all the algorithms used to actually diagnose problems and offer recovery solutions. The main algorithms are:

- Sorting algorithm: when several problems are reported at the same time, this algorithm decides in which order they are analyzed. The order matters for performance reasons, but also because there are situations in which one problem cannot be solved before the others are. This algorithm uses the Dependency rules to establish the order.
- Recovery algorithm: once the root cause of a problem is found, it can usually be fixed quite easily (e.g. fix a corrupted file, restart a process). For the changes to be taken into account, extra actions might be required; these actions are defined by the Recovery rules. The complication comes from the fact that actions can be required either before or after the fix is applied, so computing the full chain of events is a non-trivial task.
- Reinforcement Learning algorithm: used to optimize the exploration path from a reported problem to its faulty component. The chosen method keeps track of the paths that were successful in previous cases: each path has an associated counter, which is incremented whenever the path leads to the faulty component. When a new problem is reported, these counters are used to choose the most appropriate path, following one of two strategies: either sorting the counters in decreasing order, or making a weighted random choice. Simulations (see section 4.1) show that on average the two strategies are equivalent. Although simple, this counter-based method has great advantages: if a path is reinforced when it should not be, the user can easily correct it; the user can also inject a priori knowledge; and, from a technical point of view, applying the Shared Experience principle to it is straightforward. A sketch of the counter mechanism is given at the end of this section.
- Dependency algorithm: one of the most interesting features of our software is its ability to find dependencies between services based on previous experience. This capacity allows the software to infer new Dependency rules, and thus to provide better diagnoses.

The implementation is done in C++ and uses the Boost libraries. It can be run as a daemon, as an interactive program, or as a one-off full check of all the services known to it.
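The Core is C++, but the counter-based selection is easiest to show in a short Python sketch; the problem and path names below are invented, and the real data structures are not described in the paper.

```python
import random
from collections import defaultdict

class PathSelector:
    """Counter-based reinforcement: remember which exploration paths
    led to the faulty component for a given reported problem."""

    def __init__(self):
        # counters[problem][path] = number of past successful diagnoses
        self.counters = defaultdict(lambda: defaultdict(int))

    def reinforce(self, problem, path):
        """Called when `path` turned out to contain the faulty component."""
        self.counters[problem][path] += 1

    def order_greedy(self, problem, paths):
        """Strategy 1: explore paths by decreasing counter value."""
        return sorted(paths, key=lambda p: self.counters[problem][p],
                      reverse=True)

    def pick_weighted(self, problem, paths):
        """Strategy 2: weighted random choice; the +1 keeps
        never-explored paths reachable."""
        weights = [self.counters[problem][p] + 1 for p in paths]
        return random.choices(paths, weights=weights, k=1)[0]

selector = PathSelector()
selector.reinforce("website unreachable", "web process")
selector.reinforce("website unreachable", "web process")
selector.reinforce("website unreachable", "config file")

candidates = ["config file", "web process", "NFS mount"]
print(selector.order_greedy("website unreachable", candidates))
print(selector.pick_weighted("website unreachable", candidates))
```

With plain counters, correcting a wrongly reinforced path or injecting a priori knowledge amounts to editing a number, and Shared Experience amounts to similar systems consulting the same counter table.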

3.4. Tools
There are two kinds of interaction between the software and the user: output communication, so that the user knows what the software is doing, and input communication, for the user to report problems or give feedback. This bidirectional communication is made possible by an Application Programming Interface (API). The output communication is based on an Observer pattern [9], while the input messages are similar to Remote Procedure Calls. Based on the API, several ready-to-use user interfaces were developed:

- phrutils: a command line tool.
- phrgui: a GUI based on the Qt framework [10], currently being prototyped.
- phrxml: output communication only; stores all the output in an XML-file-based ring buffer.
- phrsimu: the interface used by our simulation software to test the algorithms.
- phricinga: an interface that gathers data from Icinga [11], the monitoring software used at LHCb.
- phrweb: a web interface based on phrxml and the Django framework [12].

4. Results

4.1. Simulations
In order to test our algorithms it was important to be able to simulate realistic situations, so we developed a complete set of tools to produce Monte-Carlo simulations. Phronesis needs to be compiled in a particular way for this, because the simulation tool tests the algorithms of the Core module, not the code of the Agents: under normal usage, remote servers are queried for information before it is processed, whereas in simulation mode the query is intercepted and a local Agent is instructed what to return, as illustrated below. This allows us to test Phronesis on a single local machine. Another program randomly generates problems based on user input, injects signals into the Core to mock the agents' analyses, interacts with it to confirm or deny its diagnoses, and produces statistics about the behavior of Phronesis. This tool can reproduce almost any kind of environment. Various situations were simulated; they validated the importance of the Dependency rules as well as of the Shared Experience principle, and confirmed that the two strategies for exploring a faulty service mentioned earlier are equivalent on average.
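A schematic Python sketch of the interception idea follows; the class, host and attribute names are invented (the real Agents are C++ programs). The point is that the Core only sees an agent interface, so a scripted mock can stand in for the remote machines.

```python
class AgentInterface:
    """What the Core sees: a way to query one attribute of one entity
    (file, process or environment) on one host."""
    def query(self, host, entity, attribute):
        raise NotImplementedError

class RemoteAgent(AgentInterface):
    """Normal mode: would contact the real agent running on `host`."""
    def query(self, host, entity, attribute):
        raise RuntimeError("network query; not available in simulation")

class MockAgent(AgentInterface):
    """Simulation mode: the query is intercepted and a scripted answer
    is returned, so everything runs on a single local machine."""
    def __init__(self, scripted):
        self.scripted = scripted  # {(host, entity, attribute): value}
    def query(self, host, entity, attribute):
        return self.scripted[(host, entity, attribute)]

def diagnose_log_file(agent):
    # The Core cannot tell a mocked answer from a real one.
    perm = agent.query("logsrv01", "/var/log/app.log", "permissions")
    return "ok" if perm == "0644" else "wrong file permission"

mock = MockAgent({("logsrv01", "/var/log/app.log", "permissions"): "0600"})
print(diagnose_log_file(mock))  # -> wrong file permission
```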

4.2. Real case application
Phronesis is now being deployed on the entire LHCb Online cluster. Note that it does not replace any solution already in place, but complements them. At the time of writing, a fair fraction of the LHCb Online system is already covered, and the diagnoses we had the opportunity to trigger proved useful. Systems under Phronesis supervision include the log aggregation cluster, the event filter software, the web services and the monitoring infrastructure. Although only a small number of unexpected and unprovoked situations occurred, Phronesis made several correct diagnoses and offered appropriate solutions. Several of these diagnoses were a direct consequence of the Convention over Configuration approach, because the root cause pointed at elements which the user had not defined manually. Examples of diagnoses are:

- Full inodes on the log servers: the log servers store a large number of tiny files (around 50 000 files with a median size of 100 kB) on a clustered file system. As a consequence, the pool of inodes was exhausted well before the actual storage space. The solution, correctly suggested by Phronesis, was to remove files. In fact, this problem was spotted before it actually occurred, thanks to the default threshold set at 99% of used inodes. This was fortunate: otherwise, every new log requiring a new file would have been silently lost.
- Incorrect mount options on a web service: one of the web services required a particular folder to be mounted with the write option, which was not the case. Phronesis suggested remounting it with the appropriate option. Although correct, this would not have worked immediately, because an NFS server over which Phronesis had no control was not configured to accept it.
- Incorrect DIM [13] name server address: the file containing the information was corrupted.
- Various problems on the MySQL servers: running out of disk space and errors in the configuration files were among the problems diagnosed by Phronesis on the MySQL databases.
- Various problems on the monitoring infrastructure: mail alerts not being sent (tracked down to a process not running), out-of-date results (tracked down to a full disk), and checks not being executed (because some servers were not running) are a few of the issues that Phronesis correctly diagnosed.

In some cases, Phronesis completely missed the root cause of the problems. We have observed two types of failure:

- Errors due to situations not foreseen in the design, such as disk errors or cluster setups. When the fix did not imply heavy modifications, the code was improved; the other cases were left for future developments.
- Errors due to an incomplete configuration, such as missing information or an unsupervised service. The configuration was always updated to cover future occurrences of similar cases.

5. Outlook
There is still large room for improvement, both in the technical implementation and in functionality. This includes (i) an extension of the configuration grammar, which is unfortunately more verbose than we had hoped at the beginning, (ii) better native support for cluster systems, and (iii) dynamic constraints on the properties of files and processes. The plan is to bring more systems under the supervision of Phronesis and to add coverage for the corner cases. We hope to release it as an open source solution that the community will pick up and develop further.

References
[1] Augusto A A et al. (LHCb) 2008 JINST 3 S08005
[2] Neufeld N (LHCb) 2003 Nucl. Phys. Proc. Suppl. 120 105-108
[3] Haen C, Barra V, Bonaccorsi E and Neufeld N 2012 J. Phys.: Conf. Ser. 396 052038 URL http://stacks.iop.org/1742-6596/396/i=5/a=052038
[4] Ginsberg M 1993 Essentials of Artificial Intelligence (Morgan Kaufmann) ISBN 978-1558602212
[5] IBM 2001 An Architectural Blueprint for Autonomic Computing URL http://www.theregister.co.uk/2003/05/01/autonomic_computing_the_ibm_blueprint
[6] Miller J 2009 MSDN Magazine: Design for convention over configuration URL http://msdn.microsoft.com/en-us/magazine/dd419655.aspx
[7] McGuire P Pyparsing website URL http://pyparsing.wikispaces.com/
[8] Boost team 2013 Boost libraries URL http://www.boost.org/
[9] Gamma E, Helm R, Johnson R and Vlissides J 1995 Design Patterns: Elements of Reusable Object-Oriented Software (Boston, MA: Addison-Wesley) ISBN 0-201-63361-2
[10] Qt project 2013 Qt project URL http://qt-project.org/
[11] Haen C, Bonaccorsi E and Neufeld N 2011 Distributed monitoring system based on Icinga Proceedings of ICALEPCS2011 pp 1149-1152 URL http://accelconf.web.cern.ch/accelconf/icalepcs2011/papers/wepmu035.pdf
[12] Django Software Foundation 2013 Django website URL https://www.djangoproject.com/
[13] Gaspar C 1993 DIM website URL http://dim.web.cern.ch/dim/