SMI++ object oriented framework used for automation and error recovery in the LHC experiments
Journal of Physics: Conference Series

To cite this article: B Franek and C Gaspar 2010 J. Phys.: Conf. Ser.
SMI++ Object Oriented Framework used for Automation and Error Recovery in the LHC Experiments

B Franek 1 and C Gaspar 2
1 Rutherford Appleton Laboratory, Chilton, Didcot, OX11 0QX, UK
2 CERN, 1211 Geneva 23, Switzerland
B.Franek@rl.ac.uk

Abstract. In the SMI++ framework, the real world is viewed as a collection of objects behaving as finite state machines. These objects can represent real entities, such as hardware devices or software tasks, or they can represent abstract subsystems. A special language, SML (State Manager Language), is provided for the object description. The SML description is then interpreted by a Logic Engine (coded in C++) to drive the control system. This allows rule-based automation and error recovery. SMI++ objects can run distributed over a variety of platforms, with all communication handled transparently by an underlying communication system, DIM (Distributed Information Management). The framework was first used by the DELPHI experiment at CERN, from 1990, and subsequently by the BaBar experiment at SLAC, from 1999, for the design and implementation of their experiment control. SMI++ has been adopted at CERN by all LHC experiments in their detector control systems, as recommended by the Joint Controls Project. Since then it has undergone many upgrades to provide for varying needs. The main features of the framework, and in particular of the SML language, as well as recent and near-future upgrades, are discussed. SMI++ has, so far, been used only by large particle physics experiments. It is, however, equally suitable for any other control application.

1. Introduction
SMI++ is based on the original State Manager concept [1], which was developed by the DELPHI experiment [2] in collaboration with the CERN Computing Division. It was then known as SMI, which stood for State Manager Interface. Since then, the concept has undergone substantial development through a series of upgrades.
These were primarily dictated by the user requirements within the experiments which adopted it as a tool for designing their experiment control. The first significant upgrade (SMI++) was completed in June. It consisted of re-writing its most important tool, the State Manager, from the Ada [3] programming language to C++. In July 1997 it was extensively tested in the DELPHI environment. During that time, the DELPHI experiment control was fully converted from the 'old' version of SMI to the upgraded version, SMI++. At that time it was also adopted by the BaBar experiment [4] at SLAC [5] for the design and implementation of their Run Control. From then on, it has been further upgraded in smaller steps, increasing its flexibility, capabilities and efficiency.

© 2010 IOP Publishing Ltd
Figure 1. Shows the basic concepts of SMI++: proxies, their associated objects, abstract objects and object grouping into a domain.

In 2003 all four LHC experiments at CERN [6]-[9] decided to use it either fully or partially for their experiment control. Through this use by major particle experiments and continuous user feedback, SMI++ has become a well-proven, robust and time-tested tool.

2. Basic Concepts
The real world to be controlled typically consists of a set of concrete entities such as hardware devices or software tasks. In the SMI++ framework this world is described as a collection of objects behaving as Finite State Machines (FSM). These objects are called associated objects because they are associated with an actual piece of hardware or a real software task. Each of these objects interacts with the concrete entity it represents through a so-called proxy process. The proxy process provides a bridge between the real world and the SMI++ world while fulfilling two functions: first, it follows and simplifies the behavior of the concrete entity; second, it sends to it commands originating from the associated object. By analogy, the control system to be designed is conceived as a set of abstract (or logical) objects behaving as FSMs. They represent abstract entities and contain the control logic. They can also send commands to other objects (associated or abstract). The main attribute of an SMI++ object is its state. In each state it can accept commands that trigger actions. An abstract object, while executing an action, can send commands to other objects, interrogate the states of other objects and eventually change its own state. It can also spontaneously
respond to state changes of other objects. The associated objects only pass on the received commands to the proxy processes and reflect their states.

Figure 2. Shows how in a typical large control system objects are organized into a hierarchy of domains.

In order to reduce the complexity of large systems, logically related objects are grouped into SMI++ domains. In each domain, the objects are organized in a hierarchical structure and form the control of a subsystem. Usually only one object (the top-level object) in each domain is accessed by other domains. The final control system is then constructed as a hierarchy of SMI++ domains. These basic concepts are graphically summarized in figure 1 and figure 2. The framework consists of a set of tools. The most important are the State Manager Language (SML), the State Manager process (SM) and the Application Program Interface (API).

3. State Manager Language
The tool used to formally describe the object model of the real world and of the control system is the State Manager Language. This language allows for detailed specification of the objects, such as their states, actions and associated conditions. The main characteristics of this language are:

Finite State Logic
Objects are described as finite state machines. The main attribute of an object is its state. Commands sent to an object trigger object actions that can bring about a change in its state.

Sequencing
An action performed by an abstract object is specified as a sequence of instructions. These consist mainly of commands sent to other objects and of logical tests on states of other objects. Commands sent to objects representing concrete entities (associated objects) are sent off as messages to the proxy processes.
Asynchronous behaviour
In principle, all actions proceed in parallel. This means that a command sent by object-a to object-b does not suspend the instruction sequence of object-a (i.e. object-a does not wait for completion of the command sent to object-b before it continues with its instruction sequence). It is, however, possible to enforce such a wait to achieve synchronization if required.

Rule based reasoning
Each object can specify logical conditions based on the states of other objects. These, when satisfied, will trigger execution of the action specified in the condition. This provides the mechanism for an object to respond to unsolicited state changes of other objects in the system.

The main elements of the language are shown in figure 3. In the figure the individual elements are demonstrated as simplified (but fully functional) examples. Consequently, we have not shown more detailed features of the language, such as object and action parameters.

4. State Manager
This is the key tool of the SMI++ framework. It is a program which at its start-up uses the SML code for a particular domain and becomes the so-called State Manager process (SM) of that domain. In the complete operating control system there is therefore one such process per domain. When the process is running, it takes full control of the hardware components assigned to the domain, sequences and synchronizes their activities and responds to spontaneous changes in their behavior. It does this by following the instructions in the SML code and by sending the necessary commands to proxies through their associated objects. In a given domain it is possible to reference objects in other domains. These are then locally treated as associated objects, with their relevant proxies being the other SMs. This way, full cooperation among SMs in the control system is achieved. The State Manager was designed using an Object Oriented design tool (Rational Rose/C++) [10] and coded in C++.
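The finite-state semantics that the State Manager enforces for each object can be illustrated with a minimal sketch. The class and method names below (`FsmObject`, `on`, `send`) are invented for this example and are not part of the SMI++ API; the real objects are described in SML and driven by the C++ Logic Engine.

```python
# Illustrative sketch (invented names): an SMI++-style object is a finite
# state machine whose main attribute is its state; in each state it accepts
# a set of commands, and executing an action may change the state.

class FsmObject:
    def __init__(self, name, initial_state):
        self.name = name
        self.state = initial_state
        self._actions = {}                  # (state, command) -> handler

    def on(self, state, command, handler):
        """Declare that `command` is accepted while in `state`."""
        self._actions[(state, command)] = handler

    def send(self, command):
        """Deliver a command; ignored if not valid in the current state."""
        handler = self._actions.get((self.state, command))
        if handler is not None:
            self.state = handler()          # the action yields the new state


# An "associated" object mirroring a high-voltage channel (cf. the HV
# example in figure 3):
hv = FsmObject("HV-1", "OFF")
hv.on("OFF", "SWITCH-ON", lambda: "ON")
hv.on("ON", "SWITCH-OFF", lambda: "OFF")

hv.send("SWITCH-ON")    # accepted: OFF -> ON
hv.send("SWITCH-ON")    # not declared in state ON: ignored
print(hv.state)         # ON
```

Note that an invalid command is simply ignored rather than raising an error, mirroring the idea that the set of acceptable commands depends on the current state.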
Its main C++ classes are grouped into two class categories:

SML Classes
These classes represent all the elements defined in the language, such as states, actions, instructions, etc. They are all contained within the SMIObject class (representing SMI++ objects). At the start-up of the process, they are instantiated from the SML code.

Logic Engine Classes
Based on external events, these classes 'drive' the instantiations of the language classes. CommHandler takes care of all the communication issues: it detects state changes in remote SMI++ objects and receives external actions coming either from remote objects or from an operator. It also communicates the state changes in local SMI++ objects to the outside world and sends commands from local SMI++ objects to remote objects. Scheduler takes the information collected by CommHandler and operates on the SMIObject instantiations in such a way that, in effect, each local object executes its own thread.

5. Application Program Interface
There are two API libraries available to application designers, in C, C++ and FORTRAN:

SMIRTL library
It provides the routines used by proxies to connect and to transmit state changes to their associated objects in the SMI++ world, and to receive commands from them.

SMIUIRTL library
It provides the routines used by processes that require information about the states of objects in the running system. This information is provided in an asynchronous manner: the process is notified about a state change as soon as it happens. The library also provides the routines to send commands to any object in the running system. An example of such a process is a user interface.
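The division of roles between the two libraries can be mimicked with a toy in-process broker. Every name below (`Broker`, `subscribe`, `publish_state`, `register_handler`, `send_command`) is invented purely for illustration and does not reflect the actual SMIRTL/SMIUIRTL routines, which communicate across processes via DIM.

```python
# Toy, in-process sketch (invented names) of the two API roles: a proxy
# publishes state changes for its associated object, and an interested
# client (e.g. a user interface) is notified asynchronously and can send
# commands back.

class Broker:
    def __init__(self):
        self._subscribers = {}        # object name -> list of callbacks
        self._command_handlers = {}   # object name -> command handler

    def subscribe(self, obj_name, callback):
        """SMIUIRTL-like role: ask to be notified of state changes."""
        self._subscribers.setdefault(obj_name, []).append(callback)

    def publish_state(self, obj_name, state):
        """SMIRTL-like role: a proxy reports a state change."""
        for cb in self._subscribers.get(obj_name, []):
            cb(obj_name, state)       # push notification on every change

    def register_handler(self, obj_name, handler):
        self._command_handlers[obj_name] = handler

    def send_command(self, obj_name, command):
        self._command_handlers[obj_name](command)


broker = Broker()
log = []

# "User interface" side: asks to be told about HV-1's state.
broker.subscribe("HV-1", lambda name, state: log.append((name, state)))

# "Proxy" side: handles commands for HV-1 and reports the resulting state.
broker.register_handler(
    "HV-1",
    lambda cmd: broker.publish_state("HV-1", "ON" if cmd == "SWITCH-ON" else "OFF"))

broker.send_command("HV-1", "SWITCH-ON")
print(log)    # [('HV-1', 'ON')]
```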
Declarations

  object : HV / associated
     state : OFF
        action : SWITCH-ON
     state : ON
        action : SWITCH-OFF
     etc.
     state : NOT-THERE / dead_state

Declaration of an associated object. The number of states, and of actions within states, depends on the specific properties of the mirrored hardware.

  object : CONTROLLER
     state : READY
        when statements, if any
        action : START
           instructions to execute
     etc.

Declaration of an abstract object, with as many states and actions within states as required. In addition to the state and action declarations, for every action the sequence of instructions to be executed is defined.

  class : HV / associated
     state : OFF
        action : SWITCH-ON
     state : ON
        action : SWITCH-OFF
     etc.
     state : NOT-THERE / dead_state

Declaration of a class of objects.

  object : HV-1 is_of_class HV

Once a class of objects is defined, the actual objects of that class are declared by a one-line declaration.

  objectset : POWER-SUPPLIES {HV-1, ..., HV-n}

A set of objects can be declared and subsequently used as a whole.

Instructions

  insert HV-78 in POWER-SUPPLIES

The insert instruction inserts the specified object into the specified object set.

  remove HV-78 from POWER-SUPPLIES

The remove instruction removes the specified object from the specified object set.

  do SWITCH-ON HV-1

The do instruction sends the specified command to the specified object.

  do SWITCH-ON all_in POWER-SUPPLIES

This version of the do instruction sends the specified command to all the objects in the specified object set.

  if ( HV-3 in_state OFF ) then
     instructions to execute
  endif

The if instruction waits for all the objects specified in the condition (of any boolean complexity) to finish execution of their current actions (if any); then, if the condition evaluates to TRUE, the specified instructions are executed.

  move_to ON

This instruction terminates the current action and puts the executing object into the specified state.

  if ( any_in POWER-SUPPLIES in_state OFF ) then
     instructions to execute
  endif

This example of the if instruction demonstrates how object sets can be referenced in conditions. Apart from the keyword any_in, the keyword all_in can also be used.

  wait (HV-2, HV90)

The wait instruction waits for all the objects specified between the brackets to finish execution of their current actions (if any) and then terminates its execution, thus allowing the following instruction to be executed.

When statement

  when ( any_in POWER-SUPPLIES in_state OFF ) do RESTART

A set of rules in the form of when statements can be placed after the state declaration of an abstract object. The object will execute the respective action as soon as the given condition becomes TRUE. This way the control system can react to asynchronous state changes.

Figure 3. Basic elements of SML
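The behaviour of the when statement in figure 3 can be sketched as a small rule engine: rules are re-evaluated whenever a watched object changes state and fire as soon as their condition holds. The Python names below are invented for illustration; the real rules are written in SML and evaluated by the State Manager.

```python
# Sketch (invented names) of rule-based reasoning: "when" clauses are
# re-evaluated on every state change of a watched object.

class RuleEngine:
    def __init__(self):
        self.states = {}          # object name -> current state
        self.rules = []           # (condition, action) pairs

    def when(self, condition, action):
        self.rules.append((condition, action))

    def set_state(self, name, state):
        self.states[name] = state
        # A state change may satisfy a rule: fire every matching action.
        for condition, action in self.rules:
            if condition(self.states):
                action()


power_supplies = ["HV-1", "HV-2", "HV-3"]
engine = RuleEngine()
fired = []

# when ( any_in POWER-SUPPLIES in_state OFF ) do RESTART
engine.when(
    lambda s: any(s.get(ps) == "OFF" for ps in power_supplies),
    lambda: fired.append("RESTART"),
)

engine.set_state("HV-1", "ON")    # condition false, nothing fires
engine.set_state("HV-2", "OFF")   # condition true -> RESTART
print(fired)                      # ['RESTART']
```

This is the mechanism, in miniature, by which a control system reacts to unsolicited, asynchronous state changes rather than polling.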
There is a generic user interface provided. It is configurable, i.e. the monitored objects can be selectively displayed, moved around the display, etc. It is based on Tcl/Tk. However, we have found from experience that users generally prefer to write their own user interfaces, tailored to the specifics of their control systems.

6. Distributed Environments
State Manager processes representing SMI++ domains can run on a variety of computer platforms. The cooperation between the domains, including all exchanges between objects, is embedded in the SMI++ system. All issues related to distribution and heterogeneity of platforms are transparently handled by the underlying communication system DIM [11], on top of TCP/IP. The asynchronous communication mechanism allows the objects to operate in parallel when required. At run time, no matter where an SMI++ process (State Manager or proxy) runs, it is able to communicate with any other process in the system, independently of where the processes are located. At user level, the name of the object and its domain uniquely determine its location (address). Processes can move freely from one machine to another, and all communications are automatically re-established. This feature also allows for machine load balancing. The communication layer also provides automatic recovery from crash situations, such as restarting a process. SMI++ is available in mixed environments comprising: VMS (VAX and ALPHA), UNIX flavors (OSF, AIX, HPUX, SunOS, Solaris), Linux, Windows, OS9, LynxOS and VxWorks.

7. Use of SMI++ in LHC Experiments
With the aim of effectively solving the problems facing designers of any large control system, the four LHC experiments at CERN combined efforts by creating a common controls project, the Joint Controls Project (JCOP) [12]. Its intention was to define and implement common solutions for their control and monitoring systems. For this purpose a common control Framework [13] has been developed.
The JCOP Framework provides for the integration of the various components in a coherent and uniform manner. JCOP defined the Framework as: "An integrated set of guidelines and software tools used by detector developers to realize their specific control system application. The Framework will include, as far as possible, all templates, standard elements and functions required to achieve a homogeneous control system and to reduce the development effort as much as possible for the developers." The Framework had to be flexible and to allow for the simple integration of components developed separately by different teams. It also had to be scalable, to allow a very large number (thousands) of channels. Some of the components of this Framework include:

Guidelines imposing rules necessary to build components that can be easily integrated (naming conventions, user interface look and feel, etc.)

Drivers for different types of hardware, such as fieldbuses and PLCs (Programmable Logic Controllers).

Ready-made components for commonly used devices, configurable for particular applications, such as high voltage power supplies, temperature sensors, etc.

From the software point of view, JCOP adopted a hierarchical, tree-like structure to represent the structure of sub-detectors, sub-systems and hardware components. This hierarchy allows a high degree of independence between components. It provides for integrated control during physics data-taking, both automated and user-driven. In addition, it also allows concurrent use during integration, test or calibration phases. This tree is composed of two types of nodes: "Device Units" (Devs), which are capable of driving the equipment to which they correspond, and "Control Units" (CUs), which correspond to sub-systems and can monitor and control the sub-tree below them. Figure 4 shows this hierarchical architecture.
Figure 4. Shows the JCOP software architecture, consisting of Device and Control Units organized in a hierarchical structure.

The JCOP architecture was the basis for the development of the common Framework. Each LHC experiment has adopted this architecture and used the Framework tools wherever they found them suitable. This architecture also readily identifies the two main elements: Device Units and Control Units. These elements match the SMI++ concepts perfectly: Device Units correspond to associated objects implemented as proxies, and Control Units correspond to SMI domains. This leads to two of the main tools of the Framework:

PVSSII
JCOP adopted a SCADA (Supervisory Control And Data Acquisition) system called PVSSII [14]. Device Units are implemented as SMI proxies in terms of PVSSII scripts.

SMI++
PVSSII, although providing many needed features, does not provide for hierarchical control and abstract behavior modeling. In SMI++ this is easily achieved by implementing Control Units as SMI domains.

The PVSSII-SMI++ implementation for one PVSSII system is illustrated in figure 5. The overall control system is, of course, composed of hundreds of such PVSSII systems. The synergy of SMI++ with PVSSII within the Framework provided several new features: a graphical user interface was designed which allows for the configuration of object types, the declaration of states and actions, etc., and for the generation of SML code (such as actions and rules) through the use of wizards. The hierarchical tree of components can also be configured graphically. The PVSSII archiving mechanism can in addition be used to store state transitions, making it possible to retrieve the time evolution and long-term statistics of object state changes.
Figure 5. Integration of SMI++ with PVSSII.

The SMI++/PVSSII synergy within the JCOP Framework also provides for a clear definition of interfaces and task separation: the PVSSII implementation of Device Units in terms of scripts contains only basic actions, like RESET or CONFIGURE, and it has no intelligence concerning when or in which sequence they should be executed. On the other hand, the logic behavior, such as the sequencing of actions and the responses to the behavior of other objects in the system, is described in SML and implemented by the SMI objects. The advantage is that if it is necessary to replace some hardware, then only the PVSSII part is affected. If instead the logic behavior needs to change, then only the SML description changes.

Partitioning
In order to allow for tests, calibrations, etc. of part of the system (a sub-system), the control software must allow individual sub-systems to be monitored and/or controlled independently of, and concurrently with, the other sub-systems. This capability is called partitioning. The software is designed so that each Control Unit knows how to partition "out" or "in" its children. Excluding a child from the hierarchy means that its state is ignored by the parent in its decision process and the parent will not send commands to the excluded child. It also means that the child becomes free and can be used by another parent. The partitioning mechanism fits perfectly with the SMI++ concept of object sets and was therefore implemented using this feature. Each child of a Control Unit can be dynamically included in or excluded from the relevant set according to the partitioning mode. This functionality was encapsulated in an SMI++ Partitioning Object which is automatically inherited when a Control Unit is created.

Distribution
Both PVSSII and SMI++ allow for the implementation of large distributed and decentralized systems.
There is no rule for the mapping of Control Units and Device Units onto machines, i.e. there can be one or more of these units per machine, depending on their complexity or other factors. The Framework allows users to describe their system and run it transparently across many computers. Both tools can run in mixed environments comprising Linux and Windows machines; the user can therefore choose the best platform for each specific task.

Error handling
Error handling is the capability of the control system to detect errors and to attempt recovery from the resulting error situations. At the same time, it should also inform and guide the operators and record/archive information about the problems, in order to maintain statistics and to allow further analysis offline. Since SMI++ is a rule-based system, errors can be handled and corrective action taken using the same mechanism used for standard system behavior: there is no basic difference between implementing rules like "when system configured, start run" and "when system in error, reset it". The recovery from known error conditions can be automated using the hierarchical control tools, based on the sub-systems' states. In conjunction with the error recovery provided by SMI++, full use is made of the powerful alarm-handling tools provided by PVSSII. These allow equipment to generate alarms, and alarms to be archived, filtered, summarized and displayed to users. They also allow users to mask and/or acknowledge alarms.

Automation
By including SMI++, the Framework tools can provide for complete automation of a large control system. SMI++'s mechanism for the automation of procedures and for automated error recovery is well suited to large systems. The recovery mechanism is bottom-up, i.e. each object reacts in an event-driven, asynchronous fashion to changes of the states of its children. It is also distributed, i.e. each sub-system recovers its own errors and automates procedures for its own sub-tree. For large physics experiments this is an advantage, since each team knows best how to handle their equipment. This decentralized approach is inherently scalable, since there is no centralized entity examining all faults in the system, which could provoke a bottleneck. Furthermore, it allows for parallel recovery: for example, if there is a general power cut, each sub-system can start recovering in parallel with the others when the power comes back.
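The bottom-up, decentralized recovery scheme described above can be sketched as follows: each sub-system watches only its own children and reacts locally when one of them reports an error, so recovery proceeds in parallel across sub-systems with no central fault handler. All names are invented for illustration.

```python
# Sketch (invented names) of decentralized, event-driven recovery: each
# sub-system reacts only to state changes of its own children.

class Subsystem:
    def __init__(self, name):
        self.name = name
        self.children = {}        # child name -> last reported state
        self.log = []             # recovery actions taken locally

    def child_state(self, child, state):
        self.children[child] = state
        if state == "ERROR":
            # Local recovery rule, analogous to
            #   when ( any_in CHILDREN in_state ERROR ) do RESET
            # No centralized entity inspects faults system-wide.
            self.log.append(f"reset {child}")
            self.children[child] = "READY"


dcs = Subsystem("DCS")
daq = Subsystem("DAQ")

# After a power cut, each sub-system sees its own children in ERROR and
# recovers them independently of, and in parallel with, the others.
dcs.child_state("LV-1", "ERROR")
daq.child_state("FEE-1", "ERROR")

print(dcs.log, daq.log)    # ['reset LV-1'] ['reset FEE-1']
```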
The Framework tools allow building a completely automated hierarchy based on the states of the devices composing the experiment and on the states of external elements like the LHC accelerator. All four experiments are using the SMI++ component of the framework, but to different degrees. ATLAS and CMS use it for the monitoring and control of their Detector Control Systems (DCS). LHCb and ALICE, on the other hand, use it also for controlling the Data Acquisition System (DAQ) and for the automation of the complete experiment; this way they achieve a homogeneous Experiment Control System (ECS). The schematic view of the LHCb control hierarchy is shown in figure 6 as an example. The control system consists of around 2000 Control Units and around Device Units. It is distributed over around 150 PCs.

Figure 6. Schematic diagram of the control hierarchy of the LHCb experiment.
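The partitioning mechanism described earlier in this section (including or excluding children via object sets) can be sketched briefly. Class and method names are invented for illustration; in SMI++ this behaviour is encapsulated in the Partitioning Object inherited by each Control Unit.

```python
# Sketch (invented names) of partitioning with object sets: a parent only
# consults and commands the children currently included in its set, and an
# excluded child is free to be used by another parent.

class ControlUnit:
    def __init__(self, name):
        self.name = name
        self.included = set()     # the object set driving this parent

    def include(self, child):
        self.included.add(child)

    def exclude(self, child):
        # The parent ignores this child's state and sends it no commands.
        self.included.discard(child)

    def broadcast(self, command, send):
        # Analogous to: do <command> all_in <set>
        for child in sorted(self.included):
            send(child, command)


cu = ControlUnit("DCS")
cu.include("HV-1")
cu.include("HV-2")
cu.exclude("HV-2")                # partitioned out, e.g. for a calibration

sent = []
cu.broadcast("CONFIGURE", lambda child, cmd: sent.append((child, cmd)))
print(sent)                       # [('HV-1', 'CONFIGURE')]
```

Because exclusion simply removes the child from the set, a concurrently running second parent could include the freed child in its own set, which is the essence of concurrent partitioned operation.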
8. Conclusion
The SMI++ framework is a powerful tool which merges the concepts of object modeling and finite state machines. It allows the implementation of a homogeneous, integrated control system by providing a standardized approach to the control of all types of devices, from hardware equipment to software tasks. From a logical point of view, all devices are mapped onto and integrated into higher-level control entities which control and monitor them. These entities are then responsible for the correlation of events and for the overall coordination, automation and operation of the full system in its different running modes. The system is typically distributed over a set of heterogeneous platforms.

A control system designed this way is inherently scalable. As the hierarchical tree of the system grows, designers will naturally distribute it over more machines. Because there is no centralized entity, there should be no bottlenecks.

The SMI++ framework has become a time-tested, robust tool through its use by major particle experiments: the DELPHI experiment at CERN and the BaBar experiment at SLAC in the past, and all four LHC experiments at CERN, which have used it for the design of either full or partial experiment control. The reader interested in the technical details of the SMI++ framework can find these on our Web pages [15].

Acknowledgements
We would like to thank some of our colleagues at CERN for fruitful discussions, in particular Ph. Charpentier, M. Jonker, P. Vande Vyvre and A. Vascotto. The ever-growing SMI user community also deserves thanks for their valuable feedback.

References
[1] J. Barlow, B. Franek, M. Jonker, T. Nguyen, P. Vande Vyvre and A. Vascotto, "Run Control in MODEL: The State Manager," IEEE Trans. Nucl. Sci., vol. 36
[2] DELPHI Collaboration, "The DELPHI Detector at LEP," Nucl. Instrum. Methods, A303
[3] Grady Booch, Software Engineering with Ada, California, The Benjamin/Cummings Publishing Company, Inc.
[4] BABAR Collaboration, "The BABAR detector," Nucl. Instrum. Methods, A479, 2002
[5] The Stanford Linear Accelerator Center, Menlo Park, California, USA
[6] ALICE Collaboration, ALICE technical proposal for A Large Ion Collider Experiment at the CERN LHC, CERN/LHCC/95-71
[7] ATLAS Collaboration, The ATLAS Detector at LHC, JINST 3 (2008) S08003
[8] CMS Collaboration, CMS technical proposal for Compact Muon Solenoid at the CERN LHC, CERN/LHCC/94-38
[9] LHCb Collaboration, The LHCb Detector at LHC, JINST 3 (2008) S08005
[10] Rational Rose/C++, Rational Software Corporation, 2800 San Tomas Expressway, Santa Clara, CA, USA
[11] Gaspar C, Dönszelmann M and Charpentier Ph 2001 DIM, a portable, light weight package for information publishing, data transfer and inter-process communication, Computer Physics Communications
[12] Myers D R et al 1999 The LHC experiments' Joint Controls Project, JCOP, Proc. Int. Conf. on Accelerator and Large Experimental Physics Control Systems (Trieste, Italy)
[13] Schmeling S et al 2003 Controls Framework for LHC experiments, Proc. 13th IEEE-NPSS Real Time Conference (Montreal, Canada)
[14] PVSS-II [Online]. Available:
[15] SMI++ - State Management Interface [Online]. Available:
Overview of DCS Technologies Renaud Barillère - CERN IT-CO DCS components Extensions SCADA Supervision OPC or DIM Ethernet PLC FE (UNICOS) Fieldbus Custom FE Process management Fieldbus protocols Field
More informationThe CMS data quality monitoring software: experience and future prospects
The CMS data quality monitoring software: experience and future prospects Federico De Guio on behalf of the CMS Collaboration CERN, Geneva, Switzerland E-mail: federico.de.guio@cern.ch Abstract. The Data
More informationCMS - HLT Configuration Management System
Journal of Physics: Conference Series PAPER OPEN ACCESS CMS - HLT Configuration Management System To cite this article: Vincenzo Daponte and Andrea Bocci 2015 J. Phys.: Conf. Ser. 664 082008 View the article
More informationThe Compact Muon Solenoid Experiment. Conference Report. Mailing address: CMS CERN, CH-1211 GENEVA 23, Switzerland
Available on CMS information server CMS CR -2017/188 The Compact Muon Solenoid Experiment Conference Report Mailing address: CMS CERN, CH-1211 GENEVA 23, Switzerland 29 June 2017 (v2, 07 July 2017) Common
More informationDataflow Monitoring in LHCb
Journal of Physics: Conference Series Dataflow Monitoring in LHCb To cite this article: D Svantesson et al 2011 J. Phys.: Conf. Ser. 331 022036 View the article online for updates and enhancements. Related
More informationVERY HIGH VOLTAGE CONTROL FOR ALICE TPC
VERY HIGH VOLTAGE CONTROL FOR ALICE TPC André Augustinus 1, Joachim Baechler 1, Marco Boccioli 1, Uli Frankenfeld 2, Lennart Jirdén 1 1 CERN, Geneva, Switzerland; 2 GSI, Darmstadt, Germany ABSTRACT The
More informationCONTROL AND MONITORING OF ON-LINE TRIGGER ALGORITHMS USING GAUCHO
10th ICALEPCS Int. Conf. on Accelerator & Large Expt. Physics Control Systems. Geneva, 10-14 Oct 2005, WE3A.5-6O (2005) CONTROL AND MONITORING OF ON-LINE TRIGGER ALGORITHMS USING GAUCHO E. van Herwijnen
More informationFront-End Electronics Configuration System for CMS. Philippe Gras CERN - University of Karlsruhe
Front-End Electronics Configuration System for CMS Philippe Gras CERN - University of Karlsruhe Outline Introduction Tracker electronics parameters Tracker beam test DCS overview Electronics configuration
More informationThe TDAQ Analytics Dashboard: a real-time web application for the ATLAS TDAQ control infrastructure
The TDAQ Analytics Dashboard: a real-time web application for the ATLAS TDAQ control infrastructure Giovanna Lehmann Miotto, Luca Magnoni, John Erik Sloper European Laboratory for Particle Physics (CERN),
More informationATLAS software configuration and build tool optimisation
Journal of Physics: Conference Series OPEN ACCESS ATLAS software configuration and build tool optimisation To cite this article: Grigory Rybkin and the Atlas Collaboration 2014 J. Phys.: Conf. Ser. 513
More informationTO KEEP development time of the control systems for LHC
362 IEEE TRANSACTIONS ON NUCLEAR SCIENCE, VOL. 55, NO. 1, FEBRUARY 2008 Generic and Layered Framework Components for the Control of a Large Scale Data Acquisition System Stefan Koestner, Member, IEEE,
More informationThe Database Driven ATLAS Trigger Configuration System
Journal of Physics: Conference Series PAPER OPEN ACCESS The Database Driven ATLAS Trigger Configuration System To cite this article: Carlos Chavez et al 2015 J. Phys.: Conf. Ser. 664 082030 View the article
More informationDDC Experience at Tilecal
Workshop, Nikhef,Oct 2001 Fernando Varela R. DDC Experience at Tilecal Outline Tilecal testbeam setup DDC Work Findings Improvement suggestions Conclusions This talk report on the first usage of the DDC
More informationThe ALICE Glance Shift Accounting Management System (SAMS)
Journal of Physics: Conference Series PAPER OPEN ACCESS The ALICE Glance Shift Accounting Management System (SAMS) To cite this article: H. Martins Silva et al 2015 J. Phys.: Conf. Ser. 664 052037 View
More informationLHCb Online System BEAUTY-2002
BEAUTY-2002 8th International Conference on B-Physics at Hadron machines June 17-21 2002 antiago de Compostela, Galicia (pain ) Niko Neufeld, CERN EP (for the LHCb Online Team) 1 Mission The LHCb Online
More informationAccelerator Control System
Chapter 13 Accelerator Control System 13.1 System Requirements The KEKB accelerator complex has more than 50,000 control points along the 3 km circumference of the two rings, the LER and HER. Some control
More informationArchitectural Blueprint
IMPORTANT NOTICE TO STUDENTS These slides are NOT to be used as a replacement for student notes. These slides are sometimes vague and incomplete on purpose to spark a class discussion Architectural Blueprint
More informationThe CERN Detector Safety System for the LHC Experiments
The CERN Detector Safety System for the LHC Experiments S. Lüders CERN EP/SFT, 1211 Geneva 23, Switzerland; Stefan.Lueders@cern.ch R.B. Flockhart, G. Morpurgo, S.M. Schmeling CERN IT/CO, 1211 Geneva 23,
More informationA generic firmware core to drive the Front-End GBT-SCAs for the LHCb upgrade
Journal of Instrumentation OPEN ACCESS A generic firmware core to drive the Front-End GBT-SCAs for the LHCb upgrade Recent citations - The Versatile Link Demo Board (VLDB) R. Martín Lesma et al To cite
More informationTHE ATLAS experiment comprises a significant number
386 IEEE TRANSACTIONS ON NUCLEAR SCIENCE, VOL. 55, NO. 1, FEBRUARY 2008 Access Control Design and Implementations in the ATLAS Experiment Marius Constantin Leahu, Member, IEEE, Marc Dobson, and Giuseppe
More informationL1 and Subsequent Triggers
April 8, 2003 L1 and Subsequent Triggers Abstract During the last year the scope of the L1 trigger has changed rather drastically compared to the TP. This note aims at summarising the changes, both in
More informationReliability Engineering Analysis of ATLAS Data Reprocessing Campaigns
Journal of Physics: Conference Series OPEN ACCESS Reliability Engineering Analysis of ATLAS Data Reprocessing Campaigns To cite this article: A Vaniachine et al 2014 J. Phys.: Conf. Ser. 513 032101 View
More informationDetector controls meets JEE on the web
Detector controls meets JEE on the web ICALEPCS 2015 Frank Glege Outline Part 1: Web based Remote access to controls systems Part 2: JEE for controls 20.10.2015 Frank Glege 2 About CERN 20.10.2015 Frank
More informationOnline remote monitoring facilities for the ATLAS experiment
Journal of Physics: Conference Series Online remote monitoring facilities for the ATLAS experiment To cite this article: S Kolos et al 2011 J. Phys.: Conf. Ser. 331 022013 View the article online for updates
More informationEvolution of Database Replication Technologies for WLCG
Journal of Physics: Conference Series PAPER OPEN ACCESS Evolution of Database Replication Technologies for WLCG To cite this article: Zbigniew Baranowski et al 2015 J. Phys.: Conf. Ser. 664 042032 View
More informationDesign for High Voltage Control and Monitoring System for CMS Endcap Electromagnetic Calorimeter
Industrial Placement Project Report Design for High Voltage Control and Monitoring System for CMS Endcap Electromagnetic Calorimeter Author: Funai Xing Supervisors: Dr. Helen Heath (Bristol) Dr. Claire
More informationMiniDAQ1 A COMPACT DATA ACQUISITION SYSTEM FOR GBT READOUT OVER 10G ETHERNET 22/05/2017 TIPP PAOLO DURANTE - MINIDAQ1 1
MiniDAQ1 A COMPACT DATA ACQUISITION SYSTEM FOR GBT READOUT OVER 10G ETHERNET 22/05/2017 TIPP 2017 - PAOLO DURANTE - MINIDAQ1 1 Overview LHCb upgrade Optical frontend readout Slow control implementation
More informationWLCG Transfers Dashboard: a Unified Monitoring Tool for Heterogeneous Data Transfers.
WLCG Transfers Dashboard: a Unified Monitoring Tool for Heterogeneous Data Transfers. J Andreeva 1, A Beche 1, S Belov 2, I Kadochnikov 2, P Saiz 1 and D Tuckett 1 1 CERN (European Organization for Nuclear
More informationThe Detector Control System for the HMPID in the ALICE Experiment at LHC.
The Detector Control System for the HMPID in the LICE Eperiment at LHC. De Cataldo, G. for the LICE collaboration INFN sez. Bari, Via G. mendola 173, 70126 Bari, Italy giacinto.decataldo@ba.infn.it bstract
More informationTHE ALICE experiment described in [1] will investigate
980 IEEE TRANSACTIONS ON NUCLEAR SCIENCE, VOL. 53, NO. 3, JUNE 2006 The Control System for the Front-End Electronics of the ALICE Time Projection Chamber M. Richter, J. Alme, T. Alt, S. Bablok, R. Campagnolo,
More informationData Quality Monitoring Display for ATLAS experiment
Data Quality Monitoring Display for ATLAS experiment Y Ilchenko 1, C Cuenca Almenar 2, A Corso-Radu 2, H Hadavand 1, S Kolos 2, K Slagle 2, A Taffard 2 1 Southern Methodist University, Dept. of Physics,
More informationThe High Performance Archiver for the LHC Experiments
The High Performance Archiver for the LHC Experiments Manuel Gonzalez Berges CERN, Geneva (Switzerland) CERN - IT Department CH-1211 Genève 23 Switzerland www.cern.ch/it Outline Context Archiving in PVSS
More informationAndrea Sciabà CERN, Switzerland
Frascati Physics Series Vol. VVVVVV (xxxx), pp. 000-000 XX Conference Location, Date-start - Date-end, Year THE LHC COMPUTING GRID Andrea Sciabà CERN, Switzerland Abstract The LHC experiments will start
More informationWorldwide Production Distributed Data Management at the LHC. Brian Bockelman MSST 2010, 4 May 2010
Worldwide Production Distributed Data Management at the LHC Brian Bockelman MSST 2010, 4 May 2010 At the LHC http://op-webtools.web.cern.ch/opwebtools/vistar/vistars.php?usr=lhc1 Gratuitous detector pictures:
More informationEvolution of Database Replication Technologies for WLCG
Evolution of Database Replication Technologies for WLCG Zbigniew Baranowski, Lorena Lobato Pardavila, Marcin Blaszczyk, Gancho Dimitrov, Luca Canali European Organisation for Nuclear Research (CERN), CH-1211
More informationContemporary Design. Traditional Hardware Design. Traditional Hardware Design. HDL Based Hardware Design User Inputs. Requirements.
Contemporary Design We have been talking about design process Let s now take next steps into examining in some detail Increasing complexities of contemporary systems Demand the use of increasingly powerful
More informationLHCb Distributed Conditions Database
LHCb Distributed Conditions Database Marco Clemencic E-mail: marco.clemencic@cern.ch Abstract. The LHCb Conditions Database project provides the necessary tools to handle non-event time-varying data. The
More informationDISTRIBUTED MEMORY IN A HETEROGENEOUS NETWORK, AS USED IN THE CERN. PS-COMPLEX TIMING SYSTEM
1 DISTRIBUTED MEMORY IN A HETEROGENEOUS NETWORK, AS USED IN THE CERN. PS-COMPLEX TIMING SYSTEM ABSTRACT V. Kovaltsov 1, J. Lewis PS Division, CERN, CH-1211 Geneva 23, Switzerland The Distributed Table
More informationUpdate of the BESIII Event Display System
Journal of Physics: Conference Series PAPER OPEN ACCESS Update of the BESIII Event Display System To cite this article: Shuhui Huang and Zhengyun You 2018 J. Phys.: Conf. Ser. 1085 042027 View the article
More informationVerification and Diagnostics Framework in ATLAS Trigger/DAQ
Verification and Diagnostics Framework in ATLAS Trigger/DAQ M.Barczyk, D.Burckhart-Chromek, M.Caprini 1, J.Da Silva Conceicao, M.Dobson, J.Flammer, R.Jones, A.Kazarov 2,3, S.Kolos 2, D.Liko, L.Lucio, L.Mapelli,
More informationDistributed Control System Overview
Abstract BLU-ICE is a graphical Interface to the Distributed Control System for crystallographic data collection at synchrotron light sources. Designed for highly heterogeneous networked computing environments,
More informationThe AAL project: automated monitoring and intelligent analysis for the ATLAS data taking infrastructure
Journal of Physics: Conference Series The AAL project: automated monitoring and intelligent analysis for the ATLAS data taking infrastructure To cite this article: A Kazarov et al 2012 J. Phys.: Conf.
More informationReport. Middleware Proxy: A Request-Driven Messaging Broker For High Volume Data Distribution
CERN-ACC-2013-0237 Wojciech.Sliwinski@cern.ch Report Middleware Proxy: A Request-Driven Messaging Broker For High Volume Data Distribution W. Sliwinski, I. Yastrebov, A. Dworak CERN, Geneva, Switzerland
More informationDIRAC pilot framework and the DIRAC Workload Management System
Journal of Physics: Conference Series DIRAC pilot framework and the DIRAC Workload Management System To cite this article: Adrian Casajus et al 2010 J. Phys.: Conf. Ser. 219 062049 View the article online
More informationThe Compact Muon Solenoid Experiment. Conference Report. Mailing address: CMS CERN, CH-1211 GENEVA 23, Switzerland
Available on CMS information server CMS CR -2015/215 The Compact Muon Solenoid Experiment Conference Report Mailing address: CMS CERN, CH-1211 GENEVA 23, Switzerland 05 October 2015 (v2, 08 October 2015)
More informationLarge scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS
Journal of Physics: Conference Series Large scale commissioning and operational experience with tier-2 to tier-2 data transfer links in CMS To cite this article: J Letts and N Magini 2011 J. Phys.: Conf.
More informationIntelligence Elements and Performance of the FPGA-based DAQ of the COMPASS Experiment
Intelligence Elements and Performance of the FPGA-based DAQ of the COMPASS Experiment Stefan Huber, Igor Konorov, Dmytro Levit, Technische Universitaet Muenchen (DE) E-mail: dominik.steffen@cern.ch Martin
More informationIntroduction to Geant4
Introduction to Geant4 Release 10.4 Geant4 Collaboration Rev1.0: Dec 8th, 2017 CONTENTS: 1 Geant4 Scope of Application 3 2 History of Geant4 5 3 Overview of Geant4 Functionality 7 4 Geant4 User Support
More informationOverview. About CERN 2 / 11
Overview CERN wanted to upgrade the data monitoring system of one of its Large Hadron Collider experiments called ALICE (A La rge Ion Collider Experiment) to ensure the experiment s high efficiency. They
More informationCouchDB-based system for data management in a Grid environment Implementation and Experience
CouchDB-based system for data management in a Grid environment Implementation and Experience Hassen Riahi IT/SDC, CERN Outline Context Problematic and strategy System architecture Integration and deployment
More informationCAS 703 Software Design
Dr. Ridha Khedri Department of Computing and Software, McMaster University Canada L8S 4L7, Hamilton, Ontario Acknowledgments: Material based on Software by Tao et al. (Chapters 9 and 10) (SOA) 1 Interaction
More informationThe ATLAS Trigger Simulation with Legacy Software
The ATLAS Trigger Simulation with Legacy Software Carin Bernius SLAC National Accelerator Laboratory, Menlo Park, California E-mail: Catrin.Bernius@cern.ch Gorm Galster The Niels Bohr Institute, University
More informationDQM4HEP - A Generic Online Monitor for Particle Physics Experiments
DQM4HEP - A Generic Online Monitor for Particle Physics Experiments Carlos Chavez-Barajas, and Fabrizio Salvatore University of Sussex (GB) E-mail: carlos.chavez.barajas@cern.ch, tom.coates@cern.ch, p.f.salvatore@sussex.ac.uk
More informationRun Control and Monitor System for the CMS Experiment
Run Control and Monitor System for the CMS Experiment V. Brigljevic, G. Bruno, E. Cano, S. Cittolin, A. Csilling, D. Gigi, F. Glege, R. Gomez-Reino, M. Gulmini 1,*, J. Gutleber, C. Jacobs, M. Kozlovszky,
More informationSOFTWARE SCENARIO FOR CONTROL SYSTEM OF INDUS-2
10th ICALEPCS Int. Conf. on Accelerator & Large Expt. Physics Control Systems. Geneva, 10-14 Oct 2005, WE3B.3-70 (2005) SOFTWARE SCENARIO FOR CONTROL SYSTEM OF INDUS-2 ABSTRACT R. K. Agrawal *, Amit Chauhan,
More informationStorage Resource Sharing with CASTOR.
Storage Resource Sharing with CASTOR Olof Barring, Benjamin Couturier, Jean-Damien Durand, Emil Knezo, Sebastien Ponce (CERN) Vitali Motyakov (IHEP) ben.couturier@cern.ch 16/4/2004 Storage Resource Sharing
More informationThe CMS High Level Trigger System: Experience and Future Development
Journal of Physics: Conference Series The CMS High Level Trigger System: Experience and Future Development To cite this article: G Bauer et al 2012 J. Phys.: Conf. Ser. 396 012008 View the article online
More informationSDC Design patterns GoF
SDC Design patterns GoF Design Patterns The design pattern concept can be viewed as an abstraction of imitating useful parts of other software products. The design pattern is a description of communicating
More informationReal-time Protection for Microsoft Hyper-V
Real-time Protection for Microsoft Hyper-V Introduction Computer virtualization has come a long way in a very short time, triggered primarily by the rapid rate of customer adoption. Moving resources to
More informationManaging Asynchronous Data in ATLAS's Concurrent Framework. Lawrence Berkeley National Laboratory, 1 Cyclotron Rd, Berkeley CA 94720, USA 3
Managing Asynchronous Data in ATLAS's Concurrent Framework C. Leggett 1 2, J. Baines 3, T. Bold 4, P. Calafiura 2, J. Cranshaw 5, A. Dotti 6, S. Farrell 2, P. van Gemmeren 5, D. Malon 5, G. Stewart 7,
More informationUnderstanding and Coping with Hardware and Software Failures in a Very Large Trigger Farm
CHEP, LaJolla CA, March 2003 Understanding and Coping with Hardware and Software Failures in a Very Large Trigger Farm Jim Kowalkowski Fermi National Accelerator Laboratory, Batavia, IL, 6050 When thousands
More informationDistributed Computing Environment (DCE)
Distributed Computing Environment (DCE) Distributed Computing means computing that involves the cooperation of two or more machines communicating over a network as depicted in Fig-1. The machines participating
More informationThe ATLAS Conditions Database Model for the Muon Spectrometer
The ATLAS Conditions Database Model for the Muon Spectrometer Monica Verducci 1 INFN Sezione di Roma P.le Aldo Moro 5,00185 Rome, Italy E-mail: monica.verducci@cern.ch on behalf of the ATLAS Muon Collaboration
More informationCSCI Object Oriented Design: Frameworks and Design Patterns George Blankenship. Frameworks and Design George Blankenship 1
CSCI 6234 Object Oriented Design: Frameworks and Design Patterns George Blankenship Frameworks and Design George Blankenship 1 Background A class is a mechanisms for encapsulation, it embodies a certain
More informationINSPECTOR, A ZERO CODE IDE FOR CONTROL SYSTEMS USER INTERFACE DEVELOPMENT
INSPECTOR, A ZERO CODE IDE FOR CONTROL SYSTEMS USER INTERFACE DEVELOPMENT V. Costa, B. Lefort CERN, European Organization for Nuclear Research, Geneva, Switzerland Abstract Developing operational User
More informationA Prototype of the CMS Object Oriented Reconstruction and Analysis Framework for the Beam Test Data
Prototype of the CMS Object Oriented Reconstruction and nalysis Framework for the Beam Test Data CMS Collaboration presented by Lucia Silvestris CERN, Geneve, Suisse and INFN, Bari, Italy bstract. CMS
More informationINTRODUCTION TO THE ANAPHE/LHC++ SOFTWARE SUITE
INTRODUCTION TO THE ANAPHE/LHC++ SOFTWARE SUITE Andreas Pfeiffer CERN, Geneva, Switzerland Abstract The Anaphe/LHC++ project is an ongoing effort to provide an Object-Oriented software environment for
More informationCMS users data management service integration and first experiences with its NoSQL data storage
Journal of Physics: Conference Series OPEN ACCESS CMS users data management service integration and first experiences with its NoSQL data storage To cite this article: H Riahi et al 2014 J. Phys.: Conf.
More informationPoS(High-pT physics09)036
Triggering on Jets and D 0 in HLT at ALICE 1 University of Bergen Allegaten 55, 5007 Bergen, Norway E-mail: st05886@alf.uib.no The High Level Trigger (HLT) of the ALICE experiment is designed to perform
More informationEMC VPLEX Geo with Quantum StorNext
White Paper Application Enabled Collaboration Abstract The EMC VPLEX Geo storage federation solution, together with Quantum StorNext file system, enables a global clustered File System solution where remote
More informationSummary of the LHC Computing Review
Summary of the LHC Computing Review http://lhc-computing-review-public.web.cern.ch John Harvey CERN/EP May 10 th, 2001 LHCb Collaboration Meeting The Scale Data taking rate : 50,100, 200 Hz (ALICE, ATLAS-CMS,
More informationEuropean Component Oriented Architecture (ECOA ) Collaboration Programme: Architecture Specification Part 2: Definitions
European Component Oriented Architecture (ECOA ) Collaboration Programme: Part 2: Definitions BAE Ref No: IAWG-ECOA-TR-012 Dassault Ref No: DGT 144487-D Issue: 4 Prepared by BAE Systems (Operations) Limited
More informationPhronesis, a diagnosis and recovery tool for system administrators
Journal of Physics: Conference Series OPEN ACCESS Phronesis, a diagnosis and recovery tool for system administrators To cite this article: C Haen et al 2014 J. Phys.: Conf. Ser. 513 062021 View the article
More informationDistributed OS and Algorithms
Distributed OS and Algorithms Fundamental concepts OS definition in general: OS is a collection of software modules to an extended machine for the users viewpoint, and it is a resource manager from the
More informationELFms industrialisation plans
ELFms industrialisation plans CERN openlab workshop 13 June 2005 German Cancio CERN IT/FIO http://cern.ch/elfms ELFms industrialisation plans, 13/6/05 Outline Background What is ELFms Collaboration with
More informationDISTRIBUTED SYSTEMS Principles and Paradigms Second Edition ANDREW S. TANENBAUM MAARTEN VAN STEEN. Chapter 1. Introduction
DISTRIBUTED SYSTEMS Principles and Paradigms Second Edition ANDREW S. TANENBAUM MAARTEN VAN STEEN Chapter 1 Introduction Modified by: Dr. Ramzi Saifan Definition of a Distributed System (1) A distributed
More informationPerformance quality monitoring system for the Daya Bay reactor neutrino experiment
Journal of Physics: Conference Series OPEN ACCESS Performance quality monitoring system for the Daya Bay reactor neutrino experiment To cite this article: Y B Liu and the Daya Bay collaboration 2014 J.
More informationISTITUTO NAZIONALE DI FISICA NUCLEARE
ISTITUTO NAZIONALE DI FISICA NUCLEARE Sezione di Perugia INFN/TC-05/10 July 4, 2005 DESIGN, IMPLEMENTATION AND CONFIGURATION OF A GRID SITE WITH A PRIVATE NETWORK ARCHITECTURE Leonello Servoli 1,2!, Mirko
More information