Detector Control Systems @ LHC
Matthias Richter, Department of Physics, University of Oslo
IRTG Lecture week Autumn 2012, Oct 18 2012
M. Richter (UiO) DCS @ LHC Oct 09 2012 1 / 39
Detectors in High Energy (Nuclear) Physics
Task: measure trajectories and properties of particles
- Particles are liberated from a collision
- Energy deposited in a medium allows momentum and particle type to be determined
Detection Techniques
- Gas detectors
- Calorimeters
- Silicon detectors
- Time-of-flight (TOF)
... just some examples
Early Detector Control
- Rutherford experiment, performed 1909 by Geiger and Marsden
- Geiger counter
Today's experiments
A complex system, like a factory
Compact Muon Solenoid (CMS):
- Dimensions: 21 m long, 15 m wide and 15 m high
- Magnetic field: 4 Tesla
- 12,000 tonnes of iron
- O(10^7) channels to be controlled and read out
The Large Hadron Collider (LHC)
Collider machine located at CERN, Geneva
- PbPb collisions: sqrt(s_NN) = 5.5 TeV, L = 10^27 cm^-2 s^-1
- pp collisions: sqrt(s) = 14 TeV, L = 10^34 cm^-2 s^-1 (at the ALICE IP: L = 5 x 10^30 cm^-2 s^-1)
LHC Dimensions
Tunnel of 26.6 km circumference
LHC Experiments
ALICE, ATLAS, CMS, LHCb
Challenges in today's experiments
Example: CMS Silicon Strip Tracker
- 9,316,352 readout channels
- 15,148 modules
- 24,244 silicon sensors
Just one subsystem of CMS!
- How do the control systems scale?
- How can the collaborative effort be coordinated?
- What are the requirements for support systems?
- What is the operational environment for sensors and devices?
Detector Control System Basics
Tasks of a Detector Control System
Continuous, coherent and safe operation of the detector; a homogeneous interface to all subdetectors and to the technical infrastructure
- Ramping the detector up and down
- Configuration of electronics
- LV power distribution and steering
- HV power distribution and steering
- Gas systems
- Synchronization
- Monitoring
- Detector safety
In contrast to the detector online systems (data acquisition, trigger), the DCS is in operation at all times.
A control model
Control levels:
- Supervisory Layer - Back-end (Controls Layer)
- Field Layer - Front-end
Functional modules and devices are categorized into different DCS layers; the common part can be separated from device-specific implementations. Operation is organized in a process hierarchy.
Finite State Machines
A mathematical model of computation used to design a sequence of actions and processes. The model consists of states, and of transitions triggered by actions or conditions.
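The state/transition model above can be sketched in a few lines of code. This is a minimal illustration only; the state and action names are hypothetical and do not come from any real LHC DCS.

```python
# Minimal finite-state-machine sketch: a table of allowed transitions,
# keyed by (current state, action). Any action not in the table for the
# current state is rejected - the same guard idea a DCS relies on.
class FSM:
    def __init__(self, initial, transitions):
        self.state = initial
        self.transitions = transitions  # {(state, action): next_state}

    def handle(self, action):
        key = (self.state, action)
        if key not in self.transitions:
            raise ValueError(
                f"action '{action}' not allowed in state '{self.state}'")
        self.state = self.transitions[key]
        return self.state

# Hypothetical high-voltage channel with four states
hv = FSM("OFF", {
    ("OFF", "go_standby"): "STANDBY",
    ("STANDBY", "ramp_up"): "RAMPING",
    ("RAMPING", "ready"): "ON",
    ("ON", "ramp_down"): "OFF",
})
hv.handle("go_standby")   # OFF -> STANDBY
hv.handle("ramp_up")      # STANDBY -> RAMPING
```

Keeping the transitions in a data table rather than in scattered if-statements is what makes such machines easy to inspect, visualize and compose into hierarchies.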
Interfaces
Interfaces ensure connectivity between modules of the system:
- Communication protocols
- Module states
- Bus systems
- Hardware connectors
Top Level Control
SCADA - Supervisory Control and Data Acquisition
- User interface
- Overall control
- Visualization of the acquired data
- Archiving
Protection Systems: Interlocks
- The concept of interlocks was developed for railway systems
- An interlock is triggered on a condition and forbids certain actions, e.g. switching on or ramping up
- An action can only be executed if all interlocks are inactive
Design the interlocking system properly!
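The rule "an action can only be executed if all interlocks are inactive" is simple enough to state as code. A sketch, with entirely illustrative interlock conditions and action names:

```python
# Interlock check: an action is permitted only if no active interlock
# forbids it. Each interlock is (active?, set of forbidden actions).
def action_permitted(action, interlocks):
    return all(not active or action not in forbidden
               for active, forbidden in interlocks)

interlocks = [
    (True,  {"ramp_up", "switch_on"}),  # e.g. cooling failure, active
    (False, {"switch_on"}),             # e.g. door open, currently clear
]

action_permitted("ramp_up", interlocks)    # False: blocked by the first
action_permitted("ramp_down", interlocks)  # True: no active interlock forbids it
```

Note the fail-safe direction of the logic: the action side must prove that nothing forbids it, rather than the interlock side having to actively veto.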
Scalability of Detector Control Systems
The modularity and breakup of the system into functional units and state machines facilitate a distributed computing approach - a natural solution to the scalability issue.
- DCS runs on a network of many nodes
- Finite State Machines communicate via protocols
- Logic is distributed over all layers of the system, which allows fast reaction times in the Field Layer
- The data rate to the archiving database can be adjusted
Detector Control System @ LHC
Conditions
- Radiation: some of the equipment is installed close to the beam pipe and interaction point; radiation can cause failure and damage
- Magnetic field: equipment inside magnets must be specifically suited
- Inaccessibility: equipment is not accessible during LHC operation, for mechanical reasons and because of radiation
- Duration: detectors are operated for 10-20 years
- Collaborative development: many people and institutions involved
Joint Controls Project (JCOP)
Common controls and operation for the LHC experiments:
- JCOP Framework
- Detector Safety System (DSS)
- Gas Control System (GCS)
- Rack Monitoring and Control System (RCS)
The JCOP Framework is based on a commercial SCADA system.
Other Common LHC Control Systems
- Detector Safety System (DSS): primary goal is protection of the experiments' equipment; detects critical situations and triggers appropriate actions
- Gas Control System: detectors require different types of gas; equipment provided by contractors; joint gas control
- Rack Monitoring and Control: common infrastructure and control for rack installations
- Packaging and Installation: generalized deployment routines
Power Supply
Commercial devices; important manufacturers: CAEN, WIENER, ISEG, Schneider, Heinzinger
Communication entirely over OPC
Power Supply Example
Example: CMS Tracker power supply
- One PowerSupplyUnit (PSU) for a group of 6-15 modules: low voltage supply (2.5 V and 1.25 V), two high voltage (600 V) lines
- A PowerSupplyModule (PSM) combines two PSUs
- One crate hosts 9 PSMs
- 6 crates are attached to a branch controller
- A mainframe controls 16 branch controllers
- 4 mainframes with 1944 power groups and 356 control groups
The mainframe communicates with the Detector Control System via a standardized protocol, OPC.
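A quick sanity check on the hierarchy above: multiplying out the levels gives the maximum capacity if every slot were populated. The real system is sparser, since the slide quotes 1944 power groups across the 4 mainframes.

```python
# Back-of-the-envelope capacity of the CMS tracker power-supply tree,
# assuming (hypothetically) that every level is fully populated.
mainframes   = 4
branches     = 16   # branch controllers per mainframe
crates       = 6    # crates per branch controller
psms         = 9    # PSMs per crate
psus_per_psm = 2

max_psus = mainframes * branches * crates * psms * psus_per_psm
max_psus  # 6912 PSU slots at full population
```

The gap between 6912 slots and 1944 installed power groups is typical: the tree topology is sized for cabling and partitioning convenience, not filled to capacity.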
Hardware Solutions in Detector Control Systems
Device-specific software and hardware implementations are realized in the Field (Device) Layer. Embedded systems are widely used as they provide flexibility and remote control:
- Field-Programmable Gate Array (FPGA)
- ATLAS Embedded Local Monitor Board (ELMB)
- ALICE DCS board
- Microcontrollers
Communication Protocols
- OPC (OLE for Process Control): industry standard, mainly driven by the automation industry
- DIM (Distributed Information Management): developed and widely used at CERN
- MODBUS: serial communications protocol, industry standard for connecting industrial electronic devices
- CAN bus (Controller Area Network): communication protocol for microcontrollers
- DIP: CERN open-source package based on the message core of DIM
Communication Protocol DIM
- Open-source communication framework developed at CERN
- Provides network-transparent inter-process communication for distributed and heterogeneous environments
- Server-client architecture: servers publish services, clients can subscribe to any service in the system, and servers accept commands sent by clients
- Connection details are hidden from the user; the framework takes care of byte ordering and of handling complex data structures
- A dedicated DIM Name Server keeps track of all running clients, servers and their services
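The publish/subscribe pattern behind DIM can be modelled in a few lines. This is a toy in-process model of the concept, not the real DIM API; the class, function and service names ("TPC/HV_VOLTAGE") are all made up for illustration.

```python
# Toy model of the DIM pattern: servers register named services with a
# name server; clients resolve a service by name and subscribe to updates.
class NameServer:
    """Registry mapping service names to the server publishing them."""
    def __init__(self):
        self.services = {}

class Server:
    def __init__(self, name_server):
        self.ns = name_server
        self.values = {}       # service -> current value
        self.subscribers = {}  # service -> list of client callbacks

    def publish(self, service, initial):
        self.values[service] = initial
        self.subscribers[service] = []
        self.ns.services[service] = self  # register with the name server

    def update(self, service, value):
        self.values[service] = value
        for callback in self.subscribers[service]:
            callback(value)    # push the new value to every client

def subscribe(name_server, service, callback):
    server = name_server.services[service]  # name server resolves the service
    server.subscribers[service].append(callback)
    callback(server.values[service])        # deliver the current value at once

ns = NameServer()
srv = Server(ns)
srv.publish("TPC/HV_VOLTAGE", 0.0)

readings = []
subscribe(ns, "TPC/HV_VOLTAGE", readings.append)
srv.update("TPC/HV_VOLTAGE", 1500.0)
# readings now holds the initial value and the update
```

The key point the toy captures: clients never hold connection details, only a service name; the name server is the single place where location is resolved.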
Some specific implementations
ATLAS Detector Control System
Only two layers: Back-end and Front-end. A general-purpose I/O module serves as the DCS Front-end interface: the Embedded Local Monitor Board (ELMB).
Common software environment
LHCb Detector Control System
LHCb generalizes the online and control systems, all under the control of the Experiment Control System (ECS)
LHCb Software Layout
Mainly driven by one group, which develops everything from the DAQ system and Event Filter Farm to the communication protocol and detector control.
ALICE Detector Control System
DCS, DAQ, HLT and Trigger: four main systems under the control of the ECS, with four different groups developing and operating the systems.
ALICE DCS Finite State Machine
- Finite State Machines on different levels
- States are grouped together and mapped to upper layers
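"States grouped together and mapped to upper layers" usually means a parent node summarizes its children, typically by propagating the most severe child state upward. A sketch of that idea; the state names and severity ordering here are illustrative, not ALICE's actual state set.

```python
# Hypothetical severity ordering: higher number = more severe.
SEVERITY = {"READY": 0, "RAMPING": 1, "OFF": 2, "ERROR": 3}

def summarize(child_states):
    # The parent node takes the most severe state among its children,
    # so a single ERROR anywhere below surfaces at the top level.
    return max(child_states, key=SEVERITY.__getitem__)

summarize(["READY", "READY", "RAMPING"])  # parent shows "RAMPING"
summarize(["READY", "ERROR", "READY"])    # parent shows "ERROR"
```

Applied recursively over the hierarchy, this gives the operator one top-level state that is safe to act on without inspecting thousands of channels.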
ALICE Time Projection Chamber
- Gas volume: 92 m^3
- Gas mixture: Ne/CO2 (90-10)
- Maximum electron drift time (250 cm drift): 92 µs
- 72 readout chambers: MWPCs with cathode pad readout
- 557,568 readout pads and FEE channels
- 4,356 readout boards
- 216 Readout Control Units (RCU)
The Detector Control System includes:
- Gas system
- Detector chambers
- Front-end electronics
- Calibration system
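The two drift numbers on the slide fix the average electron drift velocity, a quantity the DCS must keep stable via gas composition and field monitoring. A quick check:

```python
# Average electron drift velocity from the slide's TPC figures:
# 250 cm maximum drift length covered in 92 µs.
drift_length_cm = 250.0
drift_time_us = 92.0

v_drift = drift_length_cm / drift_time_us  # roughly 2.7 cm/µs
```

Since the drift time maps position along the beam axis, even a percent-level drift-velocity change displaces reconstructed points by centimetres, which is why the gas system sits under DCS control.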
ALICE TPC Finite State Machine Model
DCS for ALICE TPC Front-end Electronics
DCS board in the Field Layer
A single-board computer:
- Altera EPXA1 FPGA with 32-bit ARM processor, 100k PLD
- 8 MB Flash RAM (radiation tolerant)
- 32 MB SDRAM
- Ethernet interface
- JTAG connector
- Analog-to-Digital Converter
The board combines a Linux operating system with a Programmable Logic Device (PLD); device drivers provide the abstraction layer between hardware and software. It
- provides a low-rate data readout path for debugging and monitoring purposes
- steers and controls the RCU motherboard independently of the data readout path
- updates the RCU firmware
- reads out the Front-end card and RCU monitoring module
Future Development
All systems have been in production since 2008; the focus is on operation and maintenance.
Upgrade plans:
- Two Long Shutdowns (LS) are scheduled in order to upgrade the LHC to the originally foreseen energy
- Detectors and online systems will be upgraded to handle higher interaction rates
- Some of the detectors will get a new front-end; this requires new development in the Field Layer of the DCS
- No general redesign of Supervisory Layer applications is planned
- Update of infrastructure
Answers
How do the control systems scale?
- A distributed and modular system
How can the collaborative effort be coordinated?
- Common projects, controls and abstraction
- Functional models, interfaces and specifications
- Layers of abstraction reduce the need for in-depth knowledge of underlying systems and functionality
What are the requirements for support systems?
- Common technical service division: cooling, power distribution, rack services, ventilation
What is the operational environment for sensors and devices?
- Radiation
- Magnetic field
- High temperature stability required
- Inaccessible most of the time, remote control required
Exciting field, combining...
- Firmware and software development
- Device drivers at the boundary between hardware and software
- Database functionality
- High-level abstraction and control
- Visualization and user interfaces
... it's there for you to get your hands on it!
A smaller project to start with: a Geiger counter with USB interface