Puppet Theatre: Server Management for Dummies
... the smarter way of information
In the area of systems for the automatic setup and operation of large-scale landscapes, one tool in particular has made a name for itself over the past years, with a growing community of users and finely tuned releases: Puppet. Luke Kanies began working on Puppet in 2005 and later founded the company PuppetLabs. Kanies had been deeply dissatisfied with the other configuration tools on the market, none of which offered sufficient options for the management of large-scale landscapes. The project started as freely available software, and an Open Source version is still available; it matches the Enterprise package in terms of configuration options, but is limited in terms of additional tools and options. Further information is available on the Puppet website.

Puppet is currently available in version 3.2.1 and runs on most Unix and Linux systems. Windows has been supported, albeit to a lesser extent, since version 2.7.

Within the scope of our series of articles on the topic of Continuous Integration & Delivery, Puppet and comparable tools (Chef, CFEngine, etc.) most certainly deserve a mention. In contrast to our other letters, this article will not drill down into technical detail, but will instead offer information about the system itself, its scope of implementation, and the philosophy on which it was built.

Behind the scenes

Puppet considers itself to be an Open Source, next-generation automation tool. Hiding behind this long name is a Ruby-based tool that manages the relevant machines via a client-server architecture and a declarative language for the formulation of system states and configurations of servers and sub-systems (so-called manifests). The Puppet Master is the editor via which all deliverable configurations are defined. The client, the Puppet Agent, must then be installed on all servers that are to be administered via the system.
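To give an impression of the declarative language, a minimal manifest might look like the following sketch (the package and service names are illustrative, not taken from the article):

```puppet
# site.pp -- a minimal manifest, stored on the Puppet Master.
# It declares a desired state: the ntp package is installed and
# its service is running.
package { 'ntp':
  ensure => installed,
}

service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],  # manage the service only after the package exists
}
```

Note that the manifest says nothing about *how* to reach this state; the agent works out the necessary steps for its own platform.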
In default mode, the client then contacts the master every 30 minutes, retrieves the latest definitions, and logs all activity on the server.
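The polling interval and the master to contact are set in the agent's configuration file; a sketch (the hostname is a placeholder):

```puppet
# /etc/puppet/puppet.conf -- agent-side settings (sketch)
[agent]
server      = puppetmaster.example.com  # hostname of the Puppet Master (placeholder)
runinterval = 1800                      # contact the master every 30 minutes (in seconds)
report      = true                      # send run reports back to the master
```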
How does Puppet work?

As mentioned above, Puppet uses a declarative, model-based system for process automation.

1. Definition: The desired configuration state is specified in Puppet's declarative language and stored on the Puppet Master.
2. Simulation: Puppet allows proposed changes to be simulated before installation, to ensure that the desired state will be achieved.
3. Implementation: The state stored on the Puppet Master is deployed automatically, and all configurations are amended accordingly.
4. Reporting: As a final step, all differences between the definition found on the server and the desired definition are logged.
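The simulation step can be exercised per resource via the noop metaparameter, as in this small sketch (file path and content are illustrative):

```puppet
# Simulation (step 2): with noop set, Puppet only reports what it
# *would* change for this resource, without touching the system.
file { '/etc/motd':
  content => "Managed by Puppet\n",
  noop    => true,  # dry run for this resource only
}
```

Alternatively, an entire agent run can be simulated by starting the agent in no-op mode.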
How is the concept implemented?

Let's have a closer look at the implementation of the above-mentioned philosophy using a practical example. A database, an application and a web server are the basic requirements for the deployment of a web application. The deployment itself can be carried out on a single computer. Typically, however, it will span several computers, one for each component. These computers, or nodes, and the components ("resources") installed on them are closely related to one another and are governed by a joint lifecycle, including adjustments, changes, and replacements.

The relationships between the individual components and the configurations they contain can now be mapped via Puppet and stored on the Master. The declarations are made in a so-called manifest. The core element of a declaration is the definition of resources, which can then be grouped logically into classes. A resource may describe an individual file or package, while a class will contain everything needed to describe a complete service or application (including the config files, daemons, and maintenance tasks). Smaller classes can subsequently be combined into larger ones to describe entire systems, such as a database or web application.

Manifests may contain logic addressing multiple nodes at once. Prior to the configuration of a node (i.e. the actual deployment), Puppet compiles the manifest into a catalogue, which is valid for ONE node only and contains no ambiguous logic. In the standard agent-master architecture, nodes request their catalogues from the Puppet Master, which compiles them for the relevant node. Where Puppet is operated standalone on a server/node, the catalogues are compiled locally and applied directly.
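The grouping of resources into classes and the mapping of classes onto nodes can be sketched as follows (hostnames and the class body are illustrative):

```puppet
# A class groups the resources that make up one service;
# node blocks map classes onto concrete machines.
class webserver {
  package { 'apache2':
    ensure => installed,
  }
  service { 'apache2':
    ensure  => running,
    require => Package['apache2'],
  }
}

node 'web01.example.com' {
  include webserver
}

node 'db01.example.com' {
  # a database class would be declared here
}
```

When the agent on web01.example.com checks in, the master compiles a catalogue containing only the resources relevant to that one node.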
The Extras - Reusable modules

Puppet assumes that each infrastructure can be broken down into modules, which can be reused as needed. The desired state of an infrastructure component, i.e. a pre-defined module written in Puppet's own configuration language, can either be published via Puppet Forge or simply be used internally. Such a module can then be used in physical, virtual, and even cloud-based environments. Modules can also be combined and orchestrated into larger configurations, which represent entire application stacks across nodes.

Puppet Forge: The in-house repository offers plenty of custom-developed modules from and for the community. Put simply, modules are reusable Puppet code fragments that can be shared with other users. If a user cannot find the right module on Puppet Forge, e.g. because a special requirement must be satisfied, then the user can write his own module.

[Chart: the most popular module tags on Puppet Forge at the time of writing: ubuntu (207 modules), debian (166), rhel (132), CentOS (110), centos (74), monitoring (70), networking (64), security (60), applications (55), redhat (53)]

[Diagram: application modules being combined and distributed across application servers]
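Using a Forge module could look like this sketch; the module name refers to an actual Forge module, but the parameter shown is an assumption and would have to be checked against that module's documentation:

```puppet
# After installing a module from Puppet Forge, e.g.:
#   puppet module install puppetlabs-apache
# its classes can be declared like locally written ones:
class { 'apache':
  default_vhost => false,  # parameter name per the module's docs (assumption)
}
```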
The Show - Running the specified status

Once the pre-defined configuration modules have been installed, the Puppet Agent on each node communicates periodically with the Puppet Master to guarantee consistency between master and agent. The agent of a node sends a status report ("facts") to the Puppet Master. The Puppet Master then compiles a catalogue (configuration map) from these facts, describing how the node should be configured, and sends it back to the agent. Once all changes from the catalogue have been implemented on the node (or the process has been simulated in no-op mode), the agent returns a comprehensive change report to the Puppet Master. The generated report is accessible via an API for further processing by other IT systems.

What do the critics say? Advantages and drawbacks of Puppet

Advantages:
- Puppet automatically rolls out the configuration updates available on the master to all clients.
- Puppet ensures slimline configuration files without much complexity. There is, for example, no need to explicitly enter the SSH keys and Kerberos keytab files on each computer. Even the distinction between 32 and 64 bit has fallen by the wayside.
- Puppet offers optional automatic monitoring of services, and restarts services after a crash.
- Puppet ensures that an installation only has to be started once. Everything else happens automatically.

Drawbacks:
- Manual client-side configuration adjustments are no longer possible, or only with much difficulty, as the Puppet Master will continuously overwrite any changes. In other words: the option to quickly test something no longer exists. But that could also be seen as an advantage.
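The facts mentioned at the top of this section are what make one manifest serve heterogeneous nodes: inside a manifest they appear as ordinary variables. A brief sketch (package names are illustrative):

```puppet
# Facts reported by the agent are available as variables in manifests.
# Here the osfamily fact selects the right package name per platform:
$http_package = $::osfamily ? {
  'Debian' => 'apache2',
  'RedHat' => 'httpd',
  default  => 'apache2',
}

package { $http_package:
  ensure => installed,
}
```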
Bottom line

Seeing as the number of virtualised or automatically installed systems is on the rise, Puppet could become a faithful companion for the administration of complex configuration and change management processes. In the context of our topic, Continuous Software Integration & Delivery in Enterprise environments, Puppet is a force to be reckoned with, representing a new way of centralised configuration management. That assessment is based mainly on the possibility of validating infrastructure and configuration adjustments in closed test systems, and of subsequently rolling them out to staging and production environments. This procedure not only fosters transparency between development and operations, but can, with the integration of virtualisation solutions in Enterprise environments, exploit the advantages of Puppet to their full extent.

Looking at Puppet as one of the products featured in our letter series, it does not offer the full scope of functionality we require in a test series, but it still has to be seen as one of the key players in infrastructure and configuration management, specifically because of its current position in the market. In closing, it must be said that Puppet should most definitely be part of the toolbox of every build and deployment officer, as it is a powerful tool for the management of large-scale, heterogeneous environments.

Imprint
Date: June 2013
Author: Andreas Fränkle
Contact: marketing@avato-consulting.com
www.avato-consulting.com
© 2013 avato consulting