MuleSoft Runtime EE 3.8.x Docker Image
Matteo Picciau
ms3-inc.com, January 2018
MOUNTAIN STATE SOFTWARE SOLUTIONS: WHO ARE WE? Founded in January 2010, Mountain State Software Solutions, LLC (MS³) is a global IT consulting firm based in the Washington, DC metropolitan area that specializes in engineering future-proof solutions for both commercial and federal customers. MS³ manages a global team and can operate on a 24-hour development life cycle, with over 50 engineers in North America covering a wide range of capabilities within the software development arena. MS³ also employs a resource team located in the Philippines to offer additional cost-effective solutions to our clients and partners. With extensive expertise in API integration and enablement, as well as Big Data, DevOps, on-prem and cloud solutions, and ongoing operations support, MS³ is a leading provider of enterprise-ready, mission-critical software solutions, giving globally distributed organizations the ability to meet today's most complex business challenges.
THE AUTHOR Matteo Picciau, a Principal Software Architect at Mountain State Software Solutions, has been part of the IT industry for over 20 years. In those years, Picciau has demonstrated proven knowledge of distributed computing and integration by offering valued consultancy services to many large organizations in Europe, Asia, and the United States. ABSTRACT This article, intended for software developers and architects who have a basic knowledge of the Docker container platform as well as of the MuleSoft Anypoint Platform (specifically the standalone runtime engine), provides guidelines on how to build a Runtime EE 3.8.x Docker image. A working sample, equipped with prescriptive instructions, is provided, along with various considerations about the choices made and about possible alternate approaches. For ease of reading, this article begins with a high-level overview of software containers, Docker, and the MuleSoft runtime itself.
Overview What is a container? Containers are software bundles packaged in a format that can run isolated on a shared operating system. Unlike virtual machines (VMs, [2]), containers don't ship a full operating system; they ship only the libraries and configurations required to make the individual package work. This approach is more lightweight and manageable, while still guaranteeing that software will always run the same, regardless of where it's deployed [3]. Containers have the same separation/segregation advantages as VMs, but they are smaller and more lightweight, because they don't duplicate the entire OS. This opens up a new set of possibilities for deployment and CI/CD chains that were unthinkable with VMs, due to their large size.
Docker General Concepts Docker [1] is one of the leading and most widely known software container platforms (or engines). It is available as a free Community Edition (CE), and as an Enterprise Edition (EE) with software, support, and certification. Docker offers Linux containers (taking advantage of the kernel's cgroups and namespaces features, ref. [5], [6]), and it is available for a wide range of operating systems, VM servers, and cloud offerings [4]. Docker was first announced in 2013, and it has since experienced huge growth in its customer base (ref. [9] and many more). The Docker container development and execution process can be summarized as follows: out of an IMAGE, which is an artifact built from a Dockerfile (for example, a Tomcat server node), one or more CONTAINERS can be launched/executed, as summarized in the following diagram. See ref. [1] in case you want more background. A few technical notes: A Dockerfile is a simple text file containing directives describing what the image should be made of. The images you create can be published for the benefit of other developers (Docker Hub, ref. [7], [8]). You can build an image on top of an existing image. For instance, if we want to build an image of some application running on top of a JVM, we can start from an already available Docker image containing the JVM alone, so we avoid reinventing the wheel (note: this is EXACTLY what we'll showcase in our specific case later in this article). Note: this is a simplified summary; going into the specifics is out of the scope of this white paper. See ref. [1] (and many other public documents) for more information.
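To make the Dockerfile concept concrete, a minimal, purely illustrative example for a JVM-based application could look like the following. Note that the base image tag and file names here are illustrative assumptions, not part of the sample shipped with this article:

```dockerfile
# Illustrative only: build an application image on top of an existing
# JDK image, instead of installing Java ourselves.
FROM openjdk:8-jdk

# Copy our (hypothetical) application into the image.
COPY myapp.jar /opt/myapp/myapp.jar

# Command executed when a container is launched from this image.
CMD ["java", "-jar", "/opt/myapp/myapp.jar"]
```

Each directive produces a layer of the image; the FROM line is what realizes the "build on top of an existing image" idea described above.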
MuleSoft Runtime 3.8.x General Concepts MuleSoft Anypoint Platform, an integration platform for SOA, SaaS, and APIs, is designed to tie together cloud and on-premises software services [10], with a runtime available both in the cloud (CloudHub, [11]) and on-premises. A) CLOUD: When running in the cloud, the user only needs to provide the code and deploy it; the CloudHub framework then provisions the needed infrastructure. Note the main components: 1. Runtime Manager Console: the UI, allowing you to deploy, manage and monitor applications, configure your account, and so on. 2. Platform Services: a set of shared CloudHub platform services and APIs. 3. Global Worker Clouds: an elastic cloud of Mule instances that run integration applications. Each instance is an AWS EC2 node equipped with the Mule Runtime software. Note that, in this case, each instance runs a replica of a SINGLE application.
B) ON-PREMISES: The Mule Runtime can be run on your own machines in three possible configurations: Stand-alone (not managed). Managed by ARM (the Anypoint Runtime Manager, running in the cloud; this scenario is often called "hybrid"). Managed by the MMC (the MuleSoft Management Console, a Mule administration framework which runs on-premises). In all cases, unlike the cloud deployment, the runtime can host MULTIPLE applications.
Building a MuleSoft Runtime EE 3.8.x Docker Image Use Case Overview We want to provide a working sample of a Mule Runtime Docker image which can fit into any of the three deployment scenarios described above, i.e.: On-premises stand-alone Mule Runtime node ARM-managed on-premises Mule Runtime node MMC-managed on-premises Mule Runtime node With specific reference to the deployment scenarios described in the previous section, this is a case in which the host is actually a Docker container. To accomplish this, we need to perform the following activities: 1. Build a Docker IMAGE for the Mule Runtime 2. Launch a container out of the image built in #1 3. Register the containerized Mule Runtime against the management framework (ARM/MMC, where applicable) In the following, we'll walk the reader through all of this, by providing a working sample and by discussing different choices and options.
Setup This article ships with the code: the Dockerfile (used to build the Docker image) and a ZIP file containing a test application. The sample is meant to work in a real, production-like environment, hence the user has to provide a license.lic file (a valid MuleSoft license). This sample can work on any machine which runs Docker, but you will also need to download the Linux installable for the Mule Runtime EE version in scope (the code we are providing was tested with 3.8.5; other 3.8.x versions should work fine). Make sure you're also aware of the MD5 checksum of the installation tar.gz you're using. Make sure the Docker services are running. * Only if you want to work with the runtime registration against MMC: have an MMC instance available (the code we are providing was tested with 3.8.1; other versions might work too). ** Only if you want to work with the runtime registration against ARM: have a MuleSoft Anypoint account available for some testing (and make sure you have the on-prem server registration token on hand, ref. [13], "Obtaining the registration token"). Create a directory on your machine to which you'll later redirect the Mule logs generated inside the container we're going to launch. Example: C:\ProgramData\mp\docker-rt\mule01\logs. Make sure you're connected to the Internet while working. Building the Mule Runtime Docker Image Open a command shell in the directory containing the following files: Dockerfile, mule-ee-distribution-standalone-3.8.5.tar.gz, license.lic, testprj01.zip (*) (*) this is a tester application
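The checksum awareness mentioned above can be scripted. The following sketch shows the verification pattern; a stand-in file is created so the script is self-contained, but in practice you would point it at the real Mule distribution archive and set the expected value to the MD5 published by MuleSoft:

```shell
# Checksum-verification sketch. "archive" and "expected" are stand-ins:
# in real use, archive is mule-ee-distribution-standalone-3.8.5.tar.gz
# and expected is the MD5 sum published for that download.
archive=archive-standin.tar.gz
printf 'stand-in payload' > "$archive"
expected=$(md5sum "$archive" | awk '{print $1}')   # replace with the published MD5

actual=$(md5sum "$archive" | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
    echo "checksum OK"
else
    echo "checksum MISMATCH" >&2
    exit 1
fi
rm -f "$archive"
```

Running a check like this before building protects you from a corrupted or tampered download ending up baked into the image.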
Building the Mule Runtime Docker Image cont. Launch the following command (verifying the Mule tar.gz's checksum first!): >docker build --build-arg mule_ver=3.8.5 --build-arg mule_pkg_md5sum=d011616161766049ceb498b814ef3aab --tag mule_rt_ee_3.8.5 . This command, which works on the file named Dockerfile in the current directory, should create the Docker image, named mule_rt_ee_3.8.5. You can check this by launching: >docker images The output should show the following two items (IDs are not necessarily the same): REPOSITORY TAG IMAGE ID CREATED SIZE... mule_rt_ee_3.8.5 latest 1f34a4c0d4ee 3 seconds ago 1.08GB openjdk 8-jdk 6077adce18ea 11 days ago 74MB... A few comments, to read after (possibly) inspecting the Dockerfile: Note the inheritance in the FROM line! We're building an image ON TOP of an existing one which ONLY ships a JDK (why reinvent the wheel?). Specifically, the image we used (openjdk:8-jdk) ships JDK 1.8.0_141, 64-bit. Note that the installable's checksum is verified. This is not mandatory, but highly recommended. Note that the Dockerfile is parametric in the Mule version. That is why this sample will likely also work with other 3.8.x packages (as long as you change the version in the build command, without touching the Dockerfile). Note that, as best practices recommend, we make sure to install (and run) the package with a non-root user. This is accomplished by creating a dedicated group, adding a new user to it, and using this new user to deflate the archive and, finally, to run the mule command upon container launch (the CMD line). Note (COPY commands) that we inject into the image not only the Mule installer, but also the license file and the tester application file (copied under Mule's apps/ folder, so it starts upon container startup).
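The Dockerfile itself ships with this article. For readers without the attachment, a sketch consistent with the comments above might look like the following. This is NOT the article's verbatim file: the user/group names, paths, and the exact name of the directory inside the tarball are assumptions:

```dockerfile
# Sketch of the image build described above; names and paths are
# illustrative, not the article's verbatim Dockerfile.
FROM openjdk:8-jdk

# Mule version and expected installer checksum, parametric at build time.
ARG mule_ver=3.8.5
ARG mule_pkg_md5sum

# Inject the installer, the license and the tester application.
COPY mule-ee-distribution-standalone-${mule_ver}.tar.gz /opt/
COPY license.lic /opt/
COPY testprj01.zip /opt/

# Verify the installer checksum, then install as a dedicated non-root user.
RUN echo "${mule_pkg_md5sum}  /opt/mule-ee-distribution-standalone-${mule_ver}.tar.gz" | md5sum -c - \
 && groupadd mule && useradd -m -g mule mule01 \
 && tar -xzf /opt/mule-ee-distribution-standalone-${mule_ver}.tar.gz -C /opt \
 && mv /opt/mule-enterprise-standalone-${mule_ver} /opt/mule \
 && cp /opt/license.lic /opt/mule/conf/ \
 && cp /opt/testprj01.zip /opt/mule/apps/ \
 && chown -R mule01:mule /opt/mule

# Run the runtime as the non-root user, in the foreground.
USER mule01
CMD ["/opt/mule/bin/mule"]
```

The ARG directives are what make the build parametric: the docker build command above overrides them without touching the file.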
Launching the Container Now that we have created the image, we want to start a Docker CONTAINER out of it. From the same command shell, launch the following (after double-checking the local log directory you should have created in the preliminaries): >docker run -d -p 7777:7777 -p 1098:1098 -p 5000:5000 -p 8081:8081 -v /c/programdata/mp/docker-rt/mule01/logs:/opt/mule/logs --name mule-rt-ee01 mule_rt_ee_3.8.5 Upon success, this command should only return the ID of the container just created (named mule-rt-ee01). Check the directory you previously created for the Mule logs generated by the container. A few comments, to read after inspecting the command: Note the -d: this detaches the process. Note the PORT MAPPINGS: with the various -p options, we make sure that the Mule Runtime exposes, on the LOCAL host, what is appropriate (8081: useful for the testing app; 7777: useful for the MMC registration; 1098: useful for the Runtime's JMX interface; 5000: author's note: I found this port indicated as needed in many pointers on the web, none of them specifying for what; this is to be investigated further). Please consider how deep the implications of these mappings are: you can launch, on the SAME machine, a second instance of this container, just mapping 7777 to something else, so as to avoid local conflicts. Likewise, note the directory mapping via -v (we used only one, for the Mule logs, just to showcase how powerful this option is. Typically, you want your container to be STATELESS, and to save anything persistent OUTSIDE of the container itself).
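To illustrate the remapping point, the following sketch only builds and prints the docker run command it would issue for a hypothetical second instance (mule-rt-ee02, log directory mule02), with all host-side ports shifted by one to avoid conflicts with the first container:

```shell
# Build (and print, rather than execute) the launch command for a
# hypothetical second instance of the same image. Only the host-side
# port numbers, the log directory and the container name change.
image=mule_rt_ee_3.8.5
name=mule-rt-ee02
cmd="docker run -d -p 7778:7777 -p 1099:1098 -p 5001:5000 -p 8082:8081 \
-v /c/programdata/mp/docker-rt/mule02/logs:/opt/mule/logs \
--name $name $image"
echo "$cmd"
```

Inside each container, Mule still listens on 7777, 1098, 5000 and 8081; the left-hand side of each -p pair is the only thing that must be unique per host.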
Testing the Container Launch a browser, and hit the following URL: http://localhost:8081/testprj01 A Hello World! [<timestamp>] message should be displayed. Inspecting the Container You can learn more about your container by launching commands like docker inspect, and many more. We leave this to the reader (ref. Docker docs). Stopping the Container To stop it, try the command: >docker stop mule-rt-ee01 Note: you can also use its ID, which you'd see in the output of the docker container ls -a command. Re-starting the Container To re-start it, try the command: >docker start mule-rt-ee01 Note: you can also use its ID, which you'd see in the output of the docker container ls -a command.
OPTIONAL Registering to MMC If you want to register this container to your MMC instance, just proceed as you would have done if it were a normal on-premises runtime node, ref. [12]. Use, as the server URL, the following: http://localhost:7777/mmc-support Note: this setting works because of the -p option which mapped the container's 7777 listening port to the local HOST's 7777 port. After verifying the registration, you can de-register the container and stop it. OPTIONAL Registering to ARM If you want to register this container to CloudHub/ARM, just proceed as you would have done if it were a normal on-premises runtime node, ref. [13], "Obtaining the registration token": >docker exec -it mule-rt-ee01 /opt/mule/bin/amc_setup -H <registration token> <registration name for the server> Note: you could have used the container ID instead of its name. As for the registration name for the server, it is just of your choosing, representing that specific node, e.g. docker-mule-01. END Cleanup Now you can stop the container again with the instructions outlined above ("Stopping the Container"). If, after this, you want to perform a full cleanup, you can also: Delete the container (docker container rm mule-rt-ee01) Delete the Mule image (docker image rm mule_rt_ee_3.8.5) Delete the JDK parent image (docker image rm <find out this value as an exercise>)
Misc Tech Notes (tips, open points, etc.) Leaving the container naked, or putting the application in it In our exercise, we opted for embedded applications. There is no single best way to go about this; it all depends on the use case. Naked containers mean more versatility in their usage. Application-equipped containers mean a portable, executable object, which can easily be used in multiple instances with no need to worry about deployment. The optimal decision depends on your specific scenario, including your CI/CD requirements. Note: don't forget you can have the container expose the apps folder in the same way we did with the logs, so we can deploy apps by dropping Mule deployable ZIPs over there. In the case of not-naked, how many applications to put in Once you decide to put applications in an image, consider whether you want to place one or many. The Mule Runtime node is designed to be ultra-performant, and it has many internal scalability features: multiple applications are recommended and supported, even in high numbers (dozens). Finding the right balance in the actual number is a different story; usually the trade-off is somewhere in between. A lower number of apps usually makes the container more componentized and reusable, makes troubleshooting easier, and allows for tailored tuning of the engine if the set of applications in scope requires it. A higher number of apps better exploits the Mule Runtime's power, reduces the overall memory footprint, and alleviates maintenance (fewer objects to maintain).
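The drop-in deployment idea from the note above (exposing the apps folder like we exposed logs) reduces to copying a deployable ZIP into the mounted host directory. The following sketch demonstrates just that copy step; the directory and the ZIP are stand-ins created by the script itself, so it runs without a live container:

```shell
# Drop-in deployment sketch: with the container's /opt/mule/apps mounted
# on the host, deploying an app is just copying its ZIP into the mount.
# Both the mount directory and the ZIP below are stand-ins.
apps_mount=$(mktemp -d)              # stands in for the host dir mounted on /opt/mule/apps
printf 'zip-bytes' > testprj02.zip   # stands in for a real Mule deployable ZIP

cp testprj02.zip "$apps_mount"/      # the runtime inside the container would pick this up
deployed=$(ls "$apps_mount")
echo "deployed: $deployed"

rm -rf "$apps_mount" testprj02.zip   # cleanup of the stand-ins
```

In a real setup the docker run command would simply add a second -v option, e.g. mapping a host apps directory onto /opt/mule/apps, alongside the logs mapping we already used.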
In case of not-naked... cont. Each project you work on will demand a decision from this angle, and there is no recommendation that fits all scenarios, so use judgment and analyze your specific case. The best recommendation we can give is to make sure your harness is flexible in a scripted way, so that decisions can be changed over time and adjusted for varying needs. Ideally, the landscape will look like the following diagram: the developer automates, by means of scripting, the process of building images out of applications downloaded from a software repository (Nexus, Artifactory, etc.). Structuring your images Consider that you may also build a HIERARCHY of images to streamline your developments. For example, you may build a NAKED-MULE-RUNTIME image, like we did in this exercise, and then build the various application-containing images FROM it. This avoids repeating, in each application-related image's Dockerfile, the same code to get the runtime installed.
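A child image in such a hierarchy can be as small as the following sketch. The parent image name (naked-mule-runtime) and the application file are hypothetical, chosen only to illustrate the pattern:

```dockerfile
# Hypothetical application image built on top of a naked runtime image,
# so the runtime-installation steps are never repeated per application.
FROM naked-mule-runtime:3.8.5

# The only application-specific step: drop the deployable into apps/.
COPY myapp.zip /opt/mule/apps/myapp.zip
```

Rebuilding an application then rebuilds only this thin layer, while the heavy runtime layer is cached and shared across all application images.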
Managing the ARM/MMC registration from within the image definition Theoretically, it would be possible to embed appropriate commands (e.g. curl invocations) in the CMD directive, so that registration is taken care of automatically upon container startup. This option has not been explored in detail; however, we would not recommend it, since it introduces strong dependencies. It is better to leave this aspect (and, likewise, de-registration) to orchestration layers (Jenkins or the like, in the scope of CI/CD). The REST APIs exposed by the MMC [15] and by the CloudHub Runtime Manager [14] can be leveraged for this purpose. License Management Following the pattern implied by the exercise, it appears that we need to rebuild the image upon license renewal. Since license replacement is infrequent, this should not be a big administrative burden; however, other approaches can be investigated (e.g. exposing more directories from the Mule runtime container, to allow license refresh without re-building the image). Additional Directories to Expose Other directories of the Mule container may be useful to expose for various purposes. For instance, it could be interesting to explore scenarios in which we expose configuration files, to give more flexibility. Additional Refinements Approaches to avoid hardcoding the mule01 user's password in the Dockerfile can be explored. References
[1] https://www.docker.com/what-docker
[2] https://en.wikipedia.org/wiki/virtual_machine
[3] http://searchservervirtualization.techtarget.com/definition/container-based-virtualization-operating-system-level-virtualization
[4] https://www.docker.com/pricing
[5] https://www.slideshare.net/jpetazzo/anatomy-of-a-container-namespaces-cgroups-some-filesystem-magic-linuxcon
[6] http://www.haifux.org/lectures/299/netlec7.pdf
[7] https://hub.docker.com
[8] https://docs.docker.com/docker-hub
[9] https://www.contino.io/insights/whos-using-docker
[10] https://www.mulesoft.com
[11] https://docs.mulesoft.com/runtime-manager/cloudhub-architecture
[12] https://docs.mulesoft.com/mule-management-console/v/3.8/setting-up-mmc-mule-esb-communications
[13] https://docs.mulesoft.com/runtime-manager/managing-servers
[14] https://anypoint.mulesoft.com/apiplatform/anypoint-platform/#/portals/organizations/68ef9520-24e9-4cf2-b2f5-620025690913/apis/8617/versions/2321502/pages/107964
[15] https://docs.mulesoft.com/mule-management-console/v/3.8/rest-api-reference
CONCLUSION In this white paper, we provided an overview of software containerization and, specifically, of Docker [1]. We also showed how to build a Docker container for a MuleSoft Anypoint Platform Runtime engine [10], [11], by providing a working sample along with comments, recommendations, best practices, and a discussion of possible alternative approaches and future developments.
www.ms3-inc.com contact@ms3-inc.com 304-579-8100 308 S Fairfax Blvd. Ranson, WV 25438