What are Docker and Jenkins for?

Just virtualize Jenkins and Co. with Docker - or not?

Whether in a software project at university or in commercial projects, I come across a dockerized build environment everywhere. But is virtualizing continuous integration (CI) with Docker really as simple as most blog posts portray it?

When we wanted to set up an automated build and continuous deployment in a Jenkins container for one of our 4 + 1 projects, we encountered some obstacles. Using Docker within a Docker container proved particularly time-consuming. In the following, I would like to explain how you can avoid these pitfalls and save a lot of time on troubleshooting.

I am assuming that you have already set up a virtualized infrastructure with version management, private repository and Jenkins for your project. Of course, it is not a problem for you to configure a Jenkins job that initiates a Maven build with a test suite. Furthermore, I can imagine that you have already carried out some experiments with Docker locally and started your application in a container.

The plan: the virtualization of a continuous integration with Docker

Our to-do list for a dockerized CI looks like this:

  1. Create a Jenkins job that builds a Docker image and pushes it into a private Docker registry.
  2. Create a second Jenkins job that pulls the Docker image from the private registry and deploys it on the server using Docker Compose.

After my first experiments with Docker, I figured that it would be pretty quick to build Docker images and start containers with Jenkins. I would just have to install Docker inside the Jenkins container and everything would be DONE.

The first problem was figuring out how to add the Docker Command Line Interface (CLI) and a Docker daemon to the Jenkins image. In addition to installing Docker Community Edition (CE), the Jenkins image's Dockerfile needs the following RUN command.

RUN usermod -a -G docker-host jenkins

Out of the box, Docker executes all processes in its containers with root rights - and of course that poses a huge security risk! So we restrict the Jenkins container's rights by adding the jenkins user to a new docker-host group. The group ID is not chosen at random: it must match the ID of the Docker group on the host. Since the user and group IDs of the container and the host are managed by a common Linux kernel, this effectively also adds the Jenkins user to the host's Docker group.
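Putting both parts together, the relevant section of the Dockerfile could look like the following sketch. It assumes a Debian-based jenkins/jenkins base image and a host docker group with GID 999 - look up the real GID with `getent group docker` on your host; base image, repository codename and GID are assumptions, not fixed values.

```dockerfile
FROM jenkins/jenkins:lts

USER root

# Install Docker CE from Docker's official APT repository
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        apt-transport-https ca-certificates curl gnupg2 \
    && curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - \
    && echo "deb https://download.docker.com/linux/debian stretch stable" \
        > /etc/apt/sources.list.d/docker.list \
    && apt-get update && apt-get install -y docker-ce \
    && rm -rf /var/lib/apt/lists/*

# Create the docker-host group with the SAME GID as the host's
# docker group (here assumed to be 999) and add jenkins to it
RUN groupadd -g 999 docker-host \
    && usermod -a -G docker-host jenkins

USER jenkins
```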

For the second step of our plan, the automated deployment, the Jenkins container also requires Docker Compose in addition to the Docker CLI. Docker Compose is a tool that allows the developer to define several Docker containers in a single Compose file and start them all with a single command. If you are not yet familiar with Docker Compose, I recommend getting started with its documentation; in this blog post we want to focus on the problems with the Jenkins container.

Docker Compose is an open source project developed in Python. That's why it's best to install it with Python's package manager pip in our Jenkins image, as the following listing shows.
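A sketch of the corresponding Dockerfile instruction, assuming a Debian-based base image where pip is not yet installed:

```dockerfile
# Install pip and use it to fetch Docker Compose
RUN apt-get update && apt-get install -y python-pip \
    && pip install docker-compose \
    && rm -rf /var/lib/apt/lists/*
```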

In theory, we have now installed the required tooling and only need to create the Jenkins jobs for steps 1 and 2, which call the corresponding Docker commands. But can that really be all?

Docker-in-Docker vs. Docker-out-of-Docker

During my research, I came across a large number of blog posts that advised against using Docker within a Docker container. The Docker-in-Docker (DinD) approach describes a naive solution which can lead to data corruption. Let's imagine that we have several Docker containers with large Docker images. In order to save storage space and not pull every image from the Docker Hub several times, we would mount the image directory (/var/lib/docker) from the Docker host into each container. This technique would allow all of our containers to use the host's Docker images. However, this is exactly what will doom us as soon as several containers try to write to the images at the same time (see Using Docker-in-Docker for your CI or testing environment? Think twice).

In our environment, this fact alone is not a big problem, as we only run a single container with DinD. However, in the second step we want to deploy an application from the Jenkins container alongside it. For exactly this scenario we need Docker-out-of-Docker (DooD), because otherwise we cannot access the host's Docker daemon.

With DooD, the Docker CLI must also be added to the Jenkins image, but it implicitly uses the host's Docker daemon. The following figure illustrates the software-side differences between DinD and DooD.


The Docker CLI writes its commands to a Unix domain socket (socket), from which the Docker daemon reads and executes them. With both DinD and DooD, the host's Docker daemon starts the containers for version management, the private Docker registry and Jenkins. The figure also shows that with DinD, a Docker CLI and daemon run natively inside the Jenkins container and communicate via their own socket. The Jenkins container's Docker daemon then starts the app inside that container, which initially makes the app inaccessible from outside the host.

For DooD, we mount the host's socket into the Jenkins container, so the Jenkins Docker CLI also writes its commands to the host's socket, and the host's Docker daemon executes the Jenkins container's commands. This allows us to deploy the app alongside the other containers from within the Jenkins container.

Now you are surely asking yourself, as we did, how to apply this technique. All it takes is to mount the host's socket into the Jenkins container at startup using Docker's -v option.
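Concretely, the start command could look like this; the image and container names are placeholders, and /var/run/docker.sock is Docker's default socket path on Linux:

```shell
# Start Jenkins with the host's Docker socket mounted (DooD)
docker run -d --name jenkins \
  -p 8080:8080 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-jenkins-image
```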

However, only trustworthy containers should be allowed to access this socket. The Docker daemon is a powerful process and can have a major impact on the host operating system, e.g. by writing routing tables or adjusting the firewall configuration.

There's a plugin for that

Thanks to its extensive community, Jenkins offers its users many plugins - including several for the integration of Docker. During my first research, I came across a multitude of plugins and was unsure which one to use, but now I can recommend the Docker plugin from CloudBees. It focuses exclusively on building and pushing images, and its operation is quite intuitive.

The following screenshot shows the configuration of the CloudBees Docker plugin. The repository name will later serve as an identifier for your Docker image. The plugin also allows the Docker image to be marked with a tag; if you omit it, the tag is set implicitly based on the timestamp. In addition, the plugin wants to know the Uniform Resource Identifier (URI) of the Docker host and the URL of the private Docker registry. Since we are using DooD, these are the path to the socket and the port on which the host offers the Nexus.

If the plugin should not execute the Docker build in the root of your repository, you can set a user-defined directory in the Build Context field and the location of the Dockerfile in the Dockerfile Path field. At this point we have to make a clear distinction between the terms repository and registry: a repository describes the location from which your source code and, above all, your Dockerfile originate, whereas a registry allows developers to exchange finished Docker images.



The last step is to select the Docker CLI installation from a drop-down menu. The CloudBees plugin implicitly attaches the URL of the private Docker registry as a prefix to the image name and automatically pushes the image into the registry after a successful build.

So we can tick off the first step of our plan, as we can now build and push Docker images with the help of a Jenkins job. Let us now turn to step 2, the deployment with the Jenkins job.

A quick word about the Docker registry

I have already outlined the difference between a repository and a registry, where a registry acts as a private Docker Hub. During my research, I came across two terms that I mistakenly assumed meant the same thing: Docker registry and Docker index.

According to the old Docker registry API V1, a Docker hub consists of a registry and an index. The registry stores the images and is responsible for pulling and pushing, while the index manages all users and their access rights. The registry also delegates authentication to the index when it receives a request.

However, the concept of the index is out of date, and the distribution of images has been taken over by a new project that implements the Docker registry API V2. Sonatype's Nexus primarily supports version 2 of the API but can also process requests against version 1 if this has been explicitly configured. The following screenshot shows how Nexus enables Docker registry API V1 support with one click.

Continuous deployment using a Jenkins job

Unfortunately, Jenkins does not offer a plugin that supports Docker Compose. So our only option is a Jenkins job that checks out the repository from Git and calls Docker Compose on its shell.
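The shell build step of such a job could look roughly like this, assuming the Compose file lives in the root of the checked-out repository:

```shell
# Executed by the Jenkins job after checking out the repository:
# pull the freshly built images from the private registry
docker-compose pull
# recreate the containers from the new images, in the background
docker-compose up -d
```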

So that Docker Compose really pulls the images from the private registry on your host, you must of course not forget to prepend the registry prefix to the image name in the Docker Compose file. If you forget this prefix, Docker may start local images that you built in previous experiments. This shortcoming only becomes noticeable when you ask yourself why the new features are not yet running in the container.
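A sketch of such a Compose file; the registry address, image name and ports are placeholders:

```yaml
version: '2'
services:
  app:
    # registry prefix + image name + tag - without the prefix,
    # Docker falls back to a local image of the same name
    image: registry.example.com:8082/myapp:latest
    ports:
      - "80:8080"
```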

We are now at the point where we have implemented both the first and the second step of our plan. It would be nice if our developers could pull the Docker images from the private registry and play around with them locally. The following illustration shows you what I mean.


The developer has installed his own Docker tooling locally and has perhaps started a container with a local Docker registry. He now also wants to pull Docker images from the host's private registry. All he needs to do is replace the registry prefix in the Docker Compose file with the host registry's URL.

But Docker throws a meaningless error message that suggests a faulty URL. After trying different variants of the URL for a while (with and without http://), I finally found the solution in a GitHub issue: Docker is simply not able to handle registry names containing underscores.

Error response from daemon: error unmarshalling content: invalid character '<' looking for beginning of value

The issue was opened in 2014 and was actually fixed in a newer version, but unfortunately the same error occurred for us. The only way out is to follow Sonatype's recommendation and avoid underscores when naming the registry. Similar problems arise when trying to push a Docker image to a registry that is not served at the root of the host. Docker cannot differentiate between registry URL, tag and image name: if we want to push an image, Docker does not know whether a prefix is part of the image name or part of the URL. For this circumstance, Nexus offers repository connectors, which allow a port to be configured under which the registry can be addressed directly.
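For illustration - hostname, port and repository name are placeholders - the difference looks like this:

```shell
# Without a repository connector, a path-based registry is ambiguous:
#   nexus.example.com/repository/docker-hosted/myapp:1.0
# Where does the registry URL end and the image name begin?

# With a Nexus repository connector listening on its own port,
# the reference is unambiguous:
docker push nexus.example.com:8082/myapp:1.0
```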

I hope I was able to give you some insight into working with Docker and Jenkins, and that you can now use these tools a little more easily. If you want to know which dangers lurk when experimenting with Docker on the Internet, I recommend the blog post of my colleague Roland.