In our series of Docker blogs, we aim to walk you through deploying, managing, and extending Docker. This is the second blog post in this series, so you might want to check out the first one at this link. We first introduced you to the basics of Docker and now it’s time to talk about configuration management, components, and commands. After that, we will start to use Docker to build containers and services to perform a variety of tasks.
Docker with configuration management
Since Docker was announced, there have been a lot of discussions about where Docker fits with configuration management tools like Puppet and Chef. Docker includes an image-building and image-management solution. One of the drivers for modern configuration management tools was the response to the “golden image” model.
With golden images, you end up with massive and unmanageable image sprawl: large numbers of (deployed) complex images in varying states of versioning. You create randomness and exacerbate entropy in your environment as your image use grows. Images also tend to be heavy and unwieldy. This often forces manual change or layers of deviation and unmanaged configuration on top of images, because the underlying images lack appropriate flexibility.
Compared to traditional image models, Docker is a lot more lightweight: images are layered, and you can quickly iterate on them. There is some legitimate argument to suggest that these attributes alleviate many of the management problems traditional images present.
Docker’s technical components
Docker can be run on any x86_64 host running a modern Linux kernel; we recommend kernel version 3.10 or later. It has low overhead and can be used on servers, desktops, or laptops. You can also deploy Docker on OS X and Microsoft Windows by running it inside a virtual machine. It includes:
• A native Linux container format that Docker calls libcontainer.
• Linux kernel namespaces, which provide isolation for filesystems, processes, and networks.
• Filesystem isolation: each container is its own root filesystem.
• Process isolation: each container runs in its own process environment.
• Network isolation: separate virtual interfaces and IP addressing between containers.
• Resource isolation and grouping: resources like CPU and memory are allocated individually to each Docker container using the cgroups, or control groups, kernel feature.
• Copy-on-write: filesystems are created with copy-on-write, meaning they are layered and fast and require limited disk usage.
• Logging: STDOUT and STDERR from the container are collected, logged, and available for analysis or troubleshooting.
• Interactive shell: you can create a pseudo-tty (PTY) and attach it to STDIN to provide an interactive shell for your container.
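To illustrate the last point, attaching a pseudo-tty is done with the -t and -i flags of docker run. This is a sketch that assumes the ubuntu image is available locally or can be pulled from the Docker Hub:

```shell
# -i keeps STDIN open; -t allocates a pseudo-tty (PTY).
# Together they give you an interactive shell inside the container.
docker run -i -t ubuntu /bin/bash
```

Exiting the shell stops the container, since the shell is its primary process.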
Docker user interfaces
Once you've got Docker installed, you can also manage it through a graphical user interface. A small number of Docker user interfaces and web consoles are currently available in varying states of development, including:
• Shipyard — gives you the ability to manage Docker resources, including containers, images, and hosts, from a single management interface. It's open source, and the code is available from https://github.com/ehazlett/shipyard.
• Kitematic — a GUI for OS X and Windows that helps you run Docker locally and interact with the Docker Hub. It's a free product released by Docker Inc.
Dockerfile instructions
Docker images are built from a Dockerfile, a text file containing a series of instructions. The most commonly used instructions include:
• ADD — copies files from a source on the host into the container's filesystem at the specified destination.
• CMD — specifies the default command to execute when a container is launched from the image.
• ENTRYPOINT — sets a default application to run every time a container is created from the image.
• ENV — sets environment variables.
• EXPOSE — documents the network ports on which the container listens, enabling connections between the container and the outside world.
• FROM — defines the base image used to start the build process.
• LABEL — adds metadata, in key-value form, to your Docker image.
• MAINTAINER — defines the full name and email address of the image creator.
• RUN — the central executing directive for Dockerfiles: each RUN executes a command and commits the result as a new image layer.
• USER — sets the UID (or username) that runs the container.
• VOLUME — creates a mount point, enabling access from the container to a directory on the host machine.
• WORKDIR — sets the working directory in which subsequent instructions, and the command defined with CMD, are executed.
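A short example Dockerfile pulls several of these instructions together. This is only a sketch: the base image, package, and label values are illustrative, not prescriptive.

```dockerfile
# Start the build from a minimal base image
FROM ubuntu:22.04
LABEL maintainer="jane@example.com"
# Each RUN commits its result as a new image layer
RUN apt-get update && apt-get install -y nginx
ENV NGINX_PORT=80
# Document the port the container listens on
EXPOSE 80
WORKDIR /var/www/html
# Default command when a container is launched from this image
CMD ["nginx", "-g", "daemon off;"]
```

Building it with docker build produces a layered image you can launch containers from.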
The Docker filesystem layers
When Docker first starts a container, the initial read-write layer is empty. As changes occur, they are applied to this layer; for example, if you want to change a file, then that file will be copied from the read-only layer below into the read-write layer. The read-only version of the file will still exist but is now hidden underneath the copy.
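You can observe this copy-on-write behavior with the docker diff command, which lists what has been added (A) or changed (C) in a container's read-write layer. A sketch, assuming the ubuntu image is available (the container name cow-demo is arbitrary):

```shell
# Run a container that modifies a file inside its filesystem
docker run --name cow-demo ubuntu cp /etc/hostname /tmp/hostname-copy
# Show the changes recorded in the container's read-write layer;
# the read-only image layers underneath are untouched
docker diff cow-demo
```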
Docker Compose and Docker Swarm
In addition to solitary containers, we can also run Docker containers in stacks and clusters, which Docker calls swarms. The Docker ecosystem contains two more tools:
• Docker Compose — allows you to run stacks of containers that represent application stacks: for example, web server, application server, and database server containers running together to serve a specific application. Docker Compose is currently available for Linux, Windows, and OS X. It can be installed as a standalone binary, via Docker for Mac or Windows, or via a Python pip package.
• Docker Swarm — allows you to create clusters of containers, called swarms, on which you can run scalable workloads. Swarm has shipped integrated into Docker since version 1.12; prior to that, it was a standalone application licensed under the Apache 2.0 license.
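To give a flavor of Compose, a minimal docker-compose.yml describing a two-container web-plus-database stack might look like the following. The service names, images, and password are hypothetical placeholders:

```yaml
version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"    # map host port 8080 to container port 80
    depends_on:
      - db           # start the database before the web server
  db:
    image: postgres:latest
    environment:
      POSTGRES_PASSWORD: example
```

Running docker-compose up in the directory containing this file would start both containers together.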
While Docker itself has many alternatives, its simplicity, security, rapid deployment, and return on investment make it well worth looking into.
In this blog post, we've covered configuration management, Docker's components, and the basic Dockerfile instructions Docker offers. The series continues soon, and we'll bring you some more practical examples of Docker use through short demos!
Stay up to date with our updates by following our social media from the links below!