"It works on my machine" - The Motivation for Containers
It works on my machine.
Have you ever heard someone say, "It works on my machine" in response to a complaint about software not working?
It's no surprise. Software is inherently complex, with myriad dependencies that are forever being updated or falling out-of-date.
Wouldn't it be a joyous thing to develop software on your own computer, secure in the knowledge that you can deploy it to servers without any gruesome accidents?
Let's start with our original computer, then try to build up a mechanism that ensures what works on it also works on other computers.
Finite-State Machines
You can imagine your computer as a finite-state machine: it is completely predictable, provided it starts from the same state and receives the same inputs every time.
A finite-state machine (FSM) or finite-state automaton (FSA, plural: automata), finite automaton, or simply a state machine, is a mathematical model of computation. It is an abstract machine that can be in exactly one of a finite number of states at any given time. The FSM can change from one state to another in response to some inputs; the change from one state to another is called a transition. -- Cribbed from Wikipedia
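To make the definition concrete, here's a toy FSM in C: the classic turnstile, with states and inputs invented for this sketch. From the same starting state and the same input sequence, it lands in the same final state on every single run:

```c
#include <stdio.h>

/* A toy finite-state machine: a turnstile with two states. */
enum state { LOCKED, UNLOCKED };

/* The transition function: one state in, one state out. */
enum state transition(enum state s, char input) {
    if (input == 'c')                   /* coin inserted: unlock   */
        return UNLOCKED;
    if (input == 'p' && s == UNLOCKED)  /* pushed through: relock  */
        return LOCKED;
    return s;                           /* anything else: no change */
}

int main(void) {
    enum state s = LOCKED;        /* the fixed starting state */
    const char *inputs = "cpcp";  /* the fixed input sequence */

    for (const char *i = inputs; *i; i++)
        s = transition(s, *i);

    printf("final state: %s\n", s == LOCKED ? "LOCKED" : "UNLOCKED");
    return 0;
}
```

Your computer is the same idea at a vastly larger scale: the "state" is every bit it stores, and the "inputs" are everything that can influence it.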
In the context of strict reproducibility, inputs include everything from the time to the filesystem. Filesystems are nothing but global mutable stores. The time is never the same twice. Tough-to-reproduce bugs are often caused by race conditions (changes in the timing of threads causing unpredictable behaviour); they are notoriously difficult to replicate, and tend to disappear the moment you try to debug them (read: Heisenbugs). Replicating all of that is a hassle, and likely unnecessary if we just want to get a program to work on another computer.
For the purpose of "trying to get software running on another computer", we can relax our constraints considerably: we just want the program to run. The bare minimum required for that is the same environment - libraries, filesystem, operating system, etc. - since programs cannot (usually) function without those pieces.
Before we start working on that, let's take a moment to appreciate the number of ways the environment can go wrong.
Number of States
How many states does a disk have? Each bit has 2 states (1 or 0), so a byte has $2^8 = 256$ states. A modern hard disk can easily hold a terabyte of data.
$$ 1 \text{ kilobyte} = 10^3 \text{ bytes} $$ $$ 1 \text{ megabyte} = 10^6 \text{ bytes} $$ $$ 1 \text{ gigabyte} = 10^9 \text{ bytes} $$ $$ 1 \text{ terabyte} = 10^{12} \text{ bytes} $$
If a byte has $256$ states, and a terabyte has $10^{12}$ bytes, then the number of states in a terabyte is not $256 \times 10^{12}$ but $256^{10^{12}}$ - each additional byte multiplies, rather than adds to, the number of possible states - a humongous number.1
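To get a sense of just how humongous: the number of decimal digits needed to write $256^{10^{12}}$ out is

$$ \log_{10}\left(256^{10^{12}}\right) = 10^{12} \cdot \log_{10} 256 \approx 2.4 \times 10^{12} $$

That's about 2.4 trillion digits, just to write the number down. For comparison, the estimated number of atoms in the observable universe, around $10^{80}$, has a mere 81 digits.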
A variance of even a single byte between two drives can mess stuff up; merely replicating the environment can prove to be difficult.
Of course, this is somewhat irrelevant when your states are tightly controlled: installing the operating system on a blank drive is going to result in the same state no matter what. However, I find this perspective valuable once you get to installing more binaries and libraries onto your machine, each of which has different versions, configuration options, etc.
Optimizing our Reproduction
Now that we've gained an appreciation for the amount of things that can go wrong on a computer, let's try to replicate program behaviour from one computer on another by replicating their environment!
We start by obtaining a dump of the hard disk used in the original computer, plugging it into the second computer, and booting from it. And we're done!
... that was awfully easy. What's the catch?
The catch? I'm glad you noticed. The catch is that most server (read: bare-bones) operating systems take anywhere from one to three-and-a-half gigabytes, and that's not accounting for the additional libraries and packages you install to run your program. All told, that is a lot of data to transfer. Additionally, booting from the copied disk is much more restrictive than the approach we're going to discuss next.
User Space and Kernels
Quick revision:
- The kernel is the core of the operating system: it manages the most basic operations that processes require, such as memory allocation, disk management, etc.
- Programs call into the kernel for many operations, such as allocating memory, accessing system resources, etc. (see the sketch after this list).
- Programs and the kernel communicate through the application binary interface (ABI): conventions for system-call numbers, argument passing, data layout, and so on. In the case where ABI expectations do not match (i.e. the program was compiled with the expectation of a different layout), bad things happen.
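As a minimal sketch of what "calling into the kernel" looks like on Linux, here's a C program that skips the C library's formatted I/O and invokes the write system call directly. The system-call number and argument conventions it relies on are precisely the ABI contract mentioned above:

```c
#define _GNU_SOURCE      /* needed for the syscall() wrapper on glibc */
#include <unistd.h>
#include <sys/syscall.h> /* SYS_write: the kernel's number for "write" */

int main(void) {
    const char msg[] = "hello from the kernel boundary\n";

    /* Ask the kernel to write to file descriptor 1 (stdout).
     * Which number means "write", and how the arguments are
     * passed, is part of the kernel's ABI. */
    syscall(SYS_write, 1, msg, sizeof(msg) - 1);
    return 0;
}
```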
We could make reproduction much easier if we reused the same kernel as the host system. That way, all we'd have to do is mount another filesystem on the host (one containing the filesystem from the original computer), isolate it sufficiently, and we'd have replicated program behaviour, barring kernel problems.
For our case, that's a worthwhile trade. As long as the kernel that the programs on the original were compiled for is compatible with your host's, you can replicate program behaviour with minimal overhead. One of the most popular kernels out there, Linux, has a long history of backwards compatibility, which makes it a good fit.
The above-mentioned constraint of having the same kernel or same processor architecture can be worked around with emulation.2
Linux has a handy feature called namespaces, which enables effective isolation of userspace. Combined with other kernel functionality, this is how Docker can reproduce program behaviour from a Linux machine on any other machine running a sufficiently-compatible Linux version.
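For a taste of what a namespace does, here's a minimal C sketch (Linux-only, and it needs root or CAP_SYS_ADMIN) that moves a process into its own UTS namespace, giving it a private hostname the rest of the system never sees. Docker composes this with mount, PID, network, and other namespaces to isolate an entire environment:

```c
#define _GNU_SOURCE      /* for unshare() and the CLONE_* flags */
#include <sched.h>       /* unshare(), CLONE_NEWUTS */
#include <stdio.h>
#include <unistd.h>      /* sethostname(), gethostname() */

int main(void) {
    /* Move this process into a fresh UTS namespace: it now has
     * its own copy of the hostname, isolated from the host's. */
    if (unshare(CLONE_NEWUTS) == -1) {
        perror("unshare (try running as root)");
        return 1;
    }

    /* Only changes the hostname inside our namespace; the host
     * and every other process are unaffected. */
    const char name[] = "container";
    sethostname(name, sizeof(name) - 1);

    char buf[64];
    gethostname(buf, sizeof(buf));
    printf("hostname in this namespace: %s\n", buf);
    return 0;
}
```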
While I would love to discuss the exact manner of how Docker interacts with Linux in order to accomplish that, at this point in time, I don't think I can do better than Red Hat's explanation.
Further Optimization
So far, we've worked out how to replicate the original computer's environment efficiently. Our next question is whether we can optimize the size of the environment itself. We can do that by building our environment up from scratch to contain only the relevant dependencies, instead of trying to replicate everything from the original system. Dockerfiles and other "infrastructure as code" software attempt to automate that process. (Most of them are glorified bash scripts run in fresh distros - but hey, that's a mainstream and perfectly valid approach to setting up a system.)
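To illustrate the point, here's a hypothetical Dockerfile for a small Python program (the file and package names are stand-ins for this sketch). Each instruction is a step you might otherwise perform by hand on a freshly-installed machine:

```dockerfile
# Start from a minimal, known-good environment...
FROM debian:bookworm-slim

# ...then script in only the dependencies the program needs.
RUN apt-get update && apt-get install -y --no-install-recommends python3

# Copy in the program itself (app.py is a stand-in for your code).
COPY app.py /app/app.py

# What to run when the container starts.
CMD ["python3", "/app/app.py"]
```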
There are many more innovations in this domain, which we won't be discussing here. Normally, at this point, I would move on to properly introducing containers and images, but...
Not a Docker Tutorial
There are already enough Docker tutorials in the world that I really don't think it needs another. This article was written primarily to showcase the "number of states" insight, and the rest about Docker and reproducibility grew around that. If you still think that a Docker tutorial would be useful, feel free to write to me.
I will leave you with a taste of that, however:
A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings. Container images become containers at runtime and in the case of Docker containers – images become containers when they run on Docker Engine. Available for both Linux and Windows-based applications, containerized software will always run the same, regardless of the infrastructure. Containers isolate software from its environment and ensure that it works uniformly despite differences for instance between development and staging.3
Oh, and there's a world of containerization outside Docker, in case you're interested.
Further Reading
An example: Multi-arch images
Feel free to write to me to point out an error, suggest a topic, or just say hi!