Two years ago I wrote an article about my disdain for the popularity of using virtual machines to host applications. I was discouraged by the attitude of creating images that weren't easy to rebuild, and felt that it encouraged bad practices around application setup and maintenance. It also seemed incredibly wasteful in terms of storage, memory, and CPU cycles to do a full OS emulation. I was a fan of the idea of OpenVZ and LXC at the time, but I didn't know a lot about those options and didn't go forward with them. However, through cheap hosting providers I learned about the downsides of oversubscribed OpenVZ hosts.

Then Docker came to popularity. It took the LXC/cgroups route and made it easier to use. Docker doesn't attempt to virtualize the entire stack; instead it reproduces the Linux environment within the container and isolates the processes running inside it. It's basically a sandbox for the filesystem, processes, and network: most of the benefits of a VM with none of the full hardware emulation.

Why do I like it? There are a few reasons:

**It's portable** - Most of the internal structure of the container comes from pre-built images. To start up an app, you just pull it down from its online repository. For example, starting an instance of Couchbase is a matter of running the following command:

docker run -d --name db -p 8091-8094:8091-8094 -p 11210:11210 couchbase

Compare that with another NoSQL server, Riak: previously you had to fight with poor platform support and Erlang installs. With Docker, all of that is configured inside the container and doesn't affect the host OS. Portability also means you no longer have to figure out a new install procedure for every product that ships a Docker image. On top of that, you can configure where the persistent storage will be located; the application inside the container has no knowledge of where its data is actually stored, nor does it care. The same goes for the networking configuration.
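As a rough sketch of that, you can mount a host directory as the container's data directory. The host path /srv/couchbase here is arbitrary, and /opt/couchbase/var is just where the official Couchbase image keeps its data per its documentation; check whatever image you actually use:

docker run -d --name db -v /srv/couchbase:/opt/couchbase/var -p 8091-8094:8091-8094 -p 11210:11210 couchbase

The application sees the same internal path regardless of where the data really lives on the host.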

**It's open source/free** - Even the Docker repo, registry, and base images are freely available via Docker Hub. When you want to move away from this model you can reproduce it within your own environment. On top of this, the Docker registry is itself a Docker container, and it supports versioning.
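As a sketch of what that looks like, running your own registry and pushing an image to it is roughly this (5000 is just the port the official registry image listens on by default):

docker run -d -p 5000:5000 --name registry registry:2
docker tag couchbase localhost:5000/couchbase
docker push localhost:5000/couchbase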

**It's social and collaborative** - With the introduction of Docker Hub, you can build your images on top of existing images. If you want to upgrade the underlying infrastructure (say, Ubuntu 14.04 to a newer release) it's just a matter of updating the base image in your Dockerfile. That allows for testing and debugging in an isolated and repeatable manner (as opposed to maintaining golden images). The organizations responsible for creating the products, e.g. Couchbase and Ubuntu, maintain their own official images that are frequently updated.
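A sketch of that workflow (myapp is a hypothetical image name): after swapping the FROM line in the Dockerfile, rebuild and run the result in isolation without touching whatever is already deployed:

docker build -t myapp:new-base .
docker run --rm -it myapp:new-base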

**It's easy to track changes** - Everything about the build of a Docker image is based on a declarative script known as a Dockerfile. There are a few nuances in the script (e.g. how a RUN instruction differs from an ENTRYPOINT), but it's fairly easy to create, update, and track changes (via source control).
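A minimal, hypothetical Dockerfile to illustrate the nuance (the package and script names are made up): RUN executes at build time and its result is baked into the image, while ENTRYPOINT only runs when a container is started from that image.

FROM ubuntu:14.04
# runs at build time; the installed packages become part of the image
RUN apt-get update && apt-get install -y python
# copy the application into the image
COPY app.py /opt/app/app.py
# runs at container start time
ENTRYPOINT ["python", "/opt/app/app.py"]

Every change to this file is a normal diff, so source control shows exactly how the image has evolved.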


So far the only downside to Docker that I've seen is more of a creator issue: there is a tendency to try to containerize everything in the application environment. That includes a container for the database and another for the storage, both of which are frequently linked directly to the application container. I realize that is helpful for cases where you need a particular version, but I would rather have a single database install on the host OS, share it between containers, and maintain its security separately.
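For what it's worth, the pattern I'm describing usually looks something like this (myapp is hypothetical), with the application container wired directly to its own database container:

docker run -d --name appdb couchbase
docker run -d --name app --link appdb:db myapp

My preference is to keep the database on the host (or a dedicated host) and point the application containers at it over the network instead.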