Docker survival kit

TL;DR
I present to you the bare minimum of Docker knowledge.

You could almost think of Docker in terms of a virtual machine: something that ‘thinks’ it has an operating system and runs some application. In truth it is much more than that, but I will attempt to break it down to its fundamentals. Let’s start with a quick description of the moving parts.

In the middle of the diagram, we have the Docker daemon. We communicate with Docker through the daemon using CLI commands. Docker also has an API, which is really the best way to interact with it (automation, and all the things that want to be automated). For the sake of this post we will cover some of the CLI commands. Here are the ones I use most.
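A rough cheat sheet of the kind of commands I mean (a sketch; the nginx and myapp names are just placeholders):

```bash
docker pull nginx                # download an image from Docker Hub
docker images                    # list the images stored locally
docker ps                        # list running containers (add -a for stopped ones too)
docker run -d -p 8080:80 nginx   # start a container, mapping host port 8080 to container port 80
docker exec -it <container> sh   # open a shell inside a running container
docker logs <container>          # view a container's output
docker stop <container>          # stop a running container
docker rm <container>            # remove a stopped container
docker rmi <image>               # remove a local image
docker build -t myapp .          # build an image from the Dockerfile in the current directory
```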

Now that you have your Docker command decoder ring, let’s continue with the architecture. On the Docker host we have a ‘local’ repository where we store images we create and images we pull from Docker Hub. Docker Hub has a huge stash of pre-made, officially supported images, and you are free to pull whatever you require. When you do, that image is downloaded and stored on the host. You can use the docker images command to see what images you have locally. I also use docker ps to see what is currently running. As I said above, we won’t be interacting with Docker in production through the command line; these commands are just something you should be familiar with.

The Dockerfile is a bit of text that describes, one step at a time, what we want our custom image to look like. We pull the base image we want with FROM, load the libs/bins with RUN, make a working directory with another RUN, set the working directory with WORKDIR, and ADD all of the files in our development directory to the image. Next we install the pip requirements with another RUN, EXPOSE a port, and identify what application we want to execute when the image is run in a container. You can only have one Dockerfile per directory: if you have four different services, each gets its own folder with a single Dockerfile inside it.

Now we want to automatically take all of these Dockerfiles, process them, and create images. Then we want to run those images in containers, all linked together on the same network. For this bit of magic we need the docker-compose.yaml. With a single command we can bring up our application, whether it is one container or thousands.
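Before we get to compose, here is a minimal sketch of the kind of Dockerfile described above, assuming a Python/Flask-style app with a requirements.txt (the file names are illustrative):

```dockerfile
# pull the base image
FROM python:3.9-slim

# load any OS-level libs/bins we need
RUN apt-get update && apt-get install -y --no-install-recommends build-essential

# make and set the working directory
RUN mkdir /app
WORKDIR /app

# add all of the files in our development directory to the image
ADD . /app

# install the pip requirements
RUN pip install -r requirements.txt

# the port our app listens on inside the container
EXPOSE 5000

# what to execute when the image is run in a container
CMD ["python", "app.py"]
```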

The docker-compose.yaml file defines what services we want to run. In this example we have two containers: one is a web app and the other is a Mongo database. In the web service we build the image from the Dockerfile if it is not already in the local repository, bind host port 8080 to the internally exposed port 5000, set up a dev volume, link to the database, and name the container. The next service is a Mongo database; it is pulled from Docker Hub and given a name. Docker automatically puts these containers on the same network and bridges everything to the docker0 Ethernet interface.
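Here is a sketch of what that docker-compose.yaml could look like (the service names, paths, and ports are illustrative, matching the description above):

```yaml
version: "2"
services:
  web:
    build: .                # build from the Dockerfile in this folder if the image isn't local
    container_name: web
    ports:
      - "8080:5000"         # bind host port 8080 to the exposed container port 5000
    volumes:
      - .:/app              # mount the dev directory so code changes show up in the container
    links:
      - db                  # link to the database service
  db:
    image: mongo            # pulled from Docker Hub
    container_name: db
```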

Now, to get all of this up and running, use: docker-compose up -d

That command will start both containers and serve the web app on port 8080. You will most likely want to set up some persistent storage for the database; I will cover that in a future post.
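Once it is up, a few equally short commands (again, a sketch) cover the day-to-day care and feeding:

```bash
docker-compose ps            # see the state of the services
docker-compose logs -f web   # tail the web container's logs
docker-compose down          # stop and remove the containers and their network
```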

That wraps up this super basic knowledge. Be sure to check out the official online Docker documentation.
