Docker: A Beginner's Introduction

In this day and age, more than ever before, companies depend on software to remain competitive. One of the biggest innovations in the software development field has been the invention of containers.

The introduction of containers has dramatically changed the way software is built and shipped. Teams that adopt containerization spare their developers many environment-related worries thanks to Docker's features.

Today, we will discuss the fundamentals of Docker.

Docker Containers

Let's start with the following questions:

  • Why do we need Docker?
  • What can Docker do for us?

These questions are best explained with a scenario developers face without the use of Docker:

Let's say a team wishes to set up a full-stack application that includes various technologies, such as a web server using Node.js, a database such as MongoDB, a messaging system like Redis, and an orchestration tool like Ansible.


A few issues may arise when developing such a project:

  1. Compatibility issues with the underlying operating system: Developers always have to make sure that the different services are compatible with the version of the operating system they use.

  2. Compatibility issues between dependencies and the application services: It may turn out that one service requires one version of a library, whereas another service requires a different version. Every time a dependency changes, developers have to re-check the compatibility between these various components and the underlying infrastructure.

This compatibility matrix issue is usually referred to as the "matrix from hell".

  3. Setting up an environment for a new developer is difficult: Whenever a new developer joins the team, they have to follow a large set of instructions and run hundreds of commands to finally set up their environment. This is, of course, time-consuming and inefficient.

Given these difficulties, we need something that helps with the compatibility issues, lets us modify or change the application's components independently, and even lets us modify the underlying operating system as required.

That's when Docker comes into play.

With Docker, developers are able to run application components in containers with their own libraries and dependencies, thus solving all of the previous issues.

For developers to run an instance of an application, a Docker configuration is built once; afterwards, every developer can bring the application up with a simple docker run command, irrespective of the underlying operating system. The only requirement is to have Docker installed on the developer's machine.
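As a rough sketch of what that looks like for the stack above (mongo and redis are the official Docker Hub images for those tools; the Node.js web server would be packaged as its own image, which we cover later):

docker run -d mongo    # start a MongoDB container in the background
docker run -d redis    # start a Redis container in the background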

So what are containers?

Containers are completely isolated environments. They can have their own processes or services, their own networking interfaces, their own mounts (just like virtual machines), except they all share the same operating system kernel. We will discuss the implications of this later.
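You can see this isolation for yourself (a minimal sketch, assuming Docker is installed, using the official ubuntu image):

docker run -it ubuntu bash    # start an interactive Ubuntu container
echo $$                       # prints 1: bash is the first process in the container's own process namespace
exit                          # leave (and stop) the container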

Setting up these container environments by hand is hard, as it requires low-level operating system knowledge. That is where Docker comes in: it offers a high-level tool with several powerful functions, making containers really easy for end users like us.

How does Docker work?

To understand how Docker works, we must understand some basic operating system concepts.

If you look at operating systems like Ubuntu, Fedora, or CentOS, they all consist of two components:

  1. An OS kernel.
  2. A set of software.

The operating system kernel is responsible for interacting with the underlying hardware.

The OS kernel, which is Linux in the case of these distributions, remains the same; it's the software above it that makes these operating systems different.

So you have a common Linux kernel shared across all operating systems and some custom software that differentiates operating systems from each other.

Previously we mentioned that Docker containers share the underlying kernel. What are the implications of this?

This means that Docker can run a container based on any OS flavor, as long as that flavor is built on the same kernel as the host; the software inside the container's OS does not matter.

If the underlying operating system is Ubuntu, Docker can run a container based on another distribution like Debian, Fedora, SUSE, or CentOS.

Docker does not run a separate kernel on the host machine; as a reminder, containers share the host's kernel.
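You can verify this yourself (a sketch, assuming a Linux host with Docker installed):

uname -r                           # kernel version reported on the host
docker run --rm debian uname -r    # the same version inside a Debian-based container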

You might ask: isn't that a disadvantage then, not being able to run another kernel on the OS? The answer is no. Docker is not meant to virtualize and run different operating systems and kernels on the same hardware. The main purpose of Docker is to containerize applications, ship them, and run them.

Differences between virtual machines and containers

In the case of Docker (containers), the architecture is built with the following layer structure:

At the bottom we have the underlying hardware infrastructure, then the operating system, and then Docker installed on that OS. Docker then manages the containers, which run with just their libraries and dependencies.

In the case of a virtual machine, we have the underlying hardware, then the hypervisor (of any kind), and on top of that the virtual machines. Each virtual machine runs its own operating system, and above that OS lies the application. As you can imagine, this extra overhead causes higher resource usage, since multiple virtual operating systems and kernels are running at once.

Virtual machines also consume more disk space: each VM is "heavy", usually gigabytes in size, while Docker containers are lightweight, usually megabytes in size.

This allows Docker containers to boot up in a matter of seconds, while virtual machines may take minutes to boot.
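You can get a feel for this startup speed with a throwaway container (a sketch; alpine is a tiny official image, and timings will vary by machine):

time docker run --rm alpine echo hello    # typically completes in about a second once the image is cached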

It is also important to note that Docker provides less isolation, since more resources, such as the kernel, are shared between containers, whereas VMs are completely isolated from each other.

Since VMs don't rely on the underlying operating system or kernel, you can have different types of operating systems, Linux-based or Windows-based, on the same hypervisor. This would not be possible on a single Docker host.

More On Docker and Docker repositories

There are a lot of containerized versions of applications readily available today. Most organizations have their products containerized and available in a public Docker registry called Docker Hub (also known as the Docker Store).

For instance, you can find images of the most common operating systems, databases, and other services and tools. Once you identify the images you need and install Docker on your host, bringing up an application stack is as easy as running:

docker run [image name]

If you need to run multiple instances of an application or service, simply start as many containers as you need and configure a load balancer of some kind in front.

If one of the instances fails, simply destroy that instance and launch a new one.
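A sketch of both steps, using the official nginx image (the container names web1/web2 and the host ports are arbitrary choices):

docker run -d --name web1 -p 8081:80 nginx    # first instance
docker run -d --name web2 -p 8082:80 nginx    # second instance
# point a load balancer at host ports 8081 and 8082

docker rm -f web1                             # a failing instance is simply destroyed...
docker run -d --name web1 -p 8081:80 nginx    # ...and replaced with a fresh one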

As you can see, once you are proficient with Docker you can easily scale an application, which is very powerful.

Images vs Containers

An image is a package or a template just like a VM template. It is used to create one or more containers. Containers are running instances of images that are isolated and have their own environments and set of processes.

If we think of images and containers from a programming perspective, you can think of an image as a class definition and a container as an instance of that class.
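In Docker terms (a sketch using the official redis image; the container names are arbitrary):

docker run -d --name cache1 redis    # one instance of the redis image
docker run -d --name cache2 redis    # a second, fully independent instance
docker ps                            # lists both containers created from the same image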

As we have seen before, a lot of products have been Dockerized already. In case you cannot find what you're looking for, you can create an image yourself and push it to the Docker Hub registry, making it available to the public.
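The typical workflow looks like this (a sketch; your-username/your-app is a placeholder for your own Docker Hub repository):

docker build -t your-username/your-app .    # build an image from the Dockerfile in the current directory
docker login                                # authenticate against Docker Hub
docker push your-username/your-app          # publish the image for others to pull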

With Docker, a major portion of the work involved in setting up the infrastructure is now in the hands of the developers, in the form of a Dockerfile. The setup guide that developers previously maintained can now be put together in a Dockerfile to create an image for the application.
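As a minimal sketch, a Dockerfile for the Node.js web server from our earlier stack might look like this (it assumes a hypothetical entry point server.js listening on port 3000; the node:18 base image tag is just one example):

# start from the official Node.js base image
FROM node:18
# set the working directory inside the image
WORKDIR /app
# copy the dependency manifests first so the install step can be cached
COPY package*.json ./
RUN npm install
# copy the rest of the application source
COPY . .
# document the port the server listens on
EXPOSE 3000
# the command the container runs on start
CMD ["node", "server.js"]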

This image can now run on any container platform, and it's guaranteed to run the same way everywhere. The operations team can simply use the image to deploy the application. Since the image was already working when the developer built it, and operations do not modify it, it continues to work the same way when deployed in production.