Docker for Beginners

In the world of software development, crafting applications that run flawlessly on every machine is akin to chasing a mirage in the desert. While the code might sing harmoniously in our local environment, its charm seems to vanish mysteriously when handed to another user. This perplexing phenomenon often puzzles developers and leaves end-users scratching their heads. Let’s see how to fix it.

What is Docker?

Docker is an open-source platform that simplifies application development, testing, and deployment using containers. Developers use Dockerfiles to define an application and its dependencies, build Docker images from them, and then run containers from those images. This streamlined process ensures consistent, portable, and efficient deployment across diverse environments.

Why Docker?

With the help of Docker, you can ship your code faster, keep full control over your application, and deploy it in as many containers as you want. Docker helps developers automate the deployment, scaling, and management of applications inside containers. It shines for microservices architectures, data processing, Continuous Integration and Delivery, and container-based services.

How does Docker work?

Containerization vs Virtualization

Virtual machines require a full guest operating system to run applications, which lets us run multiple applications on the same infrastructure, but because each virtual machine runs its own operating system, they are heavy and slow to start. Docker's containerization approach lets us deploy applications more efficiently: instead of virtualizing hardware, containers virtualize the operating system, which makes them start faster than virtual machines. Docker containers are also much smaller and require far fewer resources than a virtual machine, which is why containerization is known as a lightweight approach to virtualization.

Docker Architecture

Docker follows a client-server architecture, where the Docker client communicates with the Docker daemon to manage containers and images.

For download and installation instructions, refer to docs.docker.com

Docker daemon

The Docker daemon is a core component of Docker that runs as a background service on the host machine. It is responsible for managing most container-related tasks, such as building images and creating, running, and stopping containers. The daemon exposes a REST API, which is what the Docker CLI uses to talk to it.
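As a rough sketch of how the client and daemon fit together (on Linux the daemon typically listens on the Unix socket /var/run/docker.sock, though the exact path can vary by installation):

$ docker version                                 # reports both the client version and the daemon (server) it talks to
$ docker -H unix:///var/run/docker.sock info     # explicitly point the client at the daemon's socket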

Docker client

The Docker client is a command-line tool (or a GUI such as Docker Desktop) that allows you to interact with the Docker daemon. It acts as the primary interface through which you issue commands to manage Docker objects such as images, containers, networks, and volumes.

Docker registries

Docker registries are like big repositories for storing and distributing Docker images. A registry serves as a library where developers can upload, share, and download Docker images, and it plays a huge role in promoting collaboration among developers. A great example of a Docker registry is Docker Hub, which is provided by Docker itself and holds tons of images.

To explore available images, visit hub.docker.com
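As a quick, hedged sketch of the typical registry workflow against Docker Hub (the repository name your-username/my-app is only a placeholder):

$ docker pull nginx                       # download an image from Docker Hub
$ docker tag nginx your-username/my-app   # name a local image under a repository you own (placeholder)
$ docker login                            # authenticate with Docker Hub
$ docker push your-username/my-app        # upload the tagged image to your repository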

Dockerfile

A Dockerfile is the building block of a Docker image. It is similar to a recipe in a cookbook, where we list all the necessary steps to cook our favorite dish. One of the nicest ways to learn about Dockerfiles is to write one, build it, and run the result in a container.

So let's create one:

  • Create a file named 'Dockerfile' and add the instructions shown below.

  • Build the image with ~ docker build . — Docker looks for a file named 'Dockerfile' in the build context by default.

  • During the build, Docker executes each RUN instruction in the Dockerfile, so the resulting image contains the outcome of those commands.

  • The CMD instruction does not run at build time; it defines the default command that executes when a container is started from the finished image.

# pull the latest ubuntu image from Docker Hub
FROM ubuntu
LABEL author="Rashid Alam" email="emailtorash@gmail.com"
# refresh the package index so later installs see the latest package lists
RUN apt-get update
# default command: print "Hello World" when a container starts from this image
CMD ["echo", "Hello World"]

Docker Image

A Docker image acts as a set of instructions for building a Docker container, like a template. It contains everything the application needs: application code, libraries, dependencies, environment variables, and configuration files. Docker images are made of multiple layers, where each layer builds on the one below it while differing from it. These layers help speed up Docker builds while increasing reusability and decreasing disk usage. An image is also read-only, so when a new container is created, Docker adds a writable layer on top of the unchangeable image, allowing us to configure the container as we want. Images can be stored locally or published to remote registries such as hub.docker.com, GitLab, etc.

What makes the Image so special?

Docker images have a layered architecture, and each layer represents a specific change to the image. Each time we pull an image from a registry, Docker downloads it as several layers and reuses any layers that are already available locally, which reduces the amount of data that needs to be transferred during the pull.

Docker also optimizes image transfer by only sending what has changed: whenever an image is pulled, Docker compares its layers against the local cache and downloads only the missing ones, which makes the pulling process faster.
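A rough way to see this caching in action (the exact output depends on what is already in your local cache):

$ docker pull nginx:latest      # the first pull downloads every layer
$ docker pull nginx:latest      # pulling again transfers nothing; Docker reports the image is up to date
$ docker history nginx:latest   # lists the individual layers that make up the image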

Docker volumes

A Docker volume is a folder in the physical host file system that Docker manages and mounts into a container's virtual file system. Bind mounts, by contrast, map an existing directory on the host machine directly into the container and therefore depend on the host's directory structure.

Starting a container with a volume

If you start a container with a volume that does not exist yet, Docker creates the volume for you. Here is a one-line command for starting a container with a volume. ~ docker run -d --name devtest --mount source=myvol2,target=/app nginx:latest
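To check the result (myvol2 and devtest are the names used in the command above):

$ docker volume ls                # myvol2 now appears in the list of volumes
$ docker volume inspect myvol2    # shows where the volume lives on the host and its options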

Basic Docker commands

$ docker pull <image> # pull an image from a registry (Docker Hub by default)
$ docker run <image> # run a container from that image
# Example: run a container interactively using nginx and publish port 80
$ docker run -it -p 80:80 nginx
$ docker rm -f <container ID> # remove the container forcefully
$ docker info # show general information about the Docker installation
$ docker container ls # list the running containers
$ docker container ps -a # list all containers, including stopped ones
$ docker stop <container ID> # stop the container gracefully
$ docker kill <container ID> # kill the container immediately
$ docker image ls # list all the images
$ docker inspect <image> # show detailed low-level information about the image
$ docker history <image> # show the layer history of the image

Containers

A container is like a sandboxed process that runs on the host machine. In simple words, a container is just a box that runs your application, and with the help of the Docker API and CLI we can easily manage it. Containers themselves are not inherently immutable: while the image is read-only, the container's writable layer can still be modified at runtime. However, the best practice is to design containers in a way that promotes immutability, which makes them easier to manage and deploy.

Some of the characteristics of containers are :

Isolation and portability: containers run in a sandboxed environment, which enhances their security. They encapsulate the entire runtime environment, which allows a container to work on any system that has a compatible container runtime.

Scalability: thanks to easy scaling, we can create as many containers as we want to divide the workload, which is very useful in microservices and cloud environments.

Versioning and rollback: container images can be versioned with tags, which lets you manage changes and roll back to a previous version whenever needed, making the development cycle easier.
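As a small, hedged illustration of both scaling and rollback (the container names, ports, and nginx tags below are just examples):

$ docker run -d --name web1 -p 8081:80 nginx:1.25   # two identical containers sharing the workload
$ docker run -d --name web2 -p 8082:80 nginx:1.25
$ docker rm -f web1 && docker run -d --name web1 -p 8081:80 nginx:1.24   # roll one back to an older tag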

Making your first container

  • Pulling image from the Docker registry
~ docker pull ubuntu   
Using default tag: latest
latest: Pulling from library/ubuntu
5af00eab9784: Pull complete 
Digest: sha256:0bced47fffa3361afa981854fcabcd4577cd43cebbb808cea2b1f33a3dd7f508
Status: Downloaded newer image for ubuntu:latest
docker.io/library/ubuntu:latest
  • Making a container out of the ubuntu image (here '-i' keeps STDIN open and '-t' allocates a terminal, giving an interactive shell)
 ~ docker run -it ubuntu
root@bccdc39fc79e:/# ls
bin   dev  home  media  opt   root  sbin  sys  usr
boot  etc  lib   mnt    proc  run   srv   tmp  var
root@bccdc39fc79e:/# exit
exit
  • Listing the containers (including stopped ones)
docker container ps -a
CONTAINER ID   IMAGE                     COMMAND                  CREATED         STATUS                     PORTS                  NAMES
bccdc39fc79e   ubuntu                    "/bin/bash"              6 minutes ago   Exited (0) 2 minutes ago                          affectionate_shtern
  • Removing the container
~ docker rm -f bccdc39fc79e           
bccdc39fc79e

What’s next?

Ever wonder, after learning about this awesome containerization technology, how we can cope when there are too many containers to manage by hand? The answer is a container-orchestration system. It automates the deployment, scaling, and management of containerized applications by providing a framework that efficiently coordinates container deployments across a cluster of nodes. There are many container-orchestration systems, including Docker Swarm, Kubernetes, Amazon ECS, OpenShift, and more.
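As a tiny taste of orchestration using Docker's built-in Swarm mode (the service name web and the replica count are illustrative choices):

$ docker swarm init                                               # turn this Docker engine into a single-node swarm
$ docker service create --name web --replicas 3 -p 80:80 nginx    # run three nginx replicas behind port 80
$ docker service ls                                               # show the service and how many replicas are running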