Thinh Huynh (Huynh Cong Thinh) - Blog

Microsoft MVP, Data Scientist & Cloud Architect

Docker, its advantages and security issues

Cloud computing has recently emerged as an attractive and cost-efficient model of server-based computing and application service provision. Virtualization technology not only enables cloud providers to dynamically allocate resources based on workload fluctuations from users’ needs, but also makes the cloud data-center more agile, manageable and application-friendly. However, virtualization carries overheads: the guest operating system, costly licensing for virtualization software, configuration and administration time, and so on. Docker, as a solution, is emerging in technology circles and is rapidly gaining mind-share among researchers, developers, startups and IT organizations. As a matter of fact, Microsoft, Google and Amazon Web Services are all adopting Docker’s approach so as to provide their services more efficiently and to scale their resource workloads ever higher for cloud customers.
In this post, we study what Docker really is and how it works, including its run-time and orchestration, then discuss Docker’s advantages and disadvantages. We end with a look at Docker security from a cloud-environment perspective.

What is Docker

Before talking about Docker, we will briefly take a look at Linux containers (LXC) which Docker is based on. Linux containers (LXC) is an operating-system-level virtualization environment for running multiple isolated Linux systems (containers) on a single Linux control host [1].

VM vs Docker

Figure 1. Traditional Virtual machines and Docker Containers paradigms

Docker containers, on the other hand, take a different approach from traditional virtualization technology: they run in user space on top of the OS kernel, which can be called OS-level virtualization. As seen on the right part of Figure 1, each container has its own isolated user space and libraries, and we can run multiple containers on a host, each with its own user space. More particularly, rather than shipping an entire guest OS, Docker containers isolate the guest but do not virtualize the physical hardware. By doing this, we can run different Linux systems (containers), each with its own file system, processes, memory limits, libraries, etc., on a single host sharing the same kernel.

In summary, Docker is basically a container engine which uses Linux kernel features such as namespaces and control groups to make application deployment much easier. It enables us to create, deploy and run applications inside software containers by providing an additional layer of abstraction and automation on top of OS-level virtualization on Linux.
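As a concrete sketch of those kernel features in action, the following commands (assuming a Linux host with a recent Docker installed; the image name and limit values are illustrative) show control groups constraining a container’s resources:

```shell
# Run an Ubuntu container with cgroup-based resource limits:
# -m caps memory at 256 MB, --cpu-shares sets a relative CPU weight.
docker run -it -m 256m --cpu-shares 512 ubuntu /bin/bash

# Inside the container, namespace isolation means commands such as
# `ps aux` show only the container's own processes, not the host's.
```

The point is that these limits are ordinary cgroup settings applied by the Docker daemon, not a separate hypervisor layer.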

Docker architecture and orchestration

Figure 2 represents the client-server architecture of Docker. The Docker client communicates with the Docker daemon through a Unix socket or a RESTful API; the Docker daemon does the heavy lifting of building, running, and distributing our Docker containers. Both the Docker client and the Docker daemon can run on the same host system, or we can connect a Docker client to a remote Docker daemon on another host.
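The same `docker` binary acts as the client in both cases; a rough sketch (the remote address is hypothetical, and exposing the daemon over plain TCP should only be done on a trusted network):

```shell
# Talk to the local daemon over its default Unix socket:
docker version

# Point the client at a remote daemon on another host with -H:
docker -H tcp://203.0.113.10:2375 info
```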

Docker architecture

Figure 2. Docker architecture

In this diagram, images are the basic building blocks of Docker. Images can be configured with libraries, applications, etc. and used as templates for creating new containers on the host layer.

The Docker Registry is a repository for Docker images. Using such a repository, we can build and share images either publicly or privately. There is a hosted service called Docker Hub which works much like a Git hosting service: it allows us to upload, download and share images. On Docker Hub we can find popular images such as Ubuntu, Node.js, Nginx, WordPress and so on.

Containers are created from images. We can build up our applications in a container and commit it as a new image to capture future changes. Containers can be started, stopped, committed and terminated at any time.
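The image/container/registry lifecycle described above can be sketched with a few standard CLI commands (the repository name `myrepo/web` is hypothetical, and pushing requires a Docker Hub account):

```shell
docker pull ubuntu                          # fetch an image from Docker Hub
docker run -it --name web ubuntu /bin/bash  # create a container from the image
docker commit web myrepo/web:v1             # snapshot the container as a new image
docker push myrepo/web:v1                   # share the image via a registry
```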

The basic workflow of the Docker run-time is illustrated in Figure 3. The Docker client sends an operation request to the Docker daemon; the daemon then pulls existing images from the Docker registry, so we can re-use images that have already been made and provision exactly the container the Docker client needs, instead of spending hours creating a new instance, OS, required libraries, applications, etc. In addition, the Docker daemon can also switch from one container to another immediately (once they have been built), as seen in Figure 4.

Figure 3. The basic workflow of Docker architecture

Figure 4. The basic workflow of sharing Docker image files

Disadvantages and advantages

In this section, we first discuss the disadvantages of Docker; in the last part, in contrast, we go deeper into Docker’s advantages and its efficiency in a cloud environment.


The first disadvantage is that containers are less secure than VMs: containers share the host kernel rather than sitting behind a hypervisor, which gives a compromised container more chances to escape to the host. Also, we cannot run Windows applications in a Linux container. Second, if we want to develop a containerized service which consists of many components, the libraries may lead to overhead. A mail server, for example, will need SMTP, IMAP/POP3, antivirus, antispam and other services; running such tightly coupled components in separate containers and making them communicate is a hard job for IT, both in configuration and administration. Besides, managing many containers with various modules on the same kernel becomes complex, which can create security risks through firewall rules and policy settings. In some cases we may want to save a container’s state on one physical machine and migrate it to another; this is a problem in Docker. Giving each container its own IP address is also not integrated in the current version of Docker. On the other hand, backing up the data stored in application databases still requires a robust configuration and backup strategy.
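As an illustration of the backup point, one common pattern (assuming a Docker installation; the container name `dbdata` and the MySQL data path are hypothetical) is to archive a data volume through a throwaway container:

```shell
# Mount dbdata's volumes into a temporary ubuntu container and
# tar the database files out to the current host directory:
docker run --rm --volumes-from dbdata -v $(pwd):/backup ubuntu \
    tar cvf /backup/backup.tar /var/lib/mysql
```

This works, but it is exactly the kind of hand-rolled strategy the paragraph above warns must be designed and maintained deliberately.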


Talking about the advantages of Docker, we will compare it to the very well-known virtualization technology: the virtual machine. What is the difference between them? Can they integrate and work together?

While virtual machines represent an entire server with all of the associated software and administration concerns, Docker containers bring application isolation to IT developers and managers and can be configured with minimal run-time environments. Moreover, in a Docker container, the kernel and parts of the operating system infrastructure are shared; for a virtual machine, a full operating system must be included [2]. What that means is that we can create or destroy containers quickly and easily, while a VM requires a full installation, more time and more computing resources to execute.
Because of these characteristics, Docker containers are lightweight, so we can run more containers simultaneously on a physical machine than VMs.
Docker containers efficiently share resources, while VMs are isolated. Multiple applications running in Docker containers can therefore also be very lightweight; for example, shared binaries and libraries are not duplicated on the system.

Virtual machines can be migrated while executing; containers, however, cannot be live-migrated and must be stopped before moving from one host machine to another.

Hence, by using Docker, we can achieve the following benefits:

  1. Rapid application deployment: Docker containers include only the run-time requirements of the application, reducing image size and helping us deploy quickly. For example, suppose we want to build a PHP web stack that also supports Rails; we might need Apache, Nginx, MySQL, Rails 3 and Rails 4, PHP4 and PHP5. Can we cleverly install and configure all of those libraries on one VM? It would be a tricky, headache-inducing job. With Docker, we can put the packages and associated requirements for each particular application into two or more containers without conflicts in the host configuration.
  2. Server consolidation and resource usage planning: Docker makes it easy to limit the CPU and memory available to an application and to share unused memory across instances. It also allows consolidating multiple servers to save monetary cost.
  3. Application isolation: In some cases we need server consolidation to decrease cost, or a gradual plan to separate an application into decoupled pieces. That is where Docker comes in, providing a fast, lightweight platform for application isolation.
  4. Application portability: An application deployed in a container, together with all its dependencies, can be packed and migrated to another physical machine that runs Docker with the same Linux kernel, platform distribution, libraries and binary deployment model.
  5. Application version control, debugging and testability: Using the layered containers on the Docker daemon and the images on the Docker Registry, we can track versions of a container, inspect differences and roll back to the best state of application development.
  6. Minimal overhead and simplified maintenance: Docker images are small, which helps rapid delivery and reduces deployment time. The same Docker configuration can be used in a variety of environments. It also reduces the risk of problems caused by application dependencies, and cuts administration time.
  7. Sharing and collaboration: With Docker Hub, we can upload containers to and download them from a registry shared with others. We can also configure our own private repository.
  8. Security: With many containers running on one Docker daemon, Docker allows us to sandbox and isolate our applications.
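The rapid-deployment point above can be sketched with a minimal Dockerfile; this is an illustrative fragment (the base image tag, package names and paths are assumptions, not a tested recipe):

```dockerfile
# Package one web component with exactly the run-time it needs.
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y nginx php5-fpm
COPY ./app /var/www/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

A second container can carry a conflicting stack (say PHP4, or a different Rails version) from its own Dockerfile, with no clashes on the host.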

Docker security

Security Benefits of Docker

  1. Isolation of applications: As introduced in the previous part, application isolation is one of the most important features of Docker containers. Normally, applications all run on the same host system. By using container technology we can isolate them, making it easier to determine traffic flows and configure the security rules between containers.
  2. Flexible attitude: Containers are small, individual units, so they are flexible and the workflow to manage them is flexible as well. This is great for security patching, testing and releasing updated containers into production [3].
  3. Limiting information disclosure: Docker can limit the resources assigned to a container. This helps limit the amount of information available to the system (and to an attacker). Each container gets its own:
    – Network stack
    – Process space
    – File system instances
  4. Resource limiting is achieved by using namespaces in Linux. Namespaces act like a “view” which only shows a subset of all the resources on the system. In this way, Docker provides a form of isolation: processes running within a container cannot see, or affect, processes in other containers or the host system itself.
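The namespace “view” is easy to observe directly (assuming a Docker installation; exact output depends on the image):

```shell
# List processes from inside a fresh container:
docker run --rm ubuntu ps aux
# Typically this shows only the container's own `ps` process,
# because the PID namespace hides the host's process table.
```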

Current security issues with Docker

Root permissions
The Docker daemon runs with root privileges when performing its workflow. That is why control over the Docker daemon should be given only to authorized users, especially since Docker allows directory sharing between the host and a guest container without limiting the access rights. Hence, Docker plans to define well-audited sub-processes which no longer require root permissions. This will increase the security level of each container component and enhance stability.
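In practice this means the daemon’s socket is a sensitive boundary; the common convenience of the `docker` group (the user name `alice` is hypothetical) should be granted with care:

```shell
# Members of the docker group can reach the daemon's socket:
sudo usermod -aG docker alice
# alice can now run containers without sudo, but since the daemon
# runs as root, this is effectively root-equivalent access to the host
# (e.g. by mounting / into a container).
```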

Docker container management
The isolation feature of Docker enables running multiple instances of applications and makes it easy to apply a security patch to all images, but the use and management of Docker containers is not yet well understood by researchers and developers. Additionally, Docker’s isolation is not as robust as that provided by hypervisors for virtual machines in conventional approaches.

Lack of full User namespace implementation
In the current Linux environment there is still no full user namespace implementation in Docker. As the LXC tools evolve and include this support, Docker can leverage the possibilities. The first pieces, such as user mapping, are already available, and upcoming releases of Docker are expected to fully support user namespaces via LXC [3].

User 0 in container = User 0 on host
One of the risks due to the missing user namespaces is that the mapping of users from the host into containers is still a one-to-one mapping: user 0 (root) in the container is equal to user 0 on the host.

Default allow all
From a container we can ping other containers, because all IP traffic between Docker containers is allowed by default, and we can send other forms of traffic as well. This forces the IT manager to think carefully about what kind of traffic is actually needed between containers. It would have been better if Docker applied a “deny all by default” policy.
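A stricter stance can be configured by hand; a sketch using the daemon flags of Docker 1.x (container and image names `db` and `mywebapp` are hypothetical):

```shell
# Start the daemon with inter-container communication disabled:
docker -d --icc=false --iptables=true

# Then allow only explicit pairs of containers to talk, via links:
docker run -d --name db postgres
docker run -d --link db:db mywebapp
```

With `--icc=false`, traffic between arbitrary containers is dropped by iptables, and only linked containers can reach each other.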


Conclusion

Traditional virtualization technologies incur the overhead of a guest operating system, costly licensing for virtualization software, and configuration and administration time. As a result, Docker has entered the game, attracting attention and offering promising capabilities to researchers and the IT industry. Docker provides a lightweight environment to deploy and run application code across a variety of Linux instances.
In this post, we have taken a view of Docker, its characteristics, and its disadvantages and advantages: how it works, and how it provides an efficient workflow for deploying applications in containers. We then discussed Docker security concerns and directions for future investigation in using Docker.


[1] Wikipedia, Linux containers
[2] Redhat, Docker’s advantages
[3] Linux-audit, Docker and Security
