1.1 Docker Fundamentals

1. Introduction to Docker

Docker is a platform that enables developers to automate the deployment, scaling, and management of applications using containerization. Containers package an application and its dependencies into a standardized unit, ensuring that it runs consistently across different computing environments.

Here’s a brief overview of Docker fundamentals:

1.1 What is Docker?

  • Docker is an open-source platform designed to simplify the creation, deployment, and management of applications through containerization.
  • Containers are lightweight, standalone, and executable software packages that include everything needed to run an application: code, runtime, system tools, libraries, and settings.
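The ideas above can be seen end to end with a single command. Running the official hello-world image exercises the whole flow: the client asks Docker to start a container, the image is pulled from Docker Hub if it is not already local, and the container prints a message and exits.

```shell
# Runs a container from the official hello-world image; Docker pulls the
# image from Docker Hub automatically if it is not present locally.
docker run hello-world
```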

1.2 Key Components

  • Docker Engine: The core component of Docker, responsible for running containers. It consists of:

    • Docker Daemon: The background service that manages Docker containers and images.
    • Docker CLI: The command-line interface that allows users to interact with Docker Daemon.
  • Docker Images: Read-only templates used to create containers. Images include everything needed to run an application, such as code, libraries, and environment variables.

    • Dockerfile: A text file containing a series of instructions for building Docker images.
  • Docker Containers: Instances of Docker images that run applications in isolated environments. Containers are lightweight and start quickly.

  • Docker Hub: A cloud-based registry service that allows you to share and store Docker images. You can pull public images or push your own images to Docker Hub.
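To see how these components fit together, here is a minimal sketch: a Dockerfile for a small Python script (the file `app.py` and the tag `myapp` are hypothetical names for illustration), followed by the commands that turn it into an image and then a running container.

```dockerfile
# Hypothetical Dockerfile: package a single Python script.
FROM python:3.12-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
```

```shell
# Build an image from the Dockerfile in the current directory,
# then run a container from it (removed automatically on exit).
docker build -t myapp:1.0 .
docker run --rm myapp:1.0
```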

1.3 Benefits of Docker

  • Consistency: Containers ensure that applications run the same way regardless of the environment, whether it’s a developer’s machine, staging, or production.

  • Isolation: Containers isolate applications and their dependencies, preventing conflicts and ensuring that each application has its own environment.

  • Portability: Docker containers can run on any system that supports Docker, making it easy to move applications across different environments.

  • Efficiency: Containers share the host system’s OS kernel, making them more lightweight and efficient compared to traditional virtual machines.

  • Scalability: Docker makes it easy to scale applications up or down by managing containers and orchestrating them with tools like Docker Compose and Kubernetes.
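As a small illustration of orchestration with Docker Compose, the hypothetical `docker-compose.yml` below defines two services, a web server and a cache, that start together with one command:

```yaml
# Hypothetical docker-compose.yml: an nginx web server alongside a Redis cache.
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
  cache:
    image: redis:alpine
```

Running `docker compose up -d` in the same directory starts both containers in the background; `docker compose down` stops and removes them.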

1.4 Common Use Cases

  • Development and Testing: Simplifies the development environment setup and testing of applications by providing a consistent environment.

  • Continuous Integration/Continuous Deployment (CI/CD): Docker integrates well with CI/CD pipelines to automate the build, test, and deployment processes.

  • Microservices Architecture: Docker is ideal for deploying microservices, where each service can run in its own container and communicate with other services.

In summary, Docker streamlines the development, deployment, and scaling of applications through containerization, providing a consistent and efficient way to manage software across various environments.

2. Docker vs Virtual Machines

Docker and virtual machines (VMs) are both technologies used to create isolated environments for running applications, but they have fundamental differences in terms of architecture, performance, and usage. Here’s a detailed comparison:

2.1 Architecture

Virtual Machines:

  • VMs run on a hypervisor, which can be either Type 1 (bare-metal) or Type 2 (hosted).
  • Each VM includes a full operating system (OS) instance, which means that every VM has its own kernel and set of system libraries.
  • VMs are more heavyweight due to the need for separate OS instances and the overhead of the hypervisor.

Docker:

  • Docker uses containerization technology, which leverages the host OS kernel.
  • Containers share the host OS kernel and use isolated user spaces, making them much lighter than VMs.
  • Docker containers package applications and their dependencies but do not include a full OS, just the necessary libraries and binaries.
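This kernel sharing is easy to observe. On a Linux host, the kernel version reported inside a container matches the host's (on Docker Desktop for Windows or macOS, it is the kernel of Docker's Linux VM instead):

```shell
# Host kernel version:
uname -r
# Kernel version reported from inside an Alpine container;
# on a Linux host, the two match because no second kernel is booted.
docker run --rm alpine uname -r
```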

2.2 Performance

Virtual Machines:

  • VMs are more resource-intensive because they require more CPU, memory, and storage to run multiple full OS instances.
  • The hypervisor adds a layer of overhead, which can impact performance.

Docker:

  • Containers are more lightweight and efficient because they share the host OS kernel.
  • Startup times for containers are typically much faster compared to VMs.
  • Docker can achieve higher density of applications on the same hardware compared to VMs.
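The startup difference is simple to measure. Once an image is cached locally, starting and tearing down a container typically takes well under a second, versus the tens of seconds a full OS boot requires:

```shell
# Pull the image once so the timing measures container startup,
# not the network download.
docker pull alpine
# Time a full container lifecycle: create, run a no-op, and remove.
time docker run --rm alpine true
```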

2.3 Isolation

Virtual Machines:

  • VMs provide strong isolation because each VM runs a completely separate OS instance.
  • This isolation is more secure but at the cost of higher resource usage.

Docker:

  • Containers provide process-level isolation using namespaces and control groups (cgroups) in the Linux kernel.
  • While still secure, container isolation is generally weaker than VM isolation, because all containers share the host kernel: a kernel vulnerability can potentially affect every container on the host.
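Both mechanisms can be observed directly. Namespaces limit what a container can see, and cgroups limit what it can use:

```shell
# PID namespace: the container sees only its own processes,
# with its entry point running as PID 1.
docker run --rm alpine ps aux

# cgroups: cap the container's memory and CPU share.
docker run --rm --memory 256m --cpus 0.5 alpine echo "resource-limited"
```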

2.4 Portability

Virtual Machines:

  • VMs are portable across different physical machines and can be migrated easily using tools provided by hypervisor vendors.
  • Portability can be limited by differences in the underlying hardware and hypervisor features.

Docker:

  • Docker containers are highly portable and can run on any system that supports Docker, regardless of the underlying hardware or OS.
  • Docker images can be easily shared and deployed across different environments using Docker Hub or private registries.
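The typical workflow for moving an image between environments is pull, re-tag, and push ("myuser" below is a placeholder for your own Docker Hub account or private registry namespace):

```shell
# Pull a public image, re-tag it under your own namespace, and push it
# so any other Docker host can pull the identical image.
docker pull nginx:alpine
docker tag nginx:alpine myuser/nginx:alpine
docker push myuser/nginx:alpine
```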

2.5 Use Cases

Virtual Machines:

  • Running multiple different operating systems on a single physical machine.
  • Providing strong isolation for applications that require high security.
  • Legacy applications that require a specific OS environment.

Docker:

  • Microservices architecture and modern application development.
  • Continuous Integration/Continuous Deployment (CI/CD) pipelines.
  • Applications that need to be lightweight, scalable, and portable.

Summary

| Aspect | Virtual Machines | Docker Containers |
| --- | --- | --- |
| Architecture | Full OS instance per VM | Shared OS kernel |
| Resource Usage | High | Low |
| Performance | Slower startup, higher overhead | Fast startup, low overhead |
| Isolation | Stronger isolation | Process-level isolation |
| Portability | Limited by hypervisor | Highly portable |
| Use Cases | Multi-OS environments, strong security | Microservices, CI/CD, scalability |

Both Docker and virtual machines have their own strengths and are suitable for different scenarios. Understanding these differences can help in choosing the right technology for a given use case.

3. How Does Docker Work?

Docker operates on a client-server architecture.

  1. Docker Client: The Docker client is the primary way users interact with Docker. It provides a command-line interface (CLI) that sends commands to the Docker daemon. The client can run on the same host as the daemon or connect to a remote daemon.

  2. Docker Daemon: The Docker daemon (dockerd) is a server that runs on the host machine. It is responsible for building, running, and managing Docker containers. The daemon listens for Docker API requests and manages Docker objects, such as images, containers, networks, and volumes.

  3. Docker REST API: The Docker API is used by the Docker client to communicate with the daemon. It can also be used by other applications to interact with the daemon.

  4. Docker Registry: This is a repository for Docker images. The Docker client can pull images from the registry to create containers and push images to the registry. Docker Hub is a popular public registry, but there are also private registries.

In this architecture, the Docker client can interact with the Docker daemon over various protocols, such as UNIX sockets or network interfaces, enabling flexible deployment and management of containers.
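This split is visible from the CLI: `docker version` reports details for both the client and the daemon it is connected to, and the `DOCKER_HOST` environment variable points the client at a different daemon (the host and port below are placeholders):

```shell
# Shows separate Client and Server (daemon) sections, each with its own version.
docker version

# Point the client at a remote daemon over TCP for a single command.
DOCKER_HOST=tcp://remote-host:2375 docker ps
```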

4. Docker on Windows (Installation)

Docker Desktop is a crucial tool for developers looking to containerize applications. The video from the Docker Mastery course offers a detailed walkthrough, now shared on YouTube for broader accessibility. Below are the steps and recommendations from the course for setting up Docker Desktop on Windows 10 or 11.

1. Downloading and Installing Docker Desktop

  • Download the Docker Desktop for Windows installer from the official Docker website and run it, accepting the default options unless you have a specific reason to change them.

2. Enabling WSL 2

  • Docker Desktop now uses WSL 2, a more efficient way to run Linux on Windows compared to the older Hyper-V setup.
  • During installation, enable WSL 2 if it isn’t already enabled. The installer will guide you through this, including installing the necessary Linux kernel update.

3. Post-Installation Configuration

  • After installation, launch Docker Desktop and follow the setup wizard.
  • Agree to the End User License Agreement (EULA). Docker Desktop is free for learning and personal use, though some enterprise features may require a paid license.

4. Setting Up Visual Studio Code

  • Download Visual Studio Code from its official website.
  • Install Docker and Kubernetes extensions within Visual Studio Code for enhanced functionality.

5. Adjusting Docker Desktop Settings

  • Access Docker Desktop settings by right-clicking the Docker icon in the system tray and selecting “Settings”.
  • Configure your preferred settings, especially under the WSL integration section. Ensure your Linux distributions (e.g., Ubuntu) are enabled for Docker.

6. Creating a Docker ID

  • Create a free Docker ID at hub.docker.com.
  • Log in to Docker Desktop with your Docker ID to increase your pull rate limit from Docker Hub.

7. Cloning the Course Repository

  • Clone the course repository into your WSL file system for better performance:

```shell
git clone <repository_url>
```

Troubleshooting Tips

  • Virtualization Errors: Ensure CPU virtualization features (VT-x) are enabled in your BIOS.
  • Pull Rate Limits: Log in with your Docker ID to avoid hitting free-tier limits on Docker Hub.
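Logging in from the terminal also works and applies to all subsequent pulls on that machine:

```shell
# Authenticate against Docker Hub; prompts for your Docker ID and password
# (or a personal access token), raising the anonymous pull rate limit.
docker login
```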

5. Docker CLI Cheat Sheet

Docker provides the ability to package and run an application in a loosely isolated environment called a container. The isolation and security allows you to run many containers simultaneously on a given host. Containers are lightweight and contain everything needed to run the application, so you do not need to rely on what is currently installed on the host. You can easily share containers while you work, and be sure that everyone you share with gets the same container that works in the same way.
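A few everyday commands are worth knowing by heart (a sketch, not an exhaustive list; `<container>` and `<image>` stand for a name or ID):

```shell
docker ps                        # list running containers
docker ps -a                     # include stopped containers
docker images                    # list local images
docker logs <container>          # show a container's output
docker exec -it <container> sh   # open a shell inside a running container
docker stop <container>          # stop a running container
docker rm <container>            # remove a stopped container
docker rmi <image>               # remove a local image
docker system prune              # clean up unused containers, networks, and images
```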
