1.5 Docker Compose

1. Introduction

Docker Compose is a tool that allows you to define and manage multi-container Docker applications. It uses a YAML file to configure the application’s services, networks, and volumes. With Docker Compose, you can easily manage and deploy complex applications that require multiple services working together, such as web servers, databases, caches, and more.

1.1 Key Features of Docker Compose:

1. Multi-Container Management: Define and run multiple containers as a single application. For example, you can set up a web server, database, and cache in one docker-compose.yml file.

2. Declarative Configuration: The docker-compose.yml file describes the entire application, including its services, networks, and volumes, in a clear and consistent manner.

3. Single Command Deployment: Start your entire application stack with a single command (docker-compose up), which builds, creates, and starts all the defined services.

4. Environment-Specific Configurations: Easily manage different environments (development, testing, production) using environment variables and overriding configurations.

5. Networking: Automatically sets up a network for your containers, allowing them to communicate with each other using service names as hostnames.

6. Scalability: You can scale services up or down with a simple command (docker-compose up --scale <service>=<num>), making it easy to handle more load or reduce resource usage.
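As an illustration of environment-specific configuration (feature 4), Compose merges a base file with an override file automatically. A minimal sketch, assuming an nginx service; the file names are the Compose defaults, and the DEBUG variable is an assumed development-only setting:

```yaml
# docker-compose.yml (base configuration shared by all environments)
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"
---
# docker-compose.override.yml (a separate file, shown in the same listing
# for brevity; Compose picks it up automatically for local development)
services:
  web:
    environment:
      - DEBUG=true # assumed development-only variable
```

Running docker-compose up merges both files; for other environments you can point Compose at a different override with -f, e.g. docker-compose -f docker-compose.yml -f docker-compose.prod.yml up.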

1.2 How Docker Compose Works:

1. Define your application’s environment with a docker-compose.yml file.

2. Start your application by running docker-compose up.

3. Compose handles starting and linking your services, creating the network, and attaching any volumes defined.

1.3 Example Use Case:

Imagine you have a web application that requires:

  • A web server running Nginx.
  • A backend API built with Flask.
  • A Redis instance for caching.

With Docker Compose, you can define all these services in one file and manage them together. This eliminates the need to manually link containers, manage networks, or define volumes separately.

1.4 Basic Workflow:

1. Create a docker-compose.yml file: This file contains the definitions for your services.

2. Run docker-compose up: This command builds the images (if needed), creates the containers, and starts all the services defined in the file.

3. Use docker-compose down to stop and remove the containers and networks associated with the Compose file (named volumes are removed only if you add the -v flag).
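A minimal sketch of this workflow (the service name, image, and ports here are illustrative):

```yaml
# docker-compose.yml
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80" # host port 8080 -> container port 80
```

With this file in place, docker-compose up -d starts the stack in the background and docker-compose down removes it again.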

2. Container Orchestration

Docker Compose is often associated with container orchestration, but it is not a full-fledged container orchestration tool like Kubernetes or Docker Swarm. Instead, Docker Compose is a simpler tool focused on defining and managing multi-container Docker applications in a single host environment.

1. Orchestration Capabilities:

  • Docker Compose provides basic orchestration features such as starting, stopping, and scaling containers. It allows you to manage the lifecycle of a group of containers (e.g., web server, database) on a single host.
  • Kubernetes/Docker Swarm provide more advanced orchestration capabilities, including automated deployment, scaling across multiple hosts, self-healing, rolling updates, and more.

2. Use Case:

  • Docker Compose is ideal for local development, testing environments, and small-scale deployments. It’s particularly useful when all services run on a single machine.
  • Full Orchestration Tools (like Kubernetes) are designed for production environments with complex needs, such as deploying across clusters, managing multiple nodes, and handling distributed systems.

3. Single vs. Multi-Host:

  • Docker Compose manages containers on a single host. It does not natively support multi-host deployments or sophisticated networking across multiple nodes.
  • Kubernetes and Docker Swarm are designed for managing containers across multiple hosts, supporting large-scale, distributed applications.

3. Example

Below is a simple example of a docker-compose.yml file that sets up a basic web application with a Python Flask app, a Redis cache, and an Nginx reverse proxy:

version: '3.8'
services:
  web:
    image: python:3.8-slim
    container_name: flask-app
    working_dir: /app
    volumes:
      - ./app:/app # bind mount, not a named-volume mapping
    ports:
      - "5000:5000"
    command: flask run --host=0.0.0.0 # assumes Flask is installed in the image or mounted app
  redis:
    image: redis:alpine
    container_name: redis-db
    ports:
      - "6379:6379"
  nginx:
    image: nginx:alpine
    container_name: nginx-proxy
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf # bind mount, not a named-volume mapping

Structure:

1. version: Defines the Compose file format version being used.

2. services: Specifies the different services that make up the application.

  • web: Runs the Flask app using a Python image.
  • redis: Runs the Redis database.
  • nginx: Uses Nginx as a reverse proxy.

3. volumes: Mounts local directories or files into the container.

4. ports: Maps container ports to the host.

How to Use docker-compose.yml:

  1. Place this docker-compose.yml in the root directory of your project.
  2. Run docker-compose up in the terminal to start all services.

You can customize the docker-compose.yml to fit your specific application needs, such as adding databases, additional services, or scaling your application.

4. docker-compose up

When you run docker-compose up, it starts all the services defined in your docker-compose.yml file, but these services are not entirely independent—they are designed to work together as part of a multi-container application.

4.1 How Services Work with docker-compose up

  1. Dependency Management:

    • Docker Compose respects the dependencies between services. For example, if your web service depends on a database service, Docker Compose ensures that the database service is started before the web service.
    • You can explicitly define dependencies using the depends_on key in the docker-compose.yml file.
    services:
      web:
        build: .
        depends_on:
          - database
      database:
        image: postgres
  2. Network Connectivity:

    • By default, Docker Compose creates a network for all the services to communicate with each other. Services can reach one another by their service names (e.g., database in the example above).
    • Even though the services run in separate containers, they are interconnected and can communicate over this internal network.
  3. Parallel Start:

    • Services are typically started in parallel; when one service depends on another, Docker Compose starts the dependency first. Note that depends_on only waits for the dependency's container to start, not for the application inside it to be ready.
  4. Shared Volumes:

    • If services share a volume, they can read/write to the same data. This ensures that data is consistent across services that need to interact with the same files.
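Because depends_on by itself only orders container startup, recent versions of Docker Compose let you combine it with a healthcheck so a dependent service waits until its dependency is actually ready. A sketch, assuming a Postgres database (the healthcheck command and password are assumptions):

```yaml
services:
  web:
    build: .
    depends_on:
      database:
        condition: service_healthy # wait until the healthcheck passes
  database:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=example # assumed; the postgres image requires one
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
```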

4.2 Independence vs. Interdependence

  • Independent Execution: Each service runs in its own container with its own environment, which means they are isolated and can run independently.
  • Interdependence: Despite their independent execution, services are often designed to work together as part of the same application (e.g., a web server connecting to a database).

Example of docker-compose up Execution:

Given a docker-compose.yml file with two services:

version: '3.8'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    image: python:3.8
    command: python app.py # assumes app.py is present in the image or a mounted volume
  • docker-compose up will:
    1. Start the app service.
    2. Once app has started, start the web service.

5. Share Volumes in Compose

In Docker Compose, you can share volumes between containers by defining a named volume and then mounting it in multiple services. This allows containers to share data, such as configuration files or persistent storage.

5.1 Example: Sharing a Volume Between Containers

Let’s say you have two services: app and worker. Both need to access a shared directory where they read and write data.

Here’s how you can define and use a shared volume in a docker-compose.yml file:

version: '3.8'
services:
  app:
    image: python:3.8-slim
    container_name: app-container
    volumes:
      - shared-data:/data # named volume
    command: python /data/app.py
  worker:
    image: python:3.8-slim
    container_name: worker-container
    volumes:
      - shared-data:/data # named volume
    command: python /data/worker.py
volumes:
  shared-data: # named-volume definition

1. services:

  • app and worker are two services that both mount the shared-data volume to the /data directory inside their containers.
  • Both services can read and write to the /data directory, and any changes made by one service will be visible to the other.

2. volumes:

  • The volumes section at the bottom defines the named volume shared-data.
  • Docker manages this volume and ensures it persists across container restarts.

How It Works:

  • When you run docker-compose up, Docker will create the shared-data volume if it doesn’t exist, and mount it to the specified directories inside the app and worker containers.
  • Both containers can access and modify files in the /data directory, enabling them to share data.

This approach is useful for scenarios like sharing logs, configuration files, or temporary data between different services in your Docker Compose setup.

5.2 Is volumes section mandatory in compose?

The volumes section in Docker Compose is not mandatory but is often used for named volumes. Named volumes are useful when you need persistent storage that is managed by Docker, shared between multiple containers, or reused across services.

However, if you define a volume directly in a service without explicitly declaring it under the volumes section, Docker will automatically treat it as an anonymous volume. Here’s a comparison:

1. Without volumes Section (Anonymous Volume):

version: '3.8'
services:
  app:
    image: python:3.8-slim
    container_name: app-container
    volumes:
      - /data
  worker:
    image: python:3.8-slim
    container_name: worker-container
    volumes:
      - /data

In this case, Docker will create an anonymous volume for each service. These volumes are not shared, and their lifecycle is tied to the container.

2. With volumes Section (Named Volume):

version: '3.8'
services:
  app:
    image: python:3.8-slim
    container_name: app-container
    volumes:
      - shared-data:/data
  worker:
    image: python:3.8-slim
    container_name: worker-container
    volumes:
      - shared-data:/data
volumes:
  shared-data:

Here, the volumes section defines a named volume shared-data (visible in the Volumes tab of Docker Desktop). Both services can access this same volume, and it persists independently of the containers.

3. Key Differences:

  • Named Volumes: Defined explicitly under the volumes section. These can be shared between containers and persist even if containers are removed.
  • Anonymous Volumes: Created automatically if you omit the volumes section. They are container-specific and not shared unless explicitly defined.

If you need to share data between containers, defining the volume under the volumes section is recommended.

6. Environment variables

In Docker Compose, both the env_file and environment sections are used to set environment variables for services, but they serve slightly different purposes and are used in different ways.

6.1 env_file Section

The env_file section specifies a file that contains environment variable definitions. Each variable is defined on a new line in the file in the format KEY=VALUE. Docker Compose reads this file and sets the environment variables in the service’s container.

Usage:

version: '3.8'
services:
  web:
    image: nginx
    env_file:
      - .env
      - ./config/env.list
  • .env and ./config/env.list are files containing environment variables.
  • The environment variables from these files are loaded into the container.

Example .env File:

DATABASE_URL=postgres://user:password@db:5432/mydatabase
SECRET_KEY=mysecretkey

6.2 environment Section

The environment section allows you to define environment variables directly within the docker-compose.yml file. You can list individual variables and their values directly under this section.

Usage:

version: '3.8'
services:
  web:
    image: nginx
    environment:
      - DATABASE_URL=postgres://user:password@db:5432/mydatabase
      - SECRET_KEY=mysecretkey
  • Variables are specified directly in the docker-compose.yml file, which is useful for simple configurations or when you want to keep environment settings directly in the compose file.
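environment also accepts a map form instead of the list form shown above; the two are equivalent:

```yaml
version: '3.8'
services:
  web:
    image: nginx
    environment:
      DATABASE_URL: postgres://user:password@db:5432/mydatabase
      SECRET_KEY: mysecretkey
```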

6.3 Key Differences:

1. Source:

  • env_file: Loads environment variables from external files. This is useful for managing variables outside the docker-compose.yml file, which can be useful for secrets, configuration files, or shared environment settings.
  • environment: Defines environment variables directly in the docker-compose.yml file. This is useful for straightforward configurations or when the variables are not sensitive.

2. File Format:

  • env_file: Uses a separate file (or files) where each line is a key-value pair.
  • environment: Defines variables inline, using a list format.

3. Multiple Files:

  • env_file: Can specify multiple files, and environment variables from these files are merged, with later files overriding variables from earlier files if there are conflicts.
  • environment: Can only define variables directly in the compose file and does not support merging with external files.
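To sketch the merge behaviour (the file contents here are assumed): if both files define the same key, the later file in the env_file list wins:

```yaml
# .env contains:              LOG_LEVEL=info
# ./config/env.list contains: LOG_LEVEL=debug
services:
  web:
    image: nginx
    env_file:
      - .env              # loaded first
      - ./config/env.list # loaded second; its LOG_LEVEL=debug wins
```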

6.4 Example of Using Both:

You can use both sections together if needed:

version: '3.8'
services:
  web:
    image: busybox
    command: sh -c 'sleep 2 && env'
    # runs `sleep 2` and then `env` inside a shell process
    # `env` prints the container's environment variables
    env_file:
      - .env
    environment:
      - DEBUG=true

In this example:

  • Environment variables from .env file are loaded.
  • The DEBUG variable is also set directly in the docker-compose.yml file; values defined under environment take precedence over variables of the same name loaded from the .env file.

Choosing between these depends on your needs for managing configuration and secrets in your Docker Compose setup.

7. Docker compose command sh -c

You can use the echo command directly in Docker Compose without wrapping it in sh -c. The sh -c approach is necessary when you want to execute multiple commands or use shell features such as conditional execution, loops, or variable expansion.

7.1 Direct echo Command Example:

If you only want to run a single command like echo, you can specify it directly in the command section without sh -c:

version: '3.8'
services:
  ubuntu_service:
    image: ubuntu:latest
    container_name: my_ubuntu_container
    command: echo "Hello from Docker Compose!"

7.2 When You Need sh -c:

The sh -c is required when you need to execute multiple commands in sequence, use shell operators (&&, ||, etc.), or work with more complex command structures:

version: '3.8'
services:
  ubuntu_service:
    image: ubuntu:latest
    container_name: my_ubuntu_container
    command: sh -c "echo 'Updating packages...' && apt-get update && echo 'Setup complete!'"

The use of sh -c provides flexibility and allows you to do more complex operations within a single command field in Docker Compose.
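Compose also accepts command in list (exec) form, where arguments are passed straight to the binary without a shell. This is exactly why operators like && need sh -c — in list form they would be treated as literal arguments:

```yaml
version: '3.8'
services:
  ubuntu_service:
    image: ubuntu:latest
    # list (exec) form: no shell is involved, so `&&` here would be
    # passed to echo as a literal argument rather than chaining commands
    command: ["echo", "Hello from Docker Compose!"]
```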

8. Docker Compose Pseudo-Terminal

To open a pseudo-terminal (interactive terminal) with Docker Compose, use the docker-compose run or docker-compose exec command. Both allocate an interactive TTY by default, unlike plain docker run and docker exec, which need the -it flags. This is useful for running commands interactively inside a container, such as opening a shell session.

8.1 Using docker-compose run

The docker-compose run command creates and runs a new container based on the service definition. It allocates a pseudo-TTY and keeps stdin open by default, so the session is interactive without extra flags.

Example:

docker-compose run <service_name> /bin/bash
  • Replace <service_name> with the name of the service you want to interact with (e.g., web, app, db).
  • /bin/bash is the command to run inside the container. You can replace this with any other command.

8.2 docker-compose exec

The docker-compose exec command is used to run a command inside an already running container. It’s the preferred method if your container is already up and running.

Example:

docker-compose exec <service_name> /bin/bash
  • Replace <service_name> with the name of the running service you want to access.
  • This will open a bash shell in the running container.

8.3 Differences Between run and exec:

  • run: Starts a new container for the specified service. It is useful for one-off tasks or testing commands in a fresh container.
  • exec: Attaches to an already running container. Use this when you need to interact with an existing service.

Example Scenario:

Suppose you have a docker-compose.yml file with a service named web:

version: '3.8'
services:
web:
image: nginx
ports:
- "80:80"

1. Open a new container with a terminal:

docker-compose run web /bin/bash

This command starts a new nginx container and drops you into an interactive shell.

2. Access an existing running container:

docker-compose up -d # Start services in the background
docker-compose exec web /bin/bash

This command attaches to the running web container and gives you a bash shell.

9. Interactive Terminal in docker-compose.yml

To specify an interactive terminal within the docker-compose.yml file, you can use the stdin_open and tty options. These options ensure that the container starts with an interactive terminal available. This is particularly useful when you want to run a service in a way that you can interact with it directly, such as when using a shell.

9.1 Example docker-compose.yml with Interactive Terminal:

version: '3.8'
services:
  myservice:
    image: ubuntu:latest
    command: /bin/bash # Command to run when the container starts
    stdin_open: true   # Keeps stdin open to allow interactive input
    tty: true          # Allocates a pseudo-TTY
  • stdin_open: true: Keeps the standard input (stdin) open, which is necessary for interactive sessions.
  • tty: true: Allocates a pseudo-TTY, which is required for terminal interaction.
  • command: /bin/bash: Runs the bash shell, allowing interaction when the container starts.

9.2 Running the Service:

When you run the service with docker-compose up -d, the container starts in the background with its interactive terminal allocated. To interact with it, use the docker attach command or open a new shell via docker-compose exec.

docker-compose up
# or
docker-compose up -d # run in background

Then, in a separate terminal:

docker attach <container_name>
# attach to the main bash process in the ubuntu container

Or:

docker-compose exec myservice /bin/bash
# attach to a new bash process in the ubuntu container