1.2 Docker Workflow
1. Creating a Docker Container
Creating a Docker container that runs a Unix shell is straightforward and a common use case. Docker containers can run any process, including a shell, provided that the necessary binaries and libraries are included in the container image. Here’s a step-by-step guide on how to create and run a Docker container with a Unix shell:
Step 1: Install Docker
Ensure Docker is installed on your system. You can download and install Docker from the official Docker website.
Step 2: Choose a Base Image
Docker images are built from base images, which can include various Linux distributions. Common choices for Unix shell environments include:
- `alpine`: A minimal Docker image based on Alpine Linux.
- `ubuntu`: A more full-featured Docker image based on Ubuntu.
Step 3: Create a Dockerfile
A Dockerfile is a script that contains instructions on how to build a Docker image. Below are examples of Dockerfiles for both Alpine Linux and Ubuntu:
Dockerfile for Alpine Linux
```dockerfile
# Use the official Alpine Linux image
FROM alpine:latest

# Install the Bash shell
RUN apk add --no-cache bash

# Set the default command to run Bash
CMD ["bash"]
```
Dockerfile for Ubuntu
```dockerfile
# Use the official Ubuntu image
FROM ubuntu:latest

# Install the Bash shell
RUN apt-get update && apt-get install -y bash

# Set the default command to run Bash
CMD ["bash"]
```
Step 4: Build the Docker Image
Navigate to the directory containing your Dockerfile and build the Docker image using the `docker build` command:
```shell
# For Alpine Linux
docker build -t alpine-bash .

# For Ubuntu
docker build -t ubuntu-bash .
```
The `-t` flag tags the image with a name (`alpine-bash` or `ubuntu-bash`).
Step 5: Run the Docker Container
Once the image is built, you can run a container from the image using the `docker run` command:
```shell
# For Alpine Linux
docker run -it alpine-bash

# For Ubuntu
docker run -it ubuntu-bash
```
The `-it` flags make the container interactive and allocate a pseudo-TTY, allowing you to interact with the shell.
1.1 Example: Running an Alpine Linux Shell
Here’s a complete example of creating and running an Alpine Linux shell in a Docker container:
1. Create a directory and a Dockerfile:
```shell
mkdir alpine-shell
cd alpine-shell
nano Dockerfile
```
2. Add the following content to the Dockerfile:
```dockerfile
FROM alpine:latest
RUN apk add --no-cache bash
CMD ["bash"]
```
3. Build the Docker image:
```shell
docker build -t alpine-bash .
```
4. Run the Docker container:
```shell
docker run -it alpine-bash
```
You will now be inside a Bash shell running in an Alpine Linux container.
Here are some Linux commands to observe what's going on inside the container:
```shell
root@<container-id># ls -al    # observe the file system
root@<container-id># ps -elf   # list running processes
```
If you explore the filesystem, you will find no other shells or GUI tools (a Docker container is lightweight, unlike a VM). The bash process is the main process, with PID 1.
1.2 Additional Tips
Networking
You can connect the container to a network using the `--network` flag:
```shell
docker run -it --network=my-network alpine-bash
```
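The network named in the `--network` flag must exist first. A minimal sketch, assuming the `alpine-bash` image from above; the network name `my-network` and container name `shell-a` are just examples:

```shell
# Create a user-defined bridge network
docker network create my-network

# Containers on the same user-defined network can reach each other by name
docker run -dit --name shell-a --network=my-network alpine-bash
docker run -it --rm --network=my-network alpine-bash ping -c 1 shell-a
```

User-defined bridge networks provide automatic DNS resolution between containers by name, which the default bridge network does not.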
1.3 Dockerize an App
A guide to packaging your application in Docker and distributing it.
How to dockerize
1.4 Working with Existing Docker images
1.4.1 Creating / Running / Starting Container
Docker Hub provides ready-made Docker images to start with.
```shell
docker pull ubuntu   # download the Ubuntu image

docker run -it ubuntu
# The `run` command:
# 1. creates a container with a random name
# 2. starts the Ubuntu container with the default startup command
#    specified in the Dockerfile (i.e. bash)
# - `-i`: keeps STDIN open even if not attached
# - `-t`: allocates a pseudo-TTY, which allows you to interact with the container
```
You can run a custom startup command when starting a container:
```shell
docker run -it ubuntu sh
# Run the Ubuntu container with a custom startup command (`sh`)

docker run -it ubuntu top -b
# Here the command run inside the container is `top -b`

docker run -it alpine-bash sh -c "echo 'Hello, World!'"
# You can override the default command to run custom scripts or commands.
# Here, the `echo` shell command is run inside a shell process.
```
Start an existing container
If you want to start an already existing container in interactive mode, you can use:
```shell
docker start -i <container_name_or_id>
```
This will attach to the container and allow you to interact with it.
Start an existing container in detach mode
To start an existing Docker container in detached mode:
```shell
docker start <container_name_or_id>
```
By default, Docker containers will run in the background unless they are designed to be interactive (e.g., by running a shell). If you want to reattach or interact with the running container, you can use:
```shell
docker attach <container_name_or_id>
# This attaches you to the PID 1 main process
```
Run a command in a running container
The `docker exec -it` command is used to run a command in a running Docker container interactively. This is often used to open a new shell session inside a container that is already running. To start a bash session inside a running container:
```shell
docker exec -it <container_name_or_id> /bin/bash
# Creates a new bash shell and attaches a pseudo-terminal in interactive mode
```
- `docker exec`: executes a command inside a running container.
- `-i`: keeps STDIN open, allowing you to interact with the container.
- `-t`: allocates a pseudo-TTY, which makes the command interactive (usually used for commands like `/bin/bash` or `/bin/sh`).
Enter into a running container
You can enter a running container with:
```shell
docker exec -it my_new_container /bin/bash
```
You can replace `bash` with `sh` if bash is not available in the container.
To attach to a running container later, use the `-a` / `--attach` option:
```shell
docker start -a my_new_container
```
If you need to explicitly use a UID, like root = UID 0, you can specify this:
```shell
docker exec -it -u 0 my_new_container /bin/bash
# Logs you in as root
```
1.4.2 Exiting Container
Let's run a container:
```shell
docker run -it ubuntu bash
```
Exit without Stopping the container
To exit the bash shell without stopping the container, use the Ctrl+P, Ctrl+Q key sequence, which returns you to the host system prompt but keeps the container running in the background.
This operation detaches you from the container and returns you to your system's shell without exiting the only process (i.e. `bash`) running inside the container. Hence the container stays in the Running state.
Alternative
Once inside the container, if you exit the only process running inside it (in this example, the `bash` process), you will return to your system's shell and the container will be stopped.
```shell
root@container_id# exit
```
2. Docker with Ubuntu vs Ubuntu VM?
Docker provides the functionality of different operating systems, like Ubuntu, without running a full guest OS by leveraging a few key concepts and technologies. Here’s how Docker manages to provide the same functionalities using an Ubuntu image without installing the entire Ubuntu operating system:
2.1 Key Concepts and Technologies
Containers vs. Virtual Machines:
- Virtual Machines (VMs): Each VM includes a full operating system along with the application and its dependencies, running on a hypervisor. This means each VM has its own kernel and OS resources.
- Containers: Containers, on the other hand, share the host system’s kernel and only include the application and its dependencies. Containers run as isolated processes on the host OS, using its kernel but maintaining their own filesystem, network, and process space.
Union File Systems and Docker Layers:
- Docker images are built in layers. Each instruction in a Dockerfile creates a new layer. These layers are stacked and form a union filesystem, which means that the image only contains the necessary parts of the OS to run the application.
- For an Ubuntu image, this includes the necessary binaries, libraries, and tools that are part of the Ubuntu userland, but not the kernel.
Namespaces and Cgroups:
- Namespaces: Provide isolated environments within the same OS instance. This includes process isolation (PID namespace), user isolation (user namespace), file system isolation (mount namespace), etc.
- Control Groups (Cgroups): Manage and limit resource usage (CPU, memory, disk I/O, etc.) of the containerized applications.
Docker Images:
- When you pull an Ubuntu Docker image from Docker Hub, it includes the Ubuntu filesystem (binaries, libraries, etc.) but without the Ubuntu kernel. It relies on the host’s kernel to function.
- This means the image has everything needed to provide the “Ubuntu experience” (such as the apt package manager, bash shell, common utilities, etc.) without the overhead of a full OS.
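The process isolation described above is easy to observe in practice. A quick sketch, assuming Docker and the `ubuntu` image are available:

```shell
# Inside a container, the PID namespace shows only the container's own processes
docker run --rm ubuntu ps -e
# The startup command appears as PID 1; no host processes are visible
```

This is the same isolation that makes `bash` run as PID 1 in the earlier shell examples.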
2.2 Example of How an Ubuntu Image Works in Docker
When you run an Ubuntu container, Docker uses the host OS kernel and provides the container with the Ubuntu filesystem:
1. Download the Ubuntu image:
```shell
docker pull ubuntu
```
2. Run the Ubuntu container:
```shell
docker run -it ubuntu
```
Inside the container, you can run commands just like you would on a full Ubuntu system:
```shell
root@container_id:/# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
```
You can use `apt-get` to install software, navigate the filesystem, and perform other tasks as if you were on an Ubuntu machine.
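You can also verify that the container shares the host's kernel rather than running its own. A sketch, assuming the `ubuntu` image has been pulled:

```shell
# Both commands report the same kernel release,
# because the container uses the host kernel
uname -r
docker run --rm ubuntu uname -r
```

If the host runs a non-Linux OS (e.g. macOS or Windows), the second command reports the kernel of the Linux VM that Docker Desktop runs under the hood.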
How Docker Achieves This
File System:
- The Ubuntu Docker image contains a minimal root filesystem that replicates the Ubuntu environment. This includes directories like `/bin`, `/etc`, `/lib`, `/usr`, etc., filled with the usual tools and libraries.
Process Management:
- Docker containers run as isolated processes on the host system. The host's kernel manages these processes, providing the necessary system calls and resource management.
User Space Tools:
- The tools and applications you use within the container are from the Ubuntu user space. These tools are packaged in the Docker image, enabling you to interact with the container as if it were a standalone Ubuntu system.
3. Docker Basics
3.1 Docker Crash Course
Docker Crash Course
Where are pulled Docker images stored?
Here is a list of the storage locations of the docker images on different operating systems:
- Ubuntu: /var/lib/docker/
- Fedora: /var/lib/docker/
- Debian: /var/lib/docker/
- Windows: C:\ProgramData\DockerDesktop
- macOS: ~/Library/Containers/com.docker.docker/Data/vms/0/
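Rather than memorizing these paths, you can ask the Docker daemon directly for its storage root:

```shell
# Prints the daemon's data directory (typically /var/lib/docker on Linux)
docker info --format '{{.DockerRootDir}}'
```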
3.3 Docker Advanced
Docker Basic Commands
0. Docker Related
- `docker ps`: lists running containers
- `docker run --name debian-container-always -it --restart always debian:latest`: the container restarts automatically if stopped due to an `exit` from the main process.
To end the session, clean up all the containers and images:
```shell
docker container rm -f $(docker ps -aq)
docker image rm -f $(docker images -aq)
```
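Alternatively, Docker's built-in prune command can do this cleanup in one step:

```shell
# Removes stopped containers, unused networks, and dangling build cache;
# with -a it also removes all images not used by at least one container
docker system prune -a
```

It prompts for confirmation before deleting anything (pass `-f` to skip the prompt).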
1. Docker run `-dit` flags
In Docker, the `-dit` flags are used in combination to run a container in the background (detached mode), while still providing interactive terminal capabilities and allocating a pseudo-TTY. Let's break down the purpose of each flag and why they might be used together:
1. `-d` (detached mode):
This flag runs the container in the background, returning control of the terminal to the user. The container continues to run even after the user logs out or closes the terminal session.
2. `-i` (interactive mode):
This flag keeps the STDIN stream open, allowing the user to provide input to the container interactively. It is useful for cases where the container process requires user input or when running a process that reads from the terminal.
3. `-t` (pseudo-TTY):
This flag allocates a pseudo-TTY, which provides an interface for terminal input and output. It makes the command prompt more user-friendly, with features like line editing.
Why Use `-dit` Together?
While it may seem counterintuitive to use `-d` (detached mode) and `-it` (interactive and TTY) together, there are specific scenarios where this combination is useful:
1. Starting Interactive Services in the Background:
Sometimes, you want to start a service or an interactive application (like a shell or a REPL) in the background, but still be able to attach to it later if needed. For example, running a debugging tool or an interactive shell session in a container that you can later attach to using `docker attach` or `docker exec`.
2. Log and Debug:
By using `-it`, you can ensure that the container has a proper terminal interface, which can be useful for logging and debugging. Even if the container is running in detached mode (`-d`), you can inspect logs or attach to the container to interact with it.
3. Daemon Processes with Interactive Control:
Some daemon processes may provide an interactive control interface that can be accessed through a TTY. By using `-dit`, the container runs in the background, but you can still interact with the daemon if necessary.
Example Scenario
Suppose you’re running a container with a shell that needs to start and continue running in the background, but you also want the ability to interact with it later:
```shell
docker run -dit --name my-shell-container ubuntu bash
```
In this case:
- `-d` allows the container to run in the background.
- `-i` keeps the input open, which is necessary for interactive shells.
- `-t` provides a terminal interface, making the shell prompt user-friendly.
You can later attach to this container using `docker attach my-shell-container` or execute a command interactively using `docker exec -it my-shell-container bash`.
Important Example
The following container will stop immediately, as soon as it is started. You will not be able to `attach` to it or `exec` any command on it again.
- To attach, the container must have been created with an interactive pseudo-terminal
- To exec, you need a running container
```shell
docker run --name stranded_container ubuntu
# or
docker run --name stranded_container ubuntu bash
# Creating a container without an interactive pseudo-terminal (-it):
# the `bash` process simply runs in Ubuntu and exits as soon as the
# container starts, which stops the container.
```
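If you need a long-lived container without attaching a terminal at creation time, one common workaround is to give it a startup command that never exits. A sketch; the name `kept_container` is just an example:

```shell
# `sleep infinity` keeps PID 1 alive, so the container stays running
docker run -d --name kept_container ubuntu sleep infinity

# Now you can open interactive shells in it at any time
docker exec -it kept_container bash
```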
3. Docker commit
The `docker commit` command is used to create a new Docker image from an existing container. This allows you to capture the current state of a container, including any changes made to it, and save it as a new image that can be used to create other containers.
```shell
docker commit [OPTIONS] <container_id_or_name> <new_image_name>
```
- `<container_id_or_name>`: The ID or name of the container you want to commit.
- `<new_image_name>`: The name you want to give to the new image.
Common Options:
- `-m "message"`: Adds a commit message, similar to a Git commit message, describing what changes were made.
- `-a "author"`: Specifies the author of the image (e.g., `-a "John Doe"`).
- `-p`: Pauses the container during the commit to ensure that the filesystem is in a consistent state.
Example:
Assume you have a running container with ID `abc123` that you've made changes to, and you want to save those changes as a new image called `my_custom_httpd`:
```shell
docker commit -m "Customized Apache HTTPD configuration" -a "John Doe" abc123 my_custom_httpd
```
This command will create a new image named `my_custom_httpd` based on the current state of the `abc123` container. You can then use this image to run new containers with the changes preserved.
Verifying the New Image:
To verify that the new image has been created, you can list your Docker images:
```shell
docker images
```
You should see `my_custom_httpd` in the list of available images.
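Putting it together, here is an end-to-end sketch; the container name `demo` and the choice of `curl` as the installed package are arbitrary examples:

```shell
# 1. Start a container and make an interactive change inside it
docker run -it --name demo ubuntu bash
#    (inside the container) apt-get update && apt-get install -y curl; exit

# 2. Commit the stopped container as a new image
docker commit -m "Install curl" demo demo-with-curl

# 3. Containers created from the new image keep the change
docker run --rm demo-with-curl curl --version
```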
3.1 Docker commit vs dockerfiles
`docker commit` and Dockerfiles serve different purposes in Docker workflows, and each has its own advantages and use cases. Here's a comparison of the advantages of using `docker commit` versus Dockerfiles:
Advantages of `docker commit`
1. Quick Prototyping and Experimentation:
- Immediate Changes: `docker commit` allows you to quickly capture the current state of a running container, including changes made interactively (e.g., installing packages, modifying files). This can be useful for rapid prototyping or experimentation without needing to create a Dockerfile.
- Ease of Use: For developers who are experimenting with a new setup or configuration, `docker commit` can be a fast way to save the state of a container without writing a Dockerfile.
2. Snapshot of Current Container State:
- Capture Complex States: It’s useful when you have manually configured a container in ways that are hard to describe in a Dockerfile, such as complex runtime configurations or custom setup scripts.
- Preserve Manual Changes: If you've made manual changes inside a container (e.g., modifications to files or configurations) and want to save those changes as an image, `docker commit` captures those changes.
3. No Need for Dockerfile Syntax Knowledge:
- Simplicity for Non-Developers: Users who are not familiar with Dockerfile syntax or who prefer not to write Dockerfiles can use `docker commit` to create images from their containers without needing to learn Dockerfile commands.
Advantages of Dockerfiles
1. Reproducibility:
- Consistent Builds: Dockerfiles provide a clear and repeatable method to build Docker images. This ensures that anyone building the image from the Dockerfile will get the same result, which is crucial for debugging and maintaining consistency across different environments.
- Version Control: Dockerfiles can be stored in version control systems (e.g., Git), allowing you to track changes over time and collaborate with others.
2. Documentation:
- Readable and Maintainable: Dockerfiles serve as documentation of how an image is built. They provide a human-readable and maintainable record of the installation and configuration steps.
- Best Practices: Dockerfiles encourage best practices by using well-defined instructions and following conventions that ensure better image layering and efficiency.
3. Automation and CI/CD Integration:
- Automated Builds: Dockerfiles can be used in continuous integration and deployment pipelines to automate image builds and deployments.
- Consistent Environments: Automated builds using Dockerfiles help maintain consistent environments across development, staging, and production.
4. Flexibility and Control:
- Custom Builds: Dockerfiles offer fine-grained control over the build process, allowing for optimization and customization that `docker commit` cannot provide.
- Complex Configurations: They enable the creation of complex images with multi-stage builds, conditional logic, and advanced configuration that `docker commit` does not support.
In Summary
- Use `docker commit` when you need a quick snapshot of a running container's state, or when you are making interactive changes and need to preserve them without first defining a Dockerfile.
- Use Dockerfiles for reproducibility, automation, documentation, and when you require a structured, maintainable way to build Docker images.
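To illustrate the reproducibility argument: an interactive change you might capture with `docker commit` (say, installing `curl` into an Ubuntu container) can instead be recorded as a Dockerfile that anyone can rebuild identically. A sketch:

```dockerfile
FROM ubuntu:latest

# The same change, recorded as a repeatable, reviewable build step
RUN apt-get update && apt-get install -y curl \
    && rm -rf /var/lib/apt/lists/*

CMD ["bash"]
```

Because the steps live in a text file, they can be version-controlled, code-reviewed, and rebuilt in CI, which a committed snapshot cannot.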
4. Docker inspect
The `docker inspect` command is used to obtain detailed information about Docker objects, such as containers, images, networks, or volumes. However, `docker inspect` doesn't directly show the commit history of a Docker image or container in the way that, for example, Git shows commit history.
Inspecting a Docker Image:
If you want to see detailed information about a Docker image, including its layers and configuration, you can use:
```shell
docker inspect <image_name_or_id>
```
This command will display a JSON-formatted output with various details, such as the image’s ID, creation date, environment variables, command history, and more.
Example:
```shell
docker inspect my_custom_httpd
```
Inspecting a Container:
Similarly, to inspect a container, you can run:
```shell
docker inspect <container_name_or_id>
```
Example:
```shell
docker inspect my_running_container
```
5. Docker history
Viewing the Layers (Image History):
If you want to see the layer-by-layer history of an image (which can give you insight into changes made via commits), you can use the `docker history` command:
```shell
docker history <image_name_or_id>
```
This command shows a list of layers, including the command that created each layer, the size of each layer, and when it was created.
Example:
```shell
docker history my_custom_httpd
```
The output will show each layer of the image, which can help you understand what commands were run and what changes were made over time. However, it won’t show detailed commit messages like a version control system.
To summarize, while `docker inspect` is useful for viewing detailed metadata and configuration, `docker history` is the command you would use to inspect the "commits" or layers that make up a Docker image.
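Both commands also accept a `--format` flag (a Go template) to extract specific fields instead of the full output. For example, using the `alpine-bash` image built earlier:

```shell
# The default command baked into the image
docker inspect --format '{{.Config.Cmd}}' alpine-bash

# Just the command that created each layer
docker history --format '{{.CreatedBy}}' alpine-bash
```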
TODO: In progress
Redis with Docker
https://www.docker.com/blog/how-to-use-the-redis-docker-official-image/
Docker Compose
What is docker-compose? Let's take a look.
Docker Compose is a tool you can use to define and share multi-container applications. This means you can run a project with multiple containers using a single source.
For example, assume you're building a project with NodeJS and MongoDB together. You can define both containers as services in a single Compose file and start them together – you don't need to start each separately.
Interesting, right? And this solves the problem I called out at the very beginning of this article.
To achieve this, we need to define a docker-compose.yml file.
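A minimal sketch of such a docker-compose.yml for the NodeJS + MongoDB example; the service names, port, and `mongo-data` volume are illustrative assumptions, not from an actual project:

```yaml
services:
  app:
    build: .            # build the Node.js app from a local Dockerfile
    ports:
      - "3000:3000"
    depends_on:
      - mongo           # start MongoDB before the app
  mongo:
    image: mongo:latest
    volumes:
      - mongo-data:/data/db   # persist database files across restarts
volumes:
  mongo-data:
```

Running `docker compose up` then starts both services together, on a shared network where the app can reach the database at the hostname `mongo`.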