1.9 Docker Swarm
Docker Swarm is Docker’s native orchestration tool that lets you manage a cluster of Docker nodes (a group of machines running Docker) as a single virtual system. It orchestrates containers across the cluster: managing their lifecycle, scaling them up or down, and maintaining the desired state of your applications. You use it to create and manage Docker clusters and to deploy services across multiple nodes, with built-in features for scaling, load balancing, and failover.
1. Features
- Cluster Management: Swarm allows you to create a cluster of Docker hosts and manage them as a single entity. It uses the “manager” and “worker” roles for nodes, where managers orchestrate the cluster, and workers execute tasks (containers).
- Service Scaling: Swarm allows you to scale services (applications running inside containers) up or down as needed, automatically distributing the scaled tasks across the nodes.
- Service Discovery: Docker Swarm assigns each service a DNS name, and containers on the same overlay network can reach a service by that name, making service discovery seamless.
- Load Balancing: Swarm load balances traffic to a service’s published ports through its ingress routing mesh, distributing requests across the available service instances.
- Fault Tolerance: If a node in the swarm fails, Swarm reschedules the containers from that node onto other healthy nodes.
- Desired State Management: Swarm continuously monitors the state of the cluster and attempts to reconcile any differences between the actual state and the desired state (e.g., by rescheduling failed containers).
- Rolling Updates: Swarm allows you to update services without downtime by performing rolling updates (see the combined example after this list).
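Several of these features map directly onto flags of docker service create. The following is an illustrative sketch only; the service name “web”, the image tag, and the values are placeholders rather than part of the later example.
# Illustrative sketch: "web" and nginx:1.25 are placeholders.
# --replicas declares the desired state (keep 3 tasks running),
# --publish exposes the service through Swarm's load-balanced routing mesh,
# --update-parallelism / --update-delay control rolling updates.
docker service create \
  --name web \
  --replicas 3 \
  --publish 80:80 \
  --update-parallelism 1 \
  --update-delay 10s \
  nginx:1.25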
2. Setting Up a Docker Swarm Cluster
The following example sets up a small Docker Swarm cluster (a manager plus one or more worker nodes) and deploys a service on it.
2.1 Step 1: Initialize Docker Swarm
First, on the manager node, initialize the swarm.
docker swarm init --advertise-addr <MANAGER-IP>
- This command initializes a Swarm and configures the current Docker engine to be the manager.
- The --advertise-addr flag sets the address that the manager advertises, so other nodes know where to reach it.
The output will include a token used to join worker nodes to this Swarm cluster.
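If you need the join command again later, the manager can reprint it at any time:
# Run on the manager: prints the full "docker swarm join ..." command for workers
docker swarm join-token worker
# Same, but for joining additional manager nodes
docker swarm join-token manager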
2.2 Step 2: Add Worker Nodes
On each worker node, join the Swarm cluster using the token you received from the docker swarm init command.
docker swarm join --token <TOKEN> <MANAGER-IP>:2377
- Each worker node will connect to the manager and become part of the Swarm cluster.
- The token and manager IP are provided by the manager node.
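Back on the manager, you can confirm that the workers have joined and are ready:
# Lists every node in the swarm with its role, status, and availability
docker node ls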
2.3 Step 3: Deploy a Service
Now, on the manager node, deploy a service (for example, an NGINX web server) that will be replicated across the nodes of the swarm.
docker service create --name nginx --replicas 3 -p 80:80 nginx
This command creates an NGINX service and specifies that 3 replicas (3 container instances) should run across the Swarm nodes. The -p flag publishes port 80 of the container on port 80 of the host, making the service accessible via HTTP.
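Because the published port goes through Swarm’s ingress routing mesh, the service answers on port 80 of every node in the cluster, not only the nodes that happen to run an nginx task. A quick check (the IPs are placeholders):
# Any node's address works, regardless of where the replicas were scheduled
curl http://<MANAGER-IP>/
curl http://<WORKER-IP>/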
2.4 Step 4: Verify the Service
To check the service’s status, run the following command on the manager node:
docker service ls
It will show you the number of replicas running and the overall status of the service. You can inspect the tasks (individual containers) in the service with:
docker service ps nginx
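For a more detailed, human-readable summary of the service definition (image, replica count, published ports, update policy), you can also use:
# Human-readable summary of the service configuration
docker service inspect nginx --pretty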
2.5 Step 5: Scaling the Service
You can easily scale the service up or down with a single command. For example, to scale NGINX to 5 replicas:
docker service scale nginx=5
Docker Swarm will automatically create additional containers on the available worker nodes, distributing the load.
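Scaling is just a change to the desired replica count, so the same result can be achieved with docker service update:
# Equivalent to "docker service scale nginx=5"
docker service update --replicas 5 nginx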
2.6 Step 6: Rolling Updates
To update the NGINX service (e.g., changing to a new version), use the following command:
docker service update --image nginx:1.19 nginx
Swarm will perform a rolling update, replacing each instance of NGINX one by one, ensuring no downtime during the update process.
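The pace of a rollout can be tuned with the --update-parallelism and --update-delay flags, and a misbehaving update can be undone with docker service rollback (the values below are illustrative):
# Replace two tasks at a time, pausing 30 seconds between batches
docker service update --update-parallelism 2 --update-delay 30s --image nginx:1.19 nginx
# Revert the service to its previous definition
docker service rollback nginx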
2.7 Step 7: Fault Tolerance
If one of the worker nodes goes down, Docker Swarm will automatically reschedule the containers from the failed node to other healthy nodes, ensuring service continuity.
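You can observe this behavior without shutting a machine down by draining a node, which tells Swarm to move its tasks elsewhere (the node name is a placeholder):
# Take a node out of rotation; Swarm reschedules its tasks onto other nodes
docker node update --availability drain <NODE-HOSTNAME>
# Watch the nginx tasks being rescheduled
docker service ps nginx
# Bring the node back into rotation
docker node update --availability active <NODE-HOSTNAME>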
3. Important Commands
# Get the IP address of the machine
hostname -I

# Initialize the swarm manager server
docker swarm init --advertise-addr 192.168.0.191

# List information about the nodes in the swarm cluster
docker node ls

# Info about Docker, including swarm state
docker info

# Deploying a container in Docker Swarm is done with a service.
# The following creates a service with 1 replica.
docker service create --replicas 1 --name pingservice alpine ping docker.com

# Info about the services
docker service ls

# Info about a specific service (pretty print is easier to read)
docker service inspect pingservice --pretty

# Tasks (containers) of a specific service
docker service ps pingservice

# Changing the number of replicas (scaling)
docker service scale pingservice=5

# To remove a service (stops all its replicas)
docker service rm pingservice
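A few related commands for leaving or dismantling the swarm (run on the node in question):

# A worker leaves the swarm
docker swarm leave

# A manager has to be forced out (removing the last manager dissolves the cluster state)
docker swarm leave --force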