Scalable System Architecture
Creating a scalable and robust system architecture is getting easier and easier, with cloud providers offering options to deploy a cluster that can be scaled up and down as resources are needed.
However, it is also important to know how to deploy this kind of cluster ourselves. This approach has a couple of advantages. One, we get to learn how to create, maintain, and scale these clusters ourselves. Two, we can make these custom clusters provider-independent, so if we’ve deployed our custom cluster on Amazon’s cloud and want to move to Google’s cloud, we can do that.
In this series, we will be creating this cluster using Docker and Docker’s new(ish) swarm mode. This will not just be a high-level explanation of the concept; we will be building a practical example that uses this cluster. Our cluster will serve our web applications or web APIs securely using Let’s Encrypt’s free SSL service. The following high-level diagram shows what we will be creating in this series.
Table of Contents
- The Architecture
- Scalable System Architecture with Docker, Docker Flow, and Elastic Stack: System Provisioning (current)
- Scalable System Architecture with Docker, Docker Flow, and Elastic Stack: Frontend Services
- Scalable System Architecture with Docker, Docker Flow, and Elastic Stack: Logging Stack
- Scalable System Architecture with Docker, Docker Flow, and Elastic Stack: Backend Services
- Scalable System Architecture with Docker, Docker Flow, and Elastic Stack: Limitations and Final Thoughts
Docker is a multi-platform containerization technology. In simple terms, it isolates our application from the host machine, allowing us to deploy our application on any system that has Docker on it. While isolating, Docker still allows our application to interface with the host machine. For example, if we want to run the NGINX web server, we don’t need to install and configure it on our server. We can simply pull the official NGINX image and bind it to port 80 to get a functional web server in a few seconds (depending on our network speed).
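As a concrete sketch of that NGINX example: the container name `web` is our choice, and the command is only printed here as a dry run so it can be reviewed before running it against a real Docker daemon.

```shell
# Dry-run sketch of the NGINX example above; the container name "web" is our
# choice. Run the printed command yourself to actually start NGINX on port 80.
cmd="docker run --detach --name web --publish 80:80 nginx"
echo "$cmd"
```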
We can install Docker for our platform of choice. We will be using the `docker` and `docker-machine` utilities to create our infrastructure.
The `docker` utility is our main point of interaction with containers. It allows us to create, inspect, and manage containers, swarm nodes, and services.
The Docker Machine utility will make it easier for us to create virtual machines for our cluster. We will use it to create virtual machines locally in VirtualBox for the purposes of this tutorial, but it is also capable of creating machines on AWS, DigitalOcean, and more.
Swarm mode was introduced in Docker version 1.12 and improved upon in the latest version, 1.13. This mode is a more robust and flexible implementation of swarm than the previous standalone Swarm technology, and it is integrated into the `docker` utility, whereas standalone Swarm was a separate tool.
Swarm mode allows us to achieve the cluster pictured above. Each worker in the diagram is a physical machine that will become part of a pool of resources for our applications. Swarm will also allow us to scale our applications up and down as needed.
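To give a sense of what that scaling looks like later on, a single command resizes a running service. A dry-run sketch, assuming a hypothetical service named `web` already exists:

```shell
# Dry-run sketch: swarm scales a running service (hypothetically named "web")
# to five replicas with one command. Printed only; run it once the service exists.
cmd="docker service scale web=5"
echo "$cmd"
```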
As depicted above, our architecture will consist of three manager nodes, three worker nodes, and at least one logging node, which is also just a worker with a specific purpose. So let’s get to work provisioning these machines.
We need to make sure we have VirtualBox installed on our platform. Then we can create our first manager machine:

```
docker-machine create --driver virtualbox manager-1
```
We can simply enter the above command three times, incrementing the manager number each time, or put the following into a bash script and execute it:
```
#!/usr/bin/env bash
set -e

for i in 1 2 3; do
  docker-machine create --driver virtualbox manager-$i
done

echo ""
echo "--- Three managers created ---"
```
Next we can provision the workers the same way.
```
docker-machine create --driver virtualbox worker-1
```
```
#!/usr/bin/env bash
set -e

for i in 1 2 3; do
  docker-machine create --driver virtualbox worker-$i
done

echo ""
echo "--- Three workers created ---"
```
Finally, let’s create a logging worker.
```
docker-machine create --driver virtualbox logging-worker-1
```
If we wish to create more than one logging worker, we can use the same technique as above.
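As a sketch of that technique, the loop below generalizes the logging-worker creation to any count. The count argument and the dry-run `echo` are our additions; remove the `echo` (or pipe the output to `bash`) to actually provision the machines.

```shell
#!/usr/bin/env bash
set -e

# Print (dry run) one docker-machine create command per logging worker.
# Usage: ./create-logging-workers.sh 3
create_logging_workers() {
  local count="${1:-1}"
  for i in $(seq 1 "$count"); do
    echo docker-machine create --driver virtualbox "logging-worker-$i"
  done
}

create_logging_workers "${1:-1}"
```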
Now we will create a swarm out of our machines using Docker’s new swarm mode. This will allow us to easily scale up our web apps or services should we need to.
To get started with creating our swarm, we need to use the Docker environment of one of our managers. We can run `docker-machine ssh manager-1` to use `manager-1`’s environment, but there is an easier way: we can assign the `manager-1` environment to the `docker` utility running on our host system.
```
eval $(docker-machine env manager-1)
```
We can verify that our docker daemon environment has changed with the following command:
```
docker info | grep Name
```
We should see the name of the active machine in the output:

```
Name: manager-1
```
Now we initialize our swarm.
```
docker swarm init --advertise-addr $(docker-machine ip manager-1)

### OUTPUT ###
Swarm initialized: current node (q1t43wlo5afjlaekncvz3c1mf) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-3dn7yyrtq8v51cf8c4flmmshjftwjg2ykvajrnt7i5ps818ar5-6ifsfzb1eqzae6s4hrwwe3w66 \
    192.168.99.100:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
First we use the `docker swarm join-token manager` command to get the command that will allow us to add more managers.
```
docker swarm join-token manager

### OUTPUT ###
To add a manager to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-3dn7yyrtq8v51cf8c4flmmshjftwjg2ykvajrnt7i5ps818ar5-94sjpol9zen8fqk6a58pp3c2c \
    192.168.99.100:2377
```
We can execute the command on the manager machines directly in a one-liner. We will just repeat the command for the two remaining managers.
```
### MANAGER 2 ###
docker-machine ssh manager-2 docker swarm join \
  --token SWMTKN-1-3dn7yyrtq8v51cf8c4flmmshjftwjg2ykvajrnt7i5ps818ar5-94sjpol9zen8fqk6a58pp3c2c \
  192.168.99.100:2377

### OUTPUT ###
This node joined a swarm as a manager.

### MANAGER 3 ###
docker-machine ssh manager-3 docker swarm join \
  --token SWMTKN-1-3dn7yyrtq8v51cf8c4flmmshjftwjg2ykvajrnt7i5ps818ar5-94sjpol9zen8fqk6a58pp3c2c \
  192.168.99.100:2377

### OUTPUT ###
This node joined a swarm as a manager.
```
To add the workers we can use the script from above with some modifications.
```
#!/usr/bin/env bash
set -e

for i in 1 2 3; do
  docker-machine ssh worker-$i docker swarm join \
    --token SWMTKN-1-3dn7yyrtq8v51cf8c4flmmshjftwjg2ykvajrnt7i5ps818ar5-6ifsfzb1eqzae6s4hrwwe3w66 \
    192.168.99.100:2377
done

echo ""
echo "--- Three workers joined the swarm ---"
```
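Rather than pasting the token by hand, it can also be read programmatically: `docker swarm join-token -q worker` prints only the raw token. The sketch below builds the join commands from a token and manager IP; the token value shown is a placeholder, and the commands are printed as a dry run (pipe the output to `bash` to execute on a live cluster).

```shell
#!/usr/bin/env bash
set -e

# Build the swarm join command for one node from a token and the manager IP.
join_cmd() {
  local node="$1" token="$2" ip="$3"
  echo "docker-machine ssh $node docker swarm join --token $token $ip:2377"
}

# On a live cluster the real values come from manager-1:
#   token=$(docker-machine ssh manager-1 docker swarm join-token -q worker)
#   ip=$(docker-machine ip manager-1)
token="SWMTKN-placeholder"   # illustrative placeholder, not a real token
ip="192.168.99.100"

for i in 1 2 3; do
  join_cmd "worker-$i" "$token" "$ip"
done
```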
And finally we add our logging worker to the swarm.
```
docker-machine ssh logging-worker-1 docker swarm join \
  --token SWMTKN-1-3dn7yyrtq8v51cf8c4flmmshjftwjg2ykvajrnt7i5ps818ar5-6ifsfzb1eqzae6s4hrwwe3w66 \
  192.168.99.100:2377
```
To finish off the provisioning section, we will create a main overlay network.
```
docker network create --driver overlay main

### VERIFY ###
docker network ls | grep main

### OUTPUT ###
vc5yvi2ixkfy  main  overlay  swarm
```
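With the `main` network in place, services attached to it can reach one another by service name. A dry-run sketch (the service name `web` and the replica count are illustrative):

```shell
# Dry-run sketch: create a service attached to the "main" overlay network.
# Services on the same overlay network resolve each other by service name.
cmd="docker service create --name web --replicas 2 --network main --publish 80:80 nginx"
echo "$cmd"
```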
So far we have created the following from our big picture above.
In the next post in the series, we will start creating our frontend services, such as the reverse proxy and the SSL manager.