
Swarm-ing Containers: Docker Orchestration at Scale

Richard Guo
Aerospike Cloud Engineer
August 25, 2017|13 min read

Aerospike, with its shared-nothing architecture and Smart Client™, has always been a champion of speed and scale. These qualities are a natural fit for containers, where database horsepower is required to deliver scalable services and microservices. However, all was not well in the land of containers: Aerospike's integration with Docker previously relied on a third-party container (Interlock) for discovery and on Docker's internal mechanisms, which created maintainability challenges and made the approach a poor fit for production. There is now a better way.

This might be old news to some, but occasional Docker users should know that Docker Engine has integrated swarm as of version 1.12; it's known as swarm mode. This has significantly simplified swarm deployment and management.

Aerospike is now very easy to deploy with swarm mode in a post-Docker 1.12* world. As a prerequisite, all of Aerospike's Docker hosts must run Docker version 17.04* or above, because we use Docker Compose file format version 3.2, which is only supported in Docker Engine 17.04 and above.

* About Docker versions: Docker has changed their versioning to YY.MM and switched to monthly releases. What would have been version 1.14 is now version 17.03.
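To confirm that a given host meets this requirement, you can query the daemon's version directly:

$ docker version --format '{{.Server.Version}}'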

Prepare the Swarm

In order to show a multi-machine swarm, we will need more than one machine. In the current Docker for Windows, this can be done with the included hyperv driver (for supported versions of Windows); please follow Docker’s instructions for configuring the network. In Docker for Mac 17.03, the current default HyperKit driver does not allow creating new machines. For this reason, you may need to use the included virtualbox or vmwarefusion drivers. Docker for Linux supports multiple machines on the default driver, so no driver parameter is required.

If you already have multiple Docker Machine nodes running Docker 17.04+, you may reuse them for this test. You do not need to execute the following docker-machine steps; however, you will need to substitute your own node names in the remainder of the instructions. To begin, please validate that the machines are running.

1. Create Your Nodes

Use docker-machine to create the Docker hosts and simplify the provisioning of what will become the swarm. Each Docker host is referred to as a “node” within the swarm. The following example shows two nodes (hosts) being created via the VirtualBox driver:

$ docker-machine create --driver virtualbox node1
$ docker-machine create --driver virtualbox node2

Note: if you receive the error Error with pre-create check: “exit status 126”, this indicates that you have not installed VirtualBox. Either install VirtualBox, or replace it with a driver name that’s appropriate for your environment.

Next, validate that your Docker machines are running:

$ docker-machine ls
NAME    ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER        ERRORS
node1   -        virtualbox   Running   tcp://192.168.99.102:2376           v17.06.0-ce
node2   -        virtualbox   Running   tcp://192.168.99.103:2376           v17.06.0-ce

The conspicuous lack of any additional --swarm parameters is a hallmark of swarm mode. The --swarm parameters can still be used to deploy standalone swarm (a.k.a. the old swarm), but we won't be using them anymore.

2. Pick a Node to Be Your Manager Node and Log Into It

$ docker-machine ssh node1

You will need to determine the swarm advertise IP address. This will be one of the IP addresses of node1, and must be accessible by the other nodes in the swarm. If you are unsure which IP address to use, it is typically the IP address of the eth0 interface. By default, the VirtualBox driver of Docker-Machine provisions a 192.168.99.0/24 network for internal VM traffic, so we will use the 192.168.99.XX address for our advertise IP.
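If you created the machines with docker-machine, you can also query the address directly from your host:

$ docker-machine ip node1
192.168.99.102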

If you are uncertain which address is visible to the other nodes, first check the interface on node1, then validate connectivity by logging into node2 and pinging node1 at that address:

docker@node1:~$ ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:0C:29:9B:AF:FD
          inet addr:192.168.99.102  Bcast:192.168.99.255  Mask:255.255.255.0
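To validate connectivity, log into node2 and ping node1 at the address shown above (ping -c limits the number of probes):

$ docker-machine ssh node2
docker@node2:~$ ping -c 3 192.168.99.102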

3. Initialize the Swarm

Once you have determined the swarm advertise address, execute docker swarm init to make node1 your swarm manager node:

docker@node1:~$ docker swarm init --advertise-addr <swarm_advertise_ip>

The previous command outputs a join command, including a token, that the other Docker nodes will use to join the swarm as workers:

Swarm initialized: current node (bvz81updecsj6wjz393c09vti) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token <token> <swarm_advertise_ip>
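If you misplace the token later, you can reprint the full join command at any time from the manager node:

docker@node1:~$ docker swarm join-token worker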

4. Join Nodes into the Swarm

Log into each of the other node(s) and paste the join command:

$ docker-machine ssh node2
docker@node2:~$ docker swarm join --token <token> <swarm_advertise_ip>

After running the above command on the other node(s), we can run docker node ls on the manager node to quickly verify our swarm status:

docker@node1:~$ docker node ls
ID                            HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
4zcvlrefn3ssvywyjpnigkfk6     node2      Ready    Active
yrsn4rsa19kzr0e0dfkz913ov *   node1      Ready    Active         Leader


That's it! The swarm is now fully formed and waiting. As you can see, swarm mode lets you use much shorter, more straightforward commands, and you no longer need an external KV store like Consul, etcd, or ZooKeeper!

Using Docker Compose to Define Our Stack

To create the set of application servers that will act together in a swarm, we need to define what Docker calls the application stack. Once we define a number of parameters (the Docker Compose version, overlay network settings, and Docker secrets) and the services within the application, we can deploy the stack with the docker stack deploy command on the swarm manager.

If you’d like to jump ahead to the results, you can see the contents of these files in our Docker Swarm GitHub repository.

Let’s get started!

Create the Multi-Host (Overlay) Network

Docker Compose will be used to orchestrate our Docker containers inside the swarm, and the deployment will be launched from the swarm manager. We will create a Docker Compose file (aerospike.yml) on the swarm manager.

We can also orchestrate our multi-host networking in the compose file.

Let’s start with the version and network section. Please use the following:

version: "3.2"
networks:
    aerospikenetwork:
        driver: overlay
        attachable: true

Note:

  • Specify the Compose file format version, 3.2, explicitly. This is required by the endpoint_mode setting used later in our service definitions.

  • Use an overlay network. This enables multi-host networking between containers on different Docker Swarm nodes (see the verification example after this list).

  • “attachable” is useful for development and debugging, but should be removed once in production. Removing this field will lock the network so that only the containers maintained through Docker Compose / Docker Swarm can utilize that network.
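Once the stack is deployed later in this guide, you can verify the overlay network and see which containers are attached to it. Note that stack resources are prefixed with the stack name, so the network will appear as aerospike_aerospikenetwork:

$ docker network ls --filter driver=overlay
$ docker network inspect aerospike_aerospikenetwork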

Secrets (Shhh…)

We’ll be using Docker secrets to load our configuration file and discovery script to all nodes within Docker Swarm. Unlike volume mounts, secrets are automatically propagated to every node in the swarm.

This is how the secrets section looks in the aerospike.yml file:

secrets:
    conffile:
        file: ./aerospike.conf
    discoveryfile:
        file: ./discovery.py

Docker secrets are mounted under /run/secrets within the containers that use them.
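Once the stack is running, you can spot-check the mount from a node that is hosting one of the tasks (here your client is pointed at node1 via docker-machine env, and the filter matches the stack-generated container name):

$ docker exec $(docker ps -q -f name=aerospike_aerospikedb | head -1) ls -l /run/secrets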

Service Definitions

We want to define two services:

  • aerospikedb: This service represents the Aerospike daemon process encapsulated in a container

  • meshworker: This service will look for new Aerospike containers and add them to the Aerospike cluster

Now for the services section and the aerospikedb component of the stack:

services:
    aerospikedb:
        image: aerospike/aerospike-server:3.14.1.1
        networks:
        - aerospikenetwork
        deploy:
            replicas: 2
            endpoint_mode: dnsrr
        labels:
            com.aerospike.description: "This label is for all containers for the Aerospike service"
        command: [ "--config-file", "/run/secrets/aerospike.conf" ]
        secrets:
        - source: conffile
          target: aerospike.conf
          mode: 0440

Comments:

  • We picked an explicit Aerospike version (3.14.1.1 in this case). Especially in production deployments, it is better to pin a concrete version than to use latest, which can pull in unexpected changes. Consider updating to the newest Aerospike server release when you deploy.

  • The service attaches to the aerospikenetwork overlay network we declared earlier.

  • A deploy parameter specifies the initial cluster size and how the service discovery endpoint is implemented. Here we use dnsrr (DNS round-robin), overriding the default endpoint mode of vip (Virtual IP). Aerospike's Smart Client™ relies on direct access to server nodes, so a VIP would interfere with cluster discovery, while dnsrr lets the Smart Client™ connect to any node in the cluster and then use its internal load-balancing algorithms. This is the key feature that requires Docker Compose 3.2.

  • We defined a label. This is purely for your own informational purposes.

  • The command tag appends its arguments to the Aerospike command. We will use a Docker secrets file as the config file for Aerospike.

  • We mount the conffile secret defined in the secrets section of the Docker Compose file. It is specified in the long syntax as a read-only file.

Finally, we add the meshworker service; make sure to add this under the same services tag as the previously created aerospikedb service:

services:
    ...
    meshworker:
        image: aerospike/aerospike-tools:latest
        networks:
        - aerospikenetwork
        depends_on:
        - aerospikedb
        entrypoint:
        - /run/secrets/discovery
        - "--hostname"
        - aerospikedb
        - "-i"
        - "5"
        - "-v"
        secrets:
        - source: discoveryfile
          target: discovery
          mode: 0755

Comments:

  • Use the same services tag as the aerospikedb service above.

  • Use the tools within the aerospike/aerospike-tools image to trigger cluster reconfigurations.

  • You must use the overlay network aerospikenetwork. This ensures that this discovery process can connect to the Aerospike nodes to perform clustering.

  • The depends_on parameter ensures that this service starts after the aerospikedb service. It is ignored in swarm mode, but obeyed by Docker Compose on a single node.

  • We use the entrypoint parameter to override the default command of the aerospike/aerospike-tools container so that it runs our Aerospike discovery script instead. This definition passes in parameters such as:

    • The service name to resolve

    • The interval to poll the service name

    • A verbose flag for debugging/logging

  • Mount the discoveryfile secret as an executable script

Start the Services

Now that we have explained the complete Docker Compose file, we can deploy the stack on the Docker manager node.

You will need to grab three files. We suggest starting with the aerospike.yml Docker Compose file and with working aerospike.conf and discovery.py files, which you can find in the Aerospike Docker Swarm GitHub repository.

Note that although we’ve explained the aerospike.yml file, whitespace and character encoding problems are common when copying and pasting these instructions to a text file. To avoid errors, we recommend using the aerospike.yml file in the repo as a starting point.

We will deploy the stack from your host, although you could also deploy it from any of your swarm manager machines. Make sure that when you run subsequent Docker commands, you use the same environment in which you created the stack.

Use docker-machine env to point your local Docker client at the Docker daemon running on the swarm manager. This means the required files live in your host operating system, where you run the Docker client, and not within the Docker containers or swarm environment:

$ git clone https://github.com/aerospike/aerospike-docker-swarm.git
$ cd aerospike-docker-swarm
$ eval $(docker-machine env node1)
$ docker stack deploy -c aerospike.yml aerospike
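Before moving on, you can confirm that both services were created and that their tasks are running (docker stack services shows replica counts; docker stack ps shows individual tasks and the nodes they landed on):

$ docker stack services aerospike
$ docker stack ps aerospike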

Test Your Aerospike Service

Now that the stack has been launched, we should have an Aerospike cluster. Let’s validate that this is the case by using the aql command line tool to log onto the Aerospike cluster and create some data. We are going to run aql via an aerospike-tools container.

In your standard environment (the one you used to deploy the stack above), create and run an Aerospike tools container.

Notice that you are adding the network name, which is a combination of the stack name and its network. We use the service name aerospikedb as the hostname for aql to connect to, as it will resolve to a container running aerospike-server based on DNS round-robin (more on this hostname resolution mechanism in the next section):

$ docker run --rm -it --net aerospike_aerospikenetwork aerospike/aerospike-tools aql -h aerospikedb
aql> insert into test.foo (pk, val1) values (1, 'abc')

OK, 1 record affected.

aql> select * from test.foo
 +-------+
 | val1  |
 +-------+
 | "abc" |
 +-------+
 1 row in set (0.391 secs)

You’ve just proven that you have a working, running Aerospike stack! You can now begin writing an application that runs on Aerospike, in the language of your choice.

Scale the Aerospike Cluster

We can now use the following docker service command to scale the Aerospike service (again, use the same environment where you deployed the stack):

$ docker service scale aerospike_aerospikedb=4
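You can watch the new tasks get scheduled across the swarm while the service converges on four replicas:

$ docker service ps aerospike_aerospikedb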

That’s almost too easy! Let’s validate that your cluster has four servers, as indicated above. Using the asadm tool, we can view the changed Aerospike cluster:

$ docker run --rm -it --net aerospike_aerospikenetwork aerospike/aerospike-tools asadm -e info -h aerospikedb

The output is lengthy, but you should see it list four servers.

How does this work? Docker Swarm now maintains an internal DNS service, which resolves the service name to a container via round-robin. The discovery script within the meshworker container simply resolves this DNS address and runs asinfo tip commands against every host entry in the DNS record. This process loops continuously until the meshworker service is stopped.
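You can reproduce both halves of this mechanism by hand. The sketch below is a rough manual equivalent of what the discovery script does, assuming Python is available in the aerospike-tools image (the discovery script itself runs there) and the default mesh heartbeat port of 3002; replace <resolved_ip> with one of the addresses returned by the lookup:

$ docker run --rm --net aerospike_aerospikenetwork aerospike/aerospike-tools \
    python -c "import socket; print(socket.gethostbyname_ex('aerospikedb')[2])"
$ docker run --rm --net aerospike_aerospikenetwork aerospike/aerospike-tools \
    asinfo -h aerospikedb -v "tip:host=<resolved_ip>;port=3002"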

Behind the scenes, the Aerospike servers have started in the new containers, and as they joined the cluster, they automatically rebalanced your data. Any client applications you have are made aware of the new data layout as rebalancing happens; that's the benefit of the Aerospike Smart Client™.

Additional Considerations

You might have noticed that deployment constraints are now missing from our yaml template. This is due to two reasons:

  1. Starting with Docker 1.13, the swarm mode scheduler spreads containers across nodes by default. This makes our previous constraint mostly redundant.

  2. The Environment property (and by extension, the Affinity tag) has been removed from Docker Compose.

To replicate our original constraint that no node runs more than one Aerospike container, we’d need to implement one of the following:

  • Ensure that the replicas/scale parameter never goes beyond swarm size.

  • Implement service constraints with global replication mode (see the sketch below). Note that this makes the cluster less flexible, as every host will run exactly one container and the replicas/scale parameter is ignored.
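For the second option, a minimal sketch of global mode in the aerospikedb service definition might look like this (mode: global replaces replicas, so the swarm runs exactly one task per node):

services:
    aerospikedb:
        ...
        deploy:
            mode: global
            endpoint_mode: dnsrr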

Conclusion

The new swarm mode has drastically simplified a scalable Aerospike deployment. From initial setup to final deployment, we have made the following improvements:

  • Dramatically simpler docker-machine commands

  • Complete orchestration within Docker Compose

  • Fewer moving parts for Docker Swarm

  • No longer requiring Interlock; instead, we use our own aerospike-tools Docker image, which has a number of other uses as well

What are you waiting for? Give Aerospike’s swarm mode integration a spin!

Resources