Recommendations for deploying Aerospike with Docker
Overview
This page describes Aerospike's recommendations for deploying Aerospike Database with Docker.
Customize aerospike.conf
The default Aerospike image is available on Docker Hub. You can override the default aerospike.conf file at runtime (using --config-file). Since /etc/aerospike is an exported volume from the container, the example below assumes the configuration file is in the local directory. A feature-key-file is required to run the Enterprise Edition Aerospike container.
$ docker run -v $PWD:/etc/aerospike ...
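For the Enterprise Edition, the feature key file can be mounted along with the configuration and referenced through the FEATURE_KEY_FILE environment variable (the same variable used in the block-storage example later on this page). The command below is a sketch only; the file name features.conf, the container name aerospike, and the image name aerospike/aerospike-server-enterprise are assumptions:
$ docker run -d --name aerospike -v $PWD:/etc/aerospike -e "FEATURE_KEY_FILE=/etc/aerospike/features.conf" aerospike/aerospike-server-enterprise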
Consult the Configuration Guide prior to finalizing the aerospike.conf
configuration file for a given deployment.
Changing logging
The default logging level is set to critical. This is configured in the aerospike.conf file:
$ cat aerospike.conf
...
logging {
    # Log file must be an absolute path.
    file /var/log/aerospike/aerospike.log {
        context any critical
    }

    # Send log messages to stdout
    console {
        context any critical
    }
}
...
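To change the log level, edit the context line in the relevant sink and restart the container. As a sketch, raising the console sink to info (so that the extra messages are visible through docker logs; the container name aerospike is an assumption) would look like:
console {
    context any info
}
$ docker logs -f aerospike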
Specifying network interfaces
Your Docker host and containers may have multiple network interfaces. To avoid any ambiguity about which interface is used, we recommend locking down the interface in the network stanza of aerospike.conf for service, heartbeat, and fabric.
...
network {
    service {
        address eth0
        port 3000
        ...
    }

    heartbeat {
        address eth0
        mode mesh
        ...
    }

    fabric {
        address eth0
        port 3001
    }
}
...
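When starting the container, publish the ports that correspond to this stanza so that clients and other cluster nodes can reach them. A sketch, assuming the default service (3000), fabric (3001), and mesh heartbeat (3002) ports:
$ docker run -d --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 aerospike/aerospike-server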
Affinity rules
Note that affinity rules no longer apply to Swarm mode.
Docker Swarm can apply constraints and filters, and these are propagated automatically from Docker Compose files if you use that orchestration mechanism.
Simple rules that you should apply:
- Prevent Aerospike nodes in the same cluster from running on the same Docker daemon
- Constrain a container to a specific hardware configuration
More information about the various options can be found in the Docker Swarm documentation, or take a look at the Docker Examples in the Aerospike documentation.
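In Swarm mode, the equivalent behavior is expressed with placement constraints rather than affinity environment variables, either on docker service create or in a Compose file's deploy section. A minimal Compose sketch, assuming Swarm nodes are labeled storage=ssd:
version: "3"
services:
  aerospike:
    image: aerospike/aerospike-server
    deploy:
      placement:
        constraints:
          - node.labels.storage == ssd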
Prevent Aerospike nodes in the same cluster from running on the same Docker daemon
$ docker run -l com.aerospike.cluster=mycluster -e affinity:com.aerospike.cluster!=mycluster aerospike/aerospike-server
Constrain a container to a specific hardware configuration
If you want to constrain the container to a specific hardware profile, this information can be passed when the container is started. For example, if the Docker daemon is started with a label indicating SSD storage, that label can be used as a constraint for the container:
$ dockerd --label storage=ssd
$ docker run -e constraint:storage==ssd aerospike/aerospike-server
Data persistence
There are three main strategies for dealing with persistence, a critical choice for a container running Aerospike.
Ephemeral storage
Data is stored within the container and is lost when the container is removed. This is a typical option for development and testing, but it is not recommended for a production deployment unless it is combined with an external volume (see below).
The default Docker Hub image for Aerospike writes data to /opt/aerospike/data, which is implicitly an ephemeral volume.
Block storage
docker run can expose the host's block devices to a running container. The --device option maps a host block device into the container.
Example mapping /dev/sdc on the host to /dev/xvdc in the container:
$ docker run -tid --name aerospike -p 3000:3000 -p 3001:3001 -p 3002:3002 -v /aerospike-server-enterprise.docker:/opt/etc/aerospike/ --device '/dev/sdc:/dev/xvdc' -e "FEATURE_KEY_FILE=/opt/etc/aerospike/features.conf" aerospike-server-enterprise:latest
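The mapped device can then be referenced from the namespace's storage-engine stanza in aerospike.conf. A minimal sketch, assuming a namespace named test:
namespace test {
    replication-factor 2
    memory-size 4G

    storage-engine device {
        device /dev/xvdc
        write-block-size 128K
    }
}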
External volume
Docker can bind mount an external filesystem onto an exposed mount point in the container. The default Docker Hub image for Aerospike exports the volume /opt/aerospike/data. When the container starts, this can be mapped to a host filesystem as follows:
$ docker run -v $PWD:/opt/aerospike/data aerospike/aerospike-server
The local directory (e.g. $PWD) stores the data written by the Aerospike server. A typical configuration uses the external volume as a shadow volume so that data written to ephemeral storage is also persisted to the external volume. The external volume survives the destruction of the container, can be attached to a new container if required, and can be used for backups and restores (e.g. via an AWS EBS snapshot).
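To have the server write to the external volume directly, point the namespace's storage at a file under /opt/aerospike/data. A minimal sketch, assuming a namespace named test:
namespace test {
    replication-factor 2
    memory-size 4G

    storage-engine device {
        file /opt/aerospike/data/test.dat
        filesize 4G
    }
}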