Scaling Aerospike on Kubernetes
You can scale Aerospike on Kubernetes horizontally (adjusting the number of pods or nodes in your cluster), or vertically (adjusting the resources available to them). See Horizontal scaling and Vertical scaling for more information and instructions.
Horizontal scaling
The Custom Resource (CR) file controls the number of pods (nodes) on the rack. When you change the cluster size in the CR file, Aerospike Kubernetes Operator (AKO) adds pods to the rack following the rack order defined in the CR.
AKO distributes the nodes equally across all racks. If any pods remain after equal distribution, they are distributed following the rack order.
Example: A cluster of two racks and five pods.
- After equal pod distribution, both racks have two pods, with one left over as a remainder.
- The remaining pod goes to Rack 1, resulting in Rack 1 having three pods and Rack 2 having two pods.
- If the cluster size is scaled up to six pods, a new pod is added to Rack 2.
- Scaling down follows the rack order and removes pods with the goal of equal distribution. In this example of two racks and six pods, scaling down to four pods results in two racks with two pods each. The third pod (third replica) on Rack 1 goes down first, followed by the third pod on Rack 2.
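The distribution rule above can be sketched in a few lines of bash. `distribute_pods` is a hypothetical helper for illustration only; AKO performs this calculation internally.

```shell
# Hypothetical helper (not part of AKO) that mirrors the distribution
# rule: divide pods evenly, then give leftovers to racks in rack order.
distribute_pods() {
  local size=$1; shift
  local racks=("$@")            # rack IDs in rack order
  local n=${#racks[@]}
  local base=$(( size / n ))    # pods every rack gets
  local extra=$(( size % n ))   # leftover pods for the first racks
  local i count
  for (( i = 0; i < n; i++ )); do
    count=$base
    if (( i < extra )); then
      count=$(( count + 1 ))
    fi
    echo "rack ${racks[$i]}: ${count} pods"
  done
}

# Two racks, five pods: Rack 1 gets the leftover pod.
distribute_pods 5 1 2
```

Running `distribute_pods 6 1 2` yields three pods per rack, matching the scale-up example above.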
Horizontal scaling CR parameters
For this example, the cluster is deployed using a CR file named aerospike-cluster.yaml.
- Change the spec.size field in the CR file to scale the cluster up or down to the specified number of pods.
apiVersion: asdb.aerospike.com/v1
kind: AerospikeCluster
metadata:
name: aerocluster
namespace: aerospike
spec:
size: 2
image: aerospike/aerospike-server-enterprise:7.2.0.1
...
- Use kubectl to apply the change.
kubectl apply -f aerospike-cluster.yaml
- Check the pods.
Output:
$ kubectl get pods -n aerospike
NAME              READY   STATUS    RESTARTS   AGE
aerocluster-0-0   1/1     Running   0          3m6s
aerocluster-0-1   1/1     Running   0          3m6s
aerocluster-0-2   1/1     Running   0          30s
aerocluster-0-3   1/1     Running   0          30s
Batch scale-down
You can scale down multiple pods in the same rack with a single scaling command by configuring scaleDownBatchSize in the CR file. This parameter specifies either a percentage or an absolute number of rack pods that AKO scales down at the same time.
Batch scale-down is not supported for Strong Consistency (SC) clusters.
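As an illustrative sketch, assuming scaleDownBatchSize sits under spec.rackConfig (verify the exact placement against your AKO version's CR reference), the parameter can be set like this:

```yaml
spec:
  size: 4
  rackConfig:
    # Scale down up to 50% of a rack's pods at once; an absolute
    # number such as 2 is also accepted. (Placement assumed; check
    # your AKO CR reference.)
    scaleDownBatchSize: "50%"
```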
Horizontal autoscaling
Kubernetes Cluster Autoscaler automatically scales up the Kubernetes cluster when resources are insufficient for the workload, and scales down the cluster when nodes are underused for an extended period. See the Cluster Autoscaler documentation on GitHub for more details.
Karpenter is an autoscaling tool for Kubernetes deployments on AWS, with features that fit into an AWS workflow. See the Karpenter documentation for more details.
If Aerospike pods use only in-memory or dynamically provisioned network-attached storage, both autoscalers can scale the cluster up and down, adjusting resources and shifting load automatically to prevent data loss.
Horizontal autoscaling with local volumes
The primary challenge with autoscalers and local storage provisioners is ensuring the availability of local disks during Kubernetes node startup after the autoscaler scales up the node. When using local volumes, the ability to successfully autoscale depends on the storage provisioner: the default static local storage provisioner built into Kubernetes, or OpenEBS.
Do not use multiple storage provisioners, such as OpenEBS and gce-pd, simultaneously. If your setup requires an additional provisioner alongside OpenEBS, configure OpenEBS with exclusion filters to prevent it from consuming disks that belong to other provisioners.
The Kubernetes Cluster Autoscaler cannot add a new node when the underlying storage uses the Kubernetes static local storage provisioner. In this case, you must scale up manually.
When scaling up with Karpenter, OpenEBS brings a newly provisioned node into service automatically only if your cluster bootstraps local storage as soon as the new node becomes active. See Google's documentation on automatic bootstrapping in GKE for more information and a setup guide.
Neither autoscaler can scale down the nodes if any pod is running with local storage attached.
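One common bootstrapping pattern is a privileged DaemonSet that prepares local disks as soon as a node joins the cluster. The manifest below is a hypothetical, minimal sketch: the name, image, and the /dev/nvme0n1 device path are assumptions, so adapt it to your cloud's disk layout and your provisioner's discovery directory.

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: local-disk-bootstrap   # hypothetical name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: local-disk-bootstrap
  template:
    metadata:
      labels:
        app: local-disk-bootstrap
    spec:
      containers:
        - name: bootstrap
          image: busybox:1.36
          securityContext:
            privileged: true
          # Assumed device path; expose the local SSD in the directory
          # where your provisioner (e.g. OpenEBS) discovers devices.
          command: ["sh", "-c", "ln -sf /dev/nvme0n1 /mnt/disks/ssd0 && sleep infinity"]
          volumeMounts:
            - name: disks
              mountPath: /mnt/disks
      volumes:
        - name: disks
          hostPath:
            path: /mnt/disks
            type: DirectoryOrCreate
```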
Vertical scaling
Vertical scaling refers to adjusting the compute resources, such as CPU and memory, allocated to existing pods in a Kubernetes cluster. This can be useful if applications experience variable workloads that require more or less computing power at different times, such as peak and off-peak traffic times requiring changes in the amount of memory.
Process
Vertical scaling uses the Aerospike rack awareness feature. See Rack Awareness Architecture for more details.
The AerospikeContainerSpec parameter in the CR file governs the amount of compute resources (CPU or memory) available to each pod.
Modifying this parameter causes a rolling restart of any pods with updated resources.
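For example, assuming the compute resources live under spec.podSpec.aerospikeContainerSpec (check your AKO version's CR reference for the exact field names), a vertical scale-up might look like:

```yaml
spec:
  podSpec:
    aerospikeContainerSpec:
      resources:
        requests:
          cpu: "2"      # previously "1", for example
          memory: 4Gi   # previously 2Gi
        limits:
          cpu: "4"
          memory: 8Gi
```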
Vertical Pod Autoscaler
Kubernetes also provides an autoscaler called Vertical Pod Autoscaler (VPA), which automatically sets resource requests based on usage. VPA can both downscale pods that are over-requesting resources and upscale pods that are under-requesting resources based on their usage over time.
AKO uses a Kubernetes StatefulSet to deploy Aerospike clusters, and the StatefulSet provides storage through PersistentVolumeClaims. A StatefulSet's PersistentVolumeClaim template cannot be updated after creation, which prevents AKO from providing a simple in-place solution for vertical scaling.
Scale rack-configured storage
We recommend using the Aerospike Rack Awareness feature as a workaround to perform vertical scaling.
Instead of changing the size of the existing rack, you create a new, larger rack and delete the old, smaller rack.
The Operator automatically transfers the data.
In this process, you leave the aerospikeConfig section unmodified and update the storage in the rackConfig section.
Example 1: Rack-configured storage before scaling
For this example, we assume that the cluster is deployed with the custom resource (CR) file aerospike-cluster.yaml. We start with one rack with an id of 1 as the target to replace with a new, larger rack.
The cluster currently has two persistentVolume configurations, one with a size of 1Gi and the other with a size of 3Gi.
apiVersion: asdb.aerospike.com/v1
kind: AerospikeCluster
metadata:
name: aerocluster
namespace: aerospike
spec:
size: 2
image: aerospike/aerospike-server-enterprise:7.2.0.1
rackConfig:
namespaces:
- test
racks:
- id: 1
zone: us-central1-b
storage:
filesystemVolumePolicy:
cascadeDelete: true
initMethod: deleteFiles
volumes:
- name: workdir
aerospike:
path: /opt/aerospike
source:
persistentVolume:
storageClass: ssd
volumeMode: Filesystem
size: 1Gi
- name: ns
aerospike:
path: /dev/sdf
source:
persistentVolume:
storageClass: ssd
volumeMode: Block
size: 3Gi
- name: aerospike-config-secret
source:
secret:
secretName: aerospike-secret
aerospike:
path: /etc/aerospike/secret
aerospikeConfig:
service:
feature-key-file: /etc/aerospike/secret/features.conf
security: {}
namespaces:
- name: test
replication-factor: 2
storage-engine:
type: device
devices:
- /dev/sdf
...
To resize /dev/sdf for namespace test, create a new rack inside rackConfig with an updated storage section and delete the old rack from the CR file.
Example 2: Rack-configured storage after scaling
In this example, the second volume is increased from 3Gi to 8Gi. You can create the new rack in the same physical rack if there is enough space. Use the existing zone/region to hold the new storage and old storage together.
apiVersion: asdb.aerospike.com/v1
kind: AerospikeCluster
metadata:
name: aerocluster
namespace: aerospike
spec:
size: 2
image: aerospike/aerospike-server-enterprise:7.2.0.1
rackConfig:
namespaces:
- test
racks:
# Added new rack with id: 2 and removed the old rack with id: 1
- id: 2
zone: us-central1-b
storage:
filesystemVolumePolicy:
cascadeDelete: true
initMethod: deleteFiles
volumes:
- name: workdir
aerospike:
path: /opt/aerospike
source:
persistentVolume:
storageClass: ssd
volumeMode: Filesystem
size: 1Gi
- name: ns
aerospike:
path: /dev/sdf
source:
persistentVolume:
storageClass: ssd
volumeMode: Block
size: 8Gi
- name: aerospike-config-secret
source:
secret:
secretName: aerospike-secret
aerospike:
path: /etc/aerospike/secret
aerospikeConfig:
service:
feature-key-file: /etc/aerospike/secret/features.conf
security: {}
namespaces:
- name: test
replication-factor: 2
storage-engine:
type: device
devices:
- /dev/sdf
...
Save and exit the CR file, then use kubectl to apply the change.
kubectl apply -f aerospike-cluster.yaml
This creates a new rack with id: 2 and an updated storage section.
AKO removes the old rack after verifying that the Aerospike server has migrated the old data to the new rack.
Check the pods with the get pods command:
$ kubectl get pods -n aerospike
NAME              READY   STATUS        RESTARTS   AGE
aerocluster-2-0   1/1     Running       0          3m6s
aerocluster-2-1   1/1     Running       0          3m6s
aerocluster-1-1   1/1     Terminating   0          30s
Scale global storage configurations
In some Kubernetes deployments, you specify all storage globally as PersistentVolumeClaims, with no storage specified in the rackConfig section. This configuration works for normal usage, but it prevents AKO from scaling global storage due to Kubernetes limitations.
To scale storage when your deployment is configured only for global storage, first copy the global storage configuration into the rackConfig section, then change the CR file as described in the previous section, "Scale rack-configured storage."
Example 1: Global storage before scaling
Example 1 represents the "before" state, prior to scaling.
It uses global storage, with example volumes workdir and test in the volumes section under the global storage parameter.
Some sections of this CR appear truncated to save space.
apiVersion: asdb.aerospike.com/v1
kind: AerospikeCluster
metadata:
name: aerocluster
namespace: aerospike
spec:
size: 3
image: aerospike/aerospike-server-enterprise:6.3.0.2
storage:
cleanupThreads: 10
filesystemVolumePolicy:
initMethod: deleteFiles
cascadeDelete: true
blockVolumePolicy:
cascadeDelete: true
initMethod: dd
volumes:
- name: workdir
aerospike:
path: /opt/aerospike
source:
persistentVolume:
storageClass: gp2
volumeMode: Filesystem
size: 1Gi
- name: test
aerospike:
path: /aerospike/dev/xvdf_test
source:
persistentVolume:
storageClass: openebs-device
volumeMode: Block
size: 60Gi
- name: aerospike-config-secret
source:
secret:
secretName: aerospike-secret
aerospike:
path: /etc/aerospike/secret
podSpec:
aerospikeInitContainer:
resources:
limits:
cpu: "16"
memory: 32Gi
requests:
cpu: "1"
memory: 1Gi
metadata:
annotations:
example: annotation
multiPodPerHost: false
sidecars:
- name: exporter
image: aerospike/aerospike-prometheus-exporter:latest
ports:
- containerPort: 9145
name: exporter
env:
- name: "AS_AUTH_USER"
value: "admin"
- name: "AS_AUTH_PASSWORD"
value: "admin123"
- name: "AS_AUTH_MODE"
value: "internal"
rackConfig:
namespaces:
- test
- bar
- foo
racks:
- id: 1
- id: 2
- id: 3
aerospikeConfig:
service:
feature-key-file: /etc/aerospike/secret/features.conf
security: {}
network:
...
namespaces:
- name: test
memory-size: 2000000000
replication-factor: 2
nsup-period: 120
storage-engine:
type: device
devices:
- /aerospike/dev/xvdf_test
To scale the cluster, copy the volumes over to new racks and label them with unique IDs.
Example 2: Global storage after scaling
Example 2 shows a configuration representing the state of the same cluster after scaling.
In this example, the volumes are copied over to three racks, labeled with IDs 11, 12, and 13.
apiVersion: asdb.aerospike.com/v1
kind: AerospikeCluster
metadata:
name: aerocluster
namespace: aerospike
spec:
size: 3
image: aerospike/aerospike-server-enterprise:6.3.0.2
storage:
cleanupThreads: 10
filesystemVolumePolicy:
initMethod: deleteFiles
cascadeDelete: true
blockVolumePolicy:
cascadeDelete: true
initMethod: dd
volumes:
- name: workdir
aerospike:
path: /opt/aerospike
source:
persistentVolume:
storageClass: gp2
volumeMode: Filesystem
size: 1Gi
- name: test
aerospike:
path: /aerospike/dev/xvdf_test
source:
persistentVolume:
storageClass: openebs-device
volumeMode: Block
size: 60Gi
- name: aerospike-config-secret
source:
secret:
secretName: aerospike-secret
aerospike:
path: /etc/aerospike/secret
podSpec:
aerospikeInitContainer:
resources:
limits:
cpu: "16"
memory: 32Gi
requests:
cpu: "1"
memory: 1Gi
metadata:
annotations:
example: annotation
multiPodPerHost: false
sidecars:
- name: exporter
image: aerospike/aerospike-prometheus-exporter:latest
ports:
- containerPort: 9145
name: exporter
env:
- name: "AS_AUTH_USER"
value: "admin"
- name: "AS_AUTH_PASSWORD"
value: "admin123"
- name: "AS_AUTH_MODE"
value: "internal"
rackConfig:
namespaces:
- test
- bar
- foo
racks:
- id: 11
storage:
cleanupThreads: 10
filesystemVolumePolicy:
initMethod: deleteFiles
cascadeDelete: true
blockVolumePolicy:
cascadeDelete: true
initMethod: dd
volumes:
- name: workdir
aerospike:
path: /opt/aerospike
source:
persistentVolume:
storageClass: gp2
volumeMode: Filesystem
size: 1Gi
- name: test
aerospike:
path: /aerospike/dev/xvdf_test
source:
persistentVolume:
storageClass: openebs-device
volumeMode: Block
size: 60Gi
- name: aerospike-config-secret
source:
secret:
secretName: aerospike-secret
aerospike:
path: /etc/aerospike/secret
- id: 12
storage:
cleanupThreads: 10
filesystemVolumePolicy:
initMethod: deleteFiles
cascadeDelete: true
blockVolumePolicy:
cascadeDelete: true
initMethod: dd
volumes:
- name: workdir
aerospike:
path: /opt/aerospike
source:
persistentVolume:
storageClass: gp2
volumeMode: Filesystem
size: 1Gi
- name: test
aerospike:
path: /aerospike/dev/xvdf_test
source:
persistentVolume:
storageClass: openebs-device
volumeMode: Block
size: 60Gi
- name: aerospike-config-secret
source:
secret:
secretName: aerospike-secret
aerospike:
path: /etc/aerospike/secret
- id: 13
storage:
cleanupThreads: 10
filesystemVolumePolicy:
initMethod: deleteFiles
cascadeDelete: true
blockVolumePolicy:
cascadeDelete: true
initMethod: dd
volumes:
- name: workdir
aerospike:
path: /opt/aerospike
source:
persistentVolume:
storageClass: gp2
volumeMode: Filesystem
size: 1Gi
- name: test
aerospike:
path: /aerospike/dev/xvdf_test
source:
persistentVolume:
storageClass: openebs-device
volumeMode: Block
size: 60Gi
- name: aerospike-config-secret
source:
secret:
secretName: aerospike-secret
aerospike:
path: /etc/aerospike/secret
aerospikeConfig:
service:
...
security: {}
network:
...
namespaces:
- name: test
memory-size: 2000000000
replication-factor: 2
nsup-period: 120
storage-engine:
type: device
devices:
- /aerospike/dev/xvdf_test
- /aerospike/dev/xvdf_test_2
- name: bar
memory-size: 2000000000
replication-factor: 2
nsup-period: 120
storage-engine:
type: device
devices:
- /aerospike/dev/xvdf_bar
- /aerospike/dev/xvdf_bar_2
- name: foo
memory-size: 2000000000
replication-factor: 2
nsup-period: 120
storage-engine:
type: device
devices:
- /aerospike/dev/xvdf_foo
- /aerospike/dev/xvdf_foo_2
- /aerospike/dev/xvdf_foo_3
- /aerospike/dev/xvdf_foo_4