Scaling Namespace Storage for Aerospike on Kubernetes
Scaling namespace storage (vertical scaling) can be a complex topic. The Operator deploys Aerospike clusters with Kubernetes StatefulSets, and a StatefulSet provisions its storage through PersistentVolumeClaims. A PersistentVolumeClaim generally cannot be resized after it is created, which prevents the Operator from providing a simple vertical-scaling solution.
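For instance, once a claim is bound, patching its requested size is rejected by the API server unless the StorageClass was created with allowVolumeExpansion: true, and even then the expansion happens outside the Operator's rack lifecycle. The claim name below is illustrative of the volume-plus-pod naming pattern StatefulSets use:

kubectl patch pvc ns-aerocluster-1-0 -n aerospike \
  -p '{"spec":{"resources":{"requests":{"storage":"8Gi"}}}}'
# Rejected when the StorageClass does not set allowVolumeExpansion: true.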
We recommend using the Aerospike Rack Awareness feature to perform vertical scaling.
For this example, we assume that the cluster is deployed with the custom resource (CR) file aerospike-cluster.yaml.
We start with one rack, with an id of 1, as the target to replace with a new, larger rack. It currently has two persistentVolume configurations: one with a size of 1Gi and the other with a size of 3Gi.
apiVersion: asdb.aerospike.com/v1
kind: AerospikeCluster
metadata:
  name: aerocluster
  namespace: aerospike
spec:
  size: 2
  image: aerospike/aerospike-server-enterprise:6.4.0.0
  rackConfig:
    namespaces:
      - test
    racks:
      - id: 1
        zone: us-central1-b
        storage:
          filesystemVolumePolicy:
            cascadeDelete: true
            initMethod: deleteFiles
          volumes:
            - name: workdir
              aerospike:
                path: /opt/aerospike
              source:
                persistentVolume:
                  storageClass: ssd
                  volumeMode: Filesystem
                  size: 1Gi
            - name: ns
              aerospike:
                path: /dev/sdf
              source:
                persistentVolume:
                  storageClass: ssd
                  volumeMode: Block
                  size: 3Gi
            - name: aerospike-config-secret
              source:
                secret:
                  secretName: aerospike-secret
              aerospike:
                path: /etc/aerospike/secret
  aerospikeConfig:
    service:
      feature-key-file: /etc/aerospike/secret/features.conf
    security: {}
    namespaces:
      - name: test
        memory-size: 6000000000
        replication-factor: 2
        storage-engine:
          type: device
          devices:
            - /dev/sdf
...
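Before replacing the rack, you can list the claims the Operator provisioned for rack 1. Claim names follow the usual StatefulSet volume-plus-pod pattern, so the names in the comment below are illustrative:

kubectl get pvc -n aerospike
# Expect one claim per volume per pod, e.g. workdir-aerocluster-1-0 (1Gi)
# and ns-aerocluster-1-0 (3Gi), with CAPACITY matching the sizes above.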
Create a New Rack
To resize /dev/sdf for namespace test, create a new rack inside rackConfig with an updated storage section, and delete the old rack from the CR file. In this example, the second volume grows from 3Gi to 8Gi.
If there is enough capacity, you can place the new rack in the same physical rack by reusing the existing zone/region, so that the old and new storage coexist while data migrates.
apiVersion: asdb.aerospike.com/v1
kind: AerospikeCluster
metadata:
  name: aerocluster
  namespace: aerospike
spec:
  size: 2
  image: aerospike/aerospike-server-enterprise:6.4.0.0
  rackConfig:
    namespaces:
      - test
    racks:
      # Added new rack with id: 2 and removed the old rack with id: 1
      - id: 2
        zone: us-central1-b
        storage:
          filesystemVolumePolicy:
            cascadeDelete: true
            initMethod: deleteFiles
          volumes:
            - name: workdir
              aerospike:
                path: /opt/aerospike
              source:
                persistentVolume:
                  storageClass: ssd
                  volumeMode: Filesystem
                  size: 1Gi
            - name: ns
              aerospike:
                path: /dev/sdf
              source:
                persistentVolume:
                  storageClass: ssd
                  volumeMode: Block
                  size: 8Gi
            - name: aerospike-config-secret
              source:
                secret:
                  secretName: aerospike-secret
              aerospike:
                path: /etc/aerospike/secret
  aerospikeConfig:
    service:
      feature-key-file: /etc/aerospike/secret/features.conf
    security: {}
    namespaces:
      - name: test
        memory-size: 10000000000
        replication-factor: 2
        storage-engine:
          type: device
          devices:
            - /dev/sdf
...
Save and exit the CR file, then use kubectl to apply the change.
kubectl apply -f aerospike-cluster.yaml
This creates a new rack with id: 2 and an updated storage section.
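You can follow the rollout through the CR itself; the Operator records reconciliation progress in the resource's status, though the exact fields vary by Operator version:

kubectl get aerospikecluster aerocluster -n aerospike -o yaml
# Inspect the .status section to see how far the Operator has reconciled the new spec.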
The Operator removes the old rack after verifying that the Aerospike server has migrated the old data to the new rack.
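If you want to watch the data movement yourself, one option is to query migration statistics from inside one of the new pods with asinfo. Because this CR enables security, credentials are required; admin/admin below is the bootstrap default and should be replaced with your own user:

kubectl exec aerocluster-2-0 -n aerospike -- \
  asinfo -U admin -P admin -v statistics | tr ';' '\n' | grep migrate_partitions_remaining
# Migrations are complete when migrate_partitions_remaining reaches 0.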
Check the pods with the get pods command:
> $ kubectl get pods -n aerospike
> NAME              READY   STATUS        RESTARTS   AGE
> aerocluster-2-0   1/1     Running       0          3m6s
> aerocluster-2-1   1/1     Running       0          3m6s
> aerocluster-1-1   1/1     Terminating   0          30s
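Once the old pods are gone, a final look at the claims confirms the resize. Depending on the cascadeDelete setting in your volume policies, the Operator also removes the old rack's claims, so only the new rack's volumes should remain:

kubectl get pvc -n aerospike
# Only claims for the aerocluster-2-* pods should remain, with the ns
# volume now showing an 8Gi capacity.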