Node maintenance for Aerospike on Kubernetes
Use the Node Affinity feature in the Kubernetes Operator to migrate Aerospike pods off Kubernetes nodes. This is useful for maintenance work such as upgrading the Kubernetes version on those nodes.
Network-attached storage
Setting a scheduling policy such as affinity, taints and tolerations, or nodeSelectors can migrate the pods to a different node pool so that the current node pool can be brought down. Set RollingUpdateBatchSize to expedite this process by migrating pods in batches.
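As a minimal sketch, the batch size is typically set in the rackConfig section of the CR; the placement of rollingUpdateBatchSize under spec.rackConfig and the percentage value shown here are assumptions, so check the CR reference for your operator version.
rackConfig:
  # Restart and migrate up to half of each rack's pods at a time
  # instead of one pod at a time.
  rollingUpdateBatchSize: "50%"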
For example, you can set the following nodeAffinity in the podSpec section of the Custom Resource (CR) file. AKO performs a rolling restart of the cluster and migrates the pods based on the scheduling policy: it restarts the pods and moves them to nodes carrying the label cloud.google.com/gke-nodepool: upgrade-pool, so they land in the node pool named upgrade-pool.
podSpec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: cloud.google.com/gke-nodepool
                operator: In
                values:
                  - upgrade-pool
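The same migration can also be expressed with the other scheduling policies mentioned above, such as a nodeSelector or a toleration in podSpec. The sketch below assumes the same upgrade-pool node label; the dedicated=aerospike taint is hypothetical and only needed if the target nodes are tainted.
podSpec:
  # Schedule pods only on nodes in the upgrade-pool node pool.
  nodeSelector:
    cloud.google.com/gke-nodepool: upgrade-pool
  # Hypothetical example: tolerate a dedicated=aerospike:NoSchedule taint
  # if the target nodes carry one.
  tolerations:
    - key: dedicated
      operator: Equal
      value: aerospike
      effect: NoSchedule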
Local-attached storage
When Kubernetes pods use local storage, they cannot move to different Kubernetes nodes because of volume affinity, so a rolling restart with a different scheduling policy does not work. However, you can use the K8sNodeBlockList feature to migrate pods to other Kubernetes nodes even when they use local storage.
K8sNodeBlockList
If you want to bring some Kubernetes nodes down for maintenance or upgrade purposes, you can use the optional K8sNodeBlockList CR section to safely migrate the pods out of the given Kubernetes nodes. K8sNodeBlockList specifies the list of Kubernetes node names from which you want to migrate pods. AKO reads this configuration and safely migrates pods off these nodes.
If pods are using network-attached storage, AKO migrates the pods out of their Kubernetes nodes without additional configuration. If pods are using local-attached storage, you must specify those local storage classes in the spec.Storage.LocalStorageClasses field of the CR. AKO uses this field to delete the corresponding local volumes so that the pods can be easily migrated out of the Kubernetes nodes.
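In YAML form this corresponds to listing the class names under spec.storage.localStorageClasses (lower-camel-case path assumed); the class name local-ssd below is hypothetical.
storage:
  # AKO treats volumes provisioned by these classes as local and deletes them
  # before moving a pod, so the pod can be rescheduled on another Kubernetes node.
  localStorageClasses:
    - local-ssd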
This process uses the RollingUpdateBatchSize parameter defined in your CR to migrate pods in batches for efficiency.
The following example CR includes a spec.K8sNodeBlockList section with two nodes defined:
apiVersion: asdb.aerospike.com/v1
kind: AerospikeCluster
metadata:
  name: aerocluster
  namespace: aerospike
spec:
  k8sNodeBlockList:
    - gke-test-default-pool-b6f71594-1w85
    - gke-test-default-pool-b6f71594-9vm2
  size: 4
  image: aerospike/aerospike-server-enterprise:8.1.0.0
  rackConfig:
    namespaces:
      - test
    racks:
      - id: 1
      - id: 2
  ...