
Provisioning storage for Aerospike on Kubernetes

To use persistent storage external to the pods, set up StorageClasses. A StorageClass is a Kubernetes object that tells a provisioner how to dynamically create the persistent storage required by the Aerospike cluster configuration in your CR file.

The specific storage configuration depends on the environment in which you deploy your Kubernetes cluster. Each cloud provider has its own way to set up storage provisioners that dynamically create and attach storage devices to the containers.

Setup

  1. To set up an Aerospike cluster with persistent storage, define your desired storage class in a storage-class.yaml file.

  2. Specify the storage class name in your CR file. AKO reads the storage class information from the CR file and configures the StatefulSet so Kubernetes can provision the required storage. In these examples, the file is in the directory where you run kubectl.

  3. Add the storage class configurations to this file, then use kubectl to apply these changes to the cluster.
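For example, a cluster CR can reference the storage class by name in its storage spec. The following is an illustrative sketch; the volume name, mount path, class name (ssd), and size are assumptions for this example:

```yaml
# Illustrative fragment of an AerospikeCluster CR storage spec.
# The volume name, path, storage class name, and size are examples only.
spec:
  storage:
    volumes:
      - name: workdir
        aerospike:
          path: /opt/aerospike        # mount point inside the Aerospike container
        source:
          persistentVolume:
            storageClass: ssd         # must match the StorageClass you create
            volumeMode: Filesystem
            size: 1Gi
```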

Google Cloud

The following storage-class.yaml file uses the GCE provisioner (kubernetes.io/gce-pd) to create a StorageClass called ssd.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd

Local volume

This example uses a local SSD identified as /dev/nvme0n1. Attach this SSD to each Kubernetes worker node that will provide the primary storage device for the Aerospike cluster deployment.

Create a discovery directory

Before you deploy the local volume provisioner, create a discovery directory on each worker node and link the block devices you want to use. The provisioner discovers local block volumes from this directory. The directory is defined in aerospike_local_volume_provisioner.yaml under storageClassMap:

provisioner/templates/provisioner.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: aerospike
data:
  useNodeNameOnly: "true"
  storageClassMap: |
    local-ssd:
      hostDir: /mnt/disks
      mountDir: /mnt/disks
...
List the available block devices:

lsblk

Output:
NAME      MAJ:MIN  RM   SIZE  RO  TYPE  MOUNTPOINT
nvme0n1   8:16      0   375G   0  disk
nvme0n2   8:32      0   375G   0  disk

Create the directory:

mkdir /mnt/disks

As a privileged user, link the first block device:

ln -s /dev/nvme0n1 /mnt/disks/

Link the second block device:

ln -s /dev/nvme0n2 /mnt/disks/
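The two ln commands above can be folded into a small loop, which is convenient when a node has several devices. This is a sketch: the device list is an example, and writing under /mnt/disks requires root privileges.

```shell
#!/bin/sh
# Link each block device intended for Aerospike into the discovery
# directory. The device names here are examples; adjust per node.
DISCOVERY_DIR=/mnt/disks
mkdir -p "$DISCOVERY_DIR"
for dev in /dev/nvme0n1 /dev/nvme0n2; do
  # -sfn replaces a stale symlink if the script is re-run
  ln -sfn "$dev" "$DISCOVERY_DIR/$(basename "$dev")"
done
```

Re-running the script is safe: existing links are replaced rather than duplicated.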

Deploy the local volume provisioner

To automate local volume provisioning, create and run a provisioner based on kubernetes-sigs/sig-storage-local-static-provisioner.

The provisioner runs as a DaemonSet that manages the local SSDs on each node based on a discovery directory, creates and deletes the PersistentVolumes, and cleans up the storage when it is released.

The local volume static provisioner for this example is defined in aerospike_local_volume_provisioner.yaml.

The storage class is defined in local_storage_class.yaml. This and other example CRs are available in the main Aerospike Kubernetes Operator repository.
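A local storage class of this kind typically uses the no-provisioner placeholder with delayed volume binding. The following is a sketch of what local_storage_class.yaml may contain; check the repository for the authoritative version:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd
provisioner: kubernetes.io/no-provisioner
# Delay binding until a pod is scheduled, so Kubernetes picks a PV
# on the same node where the pod lands.
volumeBindingMode: WaitForFirstConsumer
```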

  1. Create a local storage class:

    kubectl create -f local_storage_class.yaml
  2. Then deploy the provisioner:

    kubectl create -f aerospike_local_volume_provisioner.yaml
  3. Verify the persistent volumes were created.

    kubectl get pv

    Output:
    NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    local-pv-342b45ed   375Gi      RWO            Delete           Available           "local-ssd"             3s
    local-pv-3587dbec   375Gi      RWO            Delete           Available           "local-ssd"             3s
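Once the persistent volumes show Available, an AerospikeCluster CR can consume them by naming the local-ssd class. The fragment below is illustrative; the volume name, in-container device path, and size are assumptions:

```yaml
# Illustrative fragment: consuming the locally provisioned PVs as raw
# block devices. Names, path, and size are examples only.
spec:
  storage:
    volumes:
      - name: ns-storage
        aerospike:
          path: /dev/xvdf             # device path inside the Aerospike container
        source:
          persistentVolume:
            storageClass: local-ssd   # matches the local storage class above
            volumeMode: Block
            size: 375Gi
```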