To use persistent storage external to the pods, set up StorageClasses.
StorageClasses are configuration files with instructions for dynamically provisioning the persistent storage required by users in the Aerospike Custom Resource (CR) configuration.
The specific storage configuration depends on the environment in which you deploy your Kubernetes cluster.
Each cloud provider has its own way to set up storage provisioners that dynamically create and attach storage devices to the containers.
Setup
To set up an Aerospike cluster with persistent storage, define your desired storage class in a storage-class.yaml file.
Specify the storage class name in your Custom Resource (CR) file. AKO reads the storage class information from the CR file and configures the StatefulSet, allowing Kubernetes to provision the required storage. The location of the storage-class.yaml file is not important, because the command that applies it includes the full file path. In the examples here, we assume the file is located in the directory from which you run kubectl commands.
Add the storage class configurations to this file, then use kubectl to apply these changes to the cluster.
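For example, if the StorageClass definition shown in the next section is saved as storage-class.yaml in your current directory, the apply step is:
kubectl apply -f storage-class.yaml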
Google Cloud
The following storage-class.yaml file uses the GCE provisioner (kubernetes.io/gce-pd) to create a StorageClass called ssd.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
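After applying the file, you can confirm that the StorageClass exists with kubectl get storageclass. The output below is illustrative:
kubectl get storageclass ssd
NAME   PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ssd    kubernetes.io/gce-pd   Delete          Immediate           false                  5s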
Local volume
This example uses a local SSD (identified as /dev/nvme0n1). Attach such an SSD to each Kubernetes worker node that will provide the primary storage device for the Aerospike cluster deployment.
Create a discovery directory
Before deploying the local volume provisioner, create a discovery directory on each worker node and link the block devices to be used into that directory. The provisioner discovers local block volumes from this directory. The directory is named in aerospike_local_volume_provisioner.yaml under storageClassMap:
provisioner/templates/provisioner.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: aerospike
data:
  useNodeNameOnly: "true"
  storageClassMap: |
    local-ssd:
      hostDir: /mnt/disks
      mountDir: /mnt/disks
      ...
Run lsblk on the worker node to list the local block devices:
$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1   8:16   0  375G  0 disk
nvme0n2   8:32   0  375G  0 disk
Create the directory:
mkdir /mnt/disks
Link to the first block device:
sudo ln -s /dev/nvme0n1 /mnt/disks/
Link to the second block device:
sudo ln -s /dev/nvme0n2 /mnt/disks/
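To confirm that the discovery directory now points at both devices, list it. The output here is abbreviated and illustrative:
$ ls -l /mnt/disks
... nvme0n1 -> /dev/nvme0n1
... nvme0n2 -> /dev/nvme0n2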
You can also use your own discovery directory, but make sure that the provisioner is configured to point to the same directory.
The provisioner runs as a DaemonSet that manages the local SSDs on each node based on a discovery directory, creates and deletes the PersistentVolumes, and cleans up the storage when it is released.
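Once you are satisfied with the provisioner configuration (including the cleanup settings discussed below), deploy it and check that its pods are running. The manifest name is the aerospike_local_volume_provisioner.yaml referenced earlier; adjust the path and namespace to your setup:
kubectl apply -f aerospike_local_volume_provisioner.yaml
kubectl get pods -n aerospike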
If the provisioner configuration is set to shred for block cleanup, PersistentVolume release and reclaim takes time proportional to the size of the block device. The local-provisioner-config ConfigMap in aerospike_local_volume_provisioner.yaml can be changed to use blkdiscard as the cleanup method if the disk supports it. For more information, refer to the FAQs.
Refer to the blockCleanerCommand section in the following configuration:
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: aerospike
data:
  useNodeNameOnly: "true"
  storageClassMap: |
    local-ssd:
      hostDir: /mnt/disks
      mountDir: /mnt/disks
      blockCleanerCommand:
        - "/scripts/blkdiscard.sh"
      volumeMode: Block
When using blkdiscard, make sure the drive deterministically returns zeros after TRIM (RZAT). Check with your hardware or cloud provider. If you are unsure, it is safer to use dd.
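As a sketch of a dd-based alternative, the local static provisioner image ships block-cleanup helper scripts, and a configuration along the following lines (an excerpt of the ConfigMap data section) is commonly used. The script name /scripts/dd_zero.sh and its pass-count argument are assumptions; check which scripts your provisioner image actually provides before relying on them:
  storageClassMap: |
    local-ssd:
      hostDir: /mnt/disks
      mountDir: /mnt/disks
      blockCleanerCommand:
        - "/scripts/dd_zero.sh"
        - "2"   # assumed: number of overwrite passes
      volumeMode: Block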
Verify that the provisioner has created a PersistentVolume for each linked device (for example, with kubectl get pv):
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-342b45ed   375Gi      RWO            Delete           Available           "local-ssd"             3s
local-pv-3587dbec   375Gi      RWO            Delete           Available           "local-ssd"             3s
The StorageClass configured here is "local-ssd". Provide this name in the Aerospike cluster CR configuration; it is used to request PersistentVolume resources from the provisioner for the cluster.
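As an illustrative sketch of how this StorageClass might be referenced from the AerospikeCluster CR storage section (field names follow the AKO storage spec, but verify them against the CRD for your operator version; the volume name, device path, and size are placeholders):
spec:
  storage:
    volumes:
      - name: ns-data                 # placeholder volume name
        aerospike:
          path: /dev/xvdf             # device path exposed inside the Aerospike pod
        source:
          persistentVolume:
            storageClass: local-ssd   # StorageClass served by the local volume provisioner
            volumeMode: Block
            size: 375Gi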