# Provisioning storage for Aerospike on Kubernetes

## Overview

To use persistent storage external to the pods, set up [StorageClasses](https://kubernetes.io/docs/concepts/storage/storage-classes/). StorageClasses are configuration files with instructions for dynamically provisioning the persistent storage required by users in the Aerospike Custom Resource (CR) configuration.

The specific storage configuration depends on the environment in which you deploy your Kubernetes cluster. Each cloud provider has its own way to set up storage provisioners that dynamically create and attach storage devices to the containers.

## Setup

1.  To set up an Aerospike cluster with persistent storage, define your desired storage class in a `storage-class.yaml` file.
    
2.  Specify the storage class name in your Custom Resource (CR) file. AKO reads the storage class information from the CR and configures the [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/), allowing Kubernetes to provision the required storage. The file's location does not matter, because the command that applies it to the cluster takes the full file path. The examples here assume the file is in the directory from which you run `kubectl` commands.
    
3.  Add the storage class configuration to `storage-class.yaml`, then use `kubectl` to apply it to the cluster.
    

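For reference, the CR's `storage` section references a storage class by name. A minimal sketch (the class name, volume name, path, and size here are illustrative; consult the AKO CR reference for the full schema):

```yaml
# Illustrative fragment of an Aerospike cluster CR: storageClass
# must match a StorageClass you have created (here, "ssd").
spec:
  storage:
    volumes:
      - name: workdir              # illustrative volume name
        aerospike:
          path: /opt/aerospike     # mount path inside the Aerospike pod (assumed)
        source:
          persistentVolume:
            storageClass: ssd
            volumeMode: Filesystem
            size: 3Gi
```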
## Google Cloud

The following `storage-class.yaml` file uses the GCE provisioner (`kubernetes.io/gce-pd`) to create a StorageClass called `ssd`.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```
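For zonal persistent disks, you may also want to delay volume binding until a pod is scheduled, so the disk is created in the same zone as the consuming node. A hedged variant (the class name `ssd-wait` is illustrative; `volumeBindingMode` is a standard StorageClass field):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ssd-wait                           # illustrative name
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
volumeBindingMode: WaitForFirstConsumer    # bind when a pod is scheduled
```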

## Local volume

This example uses a local SSD (identified as `/dev/nvme0n1`). Attach this SSD to each Kubernetes worker node that will provide the primary storage device for the Aerospike cluster deployment.

### Create a discovery directory

Before deploying the local volume provisioner, create a discovery directory on each worker node and link the block devices to be used into that directory. The provisioner discovers local block volumes from this directory. The directory name is set in `aerospike_local_volume_provisioner.yaml` under `storageClassMap`:

provisioner/templates/provisioner.yaml

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-provisioner-config
  namespace: aerospike
data:
  useNodeNameOnly: "true"
  storageClassMap: |
    local-ssd:
       hostDir: /mnt/disks
       mountDir: /mnt/disks
...
```

List the attached block devices:

```shell
$ lsblk
NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme0n1    8:16  0  375G  0 disk
nvme0n2    8:32  0  375G  0 disk
```

Create the directory:

```shell
sudo mkdir /mnt/disks
```

Link the first block device:

```shell
sudo ln -s /dev/nvme0n1 /mnt/disks/
```

Link the second block device:

```shell
sudo ln -s /dev/nvme0n2 /mnt/disks/
```

::: note
You can also use your own discovery directory, but make sure that the provisioner is configured to point to the same directory.
:::

### Deploy the local volume provisioner

To automate local volume provisioning, create and run a provisioner based on [kubernetes-sigs/sig-storage-local-static-provisioner](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner).

The provisioner runs as a DaemonSet that manages the local SSDs on each node based on a discovery directory, creates and deletes the PersistentVolumes, and cleans up the storage when it is released.

The local volume static provisioner for this example is defined in [aerospike\_local\_volume\_provisioner.yaml](https://github.com/aerospike/aerospike-kubernetes-operator/tree/v4.1.2/config/samples/storage/aerospike_local_volume_provisioner.yaml).

The storage class is defined in [local\_storage\_class.yaml](https://github.com/aerospike/aerospike-kubernetes-operator/blob/v4.1.2/config/samples/storage/local_storage_class.yaml). This and other example CRs are available in [the main Aerospike Kubernetes Operator repository](https://github.com/aerospike/aerospike-kubernetes-operator/tree/v4.1.2/config/samples).
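The linked storage class follows the static provisioner pattern: local volumes cannot be created on demand, so the class uses the `no-provisioner` placeholder and delays binding until a consuming pod is scheduled. A minimal sketch (see the linked file for the exact definition):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd
# Placeholder: PVs are created by the static provisioner, not on demand.
provisioner: kubernetes.io/no-provisioner
# Bind a PV only once a pod using the claim is scheduled to a node.
volumeBindingMode: WaitForFirstConsumer
```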

::: note
1.  If the provisioner configuration is set to `shred` for block cleanup, then PersistentVolume release and reclaim takes time proportional to the size of the block device.
    
2.  The `local-provisioner-config` ConfigMap in [aerospike\_local\_volume\_provisioner.yaml](https://github.com/aerospike/aerospike-kubernetes-operator/tree/2.1.0/config/samples/storage/aerospike_local_volume_provisioner.yaml) can be changed to use `blkdiscard` as the cleanup method if the disk supports it. For more information, refer to the [FAQs](https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/faqs).
    
    Refer to the `blockCleanerCommand` section in the following configuration:
    
    ```yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: local-provisioner-config
      namespace: aerospike
    data:
      useNodeNameOnly: "true"
      storageClassMap: |
        local-ssd:
           hostDir: /mnt/disks
           mountDir: /mnt/disks
           blockCleanerCommand:
             - "/scripts/blkdiscard.sh"
           volumeMode: Block
    ```
    

When using `blkdiscard`, make sure the drive deterministically returns zeros after TRIM (RZAT); check with your hardware or cloud provider. If you are unsure, it is safer to use `dd`.
:::

1.  Create a local storage class:
    
    ```shell
    kubectl create -f local_storage_class.yaml
    ```
    
2.  Deploy the provisioner:
    
    ```shell
    kubectl create -f aerospike_local_volume_provisioner.yaml
    ```
    
    
3.  Verify that the persistent volumes were created:
    
    ```shell
    $ kubectl get pv
    ```
    
    Output:
    
    ```plaintext
    NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
    local-pv-342b45ed   375Gi      RWO            Delete           Available           local-ssd               3s
    local-pv-3587dbec   375Gi      RWO            Delete           Available           local-ssd               3s
    ```
    
    ::: note
    The storage class configured here is `local-ssd`. Provide this name in the Aerospike cluster CR configuration; it is used to request PV resources from the provisioner for the cluster.
    :::
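To consume these volumes, the Aerospike cluster CR references the `local-ssd` class. An illustrative fragment (the volume name, device path, and size are assumptions; adjust them to your deployment):

```yaml
# Illustrative fragment of the CR's storage section: request a raw
# block device from the "local-ssd" class for Aerospike storage.
storage:
  volumes:
    - name: ns-storage            # illustrative volume name
      aerospike:
        path: /dev/xvdf           # device path inside the Aerospike pod (assumed)
      source:
        persistentVolume:
          storageClass: local-ssd
          volumeMode: Block       # matches the provisioner's volumeMode
          size: 100Gi
```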