OpenEBS
Overview
OpenEBS (Elastic Block Store) is an open-source platform that enables cloud-native, local or distributed persistent volume (PV) storage for Kubernetes. It is a platform-agnostic alternative to Amazon Elastic Block Store.
For more about OpenEBS, see the official OpenEBS documentation, particularly the Prerequisites.
Planning your deployment
OpenEBS provides multiple storage engines for provisioning local as well as replicated volumes, depending on the use case:
- Local volumes (Local PV device, LVM local PV)
- cStor replicated volumes (not recommended, see warning)
Before installing, consider the following pros and cons of each solution.
Local PV device volumes
This is Aerospike's recommended storage solution for OpenEBS.
Pros:
- Dynamic volume provisioner as opposed to a static provisioner.
- Better management of the block devices used for creating local PVs, through the OpenEBS Node Disk Manager (NDM). NDM provides capabilities such as discovering block device properties, setting up device filters, collecting metrics, and detecting whether block devices have moved across nodes.
Cons:
- Local PV size cannot be increased dynamically.
- Capacity and PVC resource quotas cannot be enforced on the local disks or host paths.
- A 1-1 mapping between the physical disk attached to the Kubernetes node and PVs claimed by the Aerospike pod (or any application) running on that node, which restricts the maximum number of PVCs that can be created on a Kubernetes node.
LVM local PVs
Pros:
- Solves all aforementioned limitations of the local PV device.
- Aerospike can create any number of PVCs (the aggregate size must be less than the size of the volume group).
Cons:
- All worker nodes must have lvm2 installed. This involves an additional step of creating a volume group on each Kubernetes node before installing the OpenEBS lvm CSI driver.
- The backup/restore and clone features are not supported.
- During cluster scale-up, the volume group must already be created on the new node.
cStor replicated volumes (not recommended)
Although it is possible to use replicated local volumes, Aerospike strongly discourages doing so.
- Aerospike already implements replication at the application level, and the additional replication could severely impact performance and reliability.
- cStor has an iSCSI layer to provision volumes, which impedes performance.
Local PV device setup
The OpenEBS dynamic local PV provisioner can create Kubernetes local persistent volumes using block devices available on the node to persist data.
1. Set up OpenEBS
Install OpenEBS on the cluster using kubectl:

```bash
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
```

This creates pods in the openebs namespace.
A daemonset then runs NDM (Node Disk Manager) pods on each Kubernetes node.
NDM pods detect attached block devices and load them as block device custom resources into Kubernetes.
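To see what NDM has discovered on your nodes, you can list these custom resources (the blockdevices resources live in the openebs namespace; actual device names vary by cluster):

```
kubectl get blockdevices -n openebs
```

Each BlockDevice resource records the device path, capacity, and claim state of one disk.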
Use OpenEBS as the only provisioner in the cluster. If your cluster uses additional provisioners alongside OpenEBS, such as gce-pd (Google Compute Engine Persistent Disk), use exclude filters in your openebs-operator.yaml file to prevent those disks from being consumed by OpenEBS.
For more information on filtering, see NDM Filters.
Without exclude filters in a cluster with additional provisioners, this error appears:

```
Unable to mount pvc <pvc-name>, mount point is busy
```
Restart all NDM pods on each node after modifying the configuration.
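The exclude filters live in the openebs-ndm-config ConfigMap inside openebs-operator.yaml. For illustration, the path filter below extends the default exclude list with /dev/sdb, a hypothetical device consumed by another provisioner; adjust the path to match your environment:

```yaml
filterconfigs:
  - key: path-filter
    name: path filter
    state: true
    include: ""
    # /dev/sdb is an illustrative entry; replace with the devices
    # your other provisioners manage.
    exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md,/dev/sdb"
```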
2. Install Aerospike cluster
Use the value openebs-device for the storageClass key to provision a local volume PVC.
```yaml
volumes:
  - name: workdir
    aerospike:
      path: /opt/aerospike
    source:
      persistentVolume:
        storageClass: ssd
        volumeMode: Filesystem
        size: 1Gi
  - name: ns1
    aerospike:
      path: /test/dev/xvdf1
    source:
      persistentVolume:
        storageClass: openebs-device
        volumeMode: Block
        size: 5Gi
```
OpenEBS then creates BlockDeviceClaim objects to claim the already-created BlockDevices.

Run kubectl get blockdeviceclaim, kubectl get pv, kubectl get pvc, and kubectl get pod to check that the objects have been created successfully.
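For example (the aerospike namespace below is an assumption; substitute the namespace your cluster runs in):

```
kubectl get blockdeviceclaims -n openebs
kubectl get pv
kubectl get pvc -n aerospike
kubectl get pod -n aerospike
```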
Set up an LVM local PV
Before beginning, all nodes must have the lvm2 utilities installed.
1. Set up Kubernetes nodes and volume group
Use vgcreate to set up a volume group that the LVM driver will use for provisioning volumes. This example creates two physical volumes, then groups them into the lvmvg volume group.
```
$ sudo pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created.
$ sudo pvcreate /dev/sdc1
  Physical volume "/dev/sdc1" successfully created.
$ sudo vgcreate lvmvg /dev/sdb1 /dev/sdc1
  Volume group "lvmvg" successfully created
```
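Before moving on, you can confirm the volume group is visible on the node; vgs summarizes its size and free space (values will differ in your environment):

```
sudo vgs lvmvg
```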
2. Install OpenEBS LVM driver
Install the latest release of the OpenEBS LVM driver by running the following command.
```bash
kubectl apply -f https://openebs.github.io/charts/lvm-operator.yaml
```
Run kubectl get pods to see one lvm-controller pod and the lvm-node daemonset pods running on the nodes. You may see more pods depending on your setup.
```
kubectl get pods -n kube-system -l role=openebs-lvm
NAME                       READY   STATUS    RESTARTS   AGE
openebs-lvm-controller-0   5/5     Running   0          80s
openebs-lvm-node-jtb9l     2/2     Running   0          73s
openebs-lvm-node-ppt8f     2/2     Running   0          73s
openebs-lvm-node-ssgln     2/2     Running   0          73s
```
3. Install storage class
Create a storage class with the provisioner field set to local.csi.openebs.io.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
provisioner: local.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
```
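Save the manifest and apply it, then confirm the storage class exists (the filename openebs-lvmpv-sc.yaml is an assumption):

```
kubectl apply -f openebs-lvmpv-sc.yaml
kubectl get sc openebs-lvmpv
```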
4. Install Aerospike cluster
Install an Aerospike cluster, setting storageClass to the storage class created in the previous step (openebs-lvmpv in this example) to provision a local volume PVC.
```yaml
volumes:
  - name: workdir
    aerospike:
      path: /opt/aerospike
    source:
      persistentVolume:
        storageClass: ssd
        volumeMode: Filesystem
        size: 1Gi
  - name: ns1
    aerospike:
      path: /test/dev/xvdf1
    source:
      persistentVolume:
        storageClass: openebs-lvmpv
        volumeMode: Block
        size: 5Gi
```
Verify the Aerospike pods and PVCs with kubectl get pv, kubectl get pvc, and kubectl get pod to check that the objects have been created successfully.
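For example (the aerospike namespace is an assumption; substitute your own):

```
kubectl get pv
kubectl get pvc,pod -n aerospike
```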