OpenEBS
Overview
OpenEBS (Elastic Block Store) is an open-source platform that enables cloud-native, local or distributed persistent volume (PV) storage for Kubernetes. It is a platform-agnostic alternative to Amazon Elastic Block Store.
For more about OpenEBS, see the official OpenEBS documentation, particularly the Prerequisites.
Planning your deployment
OpenEBS offers multiple storage engines for provisioning local as well as replicated volumes, depending on the use case:
- Local volumes (Local PV device, LVM local PV)
- Replicated volumes (cStor)
Before installing, consider the following pros and cons of each solution.
Local PV device volumes
Pros:
- Dynamic volume provisioner as opposed to a static provisioner.
- Better management of the block devices used for creating local PVs through OpenEBS NDM (Node Disk Manager). NDM provides capabilities such as discovering block device properties, setting up device filters, collecting metrics, and detecting whether block devices have moved across nodes.
Cons:
- Local PV size cannot be increased dynamically.
- Capacity and PVC resource quotas cannot be enforced on the local disks or host paths.
- A 1-1 mapping between the physical disk attached to the Kubernetes node and PVs claimed by the Aerospike pod (or any application) running on that node, which restricts the maximum number of PVCs that can be created on a Kubernetes node.
LVM local PVs
Pros:
- Solves all aforementioned limitations of the local PV device.
- Aerospike can create any number of PVCs (the aggregate size must be less than the size of the volume group).
Cons:
- All worker nodes need to have lvm2 installed, and a volume group must be created on each Kubernetes node before installing the OpenEBS LVM CSI driver.
- Backup/restore and clone features are not supported.
- During cluster scale-up, the volume group must already be created on the new node.
cStor replicated volumes
Pros:
- Provides synchronous replication of data across multiple disks on the nodes.
- Manages storage for multiple applications from a common pool of local or network disks on each node.
- Features such as thin provisioning and on-demand capacity expansion of the pool and volume help manage the storage layer.
- Enables Kubernetes-native storage services, similar to AWS EBS or Google PD, on Kubernetes clusters running on-premises.
- Supports storage-level snapshots and clones.
- Supports enterprise-grade storage protection features such as data consistency and resiliency (RAID protection).
Cons:
- Aerospike already provides its own replication, which makes replicated volumes through cStor largely unnecessary.
- cStor provisions volumes through an iSCSI layer, which impedes performance.
Local PV device setup
The OpenEBS dynamic local PV provisioner can create Kubernetes local persistent volumes using block devices available on the node to persist data.
1. Set up OpenEBS
Install OpenEBS on the cluster using kubectl:

kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

This creates pods in the openebs namespace. A DaemonSet then runs NDM (Node Disk Manager) pods on each Kubernetes node. NDM pods detect attached block devices and load them into Kubernetes as block device custom resources.
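To confirm that the control plane is up and that NDM has discovered your disks, you can list the pods and the block device custom resources. This is a minimal check, assuming OpenEBS was installed into the default openebs namespace:

```bash
# OpenEBS control-plane and NDM pods
kubectl get pods -n openebs

# Block devices discovered by NDM on each node
kubectl get blockdevices -n openebs
```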
Use OpenEBS as the only provisioner in the cluster. If your cluster uses additional provisioners alongside OpenEBS, such as gce-pd (Google Compute Engine Persistent Disk), add exclude filters in your openebs-operator.yaml file to prevent those disks from being consumed by OpenEBS.
For more information on filtering, see NDM Filters.
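As a sketch of what such a filter looks like, the path exclude filter is part of the openebs-ndm-config ConfigMap embedded in openebs-operator.yaml; the extra /dev/sdb entry below is only an illustration and should be replaced with the device paths your other provisioners actually use:

```yaml
# Excerpt from the openebs-ndm-config ConfigMap in openebs-operator.yaml
filterconfigs:
  - key: path-filter
    name: path filter
    state: true
    include: ""
    # Append the device paths used by other provisioners (illustrative example)
    exclude: "/dev/loop,/dev/fd0,/dev/sr0,/dev/ram,/dev/dm-,/dev/md,/dev/sdb"
```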
Without exclude filters in a cluster with additional provisioners, this error appears:
Unable to mount pvc <pvc-name>, mount point is busy
Restart all NDM pods on each node after modifying the configuration.
2. Install Aerospike cluster
Use the value openebs-device for the storageClass key to provision a local volume PVC.
volumes:
  - name: workdir
    aerospike:
      path: /opt/aerospike
    source:
      persistentVolume:
        storageClass: ssd
        volumeMode: Filesystem
        size: 1Gi
  - name: ns1
    aerospike:
      path: /test/dev/xvdf1
    source:
      persistentVolume:
        storageClass: openebs-device
        volumeMode: Block
        size: 5Gi
OpenEBS then creates BlockDeviceClaim objects to claim the already created BlockDevices. Run get BlockDeviceClaim, get pv, get pvc, and get pod to check that the objects have been successfully created.
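For example (the Aerospike namespace below is a placeholder for wherever your cluster is deployed):

```bash
kubectl get blockdeviceclaims -n openebs
kubectl get pv
kubectl get pvc -n <aerospike-namespace>
kubectl get pods -n <aerospike-namespace>
```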
Set up an LVM local PV
Before beginning, all nodes must have lvm2 utilities installed.
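For example, on Debian or Ubuntu based worker nodes lvm2 can usually be installed with apt; adjust the package manager for your distribution:

```bash
sudo apt-get update && sudo apt-get install -y lvm2
```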
1. Set up Kubernetes nodes and volume group
Use vgcreate to set up a volume group that will be used by the LVM driver for provisioning the volumes. In this example, two physical volumes are created and then grouped into the lvmvg volume group.
sudo pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created.
sudo pvcreate /dev/sdc1
Physical volume "/dev/sdc1" successfully created.
sudo vgcreate lvmvg /dev/sdb1 /dev/sdc1
Volume group "lvmvg" successfully created
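You can confirm that the volume group exists and check its free capacity before moving on, for example:

```bash
sudo vgs lvmvg
sudo vgdisplay lvmvg
```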
2. Install OpenEBS LVM driver
Install the latest release of the OpenEBS LVM driver by running the following command.
kubectl apply -f https://openebs.github.io/charts/lvm-operator.yaml
Run kubectl get pods to verify that one lvm-controller pod and the lvm-node daemonset pods are running on the nodes. You may see more pods depending on your setup.
kubectl get pods -n kube-system -l role=openebs-lvm
NAME READY STATUS RESTARTS AGE
openebs-lvm-controller-0 5/5 Running 0 80s
openebs-lvm-node-jtb9l 2/2 Running 0 73s
openebs-lvm-node-ppt8f 2/2 Running 0 73s
openebs-lvm-node-ssgln 2/2 Running 0 73s
3. Install storage class
Create a storage class with the provisioner field set to local.csi.openebs.io.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
parameters:
  storage: "lvm"
  volgroup: "lvmvg"
provisioner: local.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
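Save the manifest and apply it; the file name here is only an example:

```bash
kubectl apply -f openebs-lvmpv-sc.yaml
kubectl get sc openebs-lvmpv
```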
4. Install Aerospike cluster
Install an Aerospike cluster by setting storageClass to the LVM storage class created above (openebs-lvmpv in this example) to provision a local volume PVC.
volumes:
  - name: workdir
    aerospike:
      path: /opt/aerospike
    source:
      persistentVolume:
        storageClass: ssd
        volumeMode: Filesystem
        size: 1Gi
  - name: ns1
    aerospike:
      path: /test/dev/xvdf1
    source:
      persistentVolume:
        storageClass: openebs-lvmpv
        volumeMode: Block
        size: 5Gi
Run get pv, get pvc, and get pod to check that the Aerospike pods and PVCs have been successfully created.
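For example, you can also list the LVMVolume resources created by the driver; the namespace placeholders are assumptions about where your Aerospike cluster runs:

```bash
kubectl get pv
kubectl get pvc -n <aerospike-namespace>
kubectl get pods -n <aerospike-namespace>

# LVMVolume custom resources created by the OpenEBS LVM driver
kubectl get lvmvolumes -A
```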
OpenEBS setup with replicated volumes
Though there are multiple ways to provision replicated volumes in OpenEBS, this example uses the cStor storage engine. The primary function of cStor is to serve iSCSI block storage using the underlying disks or cloud volumes in a cloud-native way. Use Ubuntu as the node image type on GKE because cStor requires an iSCSI client, and GKE's Container-Optimized OS neither ships with an iSCSI client nor allows one to be installed.
Prerequisites
Verify that iSCSI services are configured on all GKE nodes. If the iSCSI initiator is already installed on your node, check that the initiator name is configured and the iSCSI service is running using the following commands:
sudo cat /etc/iscsi/initiatorname.iscsi
systemctl status iscsid
If the service status appears as Inactive, enable and start the iscsid service using sudo systemctl enable --now iscsid.
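If the initiator is not installed at all, it can typically be added on Ubuntu nodes (the image type recommended above) as shown here; the package name assumes Ubuntu's open-iscsi package:

```bash
sudo apt-get update && sudo apt-get install -y open-iscsi
sudo systemctl enable --now iscsid
systemctl status iscsid
```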
1. Verify unformatted block devices
cStor requires raw (unformatted) block devices attached to the nodes to provision storage. The disks must not have a filesystem and must not be mounted on the node.
Use the lsblk -fa command to check whether the disks have a filesystem or are mounted.
lsblk -fa
NAME    FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
loop0
sdb
└─sdb1
sdc
└─sdc1
2. Install cStor operator
Install the cStor operator using kubectl.
kubectl apply -f https://openebs.github.io/charts/cstor-operator.yaml
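Once the operator pods are running, list the block devices NDM has registered; the NODENAME and NAME columns provide the values you need for the pool spec in the next step (assuming the default openebs namespace):

```bash
# Only devices in an Unclaimed/Active state can be used for a new pool
kubectl get blockdevices -n openebs
```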
3. Create cStor storage pools
Create a Kubernetes custom resource called CStorPoolCluster, specifying the details of the nodes and the devices on those nodes that must be used to set up cStor pools.
Replace the placeholders in the following example with your actual hostnames and block device names.
apiVersion: cstor.openebs.io/v1
kind: CStorPoolCluster
metadata:
  name: cstor-disk-pool
  namespace: openebs
spec:
  pools:
    - nodeSelector:
        kubernetes.io/hostname: "<hostname>"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "<block device name>"
      poolConfig:
        dataRaidGroupType: "stripe"
    - nodeSelector:
        kubernetes.io/hostname: "<hostname>"
      dataRaidGroups:
        - blockDevices:
            - blockDeviceName: "<block device name>"
      poolConfig:
        dataRaidGroupType: "stripe"
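Apply the manifest and verify that a pool instance (CSPI) comes online for each node in the spec; the file name below is only an example:

```bash
kubectl apply -f cspc.yaml
kubectl get cspc -n openebs
kubectl get cspi -n openebs
```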
4. Create storage class
Create a storage class that uses the cstor.csi.openebs.io provisioner. In the parameters section, cstorPoolCluster should have the name of the CSPC, and replicaCount should be less than or equal to the number of CSPIs created in the selected CSPC. For example, if replicaCount is "3", cStor maintains three copies of the data on three different nodes, and data is written synchronously to all three replicas.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cstor-csi-disk
provisioner: cstor.csi.openebs.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  cas-type: cstor
  cstorPoolCluster: cstor-disk-pool
  replicaCount: "1"
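Apply the storage class and confirm it exists; the file name is illustrative:

```bash
kubectl apply -f cstor-csi-disk-sc.yaml
kubectl get sc cstor-csi-disk
```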
5. Install Aerospike cluster
Use the value cstor-csi-disk for storageClass to provision a cStor volume PVC.
volumes:
  - name: workdir
    aerospike:
      path: /opt/aerospike
    source:
      persistentVolume:
        storageClass: ssd
        volumeMode: Filesystem
        size: 1Gi
  - name: ns1
    aerospike:
      path: /test/dev/xvdf1
    source:
      persistentVolume:
        storageClass: cstor-csi-disk
        volumeMode: Block
        size: 5Gi
  - name: ns
    aerospike:
      path: /test/dev/xvdf
    source:
      persistentVolume:
        storageClass: cstor-csi-disk
        volumeMode: Block
        size: 5Gi
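As with the other engines, you can check the resulting Kubernetes objects; the cStor-specific resources shown here assume the default openebs namespace, and the Aerospike namespace is a placeholder:

```bash
kubectl get pvc -n <aerospike-namespace>
kubectl get pv
kubectl get pods -n <aerospike-namespace>

# cStor volumes and their replicas
kubectl get cstorvolumes -n openebs
kubectl get cstorvolumereplicas -n openebs
```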