
OpenEBS: cStor storage engine on KVM

OpenEBS provides a Kubernetes-native distributed storage solution which is friendly to developers and administrators. It is completely open source and part of the CNCF. Previously I wrote about installing and using OpenEBS with the Jiva storage engine on Canonical's Charmed Kubernetes distribution. The Jiva storage class uses storage inside managed pods. cStor, however, can use raw disks attached to Kubernetes nodes. Since I was trying out Kubespray (also a CNCF project) on KVM, and it is relatively easy to attach raw storage to KVM nodes, I decided to give cStor a try. cStor (which uses ZFS behind the scenes) is also the more recent and more robust storage engine and is suitable for more serious workloads. See here. You can download the scripts I used to set up my Kubernetes environment here.

Preparations

Prerequisites

I used the setup described here. Kickstart is used to create KVM VMs with the names node1, node2, etc. Kubespray is used to provision the nodes with Kubernetes. A prerequisite for the following steps is a running Kubernetes environment created using the described method.

Preparing the environment

In order to use OpenEBS, an iSCSI client needs to be installed on the nodes and the iscsid service needs to be enabled. This is described in the prerequisites of OpenEBS here. For my setup I created a small script that loops over my nodes and executes some SSH commands on them to do just that (the exec_ssh.sh script here). You can use that script to execute the commands below on every node.

You can also manually execute the following commands on your node hosts after having logged in there, should your environment look different. These commands are Ubuntu-based and have been checked on 18.04 and 20.04. They will probably work on other Debian-based distributions, but for other OSs, check the previously mentioned OpenEBS documentation on how to install the client.

 sudo apt-get -y install open-iscsi  
 sudo systemctl enable --now iscsid  
 sudo systemctl enable iscsid  
 sudo systemctl start iscsid  
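
If you prefer to script this instead of logging in to every node, a loop along the following lines achieves the same. This is a minimal sketch that assumes the nodes are reachable as node1 to node4 over SSH with passwordless sudo; the exec_ssh.sh script mentioned earlier serves the same purpose but may differ in its details.

 #Install and enable the iSCSI client on every node over SSH
 for n in $(seq 1 4); do
   ssh node$n "sudo apt-get -y install open-iscsi && sudo systemctl enable --now iscsid"
 done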

The other commands mentioned in this post are run on the host, not on the individual nodes.

Preparing and attaching raw storage

In my environment, I had KVM machines named node1, node2, etc. In order to create raw storage I did the following:

 #Go to the location where your images are stored. In my case this is /home/maarten/k8s/k8s-prov/machines/images
 cd /home/maarten/k8s/k8s-prov/machines/images
 #Repeat for every node (4 nodes in this setup)
 for n in $(seq 1 4); do
   #Create a raw storage image of 10 GB
   qemu-img create -f raw node$n-10G 10G
   #Attach the storage to the node as vdb
   virsh attach-disk node$n /home/maarten/k8s/k8s-prov/machines/images/node$n-10G vdb --cache none --persistent
 done
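
To verify that a disk was attached and is visible inside a node, something along these lines can be used (plain virsh and SSH, reusing the node names from above):

 #List the block devices attached to the VM from the host
 virsh domblklist node1
 #Check that /dev/vdb shows up inside the node
 ssh node1 lsblk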

OpenEBS


Installing OpenEBS is quite easy. I've used version 1.12.0. As described, a prerequisite is a running Kubernetes environment and, of course, a working ~/.kube/config in order to use kubectl commands. Helm also needs to be installed.

Preparations

In order to install kubectl and helm on Ubuntu you can do:

 sudo snap install kubectl --classic  
 sudo snap install helm --classic  

In order to install OpenEBS you can do the following:

 helm repo add openebs https://openebs.github.io/charts  
 helm repo update  
 kubectl create namespace openebs  
 helm install --namespace openebs openebs openebs/openebs  
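
Before continuing, it is worth checking that the OpenEBS components, including the node-disk-manager (NDM) pods, come up in the openebs namespace:

 kubectl get pods -n openebs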

Configuring OpenEBS

Now OpenEBS needs to know which devices it is allowed to use. The following command updates the ConfigMap that specifies which devices to include. /dev/vdb should be included, since that is the device name under which our newly created raw disks are attached to the nodes.

 kubectl get -n openebs cm openebs-ndm-config -o yaml |  sed -e 's|include: ""|include: "/dev/vdb"|' |  kubectl apply -f -  
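
You can verify that the include filter now contains /dev/vdb. If the blockdevices do not show up in the next step, restarting the node-disk-manager pods can help, since NDM only reads this ConfigMap at startup. The daemonset name used below is an assumption based on the Helm release name openebs; check the actual name with kubectl get ds -n openebs.

 #Confirm the include path filter now contains /dev/vdb
 kubectl get -n openebs cm openebs-ndm-config -o yaml | grep include
 #Restart the NDM pods so they pick up the new filter (daemonset name may differ)
 kubectl rollout restart daemonset openebs-ndm -n openebs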

Next you can check if the raw devices are available:

 kubectl get blockdevice -n openebs  

In my case this gives output like:

 NAME                                           NODENAME   SIZE          CLAIMSTATE   STATUS   AGE
 blockdevice-85b3dd88549b7bb2ca9aada391750240   node2      10737418240   Unclaimed    Active   12m
 blockdevice-9955ca806fd32c2e18e5293f597653b5   node1      10737418240   Unclaimed    Active   12m
 blockdevice-cb09bfc8ae80591f356fe3153446064e   node3      10737418240   Unclaimed    Active   12m
 blockdevice-f4629d6ac8d0d9260ff8a552640f30cf   node4      10737418240   Unclaimed    Active   12m

Now you can create a cStor storage pool which uses these blockdevices. The following is just an example, since the blockdevice names are specific to my environment; update the list to reflect your own blockdevices as shown by the previous command.
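
If you do not want to copy the blockdevice names by hand, the following plain kubectl command lists just the names for pasting into the blockDeviceList below:

 kubectl get blockdevice -n openebs -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'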

 kubectl apply -n openebs -f - <<END
 #Use the following YAML to create a cStor Storage Pool.
 apiVersion: openebs.io/v1alpha1
 kind: StoragePoolClaim
 metadata:
   name: cstor-disk-pool
   annotations:
     cas.openebs.io/config: |
       - name: PoolResourceRequests
         value: |-
             memory: 2Gi
       - name: PoolResourceLimits
         value: |-
             memory: 4Gi
 spec:
   name: cstor-disk-pool
   type: disk
   poolSpec:
     poolType: striped
   blockDevices:
     blockDeviceList:
     - blockdevice-85b3dd88549b7bb2ca9aada391750240
     - blockdevice-9955ca806fd32c2e18e5293f597653b5
     - blockdevice-cb09bfc8ae80591f356fe3153446064e
     - blockdevice-f4629d6ac8d0d9260ff8a552640f30cf
 END

Now you can check if the blockdevices have been claimed correctly:

 kubectl get csp  

This will give output like:

 NAME                   ALLOCATED   FREE    CAPACITY   STATUS    READONLY   TYPE      AGE
 cstor-disk-pool-3csp   77K         9.94G   9.94G      Healthy   false      striped   3m9s
 cstor-disk-pool-6cbb   270K        9.94G   9.94G      Healthy   false      striped   3m10s
 cstor-disk-pool-jdn4   83K         9.94G   9.94G      Healthy   false      striped   3m10s
 cstor-disk-pool-wz7x   83K         9.94G   9.94G      Healthy   false      striped   3m10s

Next you can create a storage class to use the newly created storage pool:

 kubectl apply -n openebs -f - <<END
 apiVersion: storage.k8s.io/v1
 kind: StorageClass
 metadata:
   name: openebs-sc-statefulset
   annotations:
     openebs.io/cas-type: cstor
     cas.openebs.io/config: |
       - name: StoragePoolClaim
         value: "cstor-disk-pool"
       - name: ReplicaCount
         value: "3"
 provisioner: openebs.io/provisioner-iscsi
 END

You can set it to be the default with the following command:

 kubectl patch storageclass openebs-sc-statefulset -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'  
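
To confirm the storage class exists and is now marked as the default:

 kubectl get storageclass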

Using storage

Now that we have a storage class, we can try it out!

Create a persistent volume claim
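
The claim below is created in the jenkins namespace, which is also used for the Jenkins installation afterwards. If that namespace does not exist yet, create it first:

 kubectl create namespace jenkins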

 kubectl create -n jenkins -f - <<END
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: jenkins-pv-claim
 spec:
   storageClassName: openebs-sc-statefulset
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 5Gi
 END
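
You can check that the claim was created and, once the volume has been provisioned, is bound:

 kubectl get pvc -n jenkins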

Configure and install Jenkins

 cat << EOF > jenkins-config.yaml  
 persistence:  
   enabled: true  
   size: 5Gi  
   accessMode: ReadWriteOnce  
   existingClaim: jenkins-pv-claim  
   storageClass: "openebs-sc-statefulset"  
 EOF  
   
 helm install my-jenkins-release -f jenkins-config.yaml stable/jenkins --namespace jenkins  

Confirm it actually works

Now you can check in the jenkins namespace whether everything comes up.
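
A quick way to do that from the command line:

 kubectl get pods,pvc -n jenkins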

[Screenshot: persistent volume]
[Screenshot: Jenkins]

After the Helm installation of Jenkins has been executed, it prints instructions on how to log in. This allows you to confirm Jenkins is actually running.

[Screenshot: Jenkins running]

You can also see that storage has actually been claimed and that, as specified in the ReplicaCount of the StorageClass, the data is distributed over 3 replicas.

 kubectl get csp
 NAME                   ALLOCATED   FREE    CAPACITY   STATUS    READONLY   TYPE      AGE
 cstor-disk-pool-3csp   221M        9.72G   9.94G      Healthy   false      striped   36m
 cstor-disk-pool-6cbb   178K        9.94G   9.94G      Healthy   false      striped   36m
 cstor-disk-pool-jdn4   221M        9.72G   9.94G      Healthy   false      striped   36m
 cstor-disk-pool-wz7x   221M        9.72G   9.94G      Healthy   false      striped   36m
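
You can also list the individual cStor volume replicas directly. These are custom resources OpenEBS creates in its own namespace; with the ReplicaCount of 3 above there should be three of them for the Jenkins volume:

 kubectl get cstorvolumereplica -n openebs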