OpenEBS: Create persistent storage in your Charmed Kubernetes cluster quickly and easily!

Maarten Smeets

As a developer, I wanted to experiment with Kubernetes environments that approximate production deployments. For that I needed a distributed storage solution, and I chose OpenEBS, mainly because it was easy to get started with and quick to get up and running. In this blog post I’ll describe how I did that.

Why OpenEBS?

I previously wrote a blog about using StorageOS as a persistent storage solution for Charmed Kubernetes here. StorageOS depends on etcd, and I had difficulties getting etcd up again after a reboot. Since I wanted a storage solution that was easy to get running and did not drag in external dependencies, I decided to give OpenEBS a try; OpenEBS does not have the etcd dependency. I also considered CephFS, as described in the Charmed Kubernetes tutorials, but a deployment using charms would by default create a large number of additional hosts (for which my laptop didn’t have the resources), and the storage would be external to the Kubernetes cluster, making that setup more complex to reproduce in other environments. The OpenEBS solution runs fully on Kubernetes.

In this blog I’ll describe a developer installation on Charmed Kubernetes (the environment described here). I used openebs-jiva-default as the storage class. This is unsuitable for production scenarios; OpenEBS provides cStor for that, and most of the OpenEBS development effort goes into cStor. cStor, however, requires a mounted block device, which I have not tried yet.

Installing OpenEBS

First I created my environment as described here. Note that this does not work as-is on a Charmed Kubernetes install on LXC/LXD! This was one of the reasons I decided to switch to MaaS+KVM.

For the creation of the environment I used the following YAML file as an overlay. Note that I use 4 worker nodes; this setup requires at least 32 GB of RAM and 12 cores to run.

 description: A highly-available, production-grade Kubernetes cluster.
 series: bionic
 applications:
   etcd:
     num_units: 2
   kubernetes-master:
     constraints: cores=1 mem=4G root-disk=16G
     num_units: 2
   kubernetes-worker:
     constraints: cores=1 mem=3G root-disk=20G
     num_units: 4
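
The overlay is applied at deploy time. A sketch, assuming the standard charmed-kubernetes bundle name (save the overlay to a file first, then pass it to juju):

 # Save the overlay shown above (the worker/master counts are the ones used in this post).
 cat << 'EOF' > overlay.yaml
 description: A highly-available, production-grade Kubernetes cluster.
 series: bionic
 applications:
   etcd:
     num_units: 2
   kubernetes-master:
     constraints: cores=1 mem=4G root-disk=16G
     num_units: 2
   kubernetes-worker:
     constraints: cores=1 mem=3G root-disk=20G
     num_units: 4
 EOF

 # Deploy with the overlay applied (requires a bootstrapped Juju controller;
 # the bundle name is an assumption based on the standard bundle):
 # juju deploy charmed-kubernetes --overlay overlay.yaml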

Next I enabled iSCSI on the worker nodes, as required by OpenEBS.

 juju run "sudo systemctl enable iscsid && sudo systemctl start iscsid" --application kubernetes-worker  

Allow the creation of privileged containers; the OpenEBS containers need access to host devices.

 juju config kubernetes-master allow-privileged=true  

Restart the environment

 juju run "reboot" --application kubernetes-worker  
 juju run "reboot" --application kubernetes-master  

Create a namespace

 kubectl create namespace openebs  

Add the OpenEBS Helm repository

 helm repo add openebs https://openebs.github.io/charts  
 helm repo update  

Add some configuration for OpenEBS. Parameters are described here.

 cat << EOF > openebs-config.yaml  
 jiva:  
   replicas: 2  
 EOF  

Install OpenEBS

 helm install openebs openebs/openebs --version 1.10.0 -f openebs-config.yaml --namespace openebs  

Make sure the installation has completed before continuing: no pods in the openebs namespace should be in state Pending. After the installation has completed, you can set the storage class openebs-jiva-default as the default storage class to use:

 kubectl patch storageclass openebs-jiva-default -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'  
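
Pod readiness can also be checked directly before continuing; a sketch using kubectl wait (the timeout value is an arbitrary choice):

 # List the OpenEBS pods, then block until all of them report Ready
 kubectl get pods -n openebs
 kubectl wait --for=condition=Ready pod --all -n openebs --timeout=300s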

It also helps to indicate in the storage class that only one replica is needed. This way you can make do with only 2 worker nodes instead of 4:

 kubectl apply -n openebs -f - <<END
 kind: StorageClass
 apiVersion: storage.k8s.io/v1
 metadata:
   name: openebs-jiva-default
   annotations:
     cas.openebs.io/config: |
       - name: ReplicaCount
         value: "1"
     openebs.io/cas-type: jiva
     storageclass.kubernetes.io/is-default-class: 'true'
 provisioner: openebs.io/provisioner-iscsi
 reclaimPolicy: Delete
 volumeBindingMode: Immediate
 END

Trying it out

Add the Helm stable repo, which contains the Jenkins chart

 helm repo add stable https://kubernetes-charts.storage.googleapis.com/  
 helm repo update  

Create a namespace

 kubectl create namespace jenkins  

Create a persistent volume claim

 kubectl create -n jenkins -f - <<END
 apiVersion: v1
 kind: PersistentVolumeClaim
 metadata:
   name: jenkins-pv-claim
 spec:
   storageClassName: openebs-jiva-default
   accessModes:
     - ReadWriteOnce
   resources:
     requests:
       storage: 5Gi
 END

Create some Jenkins configuration to use the claim

 cat << EOF > jenkins-config.yaml  
 persistence:  
   enabled: true  
   size: 5Gi  
   accessMode: ReadWriteOnce  
   existingClaim: jenkins-pv-claim  
   storageClass: "openebs-jiva-default"  
 EOF  

Install Jenkins

 helm install my-jenkins-release -f jenkins-config.yaml stable/jenkins --namespace jenkins  

Get your ‘admin’ user password by running:

 printf $(kubectl get secret --namespace jenkins my-jenkins-release -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo  
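
This works because Kubernetes stores secret values base64-encoded; the jsonpath expression extracts the encoded field and base64 --decode recovers the plain text. A minimal local illustration of the decoding step, with a made-up password value:

 # Hypothetical secret value "hunter2" — secrets store values base64-encoded
 encoded=$(printf 'hunter2' | base64)
 decoded=$(printf '%s' "$encoded" | base64 --decode)
 echo "$decoded"   # prints the original value again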

Get the Jenkins URL by running these commands in the same shell, then log in:

 export POD_NAME=$(kubectl get pods --namespace jenkins -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=my-jenkins-release" -o jsonpath="{.items[0].metadata.name}")  
 kubectl --namespace jenkins port-forward $POD_NAME 8080:8080  

About Post Author

Maarten Smeets

Maarten is a Software Architect at AMIS Conclusion. Over the past years he has worked for numerous customers in the Netherlands in developer, analyst and architect roles on topics like software delivery, performance, security and other integration related challenges. Maarten is passionate about his job and likes to share his knowledge through publications, frequent blogging and presentations.