Get going with Project Fn on a remote Kubernetes Cluster from a Windows laptop–using Vagrant, VirtualBox, Docker, Helm and kubectl

The challenge I describe in this article is quite specific. I have a Windows laptop. I have access to a remote Kubernetes cluster (on Oracle Cloud Infrastructure). I want to create Fn functions and deploy them to an Fn server running on that Kubernetes (k8s from now on) environment and I want to be able to execute functions running on k8s from my laptop. That’s it.

In this article I will take you on a quick tour of what I did to get this to work:

  • Use Vagrant to spin up a VirtualBox VM based on a Debian Linux image, with Docker installed. Use SSH to enter the Virtual Machine and install Helm (the Kubernetes package manager) – both the client (in the VM) and the server component (called Tiller, on the k8s cluster). Also install kubectl in the VM.
  • Then install Project Fn in the VM. Also install Fn to the Kubernetes cluster, using the Helm chart for Fn (this will create a series of Pods and Services that make up and run the Fn platform).
  • Still inside the VM, create a new Fn function. Then, deploy this function to the Fn server on the Kubernetes cluster. Run the function from within the VM – using kubectl to set up port forwarding for local calls to requests into the Kubernetes cluster.
  • On the Windows host (the laptop, outside the VM) we can also run kubectl with port forwarding and invoke the Fn function on the Kubernetes cluster.
  • Finally, I show how to expose the fn-api service on the Kubernetes cluster on an external IP address. Note: the latter is nice for demos, but compromises security in a major way.

All in all, you will see how to create, deploy and invoke an Fn function – using a Windows laptop and a remote Kubernetes cluster as the runtime environment for the function.

The starting point: a laptop running Windows, with VirtualBox and Vagrant installed, and a remote Kubernetes cluster (this could be in some cloud, such as the Oracle Container Engine Cloud that I am using, or a local minikube).

Step One: Prepare Virtual Machine

Create a Vagrantfile – for example this one: https://github.com/lucasjellema/fn-on-kubernetes-from-docker-in-vagrant-vm-on-windows/blob/master/vagrantfile:

Vagrant.configure("2") do |config|
  
config.vm.provision "docker"

config.vm.define "debiandockerhostvm"
# https://app.vagrantup.com/debian/boxes/jessie64
config.vm.box = "debian/jessie64"
config.vm.network "private_network", ip: "192.168.188.105"
 

config.vm.synced_folder "./", "/vagrant", id: "vagrant-root",
       owner: "vagrant",
       group: "www-data",
       mount_options: ["dmode=775,fmode=664"],
       type: ""
         
config.vm.provider :virtualbox do |vb|
   vb.name = "debiananddockerhostvm"
   vb.memory = 4096
   vb.cpus = 2
   vb.customize ["modifyvm", :id, "--natdnshostresolver1","on"]
   vb.customize ["modifyvm", :id, "--natdnsproxy1", "on"]
end
  
end

This Vagrantfile creates a VirtualBox VM called debiandockerhostvm – based on the VirtualBox image debian/jessie64. It exposes the VM to the host laptop at IP 192.168.188.105 (you can safely change this). It maps the local directory that contains the Vagrantfile into the VM, at /vagrant. This allows us to easily exchange files between the Windows host and the Debian Linux VM. The instruction config.vm.provision "docker" ensures that Docker is installed into the Virtual Machine.

To actually create the VM, open a command line and navigate to the directory that contains the Vagrantfile. Then type “vagrant up”. Vagrant starts running and creates the VM, interacting with the VirtualBox APIs. When the VM is created, it is started.

From the same command line, using “vagrant ssh”, you can now open a terminal window in the VM.
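In short, from the Windows command line:

# in the directory that contains the Vagrantfile
vagrant up

# once the VM is up and running, open a terminal session inside it
vagrant ssh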

To further prepare the VM, we need to install Helm and kubectl. Helm is installed in the VM (client) as well as in the Kubernetes cluster (the Tiller server component).

Here are the steps to perform inside the VM (see step 1):

######## kubectl

# download and extract the kubectl binary 
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

# set the executable flag for kubectl
chmod +x ./kubectl

# move the kubectl executable to the bin directory
sudo mv ./kubectl /usr/local/bin/kubectl

# assuming that the kubeconfig file with details for the Kubernetes cluster is available on the Windows host:
# Copy the kubeconfig file to the directory that contains the Vagrantfile and from which vagrant up and vagrant ssh were performed
# note: this directory is mapped into the VM to directory /vagrant

#Then in VM - set the proper Kubernetes configuration context: 
export KUBECONFIG=/vagrant/kubeconfig

#now verify the successful installation of kubectl and the connection to the Kubernetes cluster
kubectl cluster-info


########  HELM
#download the Helm installer
curl -LO  https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz

#extract the Helm executable from the archive
tar -xzf helm-v2.8.1-linux-amd64.tar.gz

#set the executable flag on the Helm executable
sudo chmod +x  ./linux-amd64/helm

#move the Helm executable to the bin directory - as helm
sudo mv ./linux-amd64/helm /usr/local/bin/helm

#test the successful installation of helm
helm version

###### Tiller

#Helm has a server side companion, called Tiller, that should be installed into the Kubernetes cluster
# this is easily done by executing:
helm init

# an easy test of the Helm/Tiller set up can be run (as described in the quickstart guide)
helm repo update              

helm install stable/mysql

helm list

# now inspect in the Kubernetes Dashboard the Pod that should have been created for the MySQL Helm chart

# clean up after yourself:
helm delete <name of the release of MySQL>

When this step is complete, the VM contains Docker, kubectl and the Helm client, and Tiller is running on the Kubernetes cluster.

Step Two: Install Project Fn – in VM and on Kubernetes

Now that we have prepared our Virtual Machine, we can proceed with adding the Project Fn command line utility to the VM and the Fn platform to the Kubernetes cluster. The former is a simple local installation of a binary file. The latter is an even simpler installation of a Helm chart. Here are the steps to go through inside the VM (also see step 2):

# 1A. download and install Fn locally inside the VM
curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install | sh

#note: this previous statement failed for me; I went through the following steps as a workaround
# 1B. create install script
curl -LSs https://raw.githubusercontent.com/fnproject/cli/master/install > install.sh
# make script executable
chmod u+x install.sh
# execute script - as sudo
sudo ./install.sh

# 1C. and if that fails, you can manually manipulate the downloaded executable:
sudo mv /tmp/fn_linux /usr/local/bin/fn
sudo chmod +x /usr/local/bin/fn

# 2. when the installation has been done through one of the methods listed above, test its success by running
fn --version


# 3. Server side installation of Fn to the Kubernetes Cluster
# details in https://github.com/fnproject/fn-helm

# Clone the GitHub repo with the Helm chart for fn; sources are downloaded into the fn-helm directory
git clone https://github.com/fnproject/fn-helm && cd fn-helm

# Install chart dependencies from requirements.yaml in the fn-helm directory:
helm dep build fn

#To install the Helm chart with the release name my-release into Kubernetes:
helm install --name my-release fn

#note: if you run into this message:
# Error: incompatible versions client[v2.8.1] server[v2.5.0]
# then to upgrade the server to the same version as the client, run:
helm init --upgrade

# to verify the cluster server side installation you could run the following statements:
export KUBECONFIG=/vagrant/kubeconfig

#list all pods for app my-release-fn
kubectl get pods --namespace default -l "app=my-release-fn"

When the installation of Fn is done, the Fn CLI is available inside the VM and the Fn platform runs as a set of Pods and Services on the Kubernetes cluster. You can check in the Kubernetes Dashboard – or with kubectl on the command line – what has been created from the Helm chart.

Step Three: Create, Deploy and Run Fn Functions

We now have a ready-to-run environment – a client-side VM and a server-side Kubernetes cluster – for creating Fn functions and subsequently deploying and invoking them.

Let’s now go through these three steps, starting with the creation of a new function called shipping-costs, written in Node.

docker login

export FN_REGISTRY=lucasjellema

mkdir shipping-costs

cd shipping-costs

fn init --name shipping-costs --runtime node

# this creates the starting point of the Node application (package.json and func.js) as well as the Fn meta data file (func.yaml) 

# now edit the func.js file (and add dependencies to package.json if necessary)

#The extremely simple implementation of func.js looks like this:
var fdk = require('@fnproject/fdk');

fdk.handle(function (input) {
  var name = 'World';
  if (input.name) {
    name = input.name;
  }
  var response = {'message': 'Hello ' + name, 'input': input};
  return response;
})

#This function receives an input parameter (for a POST request this is the request body, typically a JSON document)
# the function returns a result: a JSON document with a greeting message and the full input document echoed back
# for example, the input {"name": "Jane"} yields {"message": "Hello Jane", "input": {"name": "Jane"}}

After this step, the function exists in the VM – not anywhere else yet. Some other functions could already have been deployed to the Fn platform on Kubernetes.


This function shipping-costs should now be deployed to the K8S cluster, as that was one of our major objectives.

export KUBECONFIG=/vagrant/kubeconfig

# retrieve the name of the Pod running the Fn API
kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0].metadata.name}"

# retrieve the name of the Pod running the Fn API and assign to environment variable POD_NAME
export POD_NAME=$(kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0].metadata.name}")
echo $POD_NAME    


# set up kubectl port-forwarding; this ensures that any local requests to port 8080 are forwarded by kubectl to the pod specified in this command, on port 80
# this basically creates a shortcut or highway from the VM right into the heart of the K8S cluster; we can leverage this highway for deployment of the function
kubectl port-forward --namespace default $POD_NAME 8080:80 &

#now we inform Fn that deployment activities can be directed at port 8080 of the local host, effectively to the pod $POD_NAME on the K8S cluster
export FN_API_URL=http://127.0.0.1:8080
export FN_REGISTRY=lucasjellema
docker login

#perform the deployment of the function from the directory that contains the func.yaml file
#functions are organized in applications; here the name of the application is set to soaring-clouds-app
fn deploy --app soaring-clouds-app

Deploying the function from the VM produces output in the terminal window showing the Docker build and push of the function image (I have left out the steps docker login, set FN_API_URL and set FN_REGISTRY).


After deploying the function, shipping-costs now exists on the Kubernetes cluster – inside the fn-api Pod, where a Docker container is running for each of the functions.
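To verify the deployment from within the VM, the Fn CLI can list the application and its routes – a quick check, assuming the port-forward and the FN_API_URL setting from the previous step are still in place:

# list the applications known to the Fn server on the Kubernetes cluster
fn apps list

# list the routes (functions) registered under the soaring-clouds-app application
fn routes list soaring-clouds-app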

To invoke the functions, several options are available. The function can be invoked from within the VM, using cURL against the function’s endpoint – leveraging kubectl port forwarding as before. We can also apply kubectl port forwarding on the laptop – and use any tool that can invoke HTTP endpoints – such as Postman – to call the function.

If we want clients without kubectl port forwarding – and even completely without knowledge of the Kubernetes cluster – to invoke the function, that can be done as well, by exposing an external IP for the service on K8S for fn-api.

First, let’s invoke the function from within the VM.

export KUBECONFIG=/vagrant/kubeconfig

# retrieve the name of the Pod running the Fn API
kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0].metadata.name}"

# retrieve the name of the Pod running the Fn API and assign to environment variable POD_NAME
export POD_NAME=$(kubectl get pods --namespace default -l "app=my-release-fn,role=fn-service" -o jsonpath="{.items[0].metadata.name}")
echo $POD_NAME    


# set up kubectl port-forwarding; this ensures that any local requests to port 8080 are forwarded by kubectl to the pod specified in this command, on port 80
# this basically creates a shortcut or highway from the VM right into the heart of the K8S cluster; we can leverage this highway for deployment of the function
kubectl port-forward --namespace default $POD_NAME 8080:80 &


curl -X POST \
  http://127.0.0.1:8080/r/soaring-clouds-app/shipping-costs \
  -H 'Cache-Control: no-cache' \
  -H 'Content-Type: application/json' \
  -H 'Postman-Token: bb753f9f-9f63-46b8-85c1-8a1428a2bdca' \
  -d '{"X":"Y"}'



# on the Windows laptop host
set KUBECONFIG=c:\data\2018-soaring-keys\kubeconfig

# forward local port 8085 to port 80 on the Fn API pod (use the pod name found earlier with kubectl get pods)
kubectl port-forward --namespace default my-release-fn-api-frsl5 8085:80


Now, try to call the function from the laptop host. This assumes that on the host we have both kubectl and the kubeconfig file that we also use in the VM.

First we have to set the KUBECONFIG environment variable to refer to the kubeconfig file. Then we set up kubectl port forwarding just like in the VM, in this case forwarding port 8085 to the Kubernetes Pod for the Fn API.



When this is done, we can make calls to the shipping-costs function on localhost, port 8085, at endpoint http://127.0.0.1:8085/r/soaring-clouds-app/shipping-costs.
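For example, assuming curl is available on the Windows host (Postman works just as well):

# call the function through the local port-forward on the Windows host
curl -X POST http://127.0.0.1:8085/r/soaring-clouds-app/shipping-costs -H "Content-Type: application/json" -d "{\"X\":\"Y\"}"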

This still requires the client to be aware of Kubernetes: have the kubeconfig file and the kubectl client. We can make it possible to directly invoke Fn functions from anywhere without using kubectl. We do this by exposing an external IP directly on the service for Fn API on Kubernetes.

The simplest way of making this happen is through the Kubernetes dashboard.

Run the dashboard and open it in a local browser at http://127.0.0.1:8001/ui.
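A minimal way to do this – assuming the standard Kubernetes dashboard is deployed in the cluster – is to start a local proxy with kubectl from the VM:

export KUBECONFIG=/vagrant/kubeconfig

# start a local proxy to the Kubernetes API server; the dashboard UI is then served at http://127.0.0.1:8001/ui
kubectl proxy &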

Edit the configuration of the service for fn-api: change the type from ClusterIP to LoadBalancer. This instructs Kubernetes to expose the Service externally and to assign an external IP address to it. Click Update to apply the change.
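The same change can also be made from the command line – a sketch, assuming the Helm release created a service named my-release-fn-api (matching the pod names seen earlier):

# change the service type from ClusterIP to LoadBalancer
kubectl patch service my-release-fn-api --namespace default -p '{"spec": {"type": "LoadBalancer"}}'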

After a little while, the change will have been processed and we can find an external endpoint for the service.

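The external IP can also be retrieved on the command line – again assuming the service is named my-release-fn-api:

# the EXTERNAL-IP column shows the assigned address once provisioning is complete
kubectl get service my-release-fn-api --namespace default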

Now we (and anyone who has this IP address) can invoke the Fn function shipping-costs directly using this external IP address.

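A sketch of such a call – assuming the service listens on port 80 (the same pod port used in the port-forward commands above) and using <EXTERNAL-IP> as a placeholder for the assigned address:

curl -X POST \
  http://<EXTERNAL-IP>/r/soaring-clouds-app/shipping-costs \
  -H 'Content-Type: application/json' \
  -d '{"X":"Y"}'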

Summary

This article showed how to start with a standard Windows laptop – with only VirtualBox and Vagrant as special components. Through a few simple, largely automated steps, we created a VM that allows us to create Fn functions and to deploy those functions to a Kubernetes cluster, onto which we have also deployed the Fn server platform. The article provides all sources and scripts and demonstrates how to create, deploy and invoke a specific function.

Resources

Sources for this article in GitHub: https://github.com/lucasjellema/fn-on-kubernetes-from-docker-in-vagrant-vm-on-windows

Vagrant home page: https://www.vagrantup.com/

VirtualBox home page: https://www.virtualbox.org/ 

Quickstart for Helm: https://docs.helm.sh/using_helm/#quickstart-guide

Fn Project Helm Chart for Kubernetes – https://medium.com/fnproject/fn-project-helm-chart-for-kubernetes-e97ded6f4f0c

Installation instruction for kubectl – https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-binary-via-curl

Project Fn – Quickstart – https://github.com/fnproject/fn#quickstart

Tutorial for Fn with Node: https://github.com/fnproject/fn/tree/master/examples/tutorial/hello/node

Kubernetes – expose external IP address for a Service – https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/

Use Port Forwarding to Access Applications in a Cluster – https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/

AMIS Technology Blog – Rapid first few steps with Fn – open source project for serverless functions – https://technology.amis.nl/2017/10/19/rapid-first-few-steps-with-fn-open-source-project-for-serverless-functions/

AMIS Technology Blog – Create Debian VM with Docker Host using Vagrant–automatically include Guest Additions – https://technology.amis.nl/2017/10/19/create-debian-vm-with-docker-host-using-vagrant-automatically-include-guest-additions/