In this post we will have a look at Red Hat Openshift, a developer-oriented PaaS based on Kubernetes. Specifically, we will use Openshift Origin, its open source flavour, to simulate a local cluster environment on a single host machine by deploying and configuring a set of VMs with the help of Vagrant and Ansible.
As a first contact with the platform, we will get familiar with the Openshift CLI and UI while deploying a sample application, the Kubernetes Guestbook. After this, we will leverage the preconfigured Jenkins-Persistent template in order to create a Jenkins pod running over persistent data volumes. For this, we will configure a simple GlusterFS cluster as the backing store, also deployed in our VMs.
Cluster provisioning using Vagrant and Ansible
Our local cluster setup will be composed of three VirtualBox VMs: one for the Openshift master (master1) and two for the schedulable nodes (node1 and node2). Additionally, a fourth VM (admin1) will be created by Vagrant and used for provisioning the other three machines with Ansible.
Install Vagrant and plugins
First, we will need to install VirtualBox, Vagrant and the plugins that the default provisioning process requires. Here I describe the steps for my VM host, which is running a CentOS 6 distro:
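For reference, the steps could look roughly like this (a hedged sketch: the VirtualBox repo URL and package version are assumptions, adjust them to your distro; landrush is required, as noted below):

```shell
# Add the official VirtualBox yum repository and install it (version assumed)
wget -P /etc/yum.repos.d http://download.virtualbox.org/virtualbox/rpm/rhel/virtualbox.repo
yum install -y VirtualBox-5.1

# Install a pinned Vagrant 1.8.6 from the HashiCorp releases
yum install -y https://releases.hashicorp.com/vagrant/1.8.6/vagrant_1.8.6_x86_64.rpm

# Install the plugins the provisioning process needs (landrush at least;
# the Vagrantfile will complain about any other missing plugin)
vagrant plugin install landrush
```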
Note that the Vagrant version we are using is 1.8.6, which isn’t the latest stable at the time of this writing. This is because one of the needed plugins, landrush, currently has issues with the 1.9.x releases.
VM provisioning and Openshift installation
The GitHub repo openshift-ansible-contrib contains all the configuration files needed for the setup:

git clone https://github.com/openshift/openshift-ansible-contrib.git
cd openshift-ansible-contrib/vagrant
By default, the Vagrantfile in this directory provisions a set of VMs, each of them limited to 1 CPU core and 1 GB RAM, with the following configuration:
The installation will ask for your password in order to edit the /etc/hosts file and reconfigure the dnsmasq service, so make sure that your user is in the sudoers file before running the following command.
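With the prerequisites in place, the provisioning is started from the vagrant directory (a minimal sketch: vagrant up is the standard command that creates the VMs and triggers the Ansible run in this setup; check the repo's README for any extra flags):

```shell
cd openshift-ansible-contrib/vagrant
vagrant up
```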
Check the cluster installation
In order to check if the cluster has been correctly provisioned, let’s SSH into the master VM and run some basic commands:
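A minimal sanity check could look like this (hedged: the exact commands are an assumption about what "basic commands" means here; oc get nodes should list master1, node1 and node2 as Ready):

```shell
vagrant ssh master1
# inside the VM:
sudo -i
oc version
oc get nodes
```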
Then, return to your host and check that the following lines have been added to your /etc/hosts file:
$ cat /etc/hosts
...
192.168.50.23 admin1.example.com
192.168.50.23 admin1
192.168.50.22 node2.example.com
192.168.50.22 node2
192.168.50.21 node1.example.com
192.168.50.21 node1
192.168.50.20 master1.example.com
192.168.50.20 master1
Note: In my case, I had issues during the installation process and I had to add the entries manually to /etc/hosts. If you need to do this too, you can find them in ~/.vagrant.d/tmp/hosts.local
Optional: Configure external access
I didn’t have much faith that my laptop could handle the four VMs, so to play it safe I used an external server as the host. However, in order to interact with the Openshift cluster from my local machine, I had to set up SSH tunnels: one for the CLI and another for the Web UI.
If your VM host is your local machine, you can skip this section and proceed with the Openshift client installation.
Tunnel for the Openshift CLI
From a terminal in your computer:
ssh -fL 8443:127.0.0.1:8443 email@example.com -N
sudo sh -c "echo '127.0.0.1 master1.example.com' >> /etc/hosts"
Check that you can access the Openshift API:
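For example, with the tunnel up (hedged: /healthz is the standard Kubernetes/Origin health endpoint; -k skips verification of the self-signed certificate):

```shell
curl -k https://master1.example.com:8443/healthz
```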
SOCKSv5 tunnel for the Openshift Web UI
Again, from your computer:
ssh -fD 11443 firstname.lastname@example.org -N
Now, go to the network options of your browser and select a manual proxy configuration using SOCKSv5 with host 127.0.0.1 and port 11443. With this in place, you should be able to navigate to the Openshift login page at https://master1.example.com:8443
Install and configure the Openshift client
Find out which exact Openshift version you have installed (hint: the previous oc version output) and download the same openshift-origin-client-tools version from the releases page of the Origin repository.
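The download and installation could be sketched like this (hedged: replace <commit> with the short commit hash in the actual asset name on the releases page; v1.3.2 matches the version mentioned later in this post):

```shell
VERSION=v1.3.2   # must match your server version
wget https://github.com/openshift/origin/releases/download/$VERSION/openshift-origin-client-tools-$VERSION-<commit>-linux-64bit.tar.gz
tar -xzf openshift-origin-client-tools-$VERSION-<commit>-linux-64bit.tar.gz
sudo cp openshift-origin-client-tools-$VERSION-<commit>-linux-64bit/oc /usr/local/bin/
```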
Once oc is properly set up in your $PATH, try to log in using the pre-configured account:
$ oc login https://master1.example.com:8443
Authentication required for https://master1.example.com:8443 (openshift)
Username: admin
Password: admin123
Login successful.
With the same credentials, you can also log in to the Openshift web UI:
Example 1: Kubernetes Guestbook
Instead of running the classic “hello world” application, we will make things a bit more interesting by deploying the Kubernetes Guestbook application through the Openshift CLI.
Openshift applications are usually deployed using its specific template format, which contains objects that differ in some ways from the equivalent Kubernetes specifications. For example, Openshift has first-class objects such as DeploymentConfig, additional abstractions that don’t exist in Kubernetes, as well as parametrized variables that can be modified when the template is deployed from the web UI or the CLI.
In this case, since the Kubernetes Guestbook is not available in the Openshift template format, we will simply use the .yaml specifications provided in the Git repo.
Deploy the Guestbook application
Using the previous Openshift admin user with the
oc client, create a new project and download the guestbook .yaml file:
oc new-project kubetest
wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/all-in-one/guestbook-all-in-one.yaml
In this example there is a frontend deployment that will try to run a php-redis container (from image gcr.io/google-samples/gb-frontend) that executes a web server listening on port 80, using the root user inside the container. Although this would work without issues in a typical Docker or Kubernetes configuration, in this Openshift setup it won’t. By default, containers are not allowed to run as the root user (Openshift will select an arbitrary UID for each container deployment), so php-redis will fail when it tries to bind to port 80 with a non-privileged user.
Since we don’t want to modify the image in order to run the webserver in another port, we will simply relax the security constraints for this deployment. Using the
oadm client from the master1 node:
oadm policy add-scc-to-user anyuid system:serviceaccount:kubetest:default
This will add the security context constraint
anyuid to the service account
kubetest:default, i.e., in the project
kubetest containers will be free to use any UID they want.
anyuid is one of the preconfigured security context constraints that we can set up. You can check all the available ones with oc get scc using your system admin account (from the master1 CLI).
Now, from our local machine we can deploy the application:
oc create -f guestbook-all-in-one.yaml
You can monitor the progress of your deployment by running oc get all, which will display all the services and pods that you have just created.
Create a route for the Guestbook UI
The guestbook UI is running in the frontend service, but it isn’t externally available yet. For this, we need to create an external route to that service. We have two options here: via the web UI or using the CLI:
oc expose svc frontend
If we just go with the default options like this, the service should be exposed with the FQDN
frontend-kubetest.apps.example.com through the master1 address. You can check this with
oc get routes.
In order to finally access the application, we must add the new FQDN to the
/etc/hosts on the server hosting the VMs, with the same IP as the master1.example.com host. For example, I have the following lines in my file (which should also match your IPs if you didn’t touch the Vagrantfile configuration):
192.168.50.20 master1.example.com
192.168.50.20 frontend-kubetest.apps.example.com
Test that you can navigate to that URL in your browser or that
curl -k returns a proper HTML file. Also, you can see the project status with the Origin CLI:
Now that the guestbook is deployed, we can modify the number of instances of our deployments:
For example, scale up the frontend instances and check that, if we curl the service several times, we can see in the pod logs with oc logs [pod-name] how the requests are being load-balanced between them in a round-robin fashion. Likewise, if we scale up the redis-slaves, we can check in the redis-master logs how the new instances are being registered.
oc scale --replicas=3 deployment redis-slave
oc scale --replicas=2 deployment frontend
Although we can scale up and down using the CLI, when we try to do the same from the web UI, these options are missing. Furthermore, in the Applications > Deployments section we can’t see any of our deployments! This is because, in this Openshift version (origin v1.3.2), the Kubernetes first-class object Deployment is not fully supported and the Openshift-specific object DeploymentConfig must be used instead (actually, the Deployment object has been recently introduced in Kubernetes and its design was inspired by Openshift’s DeploymentConfig).
Example 2: Jenkins-Persistent over GlusterFS
In this example, we will put the Jenkins-Persistent template to use, which is available by default in our Openshift installation. Before that, we will need to provide a storage backend in order to persist the Jenkins data. This storage backend will be a simple GlusterFS configuration with redundancy across our schedulable nodes.
We will use static provisioning of persistent volumes, which means that the sysadmin has the responsibility of creating
PersistentVolume objects which will be available to developers via
PersistentVolumeClaims. Instead, we could have chosen the path of the dynamic provisioning of PVs using
StorageClasses, but that would imply more configuration steps. Furthermore, we are creating the Gluster filesystem directly in paths inside our minion nodes, although another option would be to deploy containerized Red Hat Gluster Storage.
Create a GlusterFS volume
First, we need to install the Gluster packages on both node1 and node2, and create the directory where the Gluster volume will live (e.g. /data/gv0 in the root path). We also need to open additional ports in the guest VMs’ firewalls (24007-24008 for glusterd management and 49152-49251 for Gluster client access).
yum install -y centos-release-gluster
yum --enablerepo=centos-gluster* install -y glusterfs-server
systemctl enable glusterd.service
systemctl start glusterd.service
mkdir -p /data/gv0
firewall-cmd --zone=public --add-port=24007-24008/tcp --permanent
firewall-cmd --zone=public --add-port=49152-49251/tcp --permanent
firewall-cmd --reload
Then, from one of the two nodes, create and start the volume:
gluster volume create gv0 replica 2 node1:/data/gv0 node2:/data/gv0 force
gluster volume start gv0
Back in the master1 node, we can now test that the volume can effectively be mounted. Also, in order to allow any Openshift user to create files, we will change the permissions of the volume’s root directory. Of course, if we wanted to do this properly in a production cluster, we would have to deal with POSIX ACLs, SELinux labels and Openshift security constraints.
yum install -y glusterfs-fuse
mkdir -p /mnt/gv0
mount -t glusterfs node2:/gv0 /mnt/gv0
chmod -R 777 /mnt/gv0
Note that, in order to mount the volume, the glusterfs-fuse package must be installed on all the schedulable nodes.
Register the Gluster volume in Openshift
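The PersistentVolume specification could be sketched like this (hedged: the object name gluster-pv is an arbitrary choice; the endpoints name glusterfs-cluster and the gv0 path must match the Gluster setup and the Endpoints object created later):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-pv          # arbitrary name (assumption)
spec:
  capacity:
    storage: 2Gi            # matches the Jenkins template claim
  accessModes:
    - ReadWriteOnce         # RWO, as required by the template
  glusterfs:
    endpoints: glusterfs-cluster   # name of the Endpoints object
    path: gv0                      # the Gluster volume created earlier
    readOnly: false
```

Once registered with oc create -f, the volume should show up as Available in oc get pv.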
This specification will tell Kubernetes to create a 2GiB persistent volume with RWO permissions (i.e. the volume can be mounted as read-write by a single node), which are the permissions required by the storage claim in the Jenkins-Persistent template. Therefore, create this PersistentVolume using the sysadmin account in the master1 node.
Create a new project for the Jenkins deployment and set up the storage endpoint
With the storage backend provisioned and the necessary PersistentVolume created, we can go back to our Openshift local user and create a new project:
oc new-project kubetestpv
In this project, we need to specify where the GlusterFS cluster lives using the Endpoints object, which will be accessed by our pods through a Service. Note that the glusterfs-cluster name is the same one that we specified as the endpoint when we created the PersistentVolume.
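The two specifications could be sketched as follows (hedged: the node IPs come from the /etc/hosts listing earlier in this post; port 1 is the dummy value used in the upstream Gluster examples, since the Endpoints object requires a port):

```yaml
# gluster-endpoints.yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 192.168.50.21   # node1
      - ip: 192.168.50.22   # node2
    ports:
      - port: 1             # dummy port
---
# gluster-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster   # same name, so the endpoints persist
spec:
  ports:
    - port: 1
```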
Thus, with the creation of these objects in the kubetestpv namespace, the PersistentVolumeClaim from the Jenkins template should be successful:
oc create -f gluster-service.yaml
oc create -f gluster-endpoints.yaml
Run the Jenkins application
In the web UI, navigate to the new kubetestpv project, click Add to Project and select the jenkins-persistent instant app from the template catalog. You will be presented with all of the configurable parameters:
If you are using an Origin version <1.4, it’s important that you disable OAuth authentication, since the Jenkins Openshift plugin that comes with that image doesn’t work with older Origin versions due to an incompatibility when it tries to query the OAuth service.
If we don’t know this and deploy the template with OAuth turned on, the Jenkins initialization will be stuck in a loop trying to reach the OAuth service, with the pod in an unhealthy state. Fortunately, Openshift allows us to fix this “on the fly”: go to the jenkins-1 deployment and edit the template directly by clicking Actions > Edit YAML:
After changing the OPENSHIFT_ENABLE_OAUTH parameter to false, saving the file will automatically terminate the running pods and create a new
jenkins-2 deployment, which hopefully will be up and running healthy in less than a minute.
Finally, in order to access the Jenkins UI, we need to add its route to the
/etc/hosts in the VM host, just like we did earlier in the guestbook example:
$ cat /etc/hosts
...
192.168.50.20 master1.example.com
...
192.168.50.20 jenkins-kubetestpv.apps.example.com
With this in place, the Jenkins UI should now be reachable at https://jenkins-kubetestpv.apps.example.com in your browser and, at last, you should be able to log in with the pre-configured credentials.
Indeed, Kubernetes is a superb distributed cluster operating system that allows deploying services with ease in a reliable, scalable way. However, its management can still be cumbersome for users with a limited operations background. Openshift aims to bridge that gap, offering a set of abstractions that make Kubernetes much more accessible to developers, and adding out-of-the-box features, like multi-tenancy and security, that make this PaaS a strong candidate for a full enterprise solution.
However, Openshift is not the only choice if you are considering a Kubernetes-based PaaS. Therefore, in future posts we will have a look at other similar PaaS products, with the intention of grading which one of them is the most appropriate solution for a number of different use cases.