Chris Weber

Manually Running Kubernetes vs Manually Running Openshift


Sat Apr 04 2020

Introduction

I've been running various kubeadm Kubernetes clusters for personal use "on-prem" since version 1.13. More recently I've been spinning up and using Openshift 4.x clusters as well. Since these are for personal use and experimentation I haven't been as worried about uptime, performance, or other production metrics, but in using both I have noticed some pros and cons with each flavor of Kubernetes that may be useful for folks trying to decide between the two. In this post I aim to explore some of my personal findings around creating, maintaining, and using the different clusters. I won't dive into too many specifics, and this is not a step-by-step guide, but if you're interested in that, let me know!

Before we dive in, it's worth mentioning that these clusters were all run on a single physical machine (again, personal use and non-production) and the nodes were virtualized or containerized using a portion of the available resources.

Creating a Kubernetes cluster with kubeadm

My initial setup using kubeadm wasn't without its hiccups. To be fair, my goal was to create a Kubernetes cluster using LXD containers as nodes instead of virtual machines, and some of the challenges I faced were around getting the container profiles correct with the right permissions (or lack thereof) so that kubeadm could execute successfully. Luckily, each error that kubeadm spit out was very Google-able, and the preflight checks that were giving me problems were safely ignorable (--ignore-preflight-errors=Swap and --ignore-preflight-errors=FileContent--proc-sys-net-bridge-bridge-nf-call-iptables). Those nuances aside, the kubeadm setup docs were straightforward to follow, and after some trial and error I was able to get a master node up and running. A couple of worker nodes quickly followed thanks to the join command, and then after tweaking the Calico quick-start manifest and applying it, all my nodes were reporting "Ready".
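
For reference, the whole dance boils down to a handful of commands. The sketch below assumes Calico's default pod CIDR, and the token/hash placeholders come straight from the output of kubeadm init:

    # On the master: initialize the control plane, skipping the preflight
    # checks that fail inside LXD containers.
    sudo kubeadm init \
      --pod-network-cidr=192.168.0.0/16 \
      --ignore-preflight-errors=Swap,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables

    # Apply the (tweaked) Calico manifest.
    kubectl apply -f calico.yaml

    # On each worker: paste the join command printed by kubeadm init.
    sudo kubeadm join <master-ip>:6443 --token <token> \
      --discovery-token-ca-cert-hash sha256:<hash>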

Creating an Openshift 4.x cluster

While the development of OKD4 is actively underway, and I look forward to experimenting with it, my experience to date has been with the Red Hat product and not the open source upstream project. That said, the first thing that jumped out at me when I looked into creating my own Openshift cluster was the sheer number of prerequisites. While my Kubernetes installation required little more than nodes (and technically a network), Openshift had some very stringent requirements that were a bit more difficult to meet. An advantage I had here was that my entire cluster environment was virtualized, so setting up custom DNS records (including wildcard and SRV records) and the static IPs to go along with them wasn't an insurmountable challenge (it also spared anyone using my regular consumer WiFi/LAN from random outages due to changing configs). A simple HTTP server for a handful of static files was a layup, and setting up an external NGINX load balancer was a fairly straightforward affair (I did eventually switch to HAProxy as the load balancer for comparison; for what it's worth, online docs and tutorials seem to favor that setup).
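
To give a flavor of the load balancer piece, here's a rough sketch of the TCP frontends/backends this ends up requiring in HAProxy. The hostnames and IPs are placeholders for my virtual nodes, and the equivalent 80/443 frontends for ingress traffic are omitted for brevity:

    # API traffic from clients and nodes.
    frontend api
        bind *:6443
        mode tcp
        default_backend api-servers

    backend api-servers
        mode tcp
        balance roundrobin
        server bootstrap 192.168.100.10:6443 check
        server master-0  192.168.100.11:6443 check

    # Machine config server, used by nodes fetching their Ignition configs.
    frontend machine-config
        bind *:22623
        mode tcp
        default_backend machine-config-servers

    backend machine-config-servers
        mode tcp
        balance roundrobin
        server bootstrap 192.168.100.10:22623 check
        server master-0  192.168.100.11:22623 check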

Finally, once everything was in place and the appropriate Ignition files generated, booting up the VMs and creating the cluster was [almost] a breeze. I found that taking the extra step to set up PXE booting was well worth it, but using the .iso image and customizing a handful of VMs wasn't too much trouble either. The main speed bumps I hit were around the most efficient timing to boot the worker nodes, removing the bootstrap node, and finally just waiting for all the operators to finish initializing (this last one I blame squarely on the last-gen server hardware I'm running everything on). However, after a bit of experimentation and patience, I was greeted by the Web Console login, and a fully initialized cluster was waiting on the other side.
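
At a high level, the installer side of that flow looks roughly like this (the directory is a placeholder, and serving the Ignition files plus booting the VMs happens in between):

    # Generate the bootstrap/master/worker Ignition configs from
    # install-config.yaml in the cluster directory.
    openshift-install create ignition-configs --dir=<cluster-dir>

    # After booting the bootstrap and master VMs, wait for bootstrapping
    # to finish before removing the bootstrap node from the load balancer.
    openshift-install wait-for bootstrap-complete --dir=<cluster-dir> --log-level=info

    # Then wait for the cluster operators to settle and the install to complete.
    openshift-install wait-for install-complete --dir=<cluster-dir>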

My Personal Take on Creating Clusters

All in all, the initial creation of a Kubernetes cluster is much easier thanks to the investment in kubeadm, its flexibility, and its limited dependencies on external configurations and servers. To its credit, the Openshift cluster creation isn't terribly complex in and of itself, but the amount of prerequisite work that needs to be completed beforehand increases the ignition cost.

For anyone looking for simpler/alternative ways of creating clusters for experimentation, development, or testing there's also the following that may be of interest:


Kubernetes in Use

Using the Kubernetes cluster was a pretty standard affair. I enjoyed having admin access: I was able to install whatever I wanted, set up whatever arbitrary Limits I wanted to experiment with, and when I inevitably broke something I was able to dig in and explore it from all angles. Thanks to the wealth of documentation online I was able to set up dynamically provisioned storage, set up MetalLB to allocate IPs for LoadBalancer services, automatically provision certificates from Let's Encrypt, and do countless other things that caught my fancy. Overall it was incredibly empowering, and I installed/purged a ton of Helm charts. Eventually my LXD containers were hitting their limits, but the process to join a new worker node was so straightforward that I was able to write a rudimentary Ansible playbook to configure a new node and join it to the cluster with ease.
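
As an example of how little configuration some of this took, MetalLB in layer 2 mode at the time just needed a ConfigMap carving out a slice of my LAN. The address range below is specific to my network, and newer MetalLB releases have since replaced this ConfigMap with CRDs:

    # Hand MetalLB a pool of addresses to assign to LoadBalancer services.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        address-pools:
        - name: default
          protocol: layer2
          addresses:
          - 192.168.1.240-192.168.1.250
    EOF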

The trouble hit when I started messing around with Kubevirt. Unfortunately I never quite fully grasped what the issue was, but suffice it to say that something about being in a container didn't allow the Kubevirt pods to come up correctly. At this point I had also been setting up some virtual machines using KVM, so I elected to run my Ansible playbook against a VM instead. To my pleasure, scheduling Kubevirt pods onto the VM worker node solved all my strange issues, and I enjoyed messing around with that for a bit as well.
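
For anyone in a similar spot, steering workloads toward a specific node is as simple as a label plus a nodeSelector; the node and label names below are placeholders of my own invention:

    # Label the KVM-backed worker so virtualization-heavy workloads land on it.
    kubectl label node kvm-worker-0 node-type=vm

    # Then reference that label from the pod (or VMI) spec:
    #   spec:
    #     nodeSelector:
    #       node-type: vm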

Openshift in Use

Using Openshift was truly an enjoyable experience. For one, the built-in Web Console completely overshadows anything I've seen bolted onto a vanilla Kubernetes environment. While I never felt uncomfortable or limited navigating my Kubernetes cluster using the CLI, I truly appreciated how simple it was to click around to view aggregated information, create users, and perform other multi-step or complex tasks. With my Kubernetes cluster, I found myself adding in additional functionality for logging and monitoring as my usage expanded, but Prometheus and Grafana were integrated into Openshift from the beginning (including a few dashboards!). I not only appreciated this, but it saved me considerable time not having to recreate deployments over and over to get the config just right.

The UI around the OperatorHub was also a convenient addition. While it's not complicated to copy some commands and run them against a cluster, the integrated experience was again convenient and effective. The exceptions would have to be Istio and Knative; the operator experience there felt a bit less intuitive, but it was by no means challenging to Google my way through.

Taking the UI out of it, Openshift was a very similar experience to Kubernetes from the CLI point of view. When installing various things into the cluster to play around with, I found that there wasn't much of a difference when using Openshift, or if there was, it was often just an additional command to grant the correct permissions. An example here was setting up dynamic storage: I was able to follow the same steps as on Kubernetes after granting an additional permission to the service account. Having access to the oc utility gave me some additional handy commands as well.
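
In cases like the dynamic storage one, the additional permission is typically an SCC grant for the provisioner's service account; a sketch, where the SCC, namespace, and service account names are placeholders that depend entirely on which provisioner you deploy:

    # Allow the storage provisioner's service account the host access it
    # needs (names here are placeholders, not a recommendation).
    oc adm policy add-scc-to-user hostmount-anyuid \
      -n <provisioner-namespace> \
      -z <provisioner-serviceaccount>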

Another surprising win was the Openshift Routers: replacing the generated certificates with one issued by Let's Encrypt was pretty simple, and their existence totally eliminated my need to install and configure MetalLB.
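
Swapping in the Let's Encrypt certificate followed the documented pattern of storing the cert as a secret and pointing the default IngressController at it; roughly the following, where the secret name and file paths are arbitrary:

    # Store the Let's Encrypt certificate and key next to the router.
    oc create secret tls letsencrypt-wildcard \
      --cert=fullchain.pem --key=privkey.pem \
      -n openshift-ingress

    # Point the default IngressController at the new certificate.
    oc patch ingresscontroller.operator default \
      --type=merge \
      -p '{"spec":{"defaultCertificate":{"name":"letsencrypt-wildcard"}}}' \
      -n openshift-ingress-operator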

My Personal Take on Using Clusters

When it comes to using the clusters, the up-front cost paid to create the Openshift cluster really starts to pay off. With all the work already done to integrate Prometheus, Grafana, user accounts/credentials, operators, etc., it's a simple affair to focus on deploying applications. While opinionated, if your needs are met by this stack, it immediately gives you an advantage. To be fair, many of these things are also possible with Kubernetes, but the time you save when creating the cluster will be paid back when it comes time to configure and integrate all of this yourself.

It's worth noting that Openshift also provides some additional functionality for developing applications, such as templates, S2I, and a dedicated CLI client, odo. While handy, I haven't been able to use these extensively enough to have an informed opinion on what value they add over a more "typical" development workflow.


Updating the Kubernetes cluster

When it came time to update the cluster, I again leaned on the docs. While it was a bit tedious to manually update each machine, for the most part everything went smoothly moving between patch versions and then jumping minor versions. The kubeadm output was also super helpful for determining which jumps were supported and which required interim updates.
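
The per-node loop is captured well in the docs; condensed, and with the target version and node name as placeholders, it looks something like:

    # On the first control plane node: see which targets are supported,
    # then apply the upgrade.
    sudo kubeadm upgrade plan
    sudo kubeadm upgrade apply v1.<minor>.<patch>

    # On each remaining node: drain it, upgrade the kubeadm/kubelet/kubectl
    # packages with your package manager, update the node config, restart
    # the kubelet, and bring the node back.
    kubectl drain <node-name> --ignore-daemonsets
    sudo kubeadm upgrade node
    sudo systemctl restart kubelet
    kubectl uncordon <node-name>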

A more intimidating update was Calico's, mostly because after updating the cluster a few times I had forgotten that Calico was a separate yet actively developed project, which also had slight changes to its installation procedure. After taking a look at the docs and making the same IP address updates I had made to the original manifests, all subsequent updates went smoothly.
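
The update itself ended up being the familiar download-edit-apply cycle. The manifest URL below is the quick-start location the Calico docs pointed at around that time (it has moved between releases), and the CIDR edit is only needed if your pod network differs from Calico's default:

    # Grab the current Calico manifest, make the same pod CIDR tweak as the
    # original install, and re-apply it over the existing resources.
    curl -O https://docs.projectcalico.org/manifests/calico.yaml
    # (edit CALICO_IPV4POOL_CIDR here if your pod network isn't the default)
    kubectl apply -f calico.yaml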

Updating the Openshift cluster

The process of updating an Openshift cluster is considerably more straightforward than for a vanilla Kubernetes cluster, but with great power comes great inflexibility. By default an Openshift installation is highly available with three master nodes. While not monumentally important when all the nodes are on the same physical hardware, this is a critical enabler for updates. In that world, updating is as simple as logging into the Console and clicking "Update". The cluster then proceeds to update ...everything. There's tremendous advantage to be leveraged from its [mostly] immutable and integrated stack, and it really shines at update time. As all the Cluster Operators get updated, and their Deployments, ReplicaSets, etc. along with them, the cluster remains fully available and functional until it finishes.
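
For the CLI-inclined, the same update can be inspected and triggered with oc as well; for example:

    # Show the current version, channel, and any available updates.
    oc adm upgrade

    # Kick off an update to a specific available version.
    oc adm upgrade --to=<version>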

Where I shot myself in the foot was deviating from the default configuration of Openshift, namely by reducing the masters to one. While the default configuration was technically functional, reducing the masters to one eliminated a ton of consensus overhead and made for a much more enjoyable experience. The flip side is that with a single point of failure, the cluster correctly detected that updating would make components unavailable, deemed that unacceptable, and refused to introduce that condition. I'm sure there's a way to brute-force it, but with all the prerequisites in place, and so much done automatically by the cluster, I found it easier to simply spin up a new cluster on the newer version if there was new functionality I was looking for.

My Personal Take on Updating Clusters

Updating clusters may be the area I'm most indifferent about, which is directly related to my decision to handicap the Openshift upgrade process. However, the Kubernetes upgrade process is straightforward, I've seen some decent success in automating it, and [knock on wood] I haven't had any dramatic fallout moving from version to version.


Conclusion

Overall I've had really positive experiences with both kinds of clusters and recommend that anyone who's curious take them both out for a spin. External factors may force your hand, though: if you cannot meet Openshift's requirements (both at the environment level and in terms of CPU/memory/nodes), then vanilla Kubernetes will be your choice. The flip side is that since Kubernetes can run on lower-end hardware, you can experiment with true multi-node clusters much more easily using Raspberry Pis and other less expensive hardware.

Hopefully these musings were worthwhile for you; for any comments or questions, feel free to reach out to me on Twitter!

All thoughts and ideas expressed on this site, unless explicitly stated otherwise, are solely the representation of myself, Chris Weber, and not of my employer, company, family, or friends.