OpenShift versus Kubernetes
May 4, 2020
Liviu Miron
Tech

How did we get to containers?

In the beginning there was nothing but bare-metal machines.

The applications were deployed directly on these machines, and they all shared the same hardware and operating system.

We call this traditional deployment, and it was a simple-to-understand but cumbersome process.

If we wanted to install an application, we first had to install its required dependencies. If different applications on the same server had incompatible dependencies, we had to find workarounds. This made transferring the same application to different environments a real nightmare.

Then there was also the problem of scaling. If we wanted to scale our application on the same server, we were limited by the amount of resources on the physical machine. Each installed application also competed for the same limited resources, and if one was more resource-hungry than the others, that could have led to issues for the other applications installed on the same machine.

If we chose to scale it across multiple servers, we again ran into the installation problems described above, plus problems like application coordination and load balancing.

Then came a solution to this problem: virtual machines.

A virtual machine is a virtual environment that functions like a computer system with its own CPU, memory, network interface and storage. The virtual machine is created on a physical machine or even on another virtual machine (yes, it's possible, and it's called nested virtualization).

A piece of software called a hypervisor separates the guest machine's resources from the host machine's. This allows us to control how many resources our applications can use, and it also allows us to isolate different applications.

Inside virtual machines we can pack our applications, their dependencies and the operating system we want them to use.

We call this virtualized deployment, and it's a huge improvement over the traditional one. Installation simply requires copying the virtual machine from one environment to the other, and scaling works in much the same way.

If everything looks so good, why does this article even exist?

Well… let’s do the math. Let’s assume we have one application deployed on a bare-metal machine. The application will use 1GB of memory, the operating system another GB. In total we use 2GB of memory.

How will that look in a virtualized deployment?

The application will still use 1GB and the guest operating system another 1GB, and, even if we ignore the hypervisor, the host operating system also uses another GB of memory. That's 3GB.

And that is just for one instance of the app. If we scale the application in the traditional deployment to let’s say 3 instances we will have a total of 4GB of memory (2GB + 2GB for the additional instances).

If we scale the application in the virtualized deployment to 3 instances we will have 7 GB of memory (3GB + 4GB for the additional instances). It’s almost double!

And that’s why this article is about containers.

A container consists of the deployed application and its dependencies. It uses the host operating system's kernel.

A container is run by a container runtime.

Containers are isolated from one another, and each can have its own dependencies and resources to use.

This is called container deployment. Compared to the virtualized deployment, we have removed the guest operating system, and instead of a hypervisor we now have a container runtime. The container runtime uses the host operating system and isolates the different containers.

Because we no longer have the guest operating system, there will be less overhead than in the case of a virtualized deployment. Let’s do the math to prove that!

Let's assume, like before, that the operating system uses 1GB of memory and the application another GB. This gives a total of 2GB of memory, just like in the case of traditional deployment.

The difference shows up when scaling: with 3 instances, the container deployment uses about 4GB of memory (1GB for the host operating system plus 1GB per instance), the same as the traditional deployment and far less than the 7GB of the virtualized one.

In other words, it is cheaper to use containers than virtual machines. There is also less CPU overhead, as we no longer have to virtualize the hardware. These two things lead to one conclusion: we can spawn a lot of containers.

Next let’s see how we can make our own containers. We’ll use Docker, because it’s the industry standard for containers.

Docker images

Images? It’s not a trick, we are not talking about virtual images but Docker images.

A Docker image is an immutable file that can contain: the source code (or the binaries) of your application, the required dependencies of that application and other tools and files needed to run it.

Images can be created in multiple ways:

- from Dockerfiles (local or remote)

- from running containers

- by pulling them from an image repository

A Dockerfile is a text file that contains all the commands a user can type in the command line to create a Docker image. Because Dockerfiles are text files we can store and transfer them very easily. It also allows us to use source control systems to manage them.

Below you can see an example of a simple Dockerfile for a Python web application:

1 FROM python:3.6
2 WORKDIR /app
3 RUN pip install Flask
4 COPY app.py /app
5 EXPOSE 5000
6 CMD ["python", "app.py"]

Let's explain the content of the Dockerfile. We can divide it into 3 steps:

1.    the import step, in which we specify on which image we want to base our own:

1 FROM python:3.6

2.   the application installation step, in which we install the required dependencies, we copy the application and we expose a port:

2 WORKDIR /app
3 RUN pip install Flask
4 COPY app.py /app
5 EXPOSE 5000

3.    the application running step, in which we start our application:

6 CMD ["python", "app.py"]

We can generate an image from the Dockerfile by running the following command in the same directory as the Dockerfile:

          docker build --tag myapp:1.0 .

The result will be a new image in the local image registry that is tagged myapp:1.0. The images in the local image registry can be listed with the command:

          docker image ls

Using images gives us a consistent, easily automated way of recreating the deployment environment for our applications.

A Docker image consists of a number of read-only layers. The layers are generated as the Dockerfile commands are executed during the Docker build process; each command corresponds to a layer generated by its execution.

When we run docker build, each command in the Dockerfile appears as a step in the build output. After each step a new layer is generated, and each layer gets a generated ID; for example, the ID of the first layer in our build was b63ef4ef530f.

The layers are stacked to form the image: the bottom layer is the base image, and on top of it a new layer is added for each extra command from the Dockerfile. All the layers are read-only.

Docker containers

Images can be used to create containers. A container is an instance of an image; it's defined by that image and its runtime configuration.

In order to create a container from the image we previously built from the Dockerfile, we can use the command:

          docker run --publish 8080:5000 --name myapp myapp:1.0

What does it do? The command forwards port 5000 of the container to port 8080 of the local machine, and it creates and starts a container called myapp from the image tagged myapp:1.0 in the local image registry.

We can then easily check if it’s accessible by opening a browser on localhost:8080.

A container consists of all the layers of its image (which are read-only) plus a container layer (which is writable). The container layer is the only one that can be changed during the container's life cycle. However, when the container is removed, all the changes are lost.

But if all the changes are lost…what if we actually want to persist something for the next time we create a container?

There are three ways to do that:

   • volumes: stored in a part of the host filesystem that is managed by Docker; they are created and managed by Docker. A volume can be mounted by multiple containers at the same time.

   • bind mounts: can be stored anywhere on the filesystem. When you use a bind mount, a file or directory on the host machine is mounted into a container.

   • tmpfs mounts: stored in the host system's memory only and never written to the filesystem. They can be used by a container during its lifetime to store non-persistent state or sensitive information.

Containers are a good way to bundle and run applications, but in production environments we may have to run hundreds of them at the same time. Managing and monitoring them can be a difficult task.

This is where technologies like Kubernetes and OpenShift come in.

Kubernetes

Kubernetes is a container orchestration platform originally developed by Google. It's an open-source platform aiming to automate container operations: deployment, scaling and management of application containers across clusters of hosts.

We can interact with the Kubernetes cluster in several ways:

   • by APIs

   • by the dashboard

   • by the kubectl CLI

Interaction with Kubernetes goes through the master node. This node has various components, such as:

   • kube API server

   • scheduler (watches for newly created Pods with no assigned node, and selects a node for them to run on)

   • controller (tracks at least one Kubernetes resource type. These objects have a spec field that represents the desired state. The controller(s) for that resource are responsible for making the current state come closer to that desired state).

   • etcd (consistent and highly-available key value store used as Kubernetes’ backing store for all cluster data)

The master manages the rest of the cluster. The other nodes in the cluster are called worker nodes (and were previously called minions). The master node is responsible for maintaining the desired state of the cluster.

Each of the nodes resides on a physical or virtual machine and is created outside Kubernetes.

A node contains the services required to run pods:

   • a container runtime: the software responsible for running the containers (usually Docker).

   • the kubelet: an agent that runs on each node and makes sure that containers are running in a pod. It only manages containers created by Kubernetes.

   • the kube-proxy: a network proxy that runs on each node, it maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.

A pod is made of one or more containers that work together. The containers inside the pod share the same network identity and the same storage resources. Pods represent the units of deployment and are managed by the Kubernetes cluster as a single unit. A pod runs on a single node.
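To make this concrete, here is a minimal sketch of a pod manifest for the myapp:1.0 image we built in the Docker section (the name, label and port are just illustrative assumptions, not something prescribed by Kubernetes):

apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
    - name: myapp
      image: myapp:1.0
      ports:
        - containerPort: 5000   # the port our Flask app listens on

In practice we rarely create pods directly; we let higher-level objects like deployments manage them, as described below.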

Kubernetes takes a cloud-native view of systems, and is able to handle constant change.

Your cluster could be changing at any point as work happens and control loops automatically fix failures. This means that, potentially, your cluster never reaches a stable state. The desired state is described in a deployment.

As long as the controllers for your cluster are running and able to make useful changes, it doesn’t matter if the overall state is or is not stable.

Kubernetes uses YAML files to describe the desired state. In those files we can define anything from a load balancer to a group of pods to run the application. The YAML files can be easily read and stored in a source control system like Git.
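As a hedged example of such a YAML file, a minimal deployment for the myapp image could look roughly like this (the name, label and replica count are assumptions made for illustration):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                    # desired number of pod instances
  selector:
    matchLabels:
      app: myapp
  template:                      # the pod template used to create replicas
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0
          ports:
            - containerPort: 5000

Applying a file like this with kubectl apply -f makes the cluster converge toward three running replicas of the pod.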

Kubernetes can also help with configuration management using ConfigMaps where the user can define environment variables and configuration files. There are also objects called secrets that can contain authentication credentials, certificates and other sensitive information in a secure way.
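A short sketch of both objects, with made-up names, keys and values purely for illustration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  APP_ENV: production            # plain-text configuration value
---
apiVersion: v1
kind: Secret
metadata:
  name: myapp-credentials
type: Opaque
stringData:
  DB_PASSWORD: changeme          # stored base64-encoded once created

Both can then be exposed to a pod as environment variables or mounted as files.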

A replication controller ensures that a specified number of replicas of a pod are running at all times. If pods exit or are deleted, the replication controller acts to instantiate more up to the desired number. Likewise, if there are more running than desired, it deletes as many as necessary to match the number.

Kubernetes can be installed on almost any Linux distribution.

In Kubernetes you can check the health of pods or applications with probes run by the kubelet agent. There are 3 types of probes we can define:

   • Readiness probes that can tell us if a container can receive requests. In case of failure the pod will no longer be available for further requests.

   • Liveness probes that can tell us if the container should be restarted. In case of failure the container is restarted.

   • Startup probes that can tell us if the container has started. In case of failure the container is killed and restarted according to the pod's restart policy.

Each of the three probe types can be set up to start after an initial delay and to run at a set interval, with a timeout for each check. A common issue arises when a probe checks whether the application is ready too soon; the negative result then shuts down or restarts the pod. It's important to have delays and timeouts long enough to account for pods that are slower to start.
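As a rough sketch, probes are declared per container in the pod template; the paths, ports and timings below are assumptions for our Flask app, not recommended values:

containers:
  - name: myapp
    image: myapp:1.0
    startupProbe:                # has the application started at all?
      httpGet:
        path: /
        port: 5000
      failureThreshold: 30
      periodSeconds: 10
    readinessProbe:              # can the container receive traffic?
      httpGet:
        path: /
        port: 5000
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:               # should the container be restarted?
      httpGet:
        path: /
        port: 5000
      initialDelaySeconds: 15
      timeoutSeconds: 2
      periodSeconds: 20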

As we saw in the Docker containers section, we were required to expose a port in order for our application to be accessible externally. How can we do that in Kubernetes, especially if we are likely to have a cluster with many nodes and even more pods for a single application? This raises questions like how to perform service discovery and how to achieve load balancing.

In Kubernetes a service is an abstraction that allows us to expose an application that runs on one or more pods as a network service. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods and can load-balance across them.
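For example, a minimal service in front of the myapp pods could look like this (a sketch that assumes the app: myapp label used in the earlier examples):

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp                   # matches the pods created by the deployment
  ports:
    - port: 80                   # port exposed inside the cluster
      targetPort: 5000           # port the container listens on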

In order to expose a service externally we can use something called Ingress. A Kubernetes Ingress is an object that manages external access to the services in a cluster. It may provide:

   • load balancing

   • SSL termination

   • name-based virtual hosting

One important aspect that should be taken into account is that in order for the Ingress resource to work the cluster must have an ingress controller running.
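A minimal sketch of an Ingress that routes a hostname to the myapp service (the hostname is made up, and the exact API version and fields depend on the Kubernetes version of your cluster):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
spec:
  rules:
    - host: myapp.example.com    # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp      # the service defined above
                port:
                  number: 80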

With Kubernetes you may set up your own Docker registry but there is no concept of an integrated image registry.

OpenShift

OpenShift is an enterprise Kubernetes distribution developed by Red Hat. It builds on Kubernetes and adds several novel features. Being an enterprise distribution it comes with professional support, but it can only be installed on Red Hat operating systems.

Unlike Kubernetes which allows different container runtimes, OpenShift allows only Docker.

Just like in Kubernetes we can interact with OpenShift in 3 ways:

   • by a CLI program called oc

   • using a REST API

   • from a GUI

Unlike Kubernetes, the out-of-the-box installation of OpenShift comes with an image registry. This can be used to host the Docker images used to create our containers. Of course, we can also use external registries for our images, including an Artifactory Docker registry. One thing to note is that when using an external registry it's not unusual to need some form of authentication before we can access the resources. In OpenShift that means a secret object (which is just a Kubernetes secret) must be defined first to allow us to pull the desired image from that registry.
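For illustration, such a pull secret is a Kubernetes Secret of type kubernetes.io/dockerconfigjson; the name below is made up and the credential value is left as a placeholder:

apiVersion: v1
kind: Secret
metadata:
  name: external-registry-pull-secret
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: <base64-encoded Docker config containing the registry credentials>

The secret is then referenced when pulling images from that registry.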

OpenShift inherits a lot of concepts from Kubernetes (not surprising, since it is a Kubernetes distribution). These concepts include:

   • master and worker nodes

   • pods

   • deployments (called DeploymentConfigs)

   • ConfigMaps and secrets

   • services

OpenShift introduced the concept of image streams. An Image Stream provides a stable pointer to an image using various identifying qualities. This means that even if the source image changes, the Image Stream will still point to a known-good version of the image, ensuring that your application will not break unexpectedly. An Image Stream contains all of the metadata about any given image that is specified in the Image Stream specification.
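A hedged sketch of an Image Stream that tracks a tag of an external image (the registry path is invented for the example):

apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: myapp
spec:
  tags:
    - name: "1.0"                # the tag exposed by the image stream
      from:
        kind: DockerImage
        name: registry.example.com/myteam/myapp:1.0   # external source image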

Builds and Deployments can be automatically started when a given Image Stream is modified. This is achieved by monitoring that particular Image Stream and notifying the controller (the Build or Deployment) when a change was detected.

OpenShift also has the notion of a project. A project is a Kubernetes namespace with additional annotations. We can only create a new deployment inside a project. Projects allow managing users' access to resources: users can manage their own resources in isolation, and access rights are granted by administrators.

Another interesting OpenShift concept is the route. A route is a way to expose services externally by providing them with an externally accessible hostname and port.
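A minimal sketch of a route exposing the myapp service (the hostname is an assumption; if it is omitted, OpenShift generates one):

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  host: myapp.apps.example.com   # hypothetical externally accessible hostname
  to:
    kind: Service
    name: myapp                  # the service to expose
  port:
    targetPort: 5000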

OpenShift builds on Kubernetes replication controllers by adding expanded support for the software development and deployment life cycle with the concept of deployments. In the simplest case, a deployment just creates a new replication controller and lets it start up pods. However, OpenShift deployments also provide the ability to transition from an existing deployment of an image to a new one, and to define hooks to be run before or after creating the replication controller.

One key difference between OpenShift and Kubernetes is that OpenShift's default security policies are much stricter. For example, many container images can't be used because the default OpenShift policy forbids running containers as root.

The OpenShift web console is a great tool that allows the user to perform most tasks directly from it. It's deployed as a pod on the master node and interacts with the cluster using the REST API.

The web console requires authentication; this can be either simple HTTP authentication (which uses passwords from a flat file generated with htpasswd) or one that involves LDAP integration.

Another tool that can be used to manage OpenShift is the oc CLI. This tool allows us to create and manage projects and applications from the terminal. It's more powerful than the web console, and it can be used in the following cases:

   • when we work directly with the project source code

   • when we want to script OpenShift operations

   • when we can't use the web console because of various restrictions (like bandwidth)

Before running the CLI we have to configure it. As it's a client application, it can be run not only on the master node but from anywhere that has access to that node. The configuration of oc is stored in ~/.kube/config; it includes cluster information and a series of authentication mechanisms. We can authenticate with the oc login command.

More information about the CLI can be found at https://docs.openshift.com/container-platform/3.11/cli_reference/get_started_cli.html

More information and an interactive learning environment can be found at https://learn.OpenShift.com/playgrounds/OpenShift36/.

OpenShift also supports Pipeline builds, a build strategy that refers to an image containing Jenkins, which in turn monitors ImageStreamTags. When an update is needed, it can start a Jenkins build.

An important aspect when using an enterprise product is its pricing. What is the cost of using a cluster of 4 OpenShift nodes for a year? From what I managed to find online, the 2017 price of a standard subscription was $3800 per node per year, and the Red Hat operating system's subscription was another $799 per node per year. Multiply each by 4 and add them up, and you get an estimated cost of over $18000 a year. Not all projects have that licensing budget, but it does include support.

However, there is also a free community version of OpenShift called OKD (https://www.okd.io/). It's basically a clone of enterprise OpenShift with several small differences: it's open source, it has no professional support, and it cannot use any of the official Red Hat images. It's also limited to the CentOS operating system.

Conclusion

Both Kubernetes and OpenShift can play the same role, and they mostly allow their users to do similar tasks, but their differences are also important and need to be taken into account.

OpenShift is easier to use for a beginner, as it comes with a lot of out-of-the-box capabilities and an amazing web console. Its main limitations come from being so strongly tied to the Red Hat suite and from the costs that implies. It's a great option when you have money to spend on your projects and not so many team members experienced with Kubernetes. The learning curve is not very steep, and with a couple of clicks and commands you can do most tasks (about 80% in OpenShift 3 and over 90% in OpenShift 4). Also, OpenShift's command line interface, oc, is easier to use. And most importantly, it comes with support from Red Hat.

Kubernetes allows a lot more freedom (in terms of container runtime, operating system, security policies and much more) and it's free. However, it's more difficult for beginners to use (and its dashboard is less useful), so it requires more experienced users.
