What does managing a Kubernetes (K8S) cluster mean in practice?


What is Kubernetes?

Anyone who has ever worked with containers knows that the more of them you run, the greater the administrative burden becomes. Sooner or later, questions like these come up:

  • How do I update my applications to a new version without crashing?
  • If there is a problem, how can I roll back to an older version?
  • What happens if one of my virtual servers becomes unavailable and causes a service outage?
  • When I apply security updates or changes to the lower layers (server, operating system, virtual machine, disk, network), how can I do so without stopping the applications running on them?
  • If my application temporarily receives a high workload (CPU, memory), how do I give it extra resources?

Kubernetes (K8S) is an open-source container orchestration platform originally developed by Google and now maintained by the Cloud Native Computing Foundation (CNCF).

The Kubernetes platform addresses all of the problems listed above: the deployment, scaling and management of containers can be automated.

Rolling upgrades and rollbacks are built in, which means we can update an application to a new version without stopping it, and revert to an older version if something goes wrong.
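
As an illustration, a rolling update and rollback with kubectl might look roughly like this; the Deployment name my-app and the image reference are placeholders:

    # Roll out a new image version; Pods are replaced gradually, without downtime
    kubectl set image deployment/my-app my-app=registry.example.com/my-app:2.0.0
    # Follow the progress of the rolling update
    kubectl rollout status deployment/my-app
    # If the new version misbehaves, return to the previous revision
    kubectl rollout undo deployment/my-app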

If one of our virtual servers fails, Kubernetes detects the failure and reschedules our application on an available server. It also lets us run several replicas of the same application on different servers at the same time, so even minimal downtime can be avoided.
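
A minimal Deployment sketch of this idea, with hypothetical names: asking for three replicas means Kubernetes keeps three copies of the Pod running and recreates any copy lost when a node fails:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
    spec:
      replicas: 3                  # three copies, spread across the cluster's nodes
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: registry.example.com/my-app:1.0.0
              ports:
                - containerPort: 8080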

During planned maintenance, we can safely move our containers off the server being maintained onto other servers in the cluster, without downtime.
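
In practice this is usually done by draining the node before maintenance; a rough sketch with a placeholder node name (the exact flags depend on the kubectl version):

    # Stop new Pods from being scheduled on the node
    kubectl cordon worker-node-1
    # Evict the running Pods; Kubernetes reschedules them on the remaining nodes
    kubectl drain worker-node-1 --ignore-daemonsets --delete-emptydir-data
    # After the maintenance, allow scheduling again
    kubectl uncordon worker-node-1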

When our applications come under heavier load and temporarily need to serve more clients, Kubernetes can scale our containers horizontally and distribute the load evenly across the available servers.
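
One common way to automate this is a HorizontalPodAutoscaler; a minimal sketch, assuming a Deployment called my-app and CPU-based scaling between 2 and 10 replicas:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app
      minReplicas: 2
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # add Pods when average CPU use exceeds 70%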

What do we call Vanilla Kubernetes?

Vanilla Kubernetes, or plain open-source Kubernetes, is the platform with only its core components: the control plane (etcd, kube-apiserver, kube-scheduler, kube-controller-manager) and the node components (kubelet, kube-proxy and a container runtime such as containerd or Docker).

Why is managed Kubernetes better?

I managed to start my first cluster, but… when you run your own Kubernetes cluster, some uncomfortable questions soon arise:

  • A given container can be moved to any server in the cluster to ensure high availability. How can I provide distributed storage for my applications?
  • How do I make my application available to the outside world?
  • How do I get my logs into a centralized, well-visualized environment?
  • What happens if I scale my application horizontally, but the cluster runs out of available resources?
  • How can I automate my work processes: container image building, uploading, version management and deployment?
  • How do I renew the certificates of my web application automatically?
  • I’m a visual type, how can I see my resources instead of using the CLI?
  • How do I keep my cluster's version up to date? New Kubernetes minor versions are released several times a year.

Many companies plan to adopt Kubernetes because of its advantages, but lack the resources to build and maintain their own cluster.

When starting out with little experience, unexpected and complicated malfunctions often occur during operation. Finding the cause of these errors can take days or weeks, and digging through the various Kubernetes guides takes valuable time away from productive work. What can we do to avoid these traps?

With managed K8S, the burden of operation is taken off the customer's shoulders. Here is how we achieve this with Datatronic's Kubernetes-based service:

  • When running Kubernetes, we support our customers with the help of Rancher and RKE2.
  • With our distributed storage solution and StorageClass, our containers can run on any server in the cluster while data storage remains consistent and persistent (see the storage sketch after this list).
  • Developers get access by default through the Rancher web interface and with kubectl. Applications running on the cluster are exposed to the outside world through the NGINX Ingress controller and an external load balancer (see the Ingress sketch after this list).
  • We collect application logs in a central location and visualize them with Kibana.
  • If horizontal scaling exhausts the capacity of the cluster's servers, we can add new servers to the cluster, increasing the number of clients that can be served.
  • Datatronic's team supports developers with automation as part of professional consulting, e.g. container image building and CI/CD pipeline design and implementation.
  • Web application certificates are renewed automatically by cert-manager running in the cluster, which means even less administration (see the Ingress sketch after this list).
  • Datatronic provides a visual overview of the cluster and its resources through the Rancher interface. Rancher supports multiple users, and clusters from other cloud providers can also be integrated. Resources can be managed via the web user interface, which is accessible from anywhere.
  • Datatronic ensures that everyone has an up-to-date system. The task of cluster upgrade is removed from the client’s to-do list.
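
As a sketch of the storage point above, an application could request persistent space from the cluster's distributed storage through a PersistentVolumeClaim roughly like this; the class name distributed-storage and the access mode are placeholders that depend on the actual storage backend:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: my-app-data
    spec:
      accessModes:
        - ReadWriteMany              # the volume can be mounted from any node
      storageClassName: distributed-storage
      resources:
        requests:
          storage: 10Gi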
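
And as a sketch of the Ingress and certificate points, an application could be published through the NGINX Ingress controller with a certificate issued and renewed by cert-manager; the host name, issuer and Service names below are hypothetical:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: my-app
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod   # placeholder ClusterIssuer
    spec:
      ingressClassName: nginx
      tls:
        - hosts:
            - my-app.example.com
          secretName: my-app-tls     # cert-manager creates and renews this certificate
      rules:
        - host: my-app.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-app
                    port:
                      number: 80
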
If you too want to run your applications on a more efficient and highly available cloud platform, choose the Datatronic Microservice Cluster (DMC) from our services!