Kubernetes at a glance
Jeroen • January 5, 2020
Kubernetes is an open-source platform for orchestrating (Docker) containers: it automates the deployment, scaling and management of containerized applications, and makes zero-downtime deployments possible for any application running in containers. Right here I intend to keep an overview of the basic but important concepts of Kubernetes to make it easier to learn and comprehend.
Here is a short list of terms that are central to understanding and working with Kubernetes. Kubernetes is often shortened to K8s.
A (virtual or physical) server. A node accommodates everything that is needed to run Pods.
A set of machines, called nodes, that run containerized applications managed by Kubernetes. A cluster has at least one worker node and at least one master node. The worker node(s) host the pods that are the components of the application. The master node(s) manages the worker nodes and the pods in the cluster. Multiple master nodes are used to provide a cluster with failover and high availability.
The basic execution unit of a Kubernetes application – the smallest and simplest unit in the Kubernetes object model that you create or deploy. A Pod represents processes running on your Cluster.
Your application might consist of multiple containers, so a Pod can run multiple containers as well. However, it only runs the containers necessary for one instance of your application. Instead of replicating a single container of your application, Kubernetes replicates the Pod, and thus the whole set of containers.
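As a sketch, a minimal Pod manifest could look like this (the name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: web
      image: nginx:1.17   # the container image this Pod runs
      ports:
        - containerPort: 80
```

In practice you rarely create bare Pods like this; you let a Deployment manage them for you.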
You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
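For example, a Deployment that keeps three replicas of a Pod running might look like this (names and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3            # desired state: three identical Pods
  selector:
    matchLabels:
      app: my-app        # which Pods this Deployment manages
  template:              # the Pod template to replicate
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: nginx:1.17
```

If a Pod crashes or a node disappears, the Deployment Controller notices that only two replicas are running and starts a new one to get back to the desired three.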
A Service exposes your Pods within the cluster. It decouples different Pods (for example, a backend and a frontend) so that one Pod can be replicated, created or destroyed without the other having to track, for example, the changing IP address of that Pod.
Each Pod gets its own IP address, however in a Deployment, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.
This leads to a problem: if some set of Pods (call them “backends”) provides functionality to other Pods (call them “frontends”) inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload? Enter Services.
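A Service selects Pods by label and gives them one stable address. A sketch for the "backends" example (names and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend          # frontends can now reach the backends at "backend"
spec:
  selector:
    app: backend         # matches any Pod labeled app=backend
  ports:
    - port: 80           # port the Service listens on
      targetPort: 8080   # port the backend containers listen on
```

Frontends simply connect to `backend:80`; Kubernetes routes the traffic to whichever backend Pods currently exist.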
An API object that manages external access to the services in a cluster, typically HTTP. Ingress can provide load balancing, SSL termination and name-based virtual hosting.
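As a sketch, an Ingress that routes HTTP traffic for one hostname to the Service above might look like this (the hostname and Service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
    - host: example.com        # name-based virtual hosting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend  # the Service to route to
                port:
                  number: 80
```

Note that an Ingress only describes the routing rules; an Ingress controller (such as nginx-ingress) must be running in the cluster to actually fulfill them.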
Processes that tie everything in the cluster together. There are Master Components, which control the cluster as a whole, and Node Components that control the Pods with their containers on each Node.
The Master Components are:
- kube-apiserver – exposes the Kubernetes API, the front end of the control plane.
- etcd – a consistent, highly available key-value store for all cluster data.
- kube-scheduler – assigns newly created Pods to a Node.
- kube-controller-manager – runs the controller processes.
The Node Components are:
- kubelet – an agent that makes sure the containers described in Pod specs are running.
- kube-proxy – a network proxy on each Node that implements the Service concept.
- the container runtime – the software that actually runs the containers, such as Docker.
Controllers are control loops that watch the state of your cluster, then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state.
Basically, a volume contains files. The most important thing to know first is that data is not persistent within a container. And generally, a volume in Kubernetes lasts only as long as the Pod does: when the Pod goes away, so does the volume.
At its core, a volume is just a directory, possibly with some data in it, which is accessible to the Containers in a Pod. How that directory comes to be, the medium that backs it, and the contents of it are determined by the particular volume type used.
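A sketch of the simplest case, an `emptyDir` volume that lives and dies with its Pod (names and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cache-example
spec:
  containers:
    - name: web
      image: nginx:1.17
      volumeMounts:
        - name: cache
          mountPath: /cache   # where the volume appears inside the container
  volumes:
    - name: cache
      emptyDir: {}            # created empty when the Pod starts, deleted with it
```

All containers in the Pod can mount the same volume, which is a common way to share files between them.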
When you start working with files that need to persist across deployments or Pods, you will have to read more about Persistent Volumes.
Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces. Namespaces cannot be nested inside one another, and each Kubernetes resource can only be in one namespace.
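A Namespace itself is a tiny object; a sketch (the name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```

Other resources are placed in it by setting `metadata.namespace: staging` in their manifests, or by passing `--namespace staging` to kubectl.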