KCA Prep Guide

Fundamentals & Theory #

Terminology & Theory #

API Server: All commands and requests go through HTTPS calls to the API Server. Even the control plane services communicate with each other through it.

Cluster Store (etcd): Holds the state of all apps and cluster components. Typically each control plane node runs an etcd replica for HA, but for large clusters it is recommended to run a dedicated etcd cluster. It is also recommended to keep an odd number of etcd replicas in order to avoid split-brain situations: with 3 replicas the cluster tolerates 1 failure, with 5 it tolerates 2.

Controllers: Kubernetes uses controllers to implement cluster intelligence. Examples include the Deployment, StatefulSet, and ReplicaSet controllers.

Controller Manager: It's responsible for spawning and managing controllers.

Scheduler: Communicates with the API Server to get new tasks and assigns them to a capable node.

Cloud Controller Manager: For clusters running in the cloud, when an app requests a cloud service (for example a load balancer), the Cloud Controller Manager provisions it.

Worker Nodes #

kubelet: Handles all communication between the node and the cluster. It watches the API Server for new tasks and instructs the runtime to execute them.

runtime: Most Kubernetes clusters come with the containerd runtime pre-installed. The runtime is responsible for pulling container images and managing container lifecycle operations.

kube-proxy: Every node runs a kube-proxy service, which implements cluster networking and load-balances traffic for tasks running on the node.

How Apps are packaged #

In theory we could just run a bare Pod with k8s, but we would lose the bread and butter of k8s, auto-scaling and much more, which we get if we wrap this Pod in a Deployment.

Here is a high level overview of how we can achieve that.

We package our app into a Deployment YAML file, post it to the API Server, and Kubernetes persists it in the cluster store.
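As a sketch of that flow, a minimal Deployment manifest might look like the following (the name `web` and the image are placeholders, not from this guide):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web               # hypothetical app name
spec:
  replicas: 3             # desired number of Pod replicas
  selector:
    matchLabels:
      app: web            # ties the Deployment to Pods with this label
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25 # placeholder image
        ports:
        - containerPort: 80
```

Posting it to the API Server is typically done with `kubectl apply -f deployment.yaml`; the Deployment controller then creates the ReplicaSet and Pods to match the desired state.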

Pods #

Kubernetes can run containers and many other workloads, but they all need to be wrapped in Pods. A Pod can run one or more containers: for example service mesh proxies, apps, helper sidecar containers, or init containers that initialize the environment. This helps us implement the single responsibility principle.

Scaling up and down means adding or removing Pod replicas, not adding more containers of the application to the same Pod.

A Pod is a shared execution environment: all containers in a Pod share the same network namespace (one IP address and port space) and can share volumes. Containers in the same Pod can communicate with each other over localhost, so they must listen on different ports.
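To illustrate the shared environment, here is a sketch of a two-container Pod (the names and images are hypothetical): the sidecar reaches the app over localhost, and both containers mount the same volume.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
  - name: app
    image: my-app:1.0            # placeholder image
    ports:
    - containerPort: 8080        # the sidecar can reach this at localhost:8080
    volumeMounts:
    - name: shared-logs
      mountPath: /var/log/app
  - name: log-shipper            # hypothetical helper sidecar
    image: my-log-shipper:1.0    # placeholder image; must use a different port than 8080
    volumeMounts:
    - name: shared-logs
      mountPath: /logs           # reads the same files the app writes
  volumes:
  - name: shared-logs
    emptyDir: {}                 # scratch volume shared by both containers
```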

Services #

When Pods are scaled, the new Pods get new IPs; when Pods are rescheduled, the replacement Pods also get new IPs. So we need Services to provide a stable way to access and load-balance traffic to Pods, even as Pods come and go, making applications reliable and discoverable.
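A minimal Service sketch, assuming the Pods carry an `app: web` label (both names are placeholders): the Service gets a stable name and cluster IP, and forwards traffic to whichever Pods currently match the selector.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web             # stable DNS name inside the cluster
spec:
  selector:
    app: web            # matches Pods carrying this label, wherever they run
  ports:
  - port: 80            # stable port clients connect to
    targetPort: 8080    # port the containers actually listen on
```

Because the selector is evaluated continuously, Pods can be scaled or rescheduled freely while clients keep using the same Service name and port.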
