Kubernetes Cluster Setup with Containerd

saurabh kharkate
Mar 16, 2022

The Kubernetes team has announced that it is deprecating the Docker container runtime: dockershim is deprecated as of Kubernetes 1.20 and will be removed in a later release. Instead, Kubernetes will rely on runtimes that implement the Container Runtime Interface (CRI).

What is the Container Runtime Interface?

The CRI is a plugin interface that enables the kubelet (the agent that runs on every node in a Kubernetes cluster) to use a wide variety of container runtimes, without any need to recompile the cluster components.

You need a working container runtime on each Node in your cluster, so that the kubelet can launch Pods and their containers.

The Kubernetes Container Runtime Interface (CRI) defines the main gRPC protocol for the communication between the kubelet and the container runtime.

Why Does Kubernetes Need CRI?

To understand the need for CRI in Kubernetes, let’s start with a few basic concepts:

  • kubelet — the kubelet is a daemon that runs on every Kubernetes node. It implements the pod and node APIs that drive most of the activity within Kubernetes.
  • Pods — a pod is the smallest unit of reference within Kubernetes. Each pod runs one or more containers, which together form a single functional unit.
  • Pod specs — the kubelet reads pod specs, usually defined in YAML configuration files. The pod spec says which container images the pod should run, but provides no details as to how the containers should run — for this, Kubernetes needs a container runtime (see the example after this list).
  • Container runtime — a Kubernetes node must have a container runtime installed. When the kubelet wants to process pod specs, it needs a container runtime to create the actual containers. The runtime is then responsible for managing the container lifecycle and communicating with the operating system kernel.
  • There are multiple container “runtimes”, which are programs that can create and execute containers that are typically fetched from images. That space is slowly reaching maturity both in terms of standards and implementation.
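For illustration, here is a minimal pod spec written to a file with a heredoc; the pod name, file name, and the nginx:1.21 image are just placeholders. Notice that the spec only names the image; it says nothing about how the container is actually created, which is the runtime's job:

$ cat <<EOF > nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.21    # the container image the runtime will pull and run
EOF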

Container runtime interfaces and runtimes available for Kubernetes

  • Native
  • Docker
  • rktnetes
  • CRI
  • cri-containerd
  • rktlet
  • cri-o
  • frakti

Here we are using containerd, i.e. cri-containerd, for our Kubernetes setup and application.

Containerd — containerd is an OCI-compliant core container runtime designed to be embedded into larger systems. It provides the minimum set of functionality required to execute containers and manage images on a node. It was initiated by Docker Inc. and donated to the CNCF in March 2017, and has been a stable 1.x release for several years now. The Docker engine itself is built on top of containerd.

cri-containerd — cri-containerd is an implementation of the CRI for containerd. It runs on the same node as the kubelet and containerd. Layered between Kubernetes and containerd, it handles all CRI service requests from the kubelet and uses containerd to manage containers and container images, translating CRI calls into containerd service requests and adding just enough extra logic to satisfy the CRI requirements. In modern containerd releases this CRI support ships as a built-in plugin, which is why this setup only needs to install containerd itself.
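As a concrete illustration of this layering, once the cluster is up (Step 6 below) you can look at the flags kubeadm hands to the kubelet; they point it at containerd's CRI socket rather than at Docker:

# inspect the runtime flags kubeadm generated for the kubelet
$ cat /var/lib/kubelet/kubeadm-flags.env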

Now, let's start our Kubernetes cluster setup.

Note: if a cluster is already present on this machine, first reset the cluster, remove all the old configuration files, and purge kubelet, kubeadm, and docker.
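For reference, a rough cleanup sketch might look like the following; the exact package names (for example docker.io) depend on how Docker was originally installed, so adjust them to your system:

$ sudo kubeadm reset -f
$ sudo apt-get purge -y kubelet kubeadm kubectl docker.io
$ sudo apt-get autoremove -y
$ sudo rm -rf /etc/kubernetes $HOME/.kube /var/lib/etcd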

Step 1:

Use the following commands to install Containerd on your system:

Install and configure prerequisites:

$ cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

$ sudo modprobe overlay
$ sudo modprobe br_netfilter

# Setup required sysctl params, these persist across reboots
$ cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# Apply sysctl params without reboot
$ sudo sysctl --system
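To confirm the modules are loaded and the sysctl settings took effect, you can run:

$ lsmod | grep -E 'overlay|br_netfilter'
$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward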

Step 2:

Now install kubectl, kubeadm, kubelet, and containerd:

# Update the apt package index and install packages needed to use the Kubernetes apt repository:
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl

# Download the Google Cloud public signing key:
$ sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

# Add the Kubernetes apt repository:
$ echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Add the containerd (Docker) apt repository:
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
$ echo "deb [arch=amd64] https://download.docker.com/linux/debian buster stable" | sudo tee /etc/apt/sources.list.d/docker.list

# Update the apt package index, install kubelet, kubeadm, kubectl, and containerd, and pin their versions:
$ sudo apt-get update
$ sudo apt-get install -y containerd
$ sudo apt-get install -y kubelet kubeadm kubectl
$ sudo apt-mark hold kubelet kubeadm kubectl
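Before moving on, it is worth confirming that everything landed with compatible versions:

$ kubeadm version
$ kubelet --version
$ kubectl version --client
$ containerd --version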

Step 3:

Configure containerd:

$ sudo mkdir -p /etc/containerd
$ containerd config default | sudo tee /etc/containerd/config.toml
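You can quickly check that the file was generated and locate the cgroup setting that Step 4 changes; if the line is missing from your containerd version's default config, you will simply add it by hand in the next step:

$ grep -n 'SystemdCgroup' /etc/containerd/config.toml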

Step 4:

Using the systemd cgroup driver

To use the systemd cgroup driver in /etc/containerd/config.toml with runc, set

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
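If you prefer not to edit the file by hand, and assuming the default config generated in Step 3 already contains a SystemdCgroup = false line, a one-liner like this flips it:

$ sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml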

Now restart the containerd service:

$ sudo systemctl restart containerd

Step 5:

Now install the crictl command-line tool:

$ VERSION="v1.22.0"
$ wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
$ sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
$ rm -f crictl-$VERSION-linux-amd64.tar.gz
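crictl needs to know which CRI socket to talk to; a small config file pointing it at containerd's standard socket path avoids passing the endpoint on every call, and an empty crictl ps listing confirms the connection works:

$ cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
$ sudo crictl ps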

Step 6:

Now set up our Kubernetes cluster with the commands below:

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
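If Docker is still installed alongside containerd, kubeadm may complain about finding more than one runtime; in that case you can point kubeadm init explicitly at the containerd socket (the --cri-socket flag exists in recent kubeadm versions, though the exact socket syntax accepted varies slightly between releases):

$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket unix:///run/containerd/containerd.sock

kubeadm init also prints a kubeadm join command with a token at the end of its output; save it if you plan to add worker nodes later.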

Step 7:

Install the Flannel CNI plugin:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
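You can watch the Flannel and CoreDNS pods come up before continuing; the namespace Flannel lands in depends on the manifest version, so searching all namespaces is the safe check:

$ kubectl get pods --all-namespaces | grep -E 'flannel|coredns'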

Step 8:

Adding a private registry to containerd:

  • Edit the file /etc/containerd/config.toml.
  • Insert your registry name, endpoint, and auth under the registry section, as shown below (the registry.com entries are placeholders for your own registry).
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = ""
  ...
  [plugins."io.containerd.grpc.v1.cri".registry.auths]
  [plugins."io.containerd.grpc.v1.cri".registry.configs]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.com".tls]
      insecure_skip_verify = true
    [plugins."io.containerd.grpc.v1.cri".registry.configs."registry.com".auth]
      auth = "cGl2b3RjaGFp"
  [plugins."io.containerd.grpc.v1.cri".registry.headers]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.com"]
      endpoint = ["https://registry.com"]
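The auth value is the base64 encoding of user:password for your registry. You can generate your own (myuser and mypassword here are placeholders) like this:

$ echo -n 'myuser:mypassword' | base64
# paste the resulting string into the auth = "..." field above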

Now restart the containerd service and check its status. If containerd does not start, check the config file again for mistakes and restart.

$ sudo systemctl restart containerd
$ sudo systemctl status containerd

Step 9:

Now check whether your node is in the Ready state and running the containerd runtime:

$ kubectl get nodes -o wide
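The CONTAINER-RUNTIME column of the wide output should now show a containerd:// version rather than docker://; you can also pull just that field with jsonpath:

$ kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.containerRuntimeVersion}'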

Now you can deploy your applications with ease.
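As a quick smoke test, you could for example run a throwaway nginx deployment and expose it (the deployment name and image are placeholders):

# on a single-node cluster, allow pods on the control-plane node first
# (on newer versions the taint key is node-role.kubernetes.io/control-plane)
$ kubectl taint nodes --all node-role.kubernetes.io/master-
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pods,svc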

Thank you 🙏

Hope you enjoyed reading the article… 🙂
