Setting Up a Production-Ready Kubernetes Cluster

Ahson Shaikh
Feb 25, 2024 · 5 min read


This article was written on 25/02/2024, so it uses the latest versions of the Kubernetes components available at that time. Note that Kubernetes dropped support for Docker Engine as a node container runtime in v1.24; the current release is v1.29.

Before we start working on the cluster, let me clarify the concepts of container runtimes and the CRI in Kubernetes.

The CRI (Container Runtime Interface) is a specification that defines how Kubernetes interacts with a container runtime. Kubernetes originally had a tight integration with Docker as its default container runtime, but it was later decided to decouple the two and make the design more flexible. Kubernetes can now talk to any container runtime that follows the CRI standard, which also means Kubernetes no longer integrates directly with any particular runtime.

I had been meaning to get started with Kubernetes for a long time, but procrastination is hard to deny, and with so many tools and concepts in the Kubernetes ecosystem it has been tough to keep track of them all. So today we will figure out which tool fits best for bootstrapping a Kubernetes cluster.

Don’t think of Kubernetes as a single thing. It is a combination of multiple processes that you need to install and configure separately, unless you use a Kubernetes management platform. But since you should be learning Kubernetes from scratch, we are going to use only the CLI to spin up the cluster and its nodes.

Prerequisites:

  1. Setting up machines: a t2.medium (2 vCPU & 4 GB) master node and a t2.small (1 vCPU & 2 GB) worker node.
  2. Loading up Modules and Tuning Network.
  3. Disabling Swap Memory
  4. Container Runtime: Containerd v1.7.13 (Latest)
  5. Kubeadm v1.29 (Latest)
  6. Network Plugin Addon (Calico)

We need to do this on every node (controller and worker).

Setting up Machines

We have set up two machines, one for the master and the other for the worker.

The link below lists the ports required for Kubernetes master-to-worker communication.

https://kubernetes.io/docs/reference/networking/ports-and-protocols/

So we made two security groups:

K8-Master-SG
K8-Worker-SG
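
For reference, at the time of writing the documented requirements come down to roughly the following inbound TCP ports (double-check the link above, as these can change between versions):

Master (control plane): 6443 (Kubernetes API server), 2379-2380 (etcd), 10250 (kubelet API), 10259 (kube-scheduler), 10257 (kube-controller-manager)
Worker: 10250 (kubelet API), 30000-32767 (NodePort Services)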

Loading up Modules and Tuning Network

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
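
To confirm that the modules are loaded and the sysctl values took effect, you can run:

lsmod | grep -e overlay -e br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward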

Disabling Swap Memory

sudo swapoff -a

Make sure you also make this persistent through /etc/fstab, as shown below.
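
A minimal way to do that is to comment out the swap entries, assuming they are plain lines containing "swap" (inspect your /etc/fstab before and after):

# Comment out every fstab line that mounts swap
sudo sed -i '/ swap / s/^/#/' /etc/fstab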

Container Runtime: Containerd v1.7.13 (Latest)

This is the official guide from the containerd GitHub repository, so please check for a newer version if this blog becomes outdated.

https://github.com/containerd/containerd/blob/main/docs/getting-started.md

We are installing containerd from the official release binaries, which requires two additional packages:

  1. runc
  2. cni-plugins

So, let's start.

Downloading Containerd Package:

wget https://github.com/containerd/containerd/releases/download/v1.7.13/containerd-1.7.13-linux-amd64.tar.gz
sudo tar Cxzvf /usr/local containerd-1.7.13-linux-amd64.tar.gz

Creating containerd.service file for systemd:

The directory path and the contents of containerd.service are shown below.

/usr/local/lib/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity

# Comment out TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

Now reload systemd and enable containerd:

sudo systemctl daemon-reload
sudo systemctl enable --now containerd
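
As a quick sanity check that the runtime is actually up (the ctr CLI ships with the containerd binaries):

sudo systemctl status containerd --no-pager
sudo ctr version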

Downloading Runc:

wget https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64
sudo install -m 755 runc.amd64 /usr/local/sbin/runc

Downloading CNI-Plugin:

wget https://github.com/containernetworking/plugins/releases/download/v1.4.0/cni-plugins-linux-amd64-v1.4.0.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.4.0.tgz

Create the config.toml file:

sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

You also need to set up containerd to use the systemd cgroup driver. Read why that's necessary here: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#systemd-cgroup-driver

Configuring the systemd cgroup driver

To use the systemd cgroup driver in /etc/containerd/config.toml with runc, set:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
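
If you prefer not to edit the file by hand, a sed one-liner can flip the flag; this assumes the default config generated above, so double-check the result. Either way, restart containerd afterwards:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd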



Kubeadm v1.29 (Latest)

sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# If the folder `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
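
Verify the versions and confirm the packages are held:

kubeadm version
kubectl version --client
kubelet --version
sudo apt-mark showhold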

Initializing Cluster

# Use a pod network CIDR that does not overlap your host network;
# 192.168.0.0/16 is Calico's default.
sudo kubeadm init --pod-network-cidr=192.168.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
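
The control plane should now answer kubectl requests. The master will report NotReady until a network plugin is installed, which is the next step:

kubectl get nodes
kubectl cluster-info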

Network Plugin Addon (Calico)

You can use any network plugin from the non-exhaustive list:
https://kubernetes.io/docs/concepts/cluster-administration/addons/#networking-and-network-policy

We are using Calico:

kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml
wget https://docs.projectcalico.org/manifests/custom-resources.yaml
nano custom-resources.yaml

Edit the ipPools section in the file to reflect your configuration: the cidr field must match the pod network CIDR you passed to kubeadm init (192.168.0.0/16 above), not the server's IP address.
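
For reference, the relevant section of custom-resources.yaml looks roughly like this (exact fields can vary between Calico releases, so treat this as a sketch):

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/16   # must match --pod-network-cidr
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()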

kubectl create -f custom-resources.yaml
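
You can watch the Calico components come up before joining any workers:

watch kubectl get pods -n calico-system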

Go to the worker node and join the cluster with the token generated by "kubeadm init".

Run the command below on the worker node:

sudo kubeadm join $IP:6443 --token $TOKEN --discovery-token-ca-cert-hash sha256:$HASH
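
If you no longer have the output of kubeadm init, you can regenerate the full join command on the master:

sudo kubeadm token create --print-join-command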

Verify the nodes:

kubectl get nodes -o wide

If you face any issue with the nodes, such as their status not showing Ready, run "kubectl describe nodes" to inspect them.
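
On the affected node, the kubelet logs are usually the next place to look:

sudo journalctl -u kubelet --no-pager -n 50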

I'm also providing a link to the repo. It contains the rough files that were needed during the process; they might help you.

https://github.com/MuhammadAhsanDonuts/K8_Setup

Thanks for reading → ❤
