
Kubernetes Installation

This is a short tutorial on installing Kubernetes v1.32 on Ubuntu 24.04.1 LTS.

For this environment, I am using 3 virtual machines in my local lab. These machines will have the following names and IPs:

srv-node-01 - 192.168.10.11
srv-node-02 - 192.168.10.12
srv-node-03 - 192.168.10.13

srv-node-01 will be the master (control plane) node and the other two servers will be worker nodes.

First things first, so make sure that all your servers are up to date. For that, run on all servers:
sudo apt update && sudo apt upgrade -y

This will update the apt cache and install the latest patches. Some packages may be held back because of Ubuntu's phased updates.

After the upgrade, reboot all servers.

The next step is to make sure that your servers can resolve each other's names. You can do this on your DNS server, if you have one, or locally on each server, in the /etc/hosts file. We will see how to do the latter.

On each server, edit /etc/hosts with your favorite text editor and add:
192.168.10.11 srv-node-01
192.168.10.12 srv-node-02
192.168.10.13 srv-node-03

Once this is done, all your servers will know each other by name. You can ping srv-node-01 from any server and you should get a reply.

Now that we have the base ready, we can go into the next phase.

SWAP

Since kubelet, by default, fails to start if it finds swap enabled, we will disable it. More info on:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#swap-configuration

To disable swap, we remove it from /etc/fstab. This can be done by manually commenting out the corresponding line in /etc/fstab, or magically by running:
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
Then, you can either reboot, or run:
sudo swapoff -a
This should be done on all nodes.
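If you want to see what that sed expression does before touching the real file, you can try it on a scratch copy first. The file name /tmp/fstab.test and its contents below are purely illustrative:

```shell
# Build a fake fstab with one swap entry (illustrative content only)
printf '%s\n' 'UUID=abcd / ext4 defaults 0 1' '/swap.img none swap sw 0 0' > /tmp/fstab.test

# Same expression as above: comments out any line containing " swap "
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab.test

cat /tmp/fstab.test
# the swap line is now prefixed with '#'; the root entry is untouched
```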

Kernel Modules and Parameters

Once again, this step should be done on all nodes.

To enable needed kernel modules, run:

sudo modprobe overlay
sudo modprobe br_netfilter

To make this permanent across reboots, create a file inside /etc/modules-load.d, for example kubernetes.conf, and add the modules to it. It should look like this:
cat /etc/modules-load.d/kubernetes.conf

overlay
br_netfilter
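Instead of opening an editor, the file can be written in one step with a heredoc (assuming the kubernetes.conf name used above):

```shell
# Write both module names to the modules-load file in one go
cat <<EOF | sudo tee /etc/modules-load.d/kubernetes.conf
overlay
br_netfilter
EOF
```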

For kernel parameters, add the following to /etc/sysctl.d/kubernetes.conf:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1


Run sudo sysctl --system to apply the parameters.
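Instead of editing the file by hand, it can be created and applied in one step (same parameters and file name as above):

```shell
# Write the kernel parameters and apply them without a reboot
cat <<EOF | sudo tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
```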

Containerd

Kubernetes deprecated Docker support as a container runtime in v1.20 and removed it in v1.24. As an alternative, we will use Containerd. This also needs to be done on all nodes.

More info: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd

First, we need to install the needed dependencies:
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates

Once we have all dependencies installed, we need to add the Docker repository, which provides Containerd. To do that, we run:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/containerd.gpg
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

This will add the correct repository for the Ubuntu version where this is being run.

Now we need to retrieve package information from the repository:
sudo apt update
And then, we can install Containerd:
sudo apt install containerd.io -y

With Containerd installed, we now need to do some changes.
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd-systemd

Generate a new config:
containerd config default | sudo tee /etc/containerd/config.toml

Configure the systemd cgroup driver. Edit file /etc/containerd/config.toml and find line:
SystemdCgroup = false
and replace it with:
SystemdCgroup = true

This can also be done with:
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

Also, replace the pause:3.8 image with pause:3.10.
In the same file, replace:
sandbox_image = "registry.k8s.io/pause:3.8"
with:
sandbox_image = "registry.k8s.io/pause:3.10"
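Both config.toml edits (the cgroup driver and the sandbox image) can also be scripted in one pass; this is a sketch of the same two replacements shown above:

```shell
# Flip the cgroup driver and bump the pause image in one sed invocation;
# '|' is used as the delimiter in the second expression to avoid escaping slashes
sudo sed -i \
  -e 's/SystemdCgroup = false/SystemdCgroup = true/' \
  -e 's|registry.k8s.io/pause:3.8|registry.k8s.io/pause:3.10|' \
  /etc/containerd/config.toml
```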

Restart Containerd:
sudo systemctl restart containerd

Kubernetes packages repositories – v1.32

Now we need to add the Kubernetes repository. v1.32 is the latest version available at the time of writing, so this is what we will use. To add the repository, we first need to download the public signing key for the Kubernetes package repository:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes.gpg
Then we can add the repository:
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

Kubernetes package install

Once we have the needed repository, we can proceed and install the Kubernetes packages. To do that, we run:
sudo apt update
sudo apt install kubelet kubeadm kubectl -y

It is recommended to hold these packages to avoid issues with unwanted updates:
sudo apt-mark hold kubelet kubeadm kubectl

Initialize Kubernetes Cluster

We are now ready to initialize the Kubernetes Cluster.

srv-node-01 is the master node, so this is where we run this command, and it is what we use as the control-plane endpoint. To initialize the cluster, run:
sudo kubeadm init --control-plane-endpoint=srv-node-01

This will generate some output and, if all goes well, some instructions to run to allow your user to control the cluster. These will be:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You might need to log out and log back in for this to take effect.

There will be more information at the bottom of the output on how to add more nodes to the cluster. Something like:
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
This can be run on the other nodes, to add them to the cluster.
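Note that the bootstrap token in that command expires after 24 hours by default. If it is lost or expired, a fresh join command can be generated on the master at any time:

```shell
# Prints a ready-to-use "kubeadm join ..." command with a new token
sudo kubeadm token create --print-join-command
```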

More info: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#more-information

Install Calico Network Add-on Plugin

If you now run kubectl get nodes, you will see the 3 nodes, but in NotReady state. This is because we need a network add-on plugin. For that, we will use Calico.

More info: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network

To install the add-on, run:
kubectl apply -f https://raw.githubusercontent.com/projectcalico/calico/v3.29.1/manifests/calico.yaml

Once this is installed, the nodes will be in Ready state shortly after. You can also list the pods created by the add-on:
kubectl get pods -n kube-system

Now we have a fully functional Kubernetes cluster installed, ready to roll!
