How to Install Kubernetes on Ubuntu 24.04 LTS


Kubernetes is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications.

Learning Kubernetes has become crucial for efficient DevOps as more and more organizations transition to microservices and use containers for application deployment and management. 

In this tutorial, we will discuss how to install and set up Kubernetes (K8s) on Ubuntu 24.04. However, before that, let us take a quick look at the prerequisites. 

Prerequisites for Installing Kubernetes on Ubuntu 24.04

Before diving into the installation, ensure you have the following:

  • Ubuntu 24.04 LTS installed and configured on your machine
  • At least 2 GB of free RAM (4 GB or more recommended) and at least 2 CPU cores on the master node
  • A user account with sudo or administrative privileges
  • Terminal or command-line access
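If you want to confirm these requirements before you begin, the following standard commands print the Ubuntu release, the available memory, and the CPU count on each machine:

# lsb_release -a

# free -h

# nproc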

How to Install Kubernetes on Ubuntu 24.04

Kubernetes is a powerful tool for managing containerized applications. Follow the steps below to install and set up Kubernetes on Ubuntu 24.04.

Step #1: Update and Upgrade the System

Before moving into the installation process, it is necessary to update your package repository. 

Execute the following command to update the package list and upgrade all installed packages.

# sudo apt update && sudo apt upgrade -y


Step #2: Install Docker

Once you have updated your package repository, the next step is to install Docker.

Docker is an open-source platform that automates the deployment of applications inside lightweight, portable containers.

Install Docker by running this command:

# sudo apt install -y docker.io


Now, verify the installation by running the following command, which prints the Docker version:

# sudo docker --version

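Optionally, you can also make sure the Docker service is running and enabled at boot:

# sudo systemctl enable --now docker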

Step #3: Install Kubernetes Components

Next, add the Kubernetes package repository. First, download the repository's public signing key into the apt keyring directory, then add the repository itself.

# sudo mkdir -p /etc/apt/keyrings

# curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" | sudo tee /etc/apt/sources.list.d/kubernetes.list

Once you have added the package repository, update package lists and install kubeadm, kubelet, and kubectl on each node. 

These three components work together to create and manage the cluster: kubeadm bootstraps the cluster, kubelet is the node agent that runs and manages containers on each machine, and kubectl is the command-line tool you use to interact with the cluster.

# sudo apt update

# sudo apt install -y kubelet kubeadm kubectl


Next, freeze the packages at their current version to avoid automatic updates. This is crucial for maintaining the stability and compatibility of the Kubernetes cluster.

# sudo apt-mark hold kubelet kubeadm kubectl

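To confirm that the three packages are now held back from automatic upgrades, you can list the current holds:

# apt-mark showhold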

Step #4: Disable Swap

Once you have locked the packages at their current version, disable Swap for every node.

Disabling Swap ensures that Kubernetes can effectively manage and allocate resources, providing a more stable and predictable environment for running containerized applications.

Run the following commands to disable Swap:

# sudo swapoff -a

# sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Note that these commands do not produce any output when Swap is successfully disabled.
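You can double-check that Swap is off by looking at the Swap line in the memory summary, which should report 0B:

# free -h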

Step #5: Initialize the Master Node

Now that you have disabled Swap, use kubeadm to initialize the Kubernetes cluster configuration on the master node.

# sudo kubeadm init --pod-network-cidr=10.244.0.0/16


Once the cluster initializes, kubeadm prints a message similar to the one below. Follow the commands in that message to configure kubectl, and save the kubeadm join command at the end; you will use it later to add worker nodes to the cluster.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 178.162.175.86:6443 --token jrbntb.3bx09qa373n82fze \
     --discovery-token-ca-cert-hash sha256:f5a5ac97236f1843b9296bd3026d503918ce96d3e940c343481592cd57edc0c9

Step #6: Configure kubectl for the Master Node

Next, set up the kubeconfig file for the user that will manage the cluster.

The kubeconfig file gives kubectl the credentials and API server address it needs, ensuring that this user has the necessary access and permissions to manage the Kubernetes cluster effectively.

Run the following commands to create the necessary directories and files. 

# mkdir -p $HOME/.kube

# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# sudo chown $(id -u):$(id -g) $HOME/.kube/config

To verify the cluster status, run the following command:

# kubectl get nodes


The STATUS column should display Ready. Note that the master node typically shows NotReady until you deploy a pod network in the next step. If it still does not become Ready after that, check the Kubernetes documentation to resolve the issue. Some probable causes include a lack of resources, kubelet issues, and network connectivity issues.
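If a node stays NotReady, the following commands usually reveal the cause (replace <node-name> with the name shown by kubectl get nodes):

# kubectl describe node <node-name>

# kubectl get pods -n kube-system

# sudo systemctl status kubelet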

Step #7: Install a Pod Network

Next, deploy a pod network to enable communication between your pods. 

This is a crucial step as the lack of a proper pod network can severely limit the functionality and reliability of the Kubernetes cluster.

We recommend deploying Flannel to set up the network between multiple nodes in the Kubernetes cluster.

# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


Once you have deployed the pod network, ensure all the nodes are active. 

# kubectl get nodes


Step #8: Install and Enable containerd

Next, ensure you have installed containerd on both master and worker nodes. If not, install it by running this command:

# sudo apt install containerd -y

Once you have installed containerd, start and enable it on both master and worker nodes.

# sudo systemctl start containerd

# sudo systemctl enable containerd

Next, verify the status of containerd.

# sudo systemctl status containerd

If the status shows active (running), you have successfully enabled containerd.
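Depending on your containerd version, kubeadm may also expect containerd to use the systemd cgroup driver. If you run into container runtime errors, a commonly used optional sequence is to generate a default config, enable SystemdCgroup, and restart the service:

# containerd config default | sudo tee /etc/containerd/config.toml

# sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# sudo systemctl restart containerd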

Step #9: Enable IP forwarding

Once you have successfully enabled containerd, set the ip_forward parameter to 1 on the master and worker nodes.

# sudo sysctl -w net.ipv4.ip_forward=1

This allows packets to be forwarded between nodes, enabling pod communication across the cluster.

Make this change permanent by adding or updating the following line in the /etc/sysctl.conf file.

# echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf

Then reload the sysctl settings for the change to take effect.

# sudo sysctl -p
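To confirm that the setting is active, read the parameter back; it should print 1:

# sysctl net.ipv4.ip_forward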

Step #10: Join Worker Nodes

On each worker node, execute the join command obtained during the master node initialization step. The command has the following form:

# sudo kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>


Replace <master-ip>, <master-port>, <token>, and <hash> with the actual values from the master node initialization output. This command joins the worker node to the Kubernetes cluster.
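If you have lost the original join command or the token has expired, you can generate a fresh one on the master node:

# sudo kubeadm token create --print-join-command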

To verify, execute the following command on the master node:

# kubectl get nodes


If the worker node has successfully joined the Kubernetes cluster, its status should display Ready.

You have now successfully installed Kubernetes.

Step #11: Deploy a Test Application

To check if you have successfully installed Kubernetes, deploy a basic NGINX application.

# kubectl create deployment nginx --image=nginx

# kubectl expose deployment nginx --port=80 --type=NodePort

Note: Exposing applications directly to the internet through NodePorts poses security risks. Consider all security implications before performing this step.

To access NGINX, you can either temporarily disable the firewall or allow the NodePort through the firewall.
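For example, if you are using UFW, you can allow either the specific NodePort shown by the service or the whole default NodePort range:

# sudo ufw allow 30000:32767/tcp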

Locate the NGINX service and its NodePort.

# kubectl get svc


Next, to access the NGINX application, navigate to http://<node-ip>:<node-port> in your web browser.

Replace <node-ip> with the IP address of one of your worker nodes and <node-port> with the actual NodePort number.
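You can also test the service from the command line; a quick curl request should return the default NGINX welcome page:

# curl http://<node-ip>:<node-port>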

Step #12: Check Cluster Status

Execute the following command to check the cluster status:

# kubectl cluster-info


This command displays general information about the Kubernetes cluster, including the control plane (API server) endpoint and the CoreDNS endpoint.

To check the status of the nodes run this command:

# kubectl get nodes -o wide


This command retrieves detailed information about each node in the cluster, including the node name, status, roles, age, Kubernetes version, internal and external IP addresses, OS image, kernel version, and container runtime.
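For a quick overview of everything running in the cluster, you can also list the pods across all namespaces:

# kubectl get pods -A -o wide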

Note: Using NodePorts for external access is suitable for testing purposes, but consider implementing a more secure approach like an Ingress controller for production environments.

Conclusion

Building a Kubernetes cluster on Ubuntu 24.04 involves several critical steps, ranging from installing Docker, containerd, and the Kubernetes components to initializing the cluster, joining worker nodes, and deploying applications.

By following this guide, you can successfully create and manage a Kubernetes cluster, enabling you to deploy, scale, and maintain your applications efficiently. This setup provides a strong foundation for working with containerized applications and improving your DevOps workflows.

FAQs

Q. What is the first step to start the installation process for Kubernetes on Ubuntu 24.04?
Update your package manager using the apt command with sudo apt update.

Q. Why should I use virtual machines for my Kubernetes cluster?
Virtual machines provide isolated and controlled environments for each node.

Q. How do I add the Kubernetes Apt repository?
After downloading and installing the key on your machine using the curl command, add the Kubernetes repository to your list of sources.

Q. How can I check the status of my nodes in Kubernetes?
Use kubectl get nodes to see the node status.

Q. What is the purpose of the pod network in Kubernetes?
The pod network allows pods running on different nodes to communicate with one another.

Q. How do I check if my nodes are in the ready status?
Run kubectl get nodes and check the STATUS column for Ready.

Q. How many CPU cores do I need for the master node in a Kubernetes cluster?
You need at least 2 CPU cores for the master node.

Q. How do I add additional nodes to my Kubernetes cluster?
Run the kubeadm join command on the new nodes using the token provided by the master node.

Q. What changes should I make to the Swap configuration?
Disable Swap with sudo swapoff -a and edit /etc/fstab to comment out or remove swap entries.

Q. What is the role of the kubelet service in Kubernetes?
The kubelet service manages the pods and containers on each node.

 

Jignesh J

Jignesh is a senior server administrator at RedSwitches. He keeps everything up & running while tackling advanced server management and high-availability cluster issues. He’s a big fan of blockchain and web security. When not at his terminal, he loves to work out and is a fitness freak. If you have any support issues, contact him at [email protected]
