Build Your Own Kubernetes Cluster on Ubuntu 22.04: A Step-by-Step Guide
Hey everyone! So, you’re looking to dive into the awesome world of Kubernetes and want to set up your own cluster on good ol’ Ubuntu 22.04, huh? That’s a fantastic move, guys! Setting up your own K8s environment is a super valuable skill, whether you’re learning, testing out new apps, or just want more control. Forget those complicated cloud setups for a sec; we’re going to get our hands dirty and build a solid cluster right on Ubuntu. This guide is gonna walk you through everything, making it as painless as possible. We’ll cover the nitty-gritty, from prepping your machines to getting your first pods running. So, buckle up, grab your favorite beverage, and let’s get this Kubernetes party started!
Prerequisites: What You’ll Need Before We Start
Alright, before we jump headfirst into creating our Kubernetes cluster on Ubuntu 22.04, let’s make sure we’ve got all our ducks in a row. Having the right setup from the get-go will save you a ton of headaches down the line. First off, you’ll need at least two machines – one that will act as your control plane (the brain of the operation) and at least one, preferably more, that will be your worker nodes (where your actual applications will run). These can be virtual machines or physical servers, whatever floats your boat. The key is that they should all be running Ubuntu 22.04 LTS (Jammy Jellyfish). Make sure they have decent specs – a minimum of 2GB RAM and 2 CPUs per machine is a good starting point, but more is always better, especially as you scale up. Don’t forget about network connectivity; your nodes need to be able to talk to each other seamlessly. A private network is ideal, but ensure they can reach each other over the internet if you’re setting this up remotely. Oh, and SSH access to all these machines is non-negotiable. You’ll be running commands, so make sure you can log in without any issues, preferably using SSH keys for added security and convenience. Lastly, you’ll need sudo privileges on all the machines, as we’ll be installing software and configuring system settings. So, double-check that you have these basics covered, and then we’re golden. Let’s move on to getting our nodes prepped!
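Optional, but it can save you pain later: here’s a quick pre-flight sketch you can run on each node to confirm the basics above. The IP at the end is just a placeholder – swap in the address of one of your other nodes:
# Check the OS release, available memory, and CPU count
lsb_release -a
free -h
nproc
# Confirm this node can reach the others (placeholder IP – use your own)
ping -c 3 192.168.1.11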
Step 1: Preparing Your Ubuntu 22.04 Nodes
Okay, team, let’s get our Ubuntu 22.04 machines ready for Kubernetes action. This initial prep work is super important for a smooth Kubernetes cluster setup. We need to make sure our nodes are configured correctly so that Kubernetes can work its magic without any hiccups. First things first, let’s update our package lists and upgrade any existing packages. Open up your terminal on each node (both control plane and worker nodes) and run:
sudo apt update && sudo apt upgrade -y
This ensures you’re running the latest software and security patches, which is always a good practice, especially when setting up critical infrastructure like a Kubernetes cluster. Next, we need to disable swap. Kubernetes doesn’t play nicely with swap enabled; it can cause performance issues and unpredictable behavior. So, let’s turn it off. On each node, run:
sudo swapoff -a
And to make sure it stays off after a reboot, you’ll want to comment out the swap line in your /etc/fstab file. Use your favorite text editor, like nano:
sudo nano /etc/fstab
Find the line that mentions swap and put a # at the beginning of it. Save and exit. Now, we need to enable some kernel modules and configure sysctl settings that Kubernetes relies on. Run these commands on all your nodes:
sudo modprobe overlay
sudo modprobe br_netfilter
sudo sysctl net.bridge.bridge-nf-call-iptables=1
sudo sysctl net.ipv4.ip_forward=1
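Quick aside: modprobe only loads these modules for the current boot. To have overlay and br_netfilter come back automatically after a reboot, you can list them in a modules-load.d file – the kubernetes.conf filename here is just a convention we’re assuming, mirroring the sysctl file we’ll create in a moment:
cat <<EOF | sudo tee /etc/modules-load.d/kubernetes.conf
overlay
br_netfilter
EOF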
The sysctl settings above are crucial for networking within your Kubernetes cluster, allowing pods to communicate effectively. To make them persistent across reboots too, create a new configuration file:
sudo nano /etc/sysctl.d/kubernetes.conf
Add the following lines to this file:
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
Save and exit. Then, apply these settings immediately with:
sudo sysctl --system
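If you want to double-check that everything stuck, here’s a quick verification sketch – both modules should show up, and both sysctl values should come back as 1:
lsmod | grep -E 'overlay|br_netfilter'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward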
Finally, we need to install a container runtime. Kubernetes needs something to run containers, and containerd is a popular and robust choice. Let’s install it on all nodes:
sudo apt install -y containerd
After installation, we need to configure containerd to use the systemd cgroup driver, which is what Kubernetes expects. One gotcha: depending on how containerd was packaged, the stock /etc/containerd/config.toml may be missing, nearly empty, or even ship with the CRI plugin disabled (disabled_plugins = ["cri"]), and kubeadm needs that plugin. The safest route is to regenerate a full default configuration first:
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
Then open the file:
sudo nano /etc/containerd/config.toml
Find the [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options] section and make sure SystemdCgroup = true is set (it defaults to false). Alternatively, you can replace the file’s contents entirely with a minimal configuration like this:
[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.9"
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
Save the file and restart containerd to apply the changes:
sudo systemctl restart containerd
And enable it to start on boot:
sudo systemctl enable containerd
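Before moving on, it’s worth a quick check that containerd is actually healthy and picked up the cgroup change – the grep simply confirms the setting we edited above:
sudo systemctl status containerd --no-pager
grep SystemdCgroup /etc/containerd/config.toml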
Whew! That’s a lot, but we’ve successfully prepped all our nodes. They’re now ready for the next crucial step: installing Kubernetes components!
Step 2: Installing Kubernetes Components (kubeadm, kubelet, kubectl)
Alright, now that our nodes are all prepped and looking sharp, it’s time to install the actual Kubernetes tools! We’ll be using kubeadm, kubelet, and kubectl. kubeadm is the command-line tool that helps us bootstrap a Kubernetes cluster. kubelet is the agent that runs on each node and ensures containers are running in a pod. And kubectl is our command-line interface for interacting with the cluster. Let’s get these installed on all of your nodes (control plane and worker nodes). We need to add the Kubernetes package repository first. Run these commands:
sudo apt-get update
# apt-transport-https may already be installed on your system, but we install it again just to be sure
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
# Download the public signing key for the Kubernetes package repository
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
# Add the Kubernetes apt repository
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
Now that the repository is added, let’s update our package list again to fetch the new information:
sudo apt-get update
And now, the moment of truth! Install kubeadm, kubelet, and kubectl. We’ll also pin the version to ensure we’re installing a specific, compatible version. For this guide, we’re targeting Kubernetes v1.29.x, but you can adjust this if needed. Let’s install the latest available v1.29 version:
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
The apt-mark hold command is important because it prevents these packages from being automatically updated when you run apt upgrade, ensuring your cluster stays on the version you intended. Finally, we need to ensure kubelet starts on boot and is ready to go:
sudo systemctl enable --now kubelet
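To confirm everything landed and the versions line up on the v1.29 series, here’s a quick check (the exact patch release in the output may differ from yours):
kubeadm version
kubelet --version
kubectl version --client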
And that’s it for the installation part! We’ve got the essential Kubernetes tools on all our machines. Now, the exciting part begins: initializing the control plane and joining the worker nodes.
Step 3: Initializing the Kubernetes Control Plane
Alright, this is where the magic really starts! We’re going to initialize our control plane node. Remember, this is the brain of your cluster. All the commands that follow should be run only on the machine designated as your control plane. First, let’s initialize the cluster with kubeadm. The main thing to decide on is the Pod network CIDR – a private IP address range that your pods will use for communication (you can also pin an exact version with the optional --kubernetes-version flag if you want to be explicit; just make sure it matches what you installed). A common choice for the Pod CIDR is 10.244.0.0/16 for Flannel or 192.168.0.0/16 for Calico. Let’s use 10.244.0.0/16 for this example, assuming we’ll use Flannel later.
Run the following command, replacing <YOUR_CONTROL_PLANE_IP> with the actual IP address of your control plane node:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --control-plane-endpoint=<YOUR_CONTROL_PLANE_IP>:6443 --upload-certs
Let’s break down that command a bit:
- --pod-network-cidr=10.244.0.0/16: This tells Kubernetes the IP address range for your pods. It’s essential for the network add-on you’ll install later.
- --control-plane-endpoint=<YOUR_CONTROL_PLANE_IP>:6443: This specifies the stable endpoint for your control plane. If you have multiple control plane nodes later, this would be your load balancer’s IP or hostname.
- --upload-certs: This flag encrypts and uploads the certificates needed for joining other control plane nodes (if you plan to add more later) and other components. It’s super handy for multi-control plane setups.
This command will take a few minutes to complete. kubeadm will set up the control plane components, including the API server, etcd, scheduler, and controller manager. Once it’s done, you’ll see some important output, including instructions on how to configure kubectl and a kubeadm join command.
IMPORTANT: Copy that kubeadm join command and save it somewhere safe! It contains a token and a discovery hash needed for your worker nodes to join the cluster. It looks something like this:
kubeadm join <YOUR_CONTROL_PLANE_IP>:6443 --token <SOME_TOKEN> --discovery-token-ca-cert-hash sha256:<SOME_HASH>
Now, to be able to use kubectl from your regular user account (not just sudo), you need to configure it. Run these commands on your control plane node:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This creates the .kube directory if it doesn’t exist, copies the admin configuration file, and sets the correct ownership so your user can access it.
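A quick way to confirm kubectl can actually talk to your new API server (and if you’d rather work as root, export KUBECONFIG=/etc/kubernetes/admin.conf does the same job as the steps above):
kubectl cluster-info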
Let’s verify that our control plane node is up and running. You can check the status of the control plane components:
kubectl get componentstatuses
(Note: componentstatuses is deprecated in newer versions, but can still be useful for initial checks).
You should also see your control plane node listed as ‘Ready’ (though it might be NotReady until you install a network plugin):
kubectl get nodes
It might show up as NotReady right now. Don’t sweat it! That’s because we haven’t installed a network plugin yet, which is our next critical step.
Step 4: Deploying a Pod Network Add-on
Okay guys, your control plane is initialized, but your nodes are still showing as NotReady. Why? Because Kubernetes needs a network plugin (also known as a CNI - Container Network Interface) to allow pods to communicate with each other and with services outside the cluster. Without a network, your pods can’t talk, and that’s a big no-no for a functional cluster! There are several popular options like Flannel, Calico, Weave Net, and Cilium. For simplicity and ease of use, we’ll go with Flannel in this guide. It’s lightweight and works great for most use cases.
First, ensure your kubectl is configured correctly (you should have done this in the previous step, but it’s worth double-checking). Then, apply the Flannel manifest using kubectl. You can usually find the latest manifest on the Flannel GitHub repository, but here’s a common version. Make sure the --pod-network-cidr you used during kubeadm init matches the one in the Flannel configuration (Flannel’s default is 10.244.0.0/16, which is exactly what we used).
Run this command on your control plane node:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
This command downloads the Flannel configuration file and applies it to your cluster. It will create the necessary pods (daemonsets) that run on each node to manage the pod networking. Give it a minute or two to deploy.
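You can watch the Flannel pods roll out like this – depending on the manifest version they land in either the kube-flannel or the kube-system namespace, so searching across all namespaces keeps it simple:
kubectl get pods -A -o wide | grep flannel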
Now, let’s check if our nodes are ready. Run this command again:
kubectl get nodes
Aha! You should now see your control plane node listed as Ready. If you had other nodes already joined (which we’ll do next!), they should also show up as Ready after the network plugin is up and running on them. Pretty cool, right?
If, for some reason, a node is still NotReady, don’t panic. Common reasons include:
- Firewall issues: Ensure that necessary ports are open between your nodes (check the Kubernetes documentation for the exact ports, or see the ufw sketch right after this list).
- kubelet not running: Double-check sudo systemctl status kubelet on the node.
- Container runtime issues: Ensure containerd is running and configured correctly.
- Incorrect Pod CIDR: Verify that the --pod-network-cidr used in kubeadm init matches the one configured in your network add-on.
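On that firewall point, if you happen to be running ufw, here’s a rough sketch of the ports the control plane typically needs open – adjust it to your environment, and note that 8472/udp is specific to Flannel’s VXLAN backend. Worker nodes need 10250/tcp plus the 30000-32767/tcp NodePort range:
# Control plane node (sketch – adapt to your own setup)
sudo ufw allow 6443/tcp        # Kubernetes API server
sudo ufw allow 2379:2380/tcp   # etcd client and peer communication
sudo ufw allow 10250/tcp       # kubelet API
sudo ufw allow 10257/tcp       # kube-controller-manager
sudo ufw allow 10259/tcp       # kube-scheduler
sudo ufw allow 8472/udp        # Flannel VXLAN overlay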
Once your control plane node is Ready and Flannel is deployed, you’ve successfully got the networking layer sorted for your cluster. This is a huge milestone!
Step 5: Joining Worker Nodes to the Cluster
We’re in the home stretch, folks! We’ve got our control plane humming along, and the networking is sorted. Now it’s time to bring our worker nodes into the fold. These are the machines that will actually run your applications (your pods and deployments). Remember that kubeadm join command we saved earlier from the kubeadm init output? This is where you’ll need it!
Log in to each of your worker nodes via SSH. Then, paste and run the kubeadm join command on each worker node. It will look something like this:
sudo kubeadm join <YOUR_CONTROL_PLANE_IP>:6443 --token <SOME_TOKEN> --discovery-token-ca-cert-hash sha256:<SOME_HASH>
If you lost your token or it expired (they usually last 24 hours), you can generate a new one on your control plane node with:
sudo kubeadm token create --print-join-command
This command will output a new kubeadm join command that you can use on your worker nodes.
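If you just want to check whether the original token is still valid before minting a new one, you can list the existing bootstrap tokens on the control plane node:
sudo kubeadm token list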
When you run the join command on a worker node, kubeadm will connect to the control plane’s API server, authenticate using the token and discovery hash, and configure the kubelet on that worker node to join the cluster. It will also download the necessary container images and set up the required components.
Once the join command has completed successfully on a worker node, switch back to your control plane node and run:
kubectl get nodes
You should now see your worker node(s) listed! Initially, they might show up as NotReady. This is normal! It usually takes a minute or two for the network plugin (Flannel, in our case) to be deployed to the new worker node via its DaemonSet, and for the kubelet on that worker to establish its connection and report its status. Keep running kubectl get nodes every few seconds until your worker nodes show Ready.
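Rather than re-running the command by hand, you can let it refresh itself – either with kubectl’s watch flag or the watch utility:
kubectl get nodes -w
# or
watch -n 5 kubectl get nodes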
Troubleshooting Worker Nodes:
If a worker node doesn’t become Ready after a few minutes:
- Check kubelet status: On the worker node, run sudo systemctl status kubelet. See if there are any errors.
- Check logs: Use sudo journalctl -u kubelet -f on the worker node to view live logs.
- Network connectivity: Ensure the worker node can reach the control plane IP and port (<YOUR_CONTROL_PLANE_IP>:6443). Check firewalls!
- Container runtime: Verify containerd is running on the worker node (sudo systemctl status containerd).
- Firewall rules: Ensure your firewall isn’t blocking traffic between nodes on necessary Kubernetes ports.
If none of that helps, the reset-and-rejoin sketch right after this list is often the quickest way out.
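Here’s that reset-and-rejoin sketch – run it on the misbehaving worker node only. It’s a blunt instrument (it wipes the node’s kubeadm state and CNI config), so treat it as a last resort:
sudo kubeadm reset -f
sudo rm -rf /etc/cni/net.d
# then re-run the join command (generate a fresh one on the control plane
# with: sudo kubeadm token create --print-join-command)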
Congratulations! If your worker nodes are showing Ready, you have officially built a functional Kubernetes cluster on Ubuntu 22.04! You’ve got a control plane and at least one worker node ready to accept workloads. This is a massive achievement!
What’s Next? Deploying Your First Application!
So, you’ve got your Kubernetes cluster up and running on Ubuntu 22.04 – awesome job, guys! But a cluster is only useful if you’re running something on it, right? The next logical step is to deploy your very first application. This is where you’ll start seeing the power of Kubernetes in action. You can deploy applications using simple YAML manifests that define Deployments, Services, Pods, and more.
Let’s create a simple Nginx deployment. Save the following content into a file named nginx-deployment.yaml on your control plane node:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # We want 2 instances of our Nginx pod
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
Now, apply this deployment to your cluster:
kubectl apply -f nginx-deployment.yaml
Check the status of your deployment and pods:
kubectl get deployments
kubectl get pods -o wide
You should see your nginx-deployment running with 2 replicas, and the pods should be in a Running state, with the -o wide output showing which worker node each one landed on. To make this Nginx deployment accessible from outside the cluster, you’ll need to create a Kubernetes Service. A LoadBalancer-type service is the usual choice in the cloud, but in a bare-metal setup like this you’d typically need an external load balancer solution like MetalLB for that, or you’d use NodePort or an Ingress controller instead. For a simple test, let’s use NodePort:
Save this to nginx-service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080 # You can choose a port in the 30000-32767 range
  type: NodePort
Apply it:
kubectl apply -f nginx-service.yaml
Now you can access your Nginx server by navigating to http://<ANY_NODE_IP>:30080 in your web browser. You should see the default Nginx welcome page!
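If you’d rather test from the terminal, a quick curl against any node’s IP (placeholder below) should return the Nginx welcome page HTML:
curl http://<ANY_NODE_IP>:30080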
Conclusion: Your Kubernetes Journey Begins!
And there you have it, folks! You’ve successfully navigated the process of setting up your very own Kubernetes cluster on Ubuntu 22.04. From preparing the nodes and installing the essential components to initializing the control plane, deploying a network add-on, and finally joining your worker nodes, you’ve accomplished a significant feat. This hands-on experience is invaluable for understanding how Kubernetes operates under the hood. Remember, this is just the beginning of your Kubernetes journey. From here, you can explore advanced topics like:
- Ingress Controllers: For managing external access to your services.
- Persistent Storage: Using Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) for stateful applications.
- Namespaces: To logically partition your cluster resources.
- Monitoring and Logging: Setting up tools like Prometheus and Grafana.
- CI/CD Integration: Automating deployments with tools like Jenkins or GitLab CI.
Keep experimenting, keep learning, and don’t be afraid to break things and fix them – that’s how you truly master this powerful technology. Happy container orchestrating!