Infrastructure · 10 min read

Kubernetes from Scratch in 2026: A Practical Guide to Building Your Own Cluster

Alex Ozhima
March 9, 2026

Cloud-managed Kubernetes (EKS, GKE, AKS) is convenient, but it comes with a price tag that doesn't always make sense — especially for startups and small teams running predictable workloads. A two-node cluster on Hetzner Cloud costs a fraction of what you'd pay AWS, and you get full control over the stack.

This guide walks through setting up a multi-node Kubernetes cluster from scratch on cost-efficient VPS servers. We'll use Ubuntu 24.04, containerd, kubeadm, and Cilium as our CNI — a modern, production-grade stack.

What you'll need

At minimum, two virtual or physical servers connected over a private network. Hetzner Cloud makes this straightforward through their console UI, but the same steps apply to any provider.

Create a private network connecting the servers: 10.11.0.0/16.

You need two servers:

  1. katlex-0 — runs the K8s control plane (10.11.0.2)
  2. katlex-1 — a worker node (10.11.0.3)

Use Ubuntu 24.04 on all machines. More workers can be added later by repeating the worker node setup.

Optional: create a storage volume (e.g., 500 GB) and attach it to a node for persistent data like Postgres.

Important: assign a public IPv4 to both nodes — without it, they can't reach the public internet, which will break package installs and many workloads. If you only run internal jobs (e.g., databases), you can add the public IP for installation and remove it afterward.

Make sure you can SSH into both machines before continuing.

Firewall rules

For a proper setup, allow the following:

Port        Protocol   Purpose
8472        UDP        VXLAN — node-to-node CNI traffic (private network only)
2379–2380   TCP        etcd
10250       TCP        kubelet
6443        TCP        Kubernetes API server
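With ufw (Ubuntu's default firewall frontend), the table above can be sketched as rules like the following. The 10.11.0.0/16 source range matches the private network created earlier; adapt it if your subnet differs:

```shell
# Restrict cluster-internal ports to the private subnet (assumes ufw).
ufw allow from 10.11.0.0/16 to any port 8472 proto udp        # VXLAN — CNI traffic
ufw allow from 10.11.0.0/16 to any port 2379:2380 proto tcp   # etcd
ufw allow from 10.11.0.0/16 to any port 10250 proto tcp       # kubelet
# The API server usually needs to be reachable more broadly, e.g. for remote kubectl:
ufw allow 6443/tcp
```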

Kubernetes quick 101

If you're already comfortable with K8s concepts, skip ahead to the setup section.

Kubernetes is a cluster orchestration system that grew out of Google's internal infrastructure. It uses Linux containers for process isolation and packaging, providing availability, reliability, and scalability for backend software.

Key concepts:

  • A pod is a bundle of containers sharing a single virtual network address (filesystems remain isolated between containers in the same pod)
  • A node is a Linux machine running a container runtime (like containerd) that Kubernetes orchestrates
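To make the pod concept concrete, here is a minimal illustrative sketch (names and images are arbitrary) of a two-container pod. Because both containers share one network address, the sidecar can reach nginx on localhost. It requires a running cluster, so treat it as illustration only at this point:

```shell
# Two containers, one pod, one shared network address (illustrative).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx
  - name: sidecar
    image: curlimages/curl
    command: ["sleep", "infinity"]
EOF

# The sidecar sees nginx on localhost, even though it's a separate container:
kubectl exec shared-net-demo -c sidecar -- curl -s http://localhost
```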

The main components:

Running outside the container runtime:

  • kubeadm — cluster setup tool
  • kubelet — agent on each node that starts and manages the pods assigned to that node
  • kubectl — CLI client for interacting with the cluster

Running inside the container runtime:

  • kube-proxy — routes traffic to pods (runs on every node)
  • kube-apiserver — the API that kubectl talks to
  • coredns — DNS server for resolving pod and service names
  • CNI plugin — provides the network layer for pods, including cross-node traffic
  • etcd — key/value store for cluster configuration

The API server, scheduler, controller manager, and etcd run on the control plane node. Worker nodes only need the kubelet, kube-proxy, and CNI components.

Initial setup (all nodes)

Run these steps on every node in your cluster.

Disable swap

swapoff -a

Comment out or delete any swap line in /etc/fstab.
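A one-liner can do the commenting for you; this sed pattern matches any fstab line with a `swap` field and leaves a backup behind:

```shell
# Comment out every fstab line with a swap field; a backup is kept at /etc/fstab.bak.
sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab
```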

Configure iptables bridged traffic

Edit /etc/ufw/sysctl.conf:

net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1

Install essential packages

apt-get update
apt-get install -y ebtables ethtool ca-certificates curl gnupg lsb-release

Enable required kernel modules

Create /etc/modules-load.d/99-k8s.conf:

overlay
br_netfilter

Load them immediately:

modprobe overlay
modprobe br_netfilter

Add bridge and IPv4 forwarding rules

Create /etc/sysctl.d/99-k8s.conf:

net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
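If you'd rather not wait for the reboot below, the same settings can be applied immediately:

```shell
# Load all sysctl config, including the new /etc/sysctl.d/99-k8s.conf:
sysctl --system

# Spot-check one of the values:
sysctl net.ipv4.ip_forward
```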

Reboot

shutdown -r now

Install containerd

Add the Docker repository (containerd is distributed through Docker's repos):

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/docker.gpg > /dev/null

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/trusted.gpg.d/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install and configure containerd:

apt-get update
apt-get install -y containerd.io

This next step is crucial — K8s requires SystemdCgroup enabled in containerd:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
service containerd restart
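It's worth confirming the substitution actually took effect before moving on:

```shell
# Should show "SystemdCgroup = true" after the sed above:
grep 'SystemdCgroup' /etc/containerd/config.toml

# Should print "active":
systemctl is-active containerd
```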

Install Kubernetes

Add the Kubernetes apt repository:

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | \
  sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/kubernetes-apt-keyring.gpg

echo "deb https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /" | \
  sudo tee /etc/apt/sources.list.d/kubernetes.list

Install and pin the packages:

apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl

Configure kubelet to use private network IPs

Create /etc/default/kubelet:

KUBELET_EXTRA_ARGS=--node-ip=10.11.0.X

Replace 10.11.0.X with the private IP of each node (check with ip addr).

systemctl daemon-reload
systemctl restart kubelet

Control plane setup

Run these steps on the control plane node only (katlex-0).

Initialize the control plane

Use the private network IP for the API server. Add any public IPs or domain names as extra SANs:

kubeadm init --apiserver-advertise-address=10.11.0.2 \
  --apiserver-cert-extra-sans="10.11.0.2,<PUBLIC_IP>,your-domain.com"

Save the join command from the output — you'll need it for worker nodes. It looks like:

kubeadm join 10.11.0.2:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

Set up kubectl access:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Tip: copy the kubeconfig to your local machine for remote cluster access. If you're connecting over a VPN, update the server IP accordingly.
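For example, from your workstation (the file path and name below are placeholders, not fixed conventions):

```shell
# Fetch the kubeconfig from the control plane:
scp root@<PUBLIC_IP>:/etc/kubernetes/admin.conf ~/.kube/katlex.conf

# The file points at the private API address; switch it to one of the
# extra SANs passed to kubeadm init (public IP or domain):
sed -i 's#https://10.11.0.2:6443#https://<PUBLIC_IP>:6443#' ~/.kube/katlex.conf

export KUBECONFIG=~/.kube/katlex.conf
kubectl get nodes
```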

Verify the cluster

kubectl get pods -A -o wide

At this point, coredns pods will show as Pending — that's expected, because we haven't installed a CNI plugin yet:

NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-668d6bf9bc-8hkbr           0/1     Pending   0          112s
kube-system   coredns-668d6bf9bc-mpc49           0/1     Pending   0          112s
kube-system   etcd-katlex-0                      1/1     Running   0          119s
kube-system   kube-apiserver-katlex-0            1/1     Running   0          119s
kube-system   kube-controller-manager-katlex-0   1/1     Running   0          119s
kube-system   kube-proxy-fb2j8                   1/1     Running   0          112s
kube-system   kube-scheduler-katlex-0            1/1     Running   0          2m

Install Cilium CNI

Cilium is a modern CNI plugin that manages networking, security, and observability using eBPF — a Linux kernel technology that runs sandboxed programs within the kernel without modifying its source code.

Why Cilium over Flannel or Calico?

  • Performance — kernel-level packet processing, no iptables overhead
  • Pod networking — assigns IPs and routes traffic between pods across nodes
  • Network policies — L3/L4/L7 firewall rules, including HTTP and gRPC-aware policies
  • Load balancing — can replace kube-proxy entirely using eBPF
  • Observability — deep traffic visibility through Hubble
  • WireGuard encryption — built-in encrypted node-to-node traffic
  • Service mesh — kernel-level enforcement without sidecars

Install the Cilium CLI and deploy:

curl -LO https://github.com/cilium/cilium-cli/releases/latest/download/cilium-linux-amd64.tar.gz
tar xzvfC cilium-linux-amd64.tar.gz /usr/local/bin
rm cilium-linux-amd64.tar.gz

cilium install --set k8sServiceHost=10.11.0.2 --set k8sServicePort=6443

Verify Cilium is healthy:

kubectl -n kube-system exec daemonset/cilium -- cilium status

At this point, the control plane should be fully operational and coredns pods should transition to Running.
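Optionally, Cilium ships an end-to-end check that exercises pod-to-pod, pod-to-service, and cross-node paths. It deploys test workloads into a temporary namespace and can take a few minutes:

```shell
cilium connectivity test
```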

Joining worker nodes

On each worker node, run the join command you saved from the kubeadm init output:

kubeadm join 10.11.0.2:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

If you lost the join command or the token expired, generate a new one from the control plane:

kubeadm token create --print-join-command

Verify the cluster

Back on the control plane, check that all nodes are Ready:

kubectl get nodes -o wide

Confirm all system pods are running across both nodes:

kubectl get pods -A -o wide

You should see Cilium, kube-proxy, and coredns pods scheduled on the worker node.
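As a final smoke test, deploy something real end to end (the names below are arbitrary):

```shell
# Deploy nginx and expose it inside the cluster:
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --port=80
kubectl wait --for=condition=available deployment/hello --timeout=120s

# Curl the service from a throwaway pod, then clean up:
kubectl run curl-test --rm -i --image=curlimages/curl --restart=Never -- \
  curl -s http://hello
kubectl delete service hello && kubectl delete deployment hello
```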

What's next

At this point you have a working multi-node Kubernetes cluster. From here you'll typically want to:

  • Set up an ingress controller (e.g., nginx-ingress or Traefik) for HTTP routing
  • Install cert-manager for automatic TLS certificates via Let's Encrypt
  • Configure persistent storage with a CSI driver or local-path-provisioner
  • Deploy monitoring with Prometheus and Grafana
  • Set up GitOps with ArgoCD or Flux for declarative deployments

The entire stack — two Hetzner VPS instances, private networking, and a storage volume — runs at a fraction of what managed Kubernetes costs on hyperscalers. For teams that want full control without the cloud premium, it's hard to beat.

Alex Ozhima

Founder & CEO at Katlextech
