
Is Kubernetes Good for a Homelab?
The question of whether Kubernetes is suitable for a homelab setup has been circulating for a while, and it’s one that recently caught my attention after watching Raid Owl’s video on Docker Swarm. In the video, he highlights Kubernetes’ complexity and suggests it might be overkill for most homelab enthusiasts. As someone who’s been working with Kubernetes clusters for over a decade, I found this perspective interesting—and it prompted me to reflect on whether Kubernetes really is too complex for home use.
Docker Swarm: Simplicity in Action
In his video, Raid Owl showcases how easy it is to get a Docker Swarm up and running with just a few commands. Here’s the process, including Docker installation for completeness:
# Install Docker
sudo apt update
sudo apt install -y docker.io
sudo systemctl enable docker
sudo systemctl start docker
# Initialize Docker Swarm
sudo docker swarm init
# On additional nodes, join the swarm
sudo docker swarm join --token <SWARM_JOIN_TOKEN> <MANAGER_IP>:2377
This streamlined setup is part of Docker Swarm’s appeal—it’s quick, simple, and gets the job done without much fuss.
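Day-two usage is equally terse. As a minimal sketch, here's how you'd verify the swarm and deploy a replicated service; the whoami image, service name, and ports are illustrative placeholders:
# Verify the nodes joined the swarm (run on a manager node)
sudo docker node ls
# Deploy a replicated demo service (image, name, and ports are placeholders)
sudo docker service create --name whoami --replicas 3 --publish 8080:80 traefik/whoami
# Check that the replicas are running
sudo docker service ps whoami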
Kubernetes: Complexity with Purpose?
Now, let’s compare that to RKE2 (Rancher’s Kubernetes distribution), which requires a few additional steps but still isn’t overly complicated:
# Install RKE2
curl -sfL https://get.rke2.io | sudo sh -
sudo systemctl enable rke2-server.service
sudo systemctl start rke2-server.service
# Retrieve the node token
sudo cat /var/lib/rancher/rke2/server/node-token
# On additional nodes, install the agent and join the cluster
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_TYPE="agent" sh -
echo "SERVER_URL=https://<SERVER_IP>:9345" | sudo tee /etc/rancher/rke2/config.yaml
echo "TOKEN=<NODE_TOKEN>" | sudo tee -a /etc/rancher/rke2/config.yaml
sudo systemctl enable rke2-agent.service
sudo systemctl start rke2-agent.service
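Once the server node is up, you can sanity-check the cluster with the kubectl binary and kubeconfig that RKE2 writes to disk:
# Run on the server node; RKE2 bundles its own kubectl and writes a kubeconfig
sudo /var/lib/rancher/rke2/bin/kubectl --kubeconfig /etc/rancher/rke2/rke2.yaml get nodes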
If you want something even closer to Swarm’s simplicity, K3s (another lightweight Kubernetes distribution) offers an ultra-simplified setup:
# Install K3s on the server
curl -sfL https://get.k3s.io | sh -
# Retrieve the node token from the server
sudo cat /var/lib/rancher/k3s/server/node-token
# On additional nodes, join the cluster
curl -sfL https://get.k3s.io | K3S_URL=https://<SERVER_IP>:6443 K3S_TOKEN=<NODE_TOKEN> sh -
The ease of setting up K3s makes it a compelling choice for those who want the power of Kubernetes without the overhead of a complex installation process. The trade-off is that K3s strips out some optional and legacy upstream components (such as the in-tree cloud providers) and defaults to SQLite instead of etcd on single-server installs, so feature parity and version compatibility with standard Kubernetes can be trickier.
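As with RKE2, you can verify the cluster with the kubectl that K3s bundles:
# Run on the server node; K3s wraps kubectl as a subcommand
sudo k3s kubectl get nodes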
Both Kubernetes solutions, RKE2 and K3s, come with a built-in ingress controller enabled by default. RKE2 includes a hardened NGINX ingress controller, while K3s opts for Traefik as its default.
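If you want to confirm which ingress controller you ended up with, a quick look at the kube-system namespace will show it; note that the exact pod names vary by distribution and version:
# Traefik (K3s) and rke2-ingress-nginx (RKE2) both run in kube-system
kubectl get pods -n kube-system | grep -iE 'traefik|ingress'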
The astute reader might notice that I didn't recommend or show the installation of “upstream” Kubernetes. There are options to deploy upstream Kubernetes using Kind, Minikube, and Kubeadm. However, I believe this is the area that gives Kubernetes its reputation for complexity. In my professional life, I've rarely dealt with upstream Kubernetes directly. More often, you'll find yourself working with managed solutions like GKE, EKS, or AKS, or with on-premise solutions such as OpenStack, Talos/Omni, or Rancher's RKE2.
So, Is Kubernetes Overkill?
Ultimately, both Docker Swarm and Kubernetes are solving the same problem: orchestrating containers at scale. Under the hood, both systems are similarly complex—they require databases to store clustering state, leader elections for masters, and mechanisms for balancing workloads across nodes. Docker Swarm successfully hides much of this complexity within Docker itself, making it appear simpler on the surface. However, these complexities still exist, and I’ve had my share of Swarm clusters fail due to network issues and leader-election problems.
I want to clarify—I don't dislike Docker Swarm. In fact, I think competition in the container orchestration space is a good thing, as it drives innovation and growth. I've used Swarm in the past and encourage others to explore it. However, when considering whether Kubernetes is “too much” for a homelab, I ultimately agree with Raid Owl's conclusion that it really depends on your goals.
For many of us, homelabs are more than just places to host small services for tinkering. They’re learning environments where we experiment with current technologies and hone skills that are relevant in our professional lives. Whether you’re a current or aspiring Administrator, SRE, Cloud Engineer, Network Engineer, or Software Developer, having hands-on Kubernetes experience is invaluable in the workforce.
In today’s tech landscape, Kubernetes is the dominant clustering solution in the industry. While some companies use Docker Swarm, they make up a rather small portion of the market. If you’re aiming to stay competitive in the job market, learning Kubernetes will provide you with more opportunities. For this reason, I recommend starting with and using Kubernetes in your homelab—and if you’re still curious, you can always circle back to Swarm as an additional skillset later.
What’s Next?
In an upcoming article, I'll walk through setting up a small Kubernetes cluster and show you how to deploy ArgoCD in an App of Apps configuration, pulling deployments from a Git repository. This is the exact method I've used to manage my homelab for years, and it's very similar to the configurations I've implemented in corporate environments at scale. This type of GitOps-driven workflow lends itself beautifully to a proper CI/CD implementation.
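To give a flavor of what that looks like, here's a minimal sketch of creating the root “App of Apps” Application with the argocd CLI; the application name, repository URL, and path are placeholders for your own GitOps repository:
# Create a root application that syncs a directory of child Application manifests
# (name, repo URL, and path below are illustrative placeholders)
argocd app create root-apps \
  --repo https://gitea.example.local/homelab/deployments.git \
  --path apps \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace argocd \
  --sync-policy automated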
Over the years, I've had to destroy and rebuild my homelab cluster multiple times, but getting back to a “steady state” now takes anywhere from a few minutes to an hour at most. Here's my typical workflow:
- Stand up a Kubernetes cluster using Ansible.
- Install ArgoCD.
- Deploy Gitea as my self-hosted Git solution (recovering from backup if necessary).
- Link ArgoCD to Gitea.
Once these steps are complete, everything else is handled automatically, ensuring a consistent, reliable homelab environment. I would ultimately like to optimize this workflow even further by implementing bare metal management in the environment, negating most of the need for Ansible.
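For reference, step 2 is usually nothing more than applying the upstream ArgoCD install manifests (shown here against the stable branch of the official repository):
# Install ArgoCD into its own namespace using the upstream manifests
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml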
Stay tuned for that deep dive—and as always, happy tinkering!