I'm two days behind on my Mastodon timeline because my K8s cluster project has been eating my brain. I probably should go to therapy instead. 🤣
#HomeLab #TalosLinux #Kubernetes
Aw man. Another rabbit hole.
This whole Talos/Kubernetes exploration is making me rethink my home lab DNS situation.
Edit: I've been using Pi-hole as my primary DNS with static hostnames, and I found out that K8s external-dns has support for its API, so now I'm trying to decide if I wanna keep doing that, or if I just daisy-chain with PowerDNS.
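For context, the external-dns side of that idea would look roughly like this; the Pi-hole address, secret name, and image tag below are placeholders for my setup, not a recommendation:

```yaml
# external-dns Deployment excerpt using the Pi-hole provider (values are placeholders).
containers:
  - name: external-dns
    image: registry.k8s.io/external-dns/external-dns:v0.15.0
    args:
      - --source=ingress
      - --provider=pihole
      - --pihole-server=http://192.168.1.2
      - --policy=upsert-only   # Pi-hole has no TXT registry, so keep it one-way
      - --registry=noop
    env:
      - name: EXTERNAL_DNS_PIHOLE_PASSWORD   # admin password for the Pi-hole API
        valueFrom:
          secretKeyRef:
            name: pihole-password
            key: password
```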
Well... Every time I create a repository on my self-hosted Forgejo I set the object format to sha256, because I figured anything modern should work fine with it.
It so happens that Flux only talks to repos in sha1 format. :picardfacepalm:
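In case anyone else hits this: a couple of plain git commands cover the check and the workaround (repo name is a placeholder). There's no in-place conversion between object formats, so the Flux-facing repos just get re-created with the default sha1 format and the history pushed again.

```sh
# Check which object format a clone is using (sha1 is the default, and what Flux expects).
git rev-parse --show-object-format

# Re-create the repo with sha1 and push the old history into it ("myrepo" is a placeholder).
git init --object-format=sha1 myrepo
```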
It's interesting when I go down this rabbit hole of learning new things: because of Talos I need to learn Talhelper (as opposed to Terraform), Cilium (as opposed to Calico/Flannel), LGTM (as opposed to Kube-Prometheus), and now I found out about Taskfile (as opposed to Makefile). My head is spinning. 😵
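Taskfile at least is a quick one to pick up; a bare-bones Taskfile.yml looks like this (the task names and commands are just my own sketch, not an official template):

```yaml
# Minimal Taskfile.yml sketch; tasks and commands are illustrative.
version: "3"

tasks:
  genconfig:
    desc: Render Talos machine configs from talconfig.yaml
    cmds:
      - talhelper genconfig

  apply:
    desc: Apply the rendered configs to the nodes
    deps: [genconfig]
    cmds:
      - talhelper gencommand apply | bash
```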
We know #TalosLinux is 🤏 but is it really the smallest?
We ran the tests. We've got the data. Check it out if you like numbers.
Watch → https://youtu.be/atPvnJMGdfs
Read → https://www.siderolabs.com/blog/which-kubernetes-is-the-smallest/
After a good night of sleep I realized I was unfair in my rant about Talos Linux: it's not their fault.
Setting up a basic cluster was easy. Doing the same with Talhelper was even easier.
But it took me hours to set up UEFI secure boot and TPM disk encryption. Talos doesn't have a native way to manage secrets, and their Terraform provider is very incomplete. Talhelper made it less bad, even though still not ideal.
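For reference, the TPM part of the machine config ends up looking roughly like this (field names per the Talos docs; it only works together with the SecureBoot installer image):

```yaml
# Talos machine config patch for TPM-backed disk encryption (sketch).
machine:
  systemDiskEncryption:
    state:
      provider: luks2
      keys:
        - slot: 0
          tpm: {}
    ephemeral:
      provider: luks2
      keys:
        - slot: 0
          tpm: {}
```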
Bootstrapping with extended security (encrypted local storage, privileged namespace exceptions, network firewalls) was very cumbersome to implement. Apparently it's supposed to be easier if you do it after bootstrapping.
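The "privileged namespace exceptions" bit, for what it's worth, is just the standard Pod Security Admission label on the namespaces that need it (the namespace name below is an example):

```yaml
# Allow privileged pods in one namespace while the rest of the cluster stays restricted.
apiVersion: v1
kind: Namespace
metadata:
  name: storage-system   # example namespace
  labels:
    pod-security.kubernetes.io/enforce: privileged
```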
So, as you can see, my problems are mostly because I'm paranoid, and I want to run a home lab with the same level of automation and security as a production environment.
I'm sure it's not supposed to be that hard for most people. Please don't get discouraged by my experience.
I'm still working on getting it up and running the way I want. I'm getting there.
Seriously... Building this Talos Kubernetes cluster on my local home lab machine is turning out to be a lot harder than building an Azure AKS cluster.
Complexity can creep into your infrastructure fast, and once it's there, it slows everything down.
Complex systems mean more effort, more stress, and more things that can break.
Simple, on the other hand, is reliable. Simple systems like Talos Linux and Omni can reduce maintenance time by up to 66%, giving time back to technologists and providing clearer oversight of your entire deployment.
https://www.siderolabs.com/blog/cut-kubernetes-infrastructure-costs-with-omni-and-talos-linux/
#Kubernetes #PlatformEngineering #TalosLinux #SRE #CloudOps
And why did I choose Talos Linux instead of k3s, minikube, or so many other ways to deploy Kubernetes? Very simple answer: immutable deployment + GitOps. I have a number of hosts that need to run apt/dnf update on a regular basis. As much as this can be automated, it is still tiresome to manage. I don't have to worry as much about an immutable host running a Kubernetes cluster, mostly because the bulk of the attack surface is in the pods, which can be easily upgraded by Renovate/GitOps (which is also something I miss on the hosts running Docker Compose).
Now the research starts. I know Kubernetes, but I don't know Talos Linux, so there's a lot to read because each Kubernetes deployment has its own quirks. Besides, I need to figure out how to fit this new player into my current environment (CA, DNS, storage, backups, etc.).
Will my experience become a series of blog posts? Honestly: most likely not. In a previous poll, the majority of people who read my blog posts said they're more interested in Docker/Podman. Besides, the Fediverse is already full of brilliant people talking extensively about Kubernetes, so I will not be "yet another one".
You will, however, hear me ranting. A lot.
3/3
The main reason for replacing my Proxmox host with a Kubernetes deployment is that most of what I have deployed on it is LXC containers running Docker containers. This is very cumbersome, sounds really silly, and is not even recommended by the Proxmox developers.
The biggest feature I would miss with that move is the ability to run VMs. However, so far I've only needed a single one, for a very specific test that lasted exactly one hour, so it's not a hard requirement. And that problem can be easily solved by running KubeVirt. I've done that before at work, and I've tested it in my home lab, so I know it is feasible. Is it going to be horrible to manage VMs that way? Probably. But like I said, they're an exception. Worst case, I can run them on my personal laptop with kvm/libvirt.
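For the curious, a KubeVirt VM is just another manifest; a minimal one looks something like this (disk image and sizing are placeholders):

```yaml
# Minimal KubeVirt VirtualMachine sketch; image and memory are placeholders.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: scratch-vm
spec:
  running: false          # start on demand with `virtctl start scratch-vm`
  template:
    spec:
      domain:
        devices:
          disks:
            - name: root
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: root
          containerDisk:
            image: quay.io/containerdisks/fedora:latest
```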
2/3
Quick talk about the future of my home lab. (broken out in a thread for readability)
After lots of thinking, a huge amount of frustration, and a couple of hours of testing, I am seriously considering replacing my Proxmox host with a Kubernetes deployment using Talos Linux.
This is not set in stone yet. I still need to do some further investigation about how to properly deploy this in a way that is going to be easy to manage. But that's the move that makes sense for me in the current context.
I'm not fully replacing my bunch of Raspberry Pis running Docker Compose. But I do have a couple of extra Intel-based (amd64/x86_64) mini-PCs where I run some bulkier workloads that need lots of memory (more than 8 GB). So I'm still keeping my promise to continue writing about "the basics", while probably also adding a bit of "the advanced". Besides, I want to play around with multi-architecture deployments (mixing amd64 and arm64 nodes in the same k8s cluster).
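The multi-arch part is mostly a scheduling exercise: the standard arch label does the heavy lifting. A tiny sketch (workload name and image are made up):

```yaml
# Pin a workload to arm64 nodes in a mixed amd64/arm64 cluster (names are placeholders).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: arm-only-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: arm-only-app
  template:
    metadata:
      labels:
        app: arm-only-app
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64
      containers:
        - name: app
          image: ghcr.io/example/app:latest
```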
1/3
📣 #TalosCon2025 is coming soon!
October 16-17, Amsterdam
Join for technical content, community learning, and real-world insights from the people building with #TalosLinux and Omni.
Get your early bird tix or submit your proposal now! https://taloscon.com/
Still wrestling with Kubernetes?
Join Justin Garrison from Sidero and Jorian Taks from TrueFullstaq for a real conversation on why Kubernetes management feels so unnecessarily complex, and what you can do about it.
Thu, July 17, 2025
16:00-17:00 CEST
Register: https://www.bigmarker.com/truefullstaq/stop-fighting-kubernetes-start-managing-it
This one's for the platform engineers, DevOps teams, and tech leads who want Kubernetes to just work.
#Kubernetes #PlatformEngineering #DevOps #CloudNative #TalosLinux
Just had an interesting issue with Talos Linux. The network interface names changed after I created the initial configuration. During an OS upgrade, the floating API IP was not assigned to the new `etcd` leader, resulting in a broken cluster.
Spun up a quick rescue box so I could work from within the VPC to reapply the corrected `MachineConfig`.
Fortunately, the worker nodes remained unaffected and continued to operate normally.
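The fix, roughly: stop referring to NICs by name and match them by MAC instead, keeping the floating API IP as a VIP (the MAC and IPs below are examples, not my real ones):

```yaml
# Talos machine config fragment: select the NIC by hardware address instead of
# its (unstable) name, and keep the shared API IP as a VIP.
machine:
  network:
    interfaces:
      - deviceSelector:
          hardwareAddr: "aa:bb:cc:dd:ee:ff"   # example MAC
        dhcp: true
        vip:
          ip: 192.168.1.100                   # example floating API IP
```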
If a team member spends just 5 hours per week wrangling infrastructure, that adds up to over 6 weeks of work by the end of the year. Your budget and your team deserve better.
Keep reading: https://www.siderolabs.com/blog/cut-kubernetes-infrastructure-costs-with-omni-and-talos-linux/
#Kubernetes #PlatformEngineering #TalosLinux #SRE
System Extensions are the primary way to extend Talos Linux beyond the bare minimum files and services to run Kubernetes. Here's how to create your own.
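The short version: an extension is an OCI image carrying a rootfs plus a small manifest, roughly this shape (the name, author, and versions below are illustrative):

```yaml
# manifest.yaml at the root of a Talos system extension image (illustrative values).
version: v1alpha1
metadata:
  name: hello-extension
  version: "1.0.0"
  author: Example Author
  description: |
    Example extension that ships an extra service into the Talos rootfs.
  compatibility:
    talos:
      version: ">= v1.7.0"
```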
Hathora set out to make infrastructure invisible for game studios building multiplayer titles. The deeper they went into cloud-based #Kubernetes, the more visible the limitations became.
So they pivoted.
https://dzone.com/articles/bare-metal-hybrid-multiplayer-infrastructure
"I had this idealized way of how we were going to manage #Kubernetes clusters. I would tell people, 'Don't SSH onto them.' And of course they would, and of course I would, and we'd have to change things."
#TalosLinux has no SSH. And it's for a reason.
https://thenewstack.io/no-ssh-what-is-talos-this-linux-distro-for-kubernetes
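Instead of a shell, everything goes through the API; the day-to-day equivalents via talosctl look like this (the node address is a placeholder):

```sh
# No SSH: API-driven equivalents via talosctl (node address is a placeholder).
talosctl -n 10.0.0.5 dashboard        # live overview of the node
talosctl -n 10.0.0.5 dmesg            # kernel log
talosctl -n 10.0.0.5 logs kubelet     # per-service logs
talosctl -n 10.0.0.5 services         # what's running on the host
```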
How did Nokia's NESC team build a fully open-source, L3-only Kubernetes platform on bare metal? Kai Zhang shares their story at KCD Helsinki 2025.
https://www.youtube.com/watch?v=TJThJT9Domk&list=PL09s8ZalKQe8in2Ypw7BHfTidIRpxN18w
Simple Kubernetes cluster management. Simple design. Simple pricing.
➡️ Let's say you have a massive worker node. A provider might charge $45k per machine per year. With 10 nodes, you pay $450k.
➡️ With a Sidero enterprise plan, you pay $1k per month for 10 nodes. In total, that's $12k a year.
Keep reading → https://www.siderolabs.com/blog/cut-kubernetes-infrastructure-costs-with-omni-and-talos-linux/