Shobhit Sharma
Posted on: January 2, 2026 at 01:00 PM

State of Homelab 2026

If you read my State of Homelab 2025 post, you know I’ve been making changes. This year I added new hardware, retired old gear, and learned hard lessons.

The biggest change was retiring my “Frankenstein machine” setup, a mix of laptops and Raspberry Pis that somehow worked together. I liked having that laptop in the mix since it was just sitting around, but managing it became difficult: it was an old system with no auto-start option in the boot menu, and it was starting to run hot.

I initially planned on getting mini PCs, but I wanted to expand the infrastructure with a proper NAS server: something that could run TrueNAS with ECC RAM. I decided on a Dell Optiplex 7060 SFF, got a good deal on a refurbished one, and it’s been running smoothly.

Overview

The heart of my homelab is the Dell Optiplex 7060 SFF, my workhorse. I grabbed four 16GB DDR4 RAM modules before “the great RAM apocalypse of 2025,” giving me 64GB total, plenty of headroom for everything I’m running. Storage includes a 512GB M.2 SSD for the OS, two 4TB HDDs for bulk storage, and two 500GB SATA SSDs for fast storage.

My Raspberry Pi fleet keeps running: two Raspberry Pi 4 Model B units with 8GB RAM each, plus two Raspberry Pi Zero 2 W boards for lighter workloads. These little workhorses are reliable. They just keep running, handling whatever I throw at them.

I also have my main tower PC (AMD Ryzen 7600X with 32GB DDR5 RAM) that I connect to the cluster when I need extra power. It has a 1TB M.2 SSD and a 500GB SATA HDD, perfect for heavy tasks that would stress the Pis. I use this as my main development and gaming machine.

Architecture

[Architecture diagram]

Everything centers around the Dell Optiplex running TrueNAS, which serves as our family NAS. With 64GB of RAM, I have plenty of headroom to run TrueNAS plus a virtual machine that hosts my k3s master node.

The architecture is simple: the k3s master node runs as a VM inside TrueNAS, and it orchestrates a cluster with two Raspberry Pi 4s as worker nodes. This three-node cluster runs 24/7, handling all my self-hosted services. It’s powerful and efficient, exactly what I need without overkill.
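For anyone curious how the Pis join the cluster, the worker-side setup is essentially the standard k3s install script run in agent mode. A minimal sketch, with a placeholder server hostname and token rather than my real values:

```bash
# On the k3s server VM, grab the join token:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each Raspberry Pi, install k3s as an agent pointed at the server.
# Replace the URL and token with your own values.
curl -sfL https://get.k3s.io | \
  K3S_URL=https://k3s-master.lan:6443 K3S_TOKEN=<node-token> sh -

# Back on the server, confirm the workers registered:
sudo k3s kubectl get nodes -o wide
```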

I’ve hardened the setup since last year. Security has been a major focus, and I’ve implemented both remote and cloud backups for critical data. Here’s what I’ve learned and what’s changed.

Learnings

3-2-1 Backup Strategy

I learned the hard way this year that the 3-2-1 backup strategy isn’t just a best practice. It’s essential.

I use Longhorn to manage storage on my k3s cluster. My setup replicates volumes across multiple nodes and takes persistent backups to an S3 bucket. Everything seemed fine until I updated to the latest version of Longhorn. That’s when things went sideways.
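Before getting into what went wrong, here’s roughly what that setup looks like in Longhorn terms: a StorageClass that asks for replicated volumes, plus a RecurringJob that ships backups to the S3 backup target configured in Longhorn’s settings. Treat this as a sketch; the names and schedule are placeholders, and the exact fields can vary between Longhorn versions.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated          # hypothetical name
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"              # keep a replica on more than one node
  staleReplicaTimeout: "30"
---
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: nightly-s3-backup            # hypothetical name
  namespace: longhorn-system
spec:
  task: backup                       # uses the S3 backup target from Longhorn settings
  cron: "0 3 * * *"                  # every night at 03:00
  groups: ["default"]
  retain: 7
  concurrency: 1
```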

After the update, I started noticing some services behaving oddly. A few pods were failing to start, and I couldn’t figure out why. When I dug deeper, I discovered that several volume replicas had become corrupted during the upgrade process. It wasn’t immediately obvious. The corruption was subtle enough that some volumes appeared fine while others were silently failing. Classic “everything’s fine until it’s not” scenario.

My backup strategy saved me. All my configurations, databases, and critical backups are stored in an S3 bucket replicated to both AWS and Cloudflare R2. This dual-cloud replication meant I had multiple copies of everything important, and I restored from backups without losing any critical data. Crisis averted, lesson learned.
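As for the dual-cloud part, one straightforward way to keep an AWS bucket mirrored to Cloudflare R2 is rclone. This isn’t necessarily how my exact pipeline runs; it’s just a sketch with made-up remote and bucket names:

```bash
# Assumes two rclone remotes are already configured:
#   aws-s3 -> the primary AWS S3 bucket
#   r2     -> the Cloudflare R2 mirror

# Mirror the primary bucket to R2 (one-way sync, deletions propagate)
rclone sync aws-s3:homelab-backups r2:homelab-backups --progress

# Spot-check that the mirror matches the source
rclone check aws-s3:homelab-backups r2:homelab-backups --one-way
```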

The experience reinforced my commitment to proper backups. The S3 strategy works for configurations, databases, and backups, but I don’t store large media files there. The costs would be prohibitive. I’m setting up a remote backup at my parents’ place. Once that’s complete, I’ll have true off-site backup for all my data, giving me peace of mind if disaster strikes.

k3s Quorum and High Availability

I run my homelab in high availability mode with three nodes and a quorum of two. The cluster can tolerate one node going down and still remain operational, a feature that’s saved me more than once.

HA isn’t without challenges. I’ve dealt with quorum loss a couple of times, and it’s as painful as it sounds. When you lose quorum (more than one node goes down or becomes unreachable), the cluster freezes to prevent split-brain scenarios. Debugging this is tricky because you’re dealing with a cluster that’s intentionally refusing to operate. It’s like trying to fix a car that’s designed not to start when something’s wrong: frustrating, but for good reason.

Node draining is another common challenge. When a node needs maintenance or is being removed, k3s needs to gracefully move all workloads off that node. Sometimes this process gets stuck, especially if pods have persistent volumes or specific node affinities. I’ve learned to be patient and check pod eviction policies carefully before attempting any node operations. Rushing it only makes things worse.
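For reference, a node drain with kubectl typically looks like this, shown with a placeholder node name:

```bash
# Stop new pods from landing on the node
kubectl cordon pi-worker-1

# Evict existing pods; DaemonSet pods stay, emptyDir data is discarded
kubectl drain pi-worker-1 --ignore-daemonsets --delete-emptydir-data --timeout=5m

# ...do the maintenance...

# Let the node accept workloads again
kubectl uncordon pi-worker-1
```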

The key lesson: always have a plan for quorum loss, and never drain nodes during critical operations. Document your recovery procedures. You’ll thank yourself later when you’re troubleshooting at 2 AM.

Sometimes Simple is Just Better

Not everything needs to run on Kubernetes. I found this out with my arr stack (Radarr, Sonarr, and friends).

I spent too much time trying to get it working on my k3s cluster. I experimented with SMB mounts on Kubernetes, tried using Longhorn for persistent storage, and wrestled with the complexities of running stateful workloads in a container orchestration system. With lots of media constantly flowing through the system, it was a constant battle to keep things stable. Every update felt risky.

I took a step back and asked myself: what value is k3s actually adding here? The answer: not much. So I moved the entire arr stack to run directly on Docker on the TrueNAS server, and the difference was immediate. The system is much more stable now, and I’m not constantly debugging storage issues or pod evictions. It just works.
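To give a sense of how simple the new setup is, here’s a trimmed-down sketch of the stack as a compose file, using the usual linuxserver.io images. The paths, IDs, and timezone are placeholders, not my actual dataset layout.

```yaml
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /mnt/tank/apps/sonarr:/config   # placeholder TrueNAS dataset paths
      - /mnt/tank/media:/data
    ports:
      - "8989:8989"
    restart: unless-stopped

  radarr:
    image: lscr.io/linuxserver/radarr:latest
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /mnt/tank/apps/radarr:/config
      - /mnt/tank/media:/data
    ports:
      - "7878:7878"
    restart: unless-stopped
```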

Sometimes the simplest solution is the best one. Not every service needs the complexity of Kubernetes, especially when you’re dealing with stateful systems that benefit from direct access to storage. The arr stack runs perfectly on Docker, and that’s good enough.

New Finds of 2025

I discovered several tools this year that made my homelab life easier. Here are the ones that stood out.

Arcane Docker Management UI

After years of using Portainer, I switched to Arcane. Portainer felt restrictive, with too many important features locked behind enterprise tiers. Arcane is completely free and open source, yet it feels more powerful and polished.

The UI is fast and intuitive, and the agent mode is the standout feature: managing my remote machines is easy, and I can monitor and control everything from a single interface without the complexity I ran into with Portainer. If you’re looking for a Docker management UI, Arcane is worth a try.

Authelia

I’ve had Authelia running for a while, but this year I went all-in on it. I migrated most of my services to use OIDC/LDAP authentication, and it changed how I manage access. Single sign-on works well, and the security benefits are obvious.
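Migrating a service mostly comes down to registering it as an OIDC client in Authelia’s configuration and pointing the app’s OAuth settings back at Authelia. A rough sketch of a single client entry, with placeholder names, secret, and URLs; exact key names vary a bit between Authelia versions:

```yaml
identity_providers:
  oidc:
    clients:
      - client_id: grafana                    # hypothetical service
        client_name: Grafana
        client_secret: '$pbkdf2-sha512$...'   # hashed secret, truncated here
        redirect_uris:
          - https://grafana.example.lan/login/generic_oauth
        scopes:
          - openid
          - profile
          - email
          - groups
        authorization_policy: two_factor
```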

The best part: my wife started using my self-hosted services without complaint. No more managing a dozen different passwords or dealing with authentication headaches. She logs in once, and Authelia handles the rest. That alone made the migration worth it. Sometimes the best metric for success is spousal approval.

Timetagger

As a freelancer, I’ve tried many time tracking tools: Tick, Clockify, Toggl, you name it. None of them clicked with how I actually work. Then I found Timetagger.

The UI is clean and simple, but what sold me is the tag-based approach. Instead of tracking time against specific tasks or projects, you track against tags. This flexibility means I can analyze my time data however I need, and it matches how I think about my work. Finally, a time tracker that works the way I work.

Vaultwarden

Vaultwarden is an open-source, self-hostable alternative implementation of the Bitwarden server, and it’s been rock solid for my family. We’ve been using it for a while now, and it’s become an essential part of our infrastructure.

The best part: it’s fully compatible with all Bitwarden clients, so we get the full Bitwarden experience while keeping all our passwords on our own infrastructure. If you’re looking to self-host a password manager for your family, try Vaultwarden. It’s one of those tools that just works, and you forget it’s even there, which is exactly what you want from a password manager.

AdGuard Home

After years of running Pi-hole on a Raspberry Pi 3B+, I switched to AdGuard Home. The decision came down to a few advantages: native support for encrypted DNS (DoH/DoT) for better privacy, a modern and intuitive web interface, and per-client configuration that lets me set different filtering rules for different devices.
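The encrypted upstream part boils down to a few lines in AdGuardHome.yaml (or the same values entered through the web UI). A sketch using public resolvers as examples; exact keys can differ slightly between versions:

```yaml
dns:
  upstream_dns:
    - https://dns.cloudflare.com/dns-query   # DNS-over-HTTPS
    - tls://dns.quad9.net                    # DNS-over-TLS
  bootstrap_dns:
    - 1.1.1.1
    - 9.9.9.9
```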

The transition was smooth, and I haven’t looked back. If you’re considering alternatives to Pi-hole or are new to network-wide ad blocking, AdGuard Home is worth checking out. The encrypted DNS alone is worth the switch.

Self Promotion

Wololo

I should mention my own project: Wololo. It’s a simple and efficient web-based Wake-on-LAN management tool built with Rust, designed for homelab environments.

What I love about it is that everything is config-driven, which means you can version control your entire Wake-on-LAN configuration. No more manually managing Wake-on-LAN setups across different machines. I’m planning to add more features this year, but in the meantime, if you have suggestions or want to contribute, open an issue or pull request.
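Under the hood, Wake-on-LAN is a pleasantly simple protocol: a “magic packet” of six 0xFF bytes followed by the target MAC address repeated sixteen times, broadcast over UDP (commonly to port 9). This isn’t Wololo’s actual code, just a minimal Rust sketch of the protocol with a placeholder MAC:

```rust
use std::net::UdpSocket;

/// Build and broadcast a Wake-on-LAN magic packet for the given MAC address.
fn wake(mac: [u8; 6]) -> std::io::Result<()> {
    // Magic packet: 6 bytes of 0xFF, then the MAC repeated 16 times (102 bytes total).
    let mut packet = vec![0xFFu8; 6];
    for _ in 0..16 {
        packet.extend_from_slice(&mac);
    }

    let socket = UdpSocket::bind("0.0.0.0:0")?;
    socket.set_broadcast(true)?;
    // Port 9 (discard) is the conventional target; port 7 also works on many NICs.
    socket.send_to(&packet, "255.255.255.255:9")?;
    Ok(())
}

fn main() -> std::io::Result<()> {
    // Placeholder MAC address of the machine to wake.
    wake([0xAA, 0xBB, 0xCC, 0xDD, 0xEE, 0xFF])
}
```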

Looking Ahead

Enhanced Backup Strategy

I’m working on a few things for the rest of 2026. First, I’m completing that remote backup setup at my parents’ place. Having true off-site backup will give me the final piece of a robust 3-2-1 backup strategy.

Introduce Headscale

I’m planning to introduce Headscale for better network management. I’m also continuing to harden my security posture. There’s always more to learn and implement, and I want to make sure my homelab remains both functional and secure. Security is one of those things where you’re never really “done,” but that’s part of what makes it interesting.

Expand Infrastructure

I’m watching for opportunities to expand or upgrade, but I’m trying to be strategic about it. The current setup is working well, so any changes will need to solve real problems rather than just adding complexity. I’ve learned my lesson about over-engineering things.

I also plan to upgrade my NAS CPU. Right now it has an Intel Xeon E3-1220 v5, and I’m looking at stepping up to an Intel Xeon E3-1240 v5 or E3-1240 v6. I’m keeping an eye on the used market for good deals.

Contribute to Open Source

I also plan to start contributing to open source projects this year. I’ve been using many open source projects and I’ve learned a lot from them. I want to give back to the community and help others. I started with Wololo and I have a new project coming soon.

Here’s to another year of learning, building, and occasionally debugging at 2 AM.

Open to Collaboration

Hey there! If you've got a new project brewing in your head, want to share something cool, or just feel like dropping a casual hi, please feel free to hit me up! It's always great to connect with folks who stop by, so don't be shy!