Why I Built a Home Server

Table of Contents
- The spark that pushed me over the edge
- What I actually needed from a home server
- The hardware I ended up using
- The network problems I had to work around
- The software stack I settled on
- Wins, headaches, and cost
- Lessons I wish I had learned earlier
- What I want to try next
- Getting started if you have an old machine
“Cloud bills hurt, and an old laptop in a drawer is free compute.”
That thought kicked off the rabbit hole that became my home server.
What started as an experiment with a spare laptop turned into a mini PC humming quietly beside the router, running more services than the paid VPS it replaced.
1 · The spark that pushed me over the edge
I had wanted a homelab for a while, mostly after seeing polished self-hosted setups online.
Three things kept me from starting sooner:
- Public internet access was annoying, since Indonesian ISPs love CG-NAT and block inbound ports.
- Electricity felt like a real concern, especially with stories about power-hungry storage boxes.
- Hardware was the last barrier, because I did not have anything rack-ready and Raspberry Pis were hard to find.
Then I remembered a dusty laptop with a cracked screen.
A RAM and SSD upgrade gave me a decent test box. After that, monthly cloud fees for slow, undersized VMs started to feel pretty hard to justify.
2 · What I actually needed from a home server
I was not trying to build a data center.
I just wanted a box that could handle a few practical jobs:
- Backups and storage, since cloud storage limits disappear fast on a student budget.
- A home automation hub, including Grafana dashboards for household power and Home Assistant for a few relays.
- An unlimited sandbox for Docker, Kubernetes, n8n, CI jobs, and other things the cloud kept throttling.
- Total control, including root access and the freedom to break things while learning.
3 · The hardware I ended up using
I eventually moved from that laptop to a lean mini PC.
The setup ended up looking like this:
- Intel i5-10400 mini PC: better thermals and power draw than the battered laptop.
- 32 GB RAM and 1 TB NVMe: enough headroom for VMs and ZFS cache.
- DIY UPS with a car-battery mod: around 8 hours of uptime during outages from PLN, the state power utility.
- MikroTik router: VLANs, fail-over WAN, and easy QoS.
It stayed small, quiet, and cheap enough to run 24/7, around 50 to 100 k IDR a month in electricity.
4 · The network problems I had to work around
The hardware was only half the story.
The network side needed a few workarounds:
- Tailscale for mesh VPN access and remote SSH without port forwarding.
- Cloudflare Tunnel for the services that actually needed to be public.
- Cloudflare WARP as a fallback when the ISP blocked something unexpected.
That stack let me keep most traffic private while still reaching the services I needed from outside the house.
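The remote-access setup above can be sketched in a few commands. This is a hedged outline, not my exact config: the tunnel name "homelab" and the hostname "app.example.com" are placeholders you would swap for your own.

```shell
# Join the Tailscale mesh; --ssh lets Tailscale broker SSH access,
# so no ports need to be forwarded through the CG-NAT.
sudo tailscale up --ssh

# For the few services that genuinely need to be public, route them
# through a Cloudflare Tunnel instead of opening ports on the router.
# "homelab" and "app.example.com" are placeholder names.
cloudflared tunnel login
cloudflared tunnel create homelab
cloudflared tunnel route dns homelab app.example.com
cloudflared tunnel run --url http://localhost:8080 homelab
```

Everything else stays reachable only over the tailnet, which is what keeps most traffic private.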
5 · The software stack I settled on
Proxmox VE became the base layer because the web UI was easier than bouncing through endless virsh commands.
Inside that, I ran Docker and k3s in VMs; clearly overkill for my needs, but useful for experimentation.
My regular stack included:
- PostgreSQL
- n8n
- Pi-hole
- Nextcloud
- MinIO
- Redis
- Jenkins
- OpenWebUI
- Grafana and Prometheus
- Nginx Proxy Manager
- Uptime Kuma
- a Minecraft server, mostly for weekend nostalgia
Monitoring came first, because what you cannot see eventually breaks.
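As an illustration of how little it takes to get that visibility, a minimal Prometheus and Grafana pair can come up from one short compose file. The image tags and port mappings below are generic defaults, not my actual configuration.

```shell
# Write a minimal two-service compose file, then start it detached.
cat > docker-compose.yml <<'EOF'
services:
  prometheus:
    image: prom/prometheus
    ports: ["9090:9090"]
  grafana:
    image: grafana/grafana
    ports: ["3000:3000"]
    depends_on: [prometheus]
EOF

docker compose up -d   # Grafana on :3000, Prometheus on :9090
```

From there, adding exporters and dashboards is incremental work rather than a big-bang project.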
6 · Wins, headaches, and cost
The best moments were simple.
The first Proxmox install booted cleanly on the old laptop. Grafana lit up with real-time power draw from my ESP32 and PZEM-004T power meter. Spinning up a new VM took about 60 seconds, not a credit card form.
The hard parts were just as real.
- CG-NAT got in the way, until Tailscale helped punch through it.
- Random port blocks forced me to lean on Cloudflare WARP.
- Double NAT, split DNS, and SSL for internal hostnames turned into long routing rabbit holes.
- Cooling mattered, both for hardware and for my electricity bill, so undervolting and shutting down idle VMs became routine.
Cloud invoices were painful. My home server power use landed around 50 to 100 k IDR per month, tracked through InfluxDB. Turning off unused VMs and spinning down disks kept it sane.
7 · Lessons I wish I had learned earlier
Three things would have saved me a lot of friction:
- Start small. One reliable box is better than a half-finished cluster.
- Monitor everything from day one. Troubleshooting blind is miserable.
- Automate power protection. A UPS plus graceful shutdown scripts saves a lot of pain.
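The graceful-shutdown idea can be sketched as a small script around NUT (Network UPS Tools). This is a sketch under assumptions: the UPS name "serverups" and the 20% threshold are placeholders, not values from my setup.

```shell
#!/bin/sh
# Hypothetical NUT-based shutdown helper. "serverups" is a placeholder
# UPS name and 20 is an arbitrary battery threshold.
THRESHOLD=20

should_shutdown() {
  # $1 is battery charge in percent; true when at or below the threshold.
  [ "$1" -le "$THRESHOLD" ]
}

# Only poll when NUT's upsc client is actually installed.
if command -v upsc >/dev/null 2>&1; then
  charge=$(upsc serverups battery.charge 2>/dev/null)
  if should_shutdown "${charge:-100}"; then
    logger "UPS at ${charge}%, shutting down gracefully"
    # shutdown -h now   # enable once the rest is verified
  fi
fi
```

Run from cron every minute or as a systemd timer, this closes the gap between "the UPS is dying" and "the filesystem is safe".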
8 · What I want to try next
The setup is still a playground for me.
Next on the list:
- A full Kubernetes cluster experiment
- Database replication playgrounds
- Hosting an AI stack, such as a local Llama model, backed by the homelab's object storage
9 · Getting started if you have an old machine
If you have an old PC collecting dust, that is enough to start.
Grab a USB stick, flash Proxmox, Ubuntu, or Unraid, and spin up one service you have always wanted to self-host. The learning beats any certification course, and you get to control every byte.
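For the flashing step, a typical approach on Linux looks like the commands below. Both `proxmox-ve.iso` and `/dev/sdX` are placeholders; double-check the device name with lsblk before writing, because dd will happily overwrite the wrong disk.

```shell
# Identify the USB stick first; writing to the wrong device destroys its data.
lsblk

# Write the installer image (Proxmox VE shown as an example) to the stick.
# Replace /dev/sdX with the device lsblk reported for your USB stick.
sudo dd if=proxmox-ve.iso of=/dev/sdX bs=4M status=progress conv=fsync
```

Once the stick boots, the installer walks you through the rest.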