Do what you can, with what you’ve got, where you are.
The Fox and the Cat
Aesop’s fox got so distracted by the many plans he had to escape from the hounds that he forgot the crucial point of actually doing something about it, and got caught. I’ve never found myself running away from dogs, but much like the fox, I often find myself thinking more about a project than actually doing it.
This is not without some advantages, as I rarely find myself unprepared to tackle a project, but at some point we have to climb the tree to flee the hounds.
So when I recently got some free time on a long weekend, I decided to finally get a backup server for my homelab up and running with whatever hardware I had lying around.
These are the relevant pieces of hardware I could find in my garage:
- A refurbished HP Erica 3 office PC.
- A 2U computer case that I literally found in the trash (trash bins at CERN are sometimes full of little treasures, but that’s for another day).
- A couple PCIe SATA adapters, same origin as the case.
- An assortment of 8x HDDs of different capacities, some from old computers, some donated by a friend who had gotten rid of his own server.
But… why bother with a homelab?
The more economically minded individuals might cringe at the realisation that what I’m trying to do here is ridiculously cost-ineffective. It would be cheaper to just use a PaaS/IaaS provider, or even to buy a regular server, unless we completely discard the cost of my own labor. And even if we do, a server with hand-soldered cables sitting in a garage can’t compete with enterprise cloud infrastructure. So why bother?
Well, because a homelab is a learning tool, not a low-cost hosting solution.
Even then, it’s fair to doubt what’s to be learned by hacking a server together like this, since you’d never (I hope) do something like this in a production environment.
But I find that it develops my systems thinking: being able to explore all kinds of solutions, even a priori ridiculous ones, lets me gain a kind of intuition that’s difficult to acquire without this practical experience. Black boxes become less black, and this skill set transfers beyond just servers.
Also, and more to the point of the current post, it teaches me to ship. Whatever I have lying around, that’ll have to do one way or another.
Now, back to the tinkering…
Getting the server into the box
The first challenge was to fit the motherboard into the 2U case. The Erica 3 is a custom form factor, which means the MicroATX/Mini-ITX holes in the case didn’t line up. I would have to drill a new set, only I don’t have a tap to thread the holes for the screws. But following the spirit of “doing what you can, with what you’ve got, where you are”, I drilled plain holes in the case and secured the screws in place with washers and nuts. It turned out ok!

Getting the motherboard onto the long screws wasn’t that easy, but I didn’t have shorter ones, and the whole point was to make do with what was available here and now.
The next problem was the PSU: of course, it wasn’t 2U compatible either. In fact, it was so much smaller that I would have needed to build some sort of adapter to fit it into the huge PSU slot.
Again, that would have meant delaying the whole thing, so I started looking for alternative solutions, and it was then that I realised that the distance between the PSU’s screws was pretty much exactly four PCI slots wide… and since the motherboard is so tiny that it doesn’t even reach the PCI slots of the case, I just screwed the PSU into the PCI placeholders (see pic below).

I removed two PCI placeholders to make space for the PSU fan. Ok, it’s hackish, but look at the picture on the left: the fit looks almost intentional! Also, it’s a lot sturdier than it looks, I promise!
But that wasn’t the end of the problems caused by using a custom form-factor PC as a server. HP designed the original case so that, when pressed, the power button would push a switch located directly on the motherboard. There is no Power SW cable!
I considered resorting to WoL from a Raspberry Pi or something, but I figured not having to open up the case to manually power cycle the server was probably a good idea, so I went ahead and soldered a Power SW cable directly onto the motherboard. Which was just as well, because, as it turns out, WoL doesn’t work on this model.

I have no idea what the purpose of that slot in the board is, but it sure was handy to pass the newly soldered Power SW cable to the front!
Getting it running
I installed Proxmox, a KVM hypervisor with LXC support, because it makes it very easy to tinker with VMs, networking, backups, etc., and it provides a nice overview in the web interface. It also has native ZFS management capabilities, which is a plus.
Following the 3-2-1 backup rule, my plan was to have a /data directory mounted on an SSD and shared through NFS, which would be backed up to the local ZFS pool as well as to a remote server.
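As a minimal sketch of the sharing side (the subnet and export options are placeholders for whatever fits your own network, and this assumes the stock kernel NFS server rather than anything Proxmox-specific):

# /etc/exports -- share the SSD-backed /data directory with the LAN
/data 192.168.1.0/24(rw,sync,no_subtree_check)

exportfs -ra   # reload the export table after editing /etc/exports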
I created the ZFS pool in the hypervisor and shared it with an AlmaLinux VM, and then configured borg for the backups:
borg init --encryption=repokey /pool/backup/
borg init --encryption=repokey ssh://[email protected]:/backup/
And created a systemd service and timer to launch this sync script once per day:
#!/bin/bash
# Back up the /data directory into both borg repositories,
# using the current date as the archive name.
LOCAL_BACKUP=/pool/backup
REMOTE_BACKUP=ssh://[email protected]:/backup/

borg create --verbose --stats --compression lz4 \
    "${LOCAL_BACKUP}::$(date +%Y-%m-%d)" /data
borg create --verbose --stats --compression lz4 \
    "${REMOTE_BACKUP}::$(date +%Y-%m-%d)" /data
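The systemd side is just a oneshot service triggered by a daily timer; a minimal sketch, assuming the script above is saved as /usr/local/bin/backup-sync.sh (unit names and path are illustrative):

# /etc/systemd/system/backup-sync.service
[Unit]
Description=Daily borg backup of /data

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup-sync.sh

# /etc/systemd/system/backup-sync.timer
[Unit]
Description=Run backup-sync.service once per day

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

After a systemctl daemon-reload, enabling it with systemctl enable --now backup-sync.timer schedules the daily run.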
And here’s where the last issue arose. Even though I was only really using the ZFS pool once per day, the HDDs kept spinning, and the power usage reflected that. A lot has been said about whether or not it’s a good idea to keep HDDs always spinning, but with current energy prices, and given that I literally only use them once per day, I really wanted to keep them off.
In theory hdparm -S 60 /dev/sdX
should make them spin down after 5 minutes of inactivity (the -S value is in multiples of 5 seconds), but it didn’t work. I thought ZFS might be doing some housekeeping in the background, but zpool iostat -Hy
showed no activity at all. Perhaps the kind of HDDs that I had randomly found were not meant to be used in this manner?
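(A handy way to check what a drive is actually doing, by the way, is to query its power state, which doesn’t wake it up:)

hdparm -C /dev/sda   # prints "drive state is: active/idle" or "standby"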
Anyway, I again decided to timebox the issue, and what I came up with was a script that manually puts the HDDs on standby if there’s no activity in the pool:
# ... config omitted
function process_line {
    local line=$1
    local rw_count time_diff current_time

    # Columns 4 and 5 of `zpool iostat -H` output are the read and write operation counts
    rw_count="$(echo "${line}" | awk '{print $4 $5}')"

    if [ "$rw_count" == "00" ]; then
        # ...
        if [ "$pool_status" != "inactive" ] && [ "$pool_status" != "disks_on_standby" ]; then
            pool_status="inactive"
            echo "pool inactive"
        fi
        current_time=$(date +%s)
        time_diff=$(( (current_time - last_activity) / 60 ))
        if [ "$time_diff" -ge "$MAX_IDLE_MINUTES" ] && [ "$pool_status" != "disks_on_standby" ]; then
            echo "pool inactive for $time_diff minutes"
            echo "sending standby command to disks"
            for DISK in /dev/sd{a,b,c,d}; do
                hdparm -y "$DISK"   # put the drive into standby (spin down) immediately
            done
            pool_status="disks_on_standby"
        fi
    else
        if [ "$pool_status" != "active" ]; then
            pool_status="active"
            echo "pool active"
        fi
        last_activity=$(date +%s)
    fi
}

# Stream one stats line per interval and evaluate it as it arrives
zpool iostat -Hy "$POOL_NAME" "$EXECUTION_INTERVAL_SECONDS" | while IFS= read -r line; do
    process_line "$line"
done
That did the trick! Power consumption went from 40W with all the HDDs spinning to just 19W when idle.
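One way to keep the monitor running across reboots is a small systemd service; a minimal sketch, assuming the script is saved as /usr/local/bin/zfs-idle-monitor.sh (the name and path are illustrative):

# /etc/systemd/system/zfs-idle-monitor.service
[Unit]
Description=Spin down ZFS pool disks after a period of inactivity
After=zfs.target

[Service]
ExecStart=/usr/local/bin/zfs-idle-monitor.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target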
The Lesson
I hope that this time I was closer to the cat than to the fox! And, following the fable theme, here are the lessons from this weekend’s adventure:
- “Perfect is the enemy of good.” It’s very tempting to try to solve all the problems we come across along the way rather than the ones we set out to solve in the first place. But stick to the plan and don’t shave the yak!
- Timeboxing is a useful tool. If you’ve read The Principles of Product Development Flow or are familiar with related methods like TPS, you’ll be wary of deadline-driven project management. But that’s not to say timeboxing isn’t useful: it prevents perfectionism, it makes progress measurable (at the very least you’ll have learned what can be achieved in a given unit of time), and it reduces procrastination, since it makes the commitment to the task less daunting.