The next challenge I’ve decided to take on in my homelab journey is to replace one of my business’s remote servers hosted at OVH with a custom server hosted in my rack here at home. While technically this server will be separate from all homelab/non-production components, there is enough overlap that I don’t mind lumping it into the homelab category. After all, the primary purpose of my homelab is to support production in my business.
As part of that separation, I’ve ordered a /29 block of IP addresses, 6 of which I’ll be able to use (the gateway sits on a separate address outside the block). Technically I could use all 8 if I were okay with not being able to reach the neighboring addresses in the surrounding /25 block, but I’m not sure I want to do that yet. (On second thought, the whole block was routed to me without any of the addresses being assigned to an interface, so I could probably just NAT each address to a host and use all 8 without compromising routing at all.)
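If I do go that route, the per-address NAT could look roughly like this with nftables on the edge router. This is a sketch only; every address below is a documentation placeholder (203.0.113.8/29 standing in for the routed block, 10.0.50.x for hypothetical DMZ hosts), not my real assignments.

```sh
# Sketch only -- 203.0.113.8/29 stands in for the routed /29,
# 10.0.50.x for made-up DMZ hosts.
nft add table ip nat
nft add chain ip nat prerouting '{ type nat hook prerouting priority dstnat; }'
nft add chain ip nat postrouting '{ type nat hook postrouting priority srcnat; }'

# One pair of rules per public address: DNAT inbound traffic to the
# internal host, SNAT its replies back out with the matching public IP.
nft add rule ip nat prerouting ip daddr 203.0.113.8 dnat to 10.0.50.8
nft add rule ip nat postrouting ip saddr 10.0.50.8 snat to 203.0.113.8
nft add rule ip nat prerouting ip daddr 203.0.113.9 dnat to 10.0.50.9
nft add rule ip nat postrouting ip saddr 10.0.50.9 snat to 203.0.113.9
```

Because the block is routed to me rather than configured on-link, no interface ever claims the network or broadcast addresses, which is what would make all 8 mappable.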
All business traffic will come through these IPs into my DMZ, with each address assigned to a specific machine. To start with, they’ll all point at the same server, bridged to distinct LXC containers within it. Over time I’ll reassign them as I add more prod servers to the rack.
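For the container side, here’s roughly the shape I have in mind, assuming LXD-managed containers and an existing host bridge (br0 here) attached to the DMZ uplink; the container name, image, and addresses are all made up for illustration.

```sh
# Sketch, assuming LXD and a host bridge br0 already attached to the
# DMZ uplink; "prod-web" and the image are placeholders.
lxc launch ubuntu:24.04 prod-web
lxc config device add prod-web eth0 nic nictype=bridged parent=br0 name=eth0
# Inside the container, one of the public /29 addresses is then configured
# statically (e.g. via netplan), with the upstream gateway as its default route.
```

Bridging rather than NATing here means each container sees its public address directly, so reassigning an IP later is just a matter of moving the static config to a different container or machine.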
On the hardware side…I can’t believe how long it’s been since I last built a computer. There were so many things I’d forgotten how to do, only remembering them mid-build. What I expected to take about two hours ended up taking closer to four, with several returns and exchanges along the way.
Everything in the server build was new except the motherboard, a Supermicro E-ATX board with IPMI/BMC that I sourced from eBay. The hardest part was getting the PSU. I originally opted for a Thermaltake GF3 850-watt PSU, but it was lost en route from Amazon and there were no replacements available, so I had to step up to the 1000-watt model. I doubt this machine will ever come close to drawing that much power, even with all the drive bays filled, but the extra capacity doesn’t hurt.
Once the server was finished and racked, I installed Ubuntu, reset the BMC admin password, and it was ready to configure. Rather than work through my base server runbook by hand, I decided to convert it into an Ansible playbook, and since I know nothing about Ansible beyond what people use it for, I’ll probably put together a write-up on that as well once I’m done.
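As a preview of where that write-up is headed, here’s a minimal sketch of the runbook-as-playbook idea; the host group, user, and tasks are illustrative stand-ins, not my actual runbook steps.

```yaml
# base-server.yml -- a sketch, not the real runbook; every name below
# (host group, user, tasks) is a stand-in.
- hosts: prod_servers
  become: true
  tasks:
    - name: Apply pending package updates
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist

    - name: Create the admin user
      ansible.builtin.user:
        name: admin
        groups: sudo
        append: true
        shell: /bin/bash

    - name: Disable SSH password authentication
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PasswordAuthentication'
        line: PasswordAuthentication no
      notify: Restart sshd

  handlers:
    - name: Restart sshd
      ansible.builtin.service:
        name: ssh
        state: restarted
```

From there, configuring a new box is a single `ansible-playbook -i inventory base-server.yml` instead of a manual checklist, which is the whole appeal.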