Modern 1U Server Build for Hosting

Alright… after waffling on it for a while, I think I’ve decided to actually go ahead with a 1U server build to colo in a semi-local datacenter. This will replace almost all my cloud hosting, and it also lets me offer some backup services to friends and family. With the internet heading in a direction where distributed, federated services have gone from a novelty to what seems like a really darn good idea, and with cloud providers happily kicking people out of hosting entirely, self-hosting a box seems worth considering again. Even if I bloody well hate legacy sysadmin and was happy to be done with it… sigh

So, in no particular order, use cases for this server:

  • Remote data backup. There’s stuff I’d like to keep mirrored remotely, and I expect other people I know wouldn’t mind some backup too. Years ago I built an rsync-hardlink-based backup rotation system, and I’ll probably re-implement it if I can’t find something that already does the job (there’s a rough sketch of the idea right after this list). Figure a few TB of storage for this.
  • Hosting various services. Conversation would move over, as would my Matrix Synapse homeserver. These would get more RAM, and I could also more easily spin up some dev instances to mess with (I’m particularly interested in Dendrite as a Go based Matrix homeserver, should be an awful lot lighter to run than the Python-based Synapse, which is a heavy bloated pig). Several VMs for this.
  • Hosting some church services, similarly in VMs. I’d like to move our church off Slack, and RocketChat integrates nicely with our stuff. I’m also hosting a Jitsi instance that still gets some use, and would be nice to have moved off DigitalOcean. They’ve got some funky jitter.
  • Either being a video reflector or a video transcoder for livestreaming. In the simple case, reflecting an h.264 stream out to a few endpoints; in the more complex case, doing some transcoding. See below for questions here.
  • A general trusted point for VPN services to come in from mobile devices. I don’t trust first hops very much, though with the utter lack of travel, it’s been less of an issue.
  • Assorted other services and traffic proxying that would make some use of bandwidth.
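On the backup rotation bullet above: the rsync hardlink trick is basically --link-dest, where files unchanged since the previous snapshot get hardlinked instead of re-copied, so every snapshot looks like a full copy but only the deltas take new space. A rough sketch, with made-up host and paths:

#!/bin/sh
# Rotating hardlink snapshots with rsync --link-dest (hypothetical host and paths).
SRC="user@client.example.com:/home/"
DEST="/backups/client"
TODAY=$(date +%Y-%m-%d)
rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY"
ln -sfn "$DEST/$TODAY" "$DEST/latest"   # point 'latest' at the new snapshot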

As I’m trying to de-Intel my life, because I don’t like what they’re doing and no longer trust their chips, that practically leaves AMD as the option. If ARM options meaningfully existed in my price range, I’d consider one, but at this point, for a production server, I think x86 is still the right answer. AMD processors are fine, but they do leave me without a hardware transcoder…

I’m eyeballing this server:

https://www.asrockrack.com/general/productdetail.asp?Model=1U4LW-X470#Specifications

It supports plenty of disk: I could boot from an NVMe mirror, put some big drives in for bulk storage, and, if I wanted, add some 2.5" drives for “large fast storage.” Or just some SATA SSDs for that purpose. Not sure if I need them or not.

The system supports a single-slot PCIe expansion card, and here’s where I’m looking for some advice. If I could fit a reasonably affordable hardware-transcode-capable device in there, that would be awesome - but I can’t find many single-slot devices that would actually fit (some of the “single slot GPUs” have a thicker cooler, and others don’t support hardware transcode). Does such a thing exist?

As for the main storage array, I’m planning on a set of four 3.5" drives in RAID6, mostly because that gives me two drives’ worth of storage and an awful lot of redundancy, including (I think) the ability to figure out which drive has glitched a sector during a regular array scrub. I like my RAID6 arrays, because they just work…
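For reference, a minimal sketch of that array with mdadm (device names are placeholders; the periodic scrub is the “regular array scan” I mean):

# four drives in RAID6: two drives of capacity, two of parity
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
# kick off a scrub/check pass over the whole array
echo check > /sys/block/md0/md/sync_action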

CPU and RAM: I’m probably stuck with the Ryzen 3000-series (Zen 2) chips, because the 5000-series (Zen 3) just isn’t meaningfully available yet, and I’m not sure the increased CPU performance really matters for anything but the possible software transcoding. And even then, with enough cores, I’m probably fine…

Given that RAM is relatively cheap, I’ll stuff in either 64GB or 128GB of ECC, because… hey, RAM solves most known issues.

Any specific advice on disks? I know the new big drives are helium-filled; no idea if they’re any good or not…

On a more general software stack level, if you’re going for federated services, why not go whole hog with PeerTube?

Right now I’m working on spinning up a Hubzilla instance (low priority, and it’s built for LAMP vs. the LEMP stack I’m running), as it’s pretty cross-compatible.

Didn’t know it was a thing. But as much as I hate video, I might throw a TB or two at it.

Ouch. Spec’d out what I was thinking about, came to a rather eye-wateringly large sum.

Maybe I’ll back off on the RAM and storage space a bit…

Keep an eye on what you need and where you can expand - for example, 2x32GB is better than 4x16GB, since the former leaves DIMM slots free for a later upgrade.

Depending on the costs, it may be cheaper to go with a 2U - it depends on how your colo bills. A 2U chassis usually allows multiple full-height cards.

You might start “small” with an off-lease 1U and replace it when performance gets to be an issue.

Depending on the colo, power may not be a factor at all - often it’s bundled and “free,” so you can run the most power-hungry equipment you can find.

And again it may be much cheaper to transcode before upload on local equipment.

nods RAM is definitely something I’m thinking of pulling back a bit on. Do I need 128GB? Well, probably not…

A 2U vs a 1U is an extra $75/mo. All in, for hosting, a 1U with 25Mbit committed (bursting to 100Mbit) runs me about $120/mo. Which is… still high, but better than $200/mo. Power is included, yes, but I’m not sure I want to run vintage hardware for this. I intend to keep it there a while.

The problem here is that our upload is just poor. I can’t upload multiple streams very easily and still maintain quality. The church has about 15.5Mbit up; I don’t like to push it past 12, as I see problems there, and I’d rather keep it around 8. Splitting that into multiple bitrates starts eating into bandwidth quickly.
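Which is most of the argument for the reflector idea above: push one stream up from the church and let the colo box fan it out to the endpoints. With ffmpeg that’s roughly the tee muxer; a sketch with placeholder URLs and keys:

# relay a single incoming stream to several endpoints without re-encoding
ffmpeg -i rtmp://localhost/live/church -map 0 -c copy -f tee "[f=flv]rtmp://service-a.example/live/KEY1|[f=flv]rtmp://service-b.example/live/KEY2"

Something like nginx-rtmp would do the same job as a long-running service; either way, only one ~8Mbit stream has to leave the building.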

I’m playing with a few concepts, but cutting out some of the overkill might make sense. Not like it’s hard to wander over to Boise and swap out a CPU or add RAM.

Alright, I figured some stuff out. A bit less RAM, a bit less drive, stuff came down to where I wanted. Parts ordered.

… for a guy who claims not to like modern hardware, I’ve sure ordered a lot of it in the last 5 years. :confused:

Would you be willing to share the final configuration?

People at work are “cloud addicted,” but I’m still absolutely certain we could buy the server we “rent” for one month’s rent and then just have to pay for colo…

Yeah, I’ll document the build once it’s worked out.

Mostly, 12 cores of AMD Zen 2 (can’t find a Zen 3 anywhere), 64GB RAM, and some 8TB drives for bulk storage, with an NVMe boot mirror.

woot.

Boxes arrived. :smiley:

Oooh. I have a blag post to write.

I can’t easily find benchmarks on the performance difference on 4k-sector drives between 512e (emulated 512-byte sectors) and native 4k sectors. With emulated 512-byte sectors, the drive has to do a read-modify-write to change a single 512-byte sector, so under heavy small-write loads it could be half the performance of native 4k sectors (which are written in one operation). I’ll have to try!
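If anyone wants to reproduce the comparison, a quick and dirty version with fio would look something like this (the device name is a placeholder, and writing to a raw disk is destructive, so only on a blank drive):

# random 512-byte writes vs random 4k writes; direct I/O keeps the page cache out of it
fio --name=rmw-512 --filename=/dev/sdX --rw=randwrite --bs=512 --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based
fio --name=rmw-4k --filename=/dev/sdX --rw=randwrite --bs=4k --direct=1 --ioengine=libaio --iodepth=32 --runtime=60 --time_based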


Yeah, if at ALL possible you want the OS to know what the “real” sectors are (and/or align with the actual sectors) - there’s lots of material on tuning filesystems in that area.

I did it for my drives:

tank ~ # zdb -C  tank | grep ashift
                ashift: 12
tank ~ # zdb -C  root | grep ashift
                ashift: 9

2^12 = 4k (2^9 = 512)
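Worth noting that ashift is baked in when a vdev is created, so on a new pool you’d force it up front; something like this (pool and device names are placeholders):

# force 4k-native alignment at pool creation, whatever the drives report
zpool create -o ashift=12 tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd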

Well, server assembled, minus the hardware transcode GPU, which isn’t here yet.

Time to build a few more network cables so I can actually get it on my LAN and start playing! :smiley:

Watch out, you might like it so much you need another one to go in the actual colo… :wink:

I’m good, I have a very similar homeserver I can upgrade if I need more capacity locally. It only has 32GB of ECC, but it’s working for what I need.

I’ve been playing with my new 4-port 10Gbit MikroTik, and man, it’s nice.

Insane that I have devices on the same network that are three orders of magnitude apart in speed (10Mbit vs 10Gbit).

I really, really like Mikrotik gear. I’ve been using it for over 15 years and while the learning curve is stiff on the routers, if you can dream it you can do it, and they really do just sit there and work.

I set one up for my parents and friends in CA before I got married and moved away - they’re still running just fine half a decade later. Forgot they were there.

Aaaand, it locks up quickly when idle. Wonderful.

Going through the normal AMD Linux idle troubleshooting steps. :confused:

  • Set Power Supply Idle Current to “Typical” and disabled global C-states.
  • Disabled DRAM Power down mode.
  • Set processor.max_cstate=1 (basically, don’t let cores drop into the deep idle states, which is fine in a DC… beats a box locking up; there’s a kernel command line sketch after this list).
  • Installed 5.8 kernel.
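For reference, the max_cstate limit is just a kernel command line parameter; on a GRUB-based system it ends up looking roughly like this (the config regeneration command varies by distro):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="processor.max_cstate=1"
# then regenerate the config and reboot
update-grub   # or grub2-mkconfig -o /boot/grub2/grub.cfg, depending on distro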

Letting it sit now for testing…

// EDIT: That didn’t work. sigh

Server is getting returned.

I can’t get it to idle for more than an hour or so, despite disabling all the C6 state stuff I can find. I don’t know if it’s a CPU issue, fundamental CPU design issue, mainboard issue, or what, but I cannot trust it, and I will not rack a box I can’t trust. :confused:

I guess I pay for Intel hardware, since it’ll probably at least work. :frowning: