Dream NAS - Overkill

The main idea is to create something that is very flexible in terms of configuration but still uses (mostly) standard PC components and standard form factors. Upgradeability goes hand in hand with flexibility.

Upgradeability is key component of future proofing. Whilst I may eventually need a rack based solution, a moderately powerful desktop NAS should last me for 3-5 years if not more.

Linked to upgradeability is using older, slower components in some places to reduce cost. These will be replaced as funds become available.

This is a "dream machine".  

I've named the putative device "Overkill".

Use cases

It's a high-performance NAS for a couple of workstation-class machines that have high-speed (>10GbE) networking. Workloads tend to be database and data crunching rather than video.

It's also a NAS and media server for lower specification desktops, laptops and other devices.

Especially with an upgraded CPU, lots of RAM and fast storage, it would be good for running virtual machines.

It would also be a fast host/target system for Software Defined Storage (SDS) development.

The motherboard might make a good basis for a small form factor lower end workstation/server. I suspect it could be very popular.

Hardware

CPU

Selecting a desktop processor over a mobile/embedded one is mostly about the upgrade route.

The choice between Intel and AMD processors is pretty easy.

Ryzen CPUs support ECC, whereas for Intel's Core offerings support is restricted to certain SKUs in combination with a (more expensive) chipset.

The most modern Ryzen CPUs offer 24 PCIe Gen 5 lanes (plus 4 x Gen 4 for the chipset). Intel's equivalents offer 16 Gen 5 and 4 Gen 4 lanes (plus 8 x Gen 4 for the chipset). 

The ideal CPU would be a Ryzen 3 9000 desktop CPU. This doesn't currently exist but there are rumours that AMD will release one. Until one does arrive, the Ryzen 5 7600 is the lowest cost option.

There's also the upgrade route to Epyc 4004 CPUs.

Ryzen 9000 and Epyc CPUs have integrated GPUs that don't use PCIe lanes and are capable of transcoding.

Chipset

The B650E chipset is the obvious choice.

Motherboard form factor

mATX is preferred over ITX because it allows for four PCIe slots without the need for adapters. Also, because Overkill will have eight drive bays, there's no need for a smaller motherboard.

RAM slots

With upgradeability in mind, four slots are better than two. 64GB ECC UDIMMs don't exist as yet, but 32GB ones do, so four slots allow 128GB today. Lots of RAM enhances RAID performance and is useful with VMs.

PCIe configuration

One PCIe Gen 5 x16 slot supporting x8, x8 and x4, x4, x4, x4 bifurcation. This slot is intended for a PCIe to 4 x NVMe adapter. 

Whilst it is unlikely that anyone would want to install a x16 graphics card in a NAS, it would be possible. 

One PCIe Gen 5 x8 slot supporting x4, x4 bifurcation. This is intended for a higher speed network card/storage controller. It could be used for a graphics card, so it should be physically x16. It could also be used with a PCIe to 2 x NVMe adapter.

Two PCIe Gen 4 x4 slots attached to the chipset for less demanding cards. The x4 slots should be physically x8.

Less demanding cards could include SAS/SATA HBAs/expanders/extenders for external HDD enclosures.
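As a rough sanity check that this layout fits the lane counts quoted earlier, here is a small Python tally (the slot widths are the ones proposed above; the 24 + 4 split is the Ryzen figure mentioned in the CPU section):

    # Rough PCIe lane budget for the proposed slot layout (assumed figures).
    cpu_gen5_lanes = 24           # Ryzen general-purpose Gen 5 lanes
    chipset_downlink_gen4 = 4     # CPU-to-chipset link (Gen 4)

    cpu_slots = {"x16 slot (bifurcatable)": 16, "x8 slot (physically x16)": 8}
    chipset_slots = {"x4 slot A (physically x8)": 4, "x4 slot B (physically x8)": 4}

    print(f"CPU lanes used: {sum(cpu_slots.values())}/{cpu_gen5_lanes}")
    print(f"Chipset slot lanes: {sum(chipset_slots.values())}, "
          f"sharing a x{chipset_downlink_gen4} Gen 4 downlink")

One consequence worth noting: both chipset slots share the single x4 downlink, so they can't both run at full bandwidth simultaneously.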

Onboard SAS controller

AMD Ryzen chipsets support a maximum of 8 SATA III ports; 4 is more usual with the lower-end chipsets. Some NAS device motherboards have integrated SATA controllers in addition to the chipset's own ports.

Rather than a SATA controller, an onboard eight-port SAS-3 controller attached to the chipset could be used. It may cost slightly more than a SATA controller, but it has a lot of advantages, including the potential use of SAS expanders and external drive enclosures to increase capacity, even if the drives used are SATA.

HDD storage offers a lot of capacity, but SATA and SAS SSDs also exist. SAS-3 (12 Gb/s) is twice the speed of SATA-III (6 Gb/s).

The chipset's onboard SATA ports are mostly intended for mirrored SATA SSDs to act as boot drives, but the remaining two ports could be used for additional storage.

I have a bunch of 3TB SAS HDDs I'd like to reuse.
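Some back-of-the-envelope numbers suggest eight spinning drives are nowhere near the limit of an eight-port SAS-3 controller; the per-drive figure below is an assumption of roughly 200 MB/s sustained for those 3TB SAS HDDs:

    # Rough throughput check: 8 HDDs behind an 8-port SAS-3 controller.
    sas3_lane_gbps = 12      # SAS-3 per lane, Gb/s
    hdd_mb_s = 200           # assumed sustained throughput per 3TB SAS HDD

    hdd_total_mb_s = 8 * hdd_mb_s                    # aggregate from eight drives
    sas3_lane_mb_s = sas3_lane_gbps * 1000 / 10      # rough usable MB/s per lane

    print(f"8 HDDs aggregate: ~{hdd_total_mb_s} MB/s")
    print(f"One SAS-3 lane: ~{sas3_lane_mb_s:.0f} MB/s, 8 lanes: ~{8 * sas3_lane_mb_s:.0f} MB/s")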

USB

Does Overkill need USB 4?

I have a number of Thunderbolt 3-equipped laptops, and networking over USB 4 is an attractive option. However, their workloads tend to be office-focussed and don't require a lot of bandwidth; Gigabit Ethernet and Wi-Fi are good enough most of the time.

I'm unlikely to purchase a USB 4-equipped laptop in the short to medium term, and I'll never buy anything like a Mac Studio. I'm more likely to upgrade existing AM4 desktops to AM5, which will be able to accommodate faster PCIe Ethernet adapters without the need for expensive adapters.

USB 4 is a nice to have but isn't required for my use cases.

USB 3 does offer an easy way to add reasonable ad-hoc storage to Overkill. The AM5 platform supports multiple 10 Gb/s connections and can do one 20 Gb/s connection. Connecting a USB 3 DAS would be possible and might be preferred over using a SAS HBA/expander and a SAS drive enclosure. 
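To put those figures in context against the drive-side links, a quick comparison of raw link speeds (overheads ignored):

    # Rough link-speed comparison for ad-hoc DAS options (overheads ignored).
    links_gbps = {
        "USB 3.2 Gen 2 (10 Gb/s)": 10,
        "USB 3.2 Gen 2x2 (20 Gb/s)": 20,
        "SATA-III": 6,
        "SAS-3 (per lane)": 12,
    }
    for name, gbps in links_gbps.items():
        print(f"{name}: ~{gbps / 8:.2f} GB/s")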

IIRC, Terramaster products come with an internal USB port on the motherboard for OS installation. At a push, such a port can be used with a USB to SATA adapter, which could be useful for a SATA DOM etc. USB fan controllers also exist.

There is a question of how useful internal versus external USB ports are.

Storage

With the right adapters, Overkill could have six Gen 5 NVMe drives: four on an x16 carrier card and two on an x8 carrier card.

The largest Gen 5 SSDs at the time of writing are 4TB. There are 8TB Gen 4 drives. Configured as a RAID array, this is a lot of very fast storage. So fast that network speeds will be a bottleneck. Whilst there are options to further increase NVMe storage with special PCIe cards, they are very expensive.

Gen 3 SSDs are considerably cheaper than Gen 5. A RAID array of these could easily saturate a 25GbE connection.
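The bottleneck claim is easy to sanity-check; the per-drive throughputs below are assumed ballpark figures rather than benchmarks:

    # Rough comparison of NVMe throughput vs network line rates (assumed figures).
    gen3_ssd_gb_s = 3.0                   # assumed sequential throughput per Gen 3 x4 SSD
    gen5_ssd_gb_s = 12.0                  # assumed sequential throughput per Gen 5 x4 SSD
    raid_gen3_gb_s = 4 * gen3_ssd_gb_s    # four Gen 3 drives striped

    for name, gbit in {"10GbE": 10, "25GbE": 25, "40GbE": 40}.items():
        link_gb_s = gbit / 8              # line rate in GB/s, ignoring protocol overhead
        print(f"{name}: ~{link_gb_s:.1f} GB/s vs one Gen 5 SSD ~{gen5_ssd_gb_s} GB/s "
              f"vs 4x Gen 3 RAID ~{raid_gen3_gb_s:.0f} GB/s")

Even a single Gen 3 SSD comes close to saturating 25GbE on its own.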

Hot-swap isn't really an option for internal NVMe SSDs.

Overkill would have two 4 x 3.5" hot-swap bays with SAS backplanes. The drive carriers must be able to accept 2.5" drives. Part of the thinking is that with two modular units, one could be swapped for a unit with an even denser SAS/SATA 2.5" SSD backplane and mounting. That would likely require an additional SAS/SATA HBA and/or expander/extender.

Networking

The integrated 2.5GbE ports would be useful for connecting to a slower network. As previously mentioned, there are devices that don't need high-speed networking.

One of the problems with faster network adapters is that they tend to be PCIe x8. There are x4 10GbE adapters, but I can't find any for 25GbE or faster.

Second-user faster network adapters can be found for not a lot: as enterprises upgrade their networking, more and more of these devices come onto the market. The limiting factor in the adoption of faster networking is the cost of switches; with direct attachment, no switch is needed.

Overkill will connect to two workstation-class machines. With dual-port cards, it's possible to connect each machine to every other in a ring network topology.
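With three machines and dual-port cards, the ring is just three point-to-point links, one per pair. A tiny sketch of one way to address them (hostnames and the 10.10.x.0/30 ranges are made up for illustration):

    # Point-to-point /30 subnets for a three-node ring (Overkill + two workstations).
    # Hostnames and address ranges are illustrative only.
    from itertools import combinations

    nodes = ["overkill", "workstation-a", "workstation-b"]

    # Three pairs, so each machine uses exactly two ports: one per link.
    for i, (a, b) in enumerate(combinations(nodes, 2)):
        print(f"{a} <-> {b}: 10.10.{i}.0/30 ({a}=10.10.{i}.1, {b}=10.10.{i}.2)")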

Looking at the second-user market, both dual-port 25GbE and 40GbE adapters look good value. I'm less sure about the cost of transceivers and cabling.

Wi-Fi duties are better left to other devices.

Case

The case would have two compartments - an upper compartment where the motherboard lives and the lower compartment to hold the 3.5" drive bays and PSU (for weight distribution). 

The height of the upper compartment is dictated by cooling requirements, in particular the CPU cooler. Ryzen 9000/Epyc 4004 CPUs can draw more than 200W. 200mm seems a reasonable figure. This would also allow for front mounted fans of up to 180mm. The depth of the case would depend on the thickness of the front fans. 280mm is probably enough.

The height of the upper compartment would allow for full height PCIe adapters to be used. The depth will preclude full length cards.

Eight-bay NAS/drive enclosures are around 200-250mm wide. mATX motherboards are 244mm wide. Larger cases allow for better airflow, so a case width of 300mm seems reasonable.

It is better to mount the PSU in the lower compartment for weight distribution. There's also a consideration of repurposing the lower compartment as a drive enclosure/DAS box etc., where common tooling saves costs. The question is whether the PSU results in extra width or extra height.

SATA 4 x 2.5" mount

The four onboard SATA ports could be used for 2.5" SATA SSDs. Given that the case is specified with generous dimensions, a mount could be placed somewhere, even if it isn't hot-swap; the exact location isn't that important. It might make sense to have a SATA backplane and use a MiniSAS connector between it and the motherboard rather than a spaghetti of cables.

There is the question of whether the mount should be able to accommodate 15mm-tall SATA devices.

Cooling

I don't have access to CFD tools or the ability to create a 3D model, so working out an ideal airflow isn't possible.

Upper compartment: a front intake fan aligned with the CPU cooler and a rear exhaust fan make sense. For hot-running SSDs, NICs etc., a front intake and a top- or side-mounted exhaust fan would be sensible.

Lower compartment: less critical. The PSU can look after itself. For the HDDs etc., as long as there is good air intake at the front, a couple of rear exhaust fans should be sufficient.

"Bonus boxes" etc

Whilst having a larger case does allow for better airflow, especially with correct fan placement, you can end up with an adequately cooled empty space that isn't quite right for standard storage form factor devices.

If Overkill used USB C rather than USB A for the onboard OS-loading port, an internal Google Coral USB Edge TPU would be an option if there were some space for it.

I have a project to use a Raspberry Pi Pico for advanced fan control.
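As a flavour of what that might look like, here is a minimal MicroPython sketch that scales a 4-pin fan's PWM duty with the RP2040's on-die temperature sensor; the GPIO pin and temperature thresholds are assumptions, not a finished controller:

    # Minimal MicroPython sketch: scale fan PWM with the RP2040's internal temperature.
    import time
    from machine import ADC, PWM, Pin

    fan = PWM(Pin(16))      # PWM signal to the fan's control pin (assumed GPIO16)
    fan.freq(25000)         # 25 kHz, the usual 4-pin fan PWM frequency
    temp_adc = ADC(4)       # RP2040 internal temperature sensor

    def read_temp_c():
        # Conversion formula from the RP2040 datasheet.
        volts = temp_adc.read_u16() * 3.3 / 65535
        return 27 - (volts - 0.706) / 0.001721

    while True:
        t = read_temp_c()
        # Ramp duty linearly from 30 degC up to 100% at 60 degC, with a 20% floor.
        frac = min(max((t - 30) / 30, 0.2), 1.0)
        fan.duty_u16(int(frac * 65535))
        time.sleep(2)

A real version would read drive or VRM temperatures rather than the Pico's own sensor.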

Somewhere to mount these kinds of devices, other than Velcroing them to the inside of the case, would potentially be very helpful.

Software

Terramaster OS has some compelling features for my use cases. In particular, the ability to create a RAID array of heterogeneous devices and to add drives to an existing RAID array fits well with starting with modest storage and upgrading over time.

TOS is also user friendly.

Client side apps for backup and synching are very important. 

Tiered storage is of great interest. It's understood that file caching works best with lots of RAM. Some NAS OSes like UnRAID use SSDs as r/w caches and move files to slower storage with an overnight batch job. There are enterprise solutions for tiered storage that are very sophisticated. It would be interesting to have TOS implement some of those enterprise features.
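A crude illustration of what that UnRAID-style overnight mover amounts to, assuming hypothetical /mnt/cache and /mnt/pool paths and a simple age threshold:

    # Crude "mover" sketch: migrate files untouched for N days from a fast SSD
    # cache to slower HDD storage. Paths and the threshold are hypothetical.
    import os
    import shutil
    import time

    CACHE = "/mnt/cache"
    POOL = "/mnt/pool"
    MAX_AGE_DAYS = 7

    cutoff = time.time() - MAX_AGE_DAYS * 86400

    for root, _dirs, files in os.walk(CACHE):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) < cutoff:
                dst = os.path.join(POOL, os.path.relpath(src, CACHE))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)

Proper tiering (promotion back to the fast tier, access-frequency tracking, block-level rather than file-level movement) is exactly the enterprise territory mentioned above.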

Being able to install Proxmox on Overkill and run TOS as a VM would be extremely useful; I could run other VMs at the same time.

Running other storage OSes would be very interesting, although of course they would not be able to access TRAID pools. Presumably TOS would be able to access plain RAID arrays?

Coda

The idea that Overkill's motherboard could be used for lightweight workstation/server duties is quite powerful.

But there's also the business of leveraging a modular storage case system. For example, I can imagine just the upper compartment of the system, with an external PSU, accommodating four 10TB 2.5" HDDs.

I can imagine the lower compartment as the basis of a SAS/USB/Thunderbolt drive enclosure.

