The Custom Storage build was one of the most serious server builds I have done, and I am very happy with how it turned out. Here is the story in more detail.
For more than a week, I was busy building my Storage server. For years, I had planned to make my existing servers diskless, remove their drives, and move everything into a single server to free up more space for data center GPUs. I really enjoy using vGPU because it lets me split a single GPU across multiple VMs running different workloads, from heavy CAD work to AI tasks.
Another reason for this build was flexibility. I wanted a storage platform that I could use with both vSphere and Proxmox, while keeping compute and storage separate. When I run AI workloads, my storage needs sometimes grow much faster than the CPU and RAM usage on a single server, so I can end up needing more disk space even though that host is not fully utilized. By moving storage into its own server, I can expand capacity independently, keep the hypervisor hosts diskless, free up more space for GPUs and other PCIe devices, and manage storage from one central system. It also makes upgrades and maintenance much easier, since I can move VMs to other hosts, upgrade vSphere, or make changes on a hypervisor without being tied to the local drives inside one machine.
My goal for the Storage server was to use data center and consumer NVMe drives and to choose a motherboard and CPU with as many PCIe lanes and PCIe slots as possible. I also wanted the expansion slots to be full x16 slots, not split and not running through bridge chips. RAM prices have gone up sharply during the current AI boom, motherboard prices are high, and CPU availability is poor, so I decided not to go with a PCIe Gen 5 motherboard and CPU, especially since getting suitable RAM would have been difficult.
At the same time, I already had spare DDR4 memory available. I had one 64 GB DDR4 stick in reserve, and if necessary I could also take additional RAM sticks from some of my other servers. Across my ESXi servers, I have 3x 512 GB of DDR4 RAM in total. That meant the motherboard and CPU also had to support the DDR4 memory I already had. The motherboard also had to include a 2x 10 gigabit Intel X550-AT2 NIC, or at least a network card that is supported by hypervisors.
I had four motherboard options:
Gigabyte MC62-G40 Rev 1.0
ASUS PRO WS WRX80E-SAGE SE WiFi E-ATX sWRX8
Supermicro M12SWA-TF E-ATX WRX80
ASRock ROMED8-AT
After taking a look at their motherboard topologies, the two boards I liked the most and that suited my needs best were the Gigabyte MC62-G40 and the ASUS PRO WS WRX80E-SAGE SE WiFi E-ATX.
I already own one Gigabyte server motherboard and I am extremely happy with it, but this time I chose Asus because it is highly praised and I do not have any previous experience with Asus motherboards.
For the CPU, I chose the AMD Ryzen Threadripper PRO 5945WX so the system would be fast enough and provide plenty of PCIe lanes.
Here is the exact specification of my Storage server:
CPU: AMD Ryzen Threadripper PRO 5945WX
CPU Cooler: Noctua NH-U14S TR5-SP6
MB: ASUS PRO WS WRX80E-SAGE SE WiFi E-ATX sWRX8
RAM: 2x Samsung M386A8K40BM2-CTD DDR4 64GB LRDIMM 288-pin
NIC: 100GbE, MT27700 Family [ConnectX-4]
PSU: Lian Li EDGE GOLD 1200W ATX 3.1
PC Case: HAVN BF 360 Flow
Storage:
2x KIOXIA CD6 SSD U.3 2.5” NVMe (KCD61LUL7T68) 7.68TB (I own three of these; two are used in this build, and the third is still in an ESXi server for safety reasons.)
3x Intel E1.S SSD Ruler D5-P4326 EDSFF PCIe (SSDPEXNV153T8D) 15.3TB
5x Intel SSD 660P 2TB
OS: TrueNAS Community Edition
CPU: AMD Ryzen Threadripper PRO 5945WX
This is my first AMD CPU, and coming from the Intel world, going into the BIOS felt quite different. There were many new things I had never heard of before, and some features were simply named differently. Intel and AMD are architecturally very different and really feel like two separate worlds. The same was immediately obvious when I looked at the motherboard topologies. As for the CPU itself, it is definitely very powerful, but it also draws more power and produces a lot of heat. It can warm up the room pretty well.


CPU Cooler: Noctua NH-U14S TR5-SP6
For CPU cooling, I chose the most powerful cooler that fits this CPU, the Noctua NH-U14S TR5-SP6. There was another option that I also use with Intel CPUs, but I chose this one because it is designed specifically for AMD Threadripper.
Installing the CPU and cooler felt unusual to me. With Intel, I am used to attaching the CPU to the cooler first and then mounting it onto the motherboard. Here, the CPU is already secured to the motherboard, then I place the cooler on top and screw it down. The whole process felt different, and you never really know if you are tightening it too much.
One additional note: on Noctua’s website, it says that I should order a separate mounting kit for free because the included one may not fit. I ordered it, but later found out that the correct mounting kit was already included after all. In the end, they sent me the same bracket. I did not want to take the risk of finding out later that the included bracket would not fit the CPU and motherboard and then having to place another order. I wanted everything to arrive at the same time.



MB: ASUS PRO WS WRX80E-SAGE SE WiFi E-ATX sWRX8
I have mixed feelings about the Asus motherboard. The first thing that stood out to me was the large VRM heatsinks in front of the RAM slots, which can block airflow. What I disliked the most, though, was the 2x PCIe 6-pin power connectors placed at a 90-degree angle. That alone meant I had to cut part of the metal in my PC case with a hacksaw just to make them usable. Many E-ATX cases may not let you use them at all, yet they are absolutely necessary if you want to use all of the PCIe slots. This motherboard has 7x PCI Express 4.0 slots, and each slot can provide up to 75 W of power to the installed device. In total, the board required 2x CPU power cables, 1x PCIe 8-pin cable, and 2x PCIe 6-pin cables. Just like the PCIe 6-pin connectors, the USB 2.0 headers that I need for my remote control are also placed at a 90-degree angle. Yes, I have a remote control for turning the server on and off; it looks a bit like a car key fob. I am extremely happy with it, but I will talk more about it later.

Back to the motherboard. I could actually feel it radiating heat while it was in use. So even though this is a workstation and server class motherboard, it still needs proper cooling. I would not recommend using it in a PC case with poor airflow.
The BIOS looks similar to what you see on most modern gaming PC motherboards, and updating it is very easy. Like a proper server motherboard, it also has BMC functionality, which lets you access and monitor the system remotely. Although the BMC Asus uses comes from the same company and is similar to the one Gigabyte uses, I would say the Asus version is much more basic. With Gigabyte, I can see all connected devices in the BMC, such as the RAM modules, their speed, which cards are installed in the PCIe slots, their versions, and whether they are running at the correct speed, such as Gen 5, Gen 4, or Gen 3. That level of functionality was missing here. Otherwise, most of the other features are the same.
Another issue with this motherboard is that the VGA output requires a separate adapter cable, which was not included with my board. That means every time I need video output, I have to deal with it separately. I also cannot see everything through the BMC, so if I need to enter the BIOS, I have to connect a separate graphics card because the VGA adapter cable is missing. The motherboard does have a dedicated header for it.
On the positive side, BIOS and BMC firmware updates are very foolproof. I managed to brick both of them and still recover them and complete the updates properly. That is much easier than with Gigabyte or some other server manufacturers, such as Supermicro. I also like that this motherboard has onboard power and reset buttons, as well as a digital POST code display. The BIOS also includes a lot of settings for overclocking, although I have no plans to overclock it.
Overall, this motherboard feels like something in between. It is partly a workstation motherboard and partly a server motherboard. Would I buy an Asus motherboard again? That is hard to say. After having to cut extra holes in the case, I was definitely annoyed more than once.




PSU: Lian Li EDGE GOLD 1200W ATX 3.1
The Lian Li EDGE GOLD 1200W ATX 3.1 has become my favorite PSU, mainly because its cables are usually much longer than those from other PSU brands. I used to use Corsair power supplies, but now I have a new favorite. I also like the L-shaped design, which in my opinion makes connecting the cables much more convenient.
This PSU is quiet and works well. It also has a removable dust filter, which makes cleaning much easier. Even though this PSU has many PCIe connectors, I only had one PCIe cable left free, which I use for connecting a graphics card when I need to enter the BIOS or when I installed TrueNAS Community Edition. In addition, there is also one free 600 W 12VHPWR GPU cable.
If you are planning to buy one, I would recommend the EDGE Gold instead of the EDGE Platinum. The Platinum version has much shorter cables, and I also do not like the quality of those cables. In my AI GPU server, I am using the Platinum 1300W PSU.

PC Case: HAVN BF 360 Flow
For my other servers, I have been using the be quiet! Pure Base 500DX Black case because the airflow is good. I like the moving RGB light, and most importantly, it fits nicely into a server rack. But when choosing a case for the Storage server, I knew I needed something better, especially because I use data center NVMe drives that also need proper cooling. In normal servers, you usually have fans spinning very fast to cool everything, but I needed a quiet case. I had just seen that HAVN had released the HAVN BF 360 Flow. I watched a few reviews from tech YouTubers, and it was said to be one of the best airflow cases with large fans. It was also supposed to be a fairly quiet PC case, which is very important to me.
So that is what I chose, and the build quality of this PC case is some of the best I have seen in a very long time. It reminded me of my first computer, a Gateway 2000, whose case was made from thick metal. At first, my idea was to drill a round hole so the motherboard power cables would reach properly, but the metal was so hard and thick that it quickly became difficult. So instead, I used a metal grinder and modified the case to fit my motherboard. Even though this case is supposed to support E-ATX, and the motherboard does fit, the power cables usually do not reach properly.

But to say it again, this case is really impressive. Its looks and airflow are wild. Honestly, I would replace all of my server PC cases with this one because the airflow is very strong and it is quiet. But then I would probably have to build myself a new rack to match the dimensions of this case.










Storage:
2x KIOXIA CD6 SSD U.3 2.5” NVMe (KCD61LUL7T68) 7.68TB (I own three of these; two are used in this build, and the third is still in an ESXi server for safety reasons.)
3x Intel E1.S SSD Ruler D5-P4326 EDSFF PCIe (SSDPEXNV153T8D) 15.3TB
5x Intel SSD 660P 2TB
KIOXIA CD6 SSD U.3 2.5” NVMe (KCD61LUL7T68) 7.68TB

Intel E1.S SSD Ruler D5-P4326 EDSFF PCIe (SSDPEXNV153T8D) 15.3TB

These are data center ruler NVMe drives, so they are very long. Because of that, I had to build a separate frame where I could mount all of them. Fortunately, with this PC case, finding a solution was very easy and quick. I used a few metal brackets to make it and attached it to the case with screws.



Intel SSD 660P 2TB







Performance
Before getting to the most anticipated part, the performance, I will first explain how I set everything up.
My Storage server has one 100GbE Mellanox ConnectX-4 (MT27700) network adapter, which is connected over a 100Gb QSFP28 DAC cable to a Dell EMC S4112T-ON switch. The Dell switch has only 3x 100Gb ports. Two servers are connected over 100Gb, while the third server is connected to the same Dell switch over 2x 10Gb ports.
On vSphere, I created four VMkernel adapters on each host and configured network port binding for the iSCSI adapter.
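For reference, the same port binding can also be done from the ESXi command line. The sketch below is only illustrative, not exactly what I ran; the adapter and VMkernel names (vmhba64, vmk1, vmk2) are placeholders that will differ on your host.

```sh
# List iSCSI adapters to find the software iSCSI adapter name (e.g. vmhba64)
esxcli iscsi adapter list

# Bind the iSCSI-dedicated VMkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Verify the bindings
esxcli iscsi networkportal list --adapter=vmhba64
```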
For the storage devices' multipathing policy, I set Round Robin. Multipathing is working, but at the moment I am only using one NIC port instead of two, because my 100Gb NIC has only a single 100Gb port.
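For completeness, setting Round Robin from the ESXi shell looks roughly like the sketch below; the naa device identifier is a placeholder. The IOPS-per-path change at the end is an optional tuning step that is often suggested for iSCSI with Round Robin, not something I have applied here.

```sh
# Show the current path selection policy for each device
esxcli storage nmp device list

# Set Round Robin (VMW_PSP_RR) on a specific device (the naa ID is a placeholder)
esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR

# Optional: switch paths after every I/O instead of the default 1000 IOPS
esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=1
```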

Single Drive Performance on Windows 11
Now let’s look at the performance. When I installed Windows 11 on the server and used CrystalDiskMark to benchmark my drives, I got the following results for the individual drives.
Intel E1.S SSD Ruler D5-P4326 EDSFF PCIe (SSDPEXNV153T8D) 15.3TB

Intel SSD 660P 2TB

KIOXIA CD6 SSD U.3 2.5” NVMe (KCD61LUL7T68) 7.68TB

iSCSI and NFS Storage Performance in Windows 11 on vSphere
OS: TrueNAS Community Edition 25.10.2.1 (Goldeye)
Storage Drives
1x MIRROR – 2x KIOXIA CD6 SSD U.3 2.5” NVMe (KCD61LUL7T68) 7.68TB (I own three of these; two are in this pool, and the third is still in an ESXi server for safety reasons.)

1x RAIDZ1 – 3x Intel E1.S SSD Ruler D5-P4326 EDSFF PCIe (SSDPEXNV153T8D) 15.3TB

1x RAIDZ1 – 5x Intel SSD 660P 2TB
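Conceptually, the three layouts above map to something like the following zpool commands. This is only an illustration; the pool names and device paths are placeholders, and on TrueNAS the pools are normally created through the web UI rather than with zpool directly.

```sh
# Illustration only: pool names and device paths are placeholders
zpool create kioxia-mirror mirror /dev/nvme0n1 /dev/nvme1n1
zpool create ruler-z1 raidz1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
zpool create consumer-z1 raidz1 /dev/nvme5n1 /dev/nvme6n1 /dev/nvme7n1 /dev/nvme8n1 /dev/nvme9n1
```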

Storage Traffic from TrueNAS

I have not yet managed to get the storage traffic above 40Gb/s. So far, it has been around 30 to 40Gb/s. To fully max out my drives, I would need to optimize and tune the setup more specifically. Right now, it is still a very basic default setup, but that is fine for me at the moment. Later, when I have more time to tinker, tune, and experiment, I will let you know once I find the bottleneck.
At least I know it is not the network adapters, and it is not the ESXi server or the Storage server CPU, since most of the time they are barely doing any work. I also cannot use iSER, the iSCSI extension for RDMA, because I am using the Community Edition, and the NVMe-oF subsystem does not work with vSphere. There are also notes about this on the TrueNAS website:
“NVMe over TCP is incompatible with VMware ESXi environments. TrueNAS uses the Linux kernel NVMe over TCP target driver, which lacks support for “fused commands” required by VMware ESXi. This is an upstream kernel limitation that prevents path initialization in ESXi environments.”
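As a side note, one quick way to confirm that the network itself is not the limiting factor is a raw throughput test between the Storage server and a client on the 100Gb network, for example with iperf3. The sketch below is generic; the IP address and stream count are placeholders.

```sh
# On the TrueNAS side: start an iperf3 server
iperf3 -s

# On a client (e.g. a test VM on the 100Gb network): run 8 parallel streams for 30 seconds
iperf3 -c 192.0.2.10 -P 8 -t 30
```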