So, I rebuilt my home lab: reinstalled my ESXi servers, set up vSAN, vMotion, and so on.
100Gb QSFP28 Speed Test Between ESXi Hosts
I had been waiting a long time to see the actual performance of 100Gb NICs. William Lam pointed out that iPerf3 ships with ESXi, so I just enabled SSH on each host.
When I tried to run iPerf3 in server mode, I got the following error: “….bind failed: Operation not permitted.” William Lam already documented a quick fix: copy the iperf3 binary to another file name and run the copy instead. You can find his blog post here: https://williamlam.com/2016/03/quick-tip-iperf-now-available-on-esxi.html
ESX-1
This host runs iPerf3 in server mode.
Disable firewall.
esxcli network firewall set --enabled false
Change directory.
cd /usr/lib/vmware/vsan/bin/
Copy the iperf3 binary to another file.
cp /usr/lib/vmware/vsan/bin/iperf3 /usr/lib/vmware/vsan/bin/iperf3.copy
Start iPerf3
./iperf3.copy -s
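If everything went well, the renamed copy starts exactly like the stock binary, and you should see the usual iPerf3 banner (shown here for the default port):
Server listening on 5201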
ESX-2 and ESX-3
NB! When running iPerf3 in client mode, there is no need to copy anything; the stock binary works as-is.
Disable firewall.
esxcli network firewall set --enabled false
Change directory.
cd /usr/lib/vmware/vsan/bin/
Start iPerf3
./iperf3 -i 1 -t 10 -c (ESX-1 IP) -fm
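One housekeeping note: disabling the firewall affects the whole host, so once testing is done, turn it back on on every host by reversing the earlier command:
esxcli network firewall set --enabled true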
And the results I got:
I could not get past this number; it seemed there was a limitation somewhere. I know many people have struggled to get anywhere near 100 Gbits/sec with iPerf3, and a common suggestion is to run multiple iperf3 processes, but that did not help at all (a sketch of that approach follows below). I ran all kinds of different tests, but the results were the same or even worse. It looked as if only one channel was being tested, even though mine is 4-channel full-duplex.
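For anyone who wants to reproduce that attempt, here is roughly what the multi-process approach looks like; the port numbers are arbitrary, and (ESX-1 IP) is a placeholder as above.
On ESX-1, start one server instance per port:
./iperf3.copy -s -p 5201 &
./iperf3.copy -s -p 5202 &
On ESX-2/ESX-3, point one client at each port:
./iperf3 -i 1 -t 10 -c (ESX-1 IP) -p 5201 -P 4 -fm &
./iperf3 -i 1 -t 10 -c (ESX-1 IP) -p 5202 -P 4 -fm &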
I suspect my testing method has its own limitations, because I have seen so many people run into problems benchmarking 100GbE networks with iperf3. In Phase 2, I will write more about testing and benchmarking in general, what it is like to have a 100GbE network, and the point of having one.
I checked what the CPU was doing during the test, and it never maxed out.
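If you want to watch this yourself, esxtop on the ESXi host gives a quick live view; press c for the CPU panel and n for the network panel:
esxtop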
iPerf3 Between Two Windows Server 2019 VMs (VMXNET3)
I tested both variants, direct passthrough (DirectPath I/O) to the VM and VMXNET3, and the results were similar.
If I add -P 8 (eight parallel streams) or increase that number, I can get a Transfer of 27 GBytes and a Bandwidth of 23.2 Gbits/sec.
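For reference, this is roughly the client command I mean, using the iperf3 Windows build; the server IP is a placeholder:
iperf3.exe -i 1 -t 10 -c (server IP) -fm -P 8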