The onboard Ethernet actually runs off one of the USB ports and not directly through the CPU.
This is of concern to me. The younger generation of tinkerers has no hesitation about relying on USB or WiFi for critical applications (there are so many people using the Raspberry Pi Zero W for Pi-hole over WiFi, ack), but for those of us who lived through decades of USB issues like frequent disconnects and buggy drivers, it's simply not an option.
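For the curious, this is easy to verify from a running Linux system: a USB-attached NIC's sysfs device path runs through the USB bus instead of straight to a PCIe root port. A quick sketch, with the interface name as a placeholder for whatever your system assigns:

    # Resolve the NIC's device path; "eth0" is a placeholder interface name.
    readlink -f /sys/class/net/eth0/device
    # A result containing ".../usb1/..." means the NIC hangs off USB;
    # a plain ".../pci0000:00/..." path means a proper PCIe device.

    # The USB device tree also shows which driver claims it
    # (e.g. r8152 for Realtek USB gigabit adapters):
    lsusb -t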
I'm intending to use this in a Proxmox cluster that requires at least three Ethernet connections (corosync, migration, VM data), so an expansion card with a PLX chip would be ideal. But those run in the hundreds of dollars, the cheapest being the 4x M.2 NVMe cards with a PLX chip available for under $200, with more functional ones at twice that, from:
https://c-payne.com/collections/pcie-packet-switch-plx-pcbs
Apart from the price, they're also high bandwidth, which isn't necessary here. Gigabit has a theoretical throughput limit of 1000/8 = 125MB/s. A single PCIe 1.0 lane is 2Gb/s or 250MB/s, so a PCIe 2.0 x1 lane is twice that: 4Gb/s or 500MB/s. That's plenty of bandwidth for 3 or 4 gigabit connections.
I could use a quad-port network card, but those are not readily available; they start at 4500 refurbished and go up to 10k. Not exactly cost effective to replace, and I require a solution modular enough to swap out quickly and easily. A quad-port network card would also displace the graphics card, and while this platform can run headless, I still need video out for troubleshooting.
So I need at least three PCIe-based Ethernet interfaces plus a graphics card. In an unexpected twist, there is something uniquely suited for this, and I guess I have the "mining bros" to thank for its existence:
This is a 1x slot PCIe 2.0 x1 to 4x slot daughterboard, based on the ASMedia ASM1184e PCIe 2.0 packet switch:
https://www.asmedia.com.tw/product/556yQ9dSX7gP9Tuf/b7FyQBCxz2URbzg0

This chip first appeared on motherboards back in the Z97 era, so it's tried, tested, and stable. It works just like a PLX chip, but with x1 lanes instead of x8, and just like the PLX solution, it's driverless. Its primary use case is adding low-bandwidth interfaces like WiFi or gigabit Ethernet to motherboards.
An addition like this adds about 3k to the cost of the AMD 4700S system, bringing the total for CPU + motherboard + RAM + this board to 12k, which is still very much acceptable seeing that a Ryzen 7 2700 alone goes for 11k to 12k these days.
I have a few of these x1 to x1 extension cables left over from my 2013 mining adventure; I've kept them around solely because of the physical effort it took to shave off the ends by hand, since I had no power tools at the time:
And so with everything connected (1x graphics card, 3x Ethernet adapters):
Sure enough, all devices are accounted for:
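For anyone checking their own build, the tree view of lspci makes the topology obvious: the packet switch shows up as one upstream port fanning out to four downstream ports, each with a device behind it. The output below is an illustrative sketch rather than a verbatim capture, so bus numbers and names will differ:

    lspci -tv
    # Rough shape of the tree behind the ASM1184e (illustrative):
    # -[0000:00]-+- ...
    #            \-03.1-[01-06]----00.0-[02-06]--+-01.0-[03]----00.0  Ethernet
    #                                            +-03.0-[04]----00.0  Ethernet
    #                                            +-05.0-[05]----00.0  Ethernet
    #                                            \-07.0-[06]----00.0  VGA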
Now the next step was to see if NVMe drives would work; not that I needed this, but I wanted to know:
And yes, it was detected as well:
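For anyone replicating this, nvme-cli gives a quick confirmation (assuming the package is installed):

    # Lists detected NVMe controllers and namespaces.
    nvme list
    # The kernel log shows the controller coming up as well:
    dmesg | grep -i nvme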
However, there's no NVMe boot support in the BIOS, so we'd need something like a bootloader on a SATA drive or DOM in order to boot off an NVMe drive.
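One workaround sidesteps firmware support entirely: keep /boot and GRUB on the small SATA drive or DOM and put the root filesystem on the NVMe drive. Once the kernel is loaded it uses its own NVMe driver, so the BIOS never needs to see the drive at all. A hypothetical GRUB stanza, with both UUIDs as placeholders:

    # Lives in the SATA drive's GRUB config (e.g. /etc/grub.d/40_custom);
    # the UUIDs below are placeholders.
    menuentry "Linux with NVMe root" {
        insmod part_gpt
        insmod ext2
        search --no-floppy --fs-uuid --set=root <uuid-of-sata-boot-partition>
        linux /vmlinuz root=UUID=<uuid-of-nvme-root-partition> ro
        initrd /initrd.img
    }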
But then the 500MB/s limit of the PCIe 2.0 packet switch would probably render an NVMe drive impractical anyway, apart from not needing a SATA power cable for the storage device. Testing confirms this:
Just under 400MB/s, which I suspect is the limitation of the NVMe controller on the SSD operating at x1 rather than overhead from the PCIe packet switch card. SATA by comparison:
Just under 550MB/s is about the best we can expect considering it's a DRAM-less drive (Crucial BX500).
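Both numbers are easy to sanity-check. The SSD behind the switch should negotiate 5GT/s at x1, and a plain sequential read gives the throughput; the device address and node below are placeholders:

    # Confirm the negotiated link (05:00.0 is a placeholder; find yours
    # with plain lspci first).
    sudo lspci -vv -s 05:00.0 | grep -i lnksta
    # e.g.  LnkSta: Speed 5GT/s, Width x1 ...

    # Sequential read throughput; --readonly keeps it non-destructive.
    sudo fio --name=seqread --filename=/dev/nvme0n1 --readonly \
        --rw=read --bs=1M --iodepth=32 --ioengine=libaio \
        --direct=1 --runtime=30 --time_based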
The next test was to confirm it would work headless. I removed the graphics card and powered it on with the three Ethernet adapters and the NVMe drive, and everything was detected as expected:
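Verifying that over SSH takes two commands, link state and block device transports:

    # Link state of every interface at a glance.
    ip -br link show
    # Block devices with their transport type (sata, nvme, usb).
    lsblk -d -o NAME,SIZE,TRAN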
Last up was a simple iperf test comparing the bandwidth of the built-in Ethernet interface against a Realtek card on the PCIe packet switch board:
USB is on top, PCIe on bottom. Results are pretty much identical.
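For anyone reproducing the comparison, a minimal version of the test looks like this, binding to each NIC's address in turn. I'm showing iperf3 syntax with placeholder addresses; the older iperf is nearly identical:

    # On another machine on the same network:
    iperf3 -s

    # On the 4700S, one run per interface (addresses are placeholders):
    iperf3 -c 192.168.1.10 -B 192.168.1.21 -t 30   # built-in USB NIC
    iperf3 -c 192.168.1.10 -B 192.168.1.22 -t 30   # Realtek NIC on the switch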
At this point I'm ready to add this to my cluster, get a few dozen VMs going, and see how it handles over the next few weeks.
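The rough plan for the three interfaces follows the usual Proxmox split of corosync and migration on their own subnets plus a bridge for VM traffic. A hypothetical /etc/network/interfaces sketch, with interface names and subnets as placeholders:

    # Hypothetical layout; names and subnets are placeholders.
    auto enp3s0
    iface enp3s0 inet static
        address 10.10.10.1/24      # corosync cluster network

    auto enp4s0
    iface enp4s0 inet static
        address 10.10.20.1/24      # live migration network

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.20/24
        gateway 192.168.1.1
        bridge-ports enp5s0        # VM data network
        bridge-stp off
        bridge-fd 0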