CPU/Mobo Why do most modern CPUs have so few PCIe lanes, even the high-end series?

As the title suggests, can anyone explain why CPUs have so few lanes these days? Most CPUs come with only 20 PCIe lanes. Of those, x16 goes to the GPU and x4 goes to the NVMe drive.

Now if I add another NVMe drive, I guess the GPU drops to x8 mode and x4 is sort of wasted (the two NVMe drives taking x4 each).

So why don't manufacturers increase the lane count? Also, manuals often mention that certain SATA ports / PCIe slots will be disabled if an NVMe drive is used. Why are Intel / AMD so stingy with PCIe lanes? If I remember correctly, high-end CPUs used to come with up to 40 lanes back in the day.
 
The question is valid. If SSDs are the future, then we either need more PCIe lanes or another standard where we can connect multiple SSDs through a cable, just like our internal hard drives.

ASUS does have a PCIe add-in card for mounting M.2 SSDs.


But a completely new standard for multiple SSDs, i.e. a dedicated standard for SSD storage, would be great.
 
The lanes that we have now are much faster though. 20 lanes of PCIe 5.0 have as much bandwidth as 160 lanes of PCIe 2.0.
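For a rough sanity check on that claim, here is a small sketch of the math, using commonly quoted approximate per-lane figures (round numbers, not exact spec values):

```python
# Approximate usable bandwidth per PCIe lane, in GB/s (round figures after encoding overhead).
PER_LANE_GBPS = {
    "PCIe 2.0": 0.5,
    "PCIe 3.0": 1.0,
    "PCIe 4.0": 2.0,
    "PCIe 5.0": 4.0,
}

total_gen5 = 20 * PER_LANE_GBPS["PCIe 5.0"]                    # a 20-lane Gen5 CPU: ~80 GB/s
equivalent_gen2_lanes = total_gen5 / PER_LANE_GBPS["PCIe 2.0"]

print(f"20 x PCIe 5.0 lanes ~ {total_gen5:.0f} GB/s")
print(f"Roughly equivalent to {equivalent_gen2_lanes:.0f} PCIe 2.0 lanes")
```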

We just need to be able to bifurcate these lanes and use them for more SSDs. I don’t think any home user needs that many SSDs with 4 full lanes of Gen4/Gen5 bandwidth each.

If someone does need it, considering the prices of multiple such SSDs (especially at high capacities), I don’t think it’s unreasonable to expect the user to buy workstation hardware.

For some perspective, at Gen4/Gen5 speeds each of those SSDs gets enough bandwidth to run a flagship GPU from one or two generations ago with minimal impact on performance.
 
I want 4 TB of SSD storage, but I can't afford it all at once; I can only buy up to 1 TB of SSD at a time.
 
But whether you need 4x1TB of the fastest NVMe SSDs with the best sequential read/write performance is the question.

If motherboard manufacturers can provide more granular lane bifurcation options with extra M.2 slots at the same cost, or make cheaper PCIe-to-multiple-M.2 expanders available, that would solve the problem of wanting to install more NVMe SSDs. We don’t need to give each SSD 4 Gen4/Gen5 lanes unless we need crazy high sequential performance.

And of course SATA SSDs even now are quite fast and make good secondary drives. You could always buy 1TB of fast NVMe and buy some extra SATA SSDs for cheap when you need it.

It’s not worth making CPUs and motherboards more expensive for everyone just to cater to a niche subset of users.
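To put rough numbers on the "we don't need 4 full lanes each" point, here is a quick sketch (assuming PCIe 4.0 and approximate usable per-lane figures) of how much bandwidth each SSD would get under different lane splits, compared with SATA III:

```python
# How much bandwidth does each SSD get under different lane splits, versus SATA III?
# Approximate usable figures; real-world numbers vary slightly.
SATA3_GBPS = 0.55        # ~550 MB/s practical ceiling of SATA III
GEN4_LANE_GBPS = 2.0     # ~2 GB/s per PCIe 4.0 lane

splits = {
    "x4 per SSD (today's M.2 slots)": 4,
    "x2 per SSD": 2,
    "x1 per SSD": 1,
}

for label, lanes in splits.items():
    bandwidth = lanes * GEN4_LANE_GBPS
    print(f"{label}: ~{bandwidth:.0f} GB/s, about {bandwidth / SATA3_GBPS:.1f}x SATA III")
```

Even the x1 case comfortably beats SATA, so splitting lanes more aggressively is a reasonable trade-off for secondary drives.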
 
Lane bifurcation is more difficult; smarter would be a new standard and a dedicated chip. We will now be looking at offloading workloads such as decompression to the GPU, and I am sure people are already working on applications like 7zip that use the GPU instead of the CPU.
 
IMO the primary NVMe slot won't disable any SATA ports; it's the secondary slot that shares its lanes.

The lane scenario will change in the future, when SATA ports cease to exist and there are only NVMe drives and slots.
Then expect 4-6 NVMe slots on the mobo.

Remember the transition from IDE to SATA?
From 2 IDE and 2 SATA ports, to 4 SATA ports and only 1 IDE, and now 6-8 SATA ports and zero IDE?

Now what has started is 2 or 4 SATA ports but 2 NVMe slots.
 
The question was exactly in that sense. Is it that hard to bifurcate PCIe lanes? Bifurcated PCIe lanes would be much better for future upgrades.

Four SATA ports have been the bare minimum on motherboards for ages. Even my first SATA board (an Intel 915 motherboard) had 4 SATA + 1 IDE port. But in the case of NVMe, we are given only 1 or at most 2 slots. I am not asking for each port to be the fastest and latest gen, but at least 24 lanes should be the bare minimum: 16 for the GPU, 4 for the primary drive, and the remaining 4 split as per requirements. I think even PCIe x1 should be able to handle fast data transfers from drives.
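As a rough illustration of that 24-lane budget and the x1 claim (approximate figures; the split below is just the allocation proposed above):

```python
# Proposed minimum lane budget from the post above, plus what a single lane delivers per generation.
budget = {"GPU": 16, "primary NVMe": 4, "flexible, split as required": 4}
print("Total CPU lanes:", sum(budget.values()))  # 24

# Approximate usable bandwidth of one lane per generation, versus SATA III (~0.55 GB/s).
SATA3_GBPS = 0.55
for gen, gbps in {"Gen3 x1": 1.0, "Gen4 x1": 2.0, "Gen5 x1": 4.0}.items():
    print(f"{gen}: ~{gbps:.0f} GB/s ({gbps / SATA3_GBPS:.1f}x SATA III)")
```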
 
Don't look at NVMe SSDs only through the lens of PCIe. Look at it this way: NAND flash memory has stabilized, it is reliable, we can produce it at large scale, and it will be cost effective in the near future. NAND flash / NVMe is then restricted by PCIe specs. That's why a new storage spec is required, especially for NAND-based storage solutions.
 
So why don't manufacturers increase the lane count?

They don't want to compete with their own workstation products. The Ryzen 1000 series brought consumer systems out of the 4c/8t rut they were in, and IPC performance has been increasing steadily ever since. So they're limiting the number of lanes available, as well as the available expansion slots, with each succeeding generation, forcing higher-end users to buy more expensive processors/motherboards.

I have virtualization hosts from the Ryzen 1000 series to the Ryzen 5000 series, and you can clearly see how they've removed x1 slots from the newer consumer motherboards; those slots barely cost anything but are critical for homelabbers who need additional network cards.

Bifurcated PCIe lanes would be much better for future upgrades.

B550 has/had the best bifurcation support outside of HEDT (X299/Threadripper), with most motherboards able to break down the x16 slot to x4/x4/x4/x4 for quad NVMe drives, or x8/x4/x4 for a single GPU and two NVMe drives (three in total counting the primary NVMe drive). However, this requires additional hardware in the form of expansion cards, which is not easily available to us in India.

There are specialized expansion cards too, for example: https://c-payne.com/
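If you do get bifurcation working, one way to verify what link width and speed each device actually negotiated is to read the standard PCIe sysfs attributes on Linux. A minimal sketch:

```python
# Minimal sketch (Linux only): print the negotiated link speed and width of each PCIe device,
# e.g. to check that each NVMe drive behind a bifurcated x16 slot really got its x4 link.
from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    speed = dev / "current_link_speed"
    width = dev / "current_link_width"
    if speed.exists() and width.exists():
        print(dev.name, speed.read_text().strip(), "x" + width.read_text().strip())
```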
 

Absolutely loved your post. In short, both Intel and AMD want you to spend a fortune on hardware if you want anything even slightly above the usual. Even an i7-13700K, which costs north of 40K, has only 20 PCIe lanes. At least most mainstream AMD processors have 24 lanes. So AMD is the way to go if one wants x16/x4/x4 at sane pricing.
 
Something to keep in mind is that newer generations of PCIe make manufacturing more expensive. You need additional circuitry, more PCB layers, and just increased complexity in general.

Increasing CPU lane count also requires a larger die area which equals additional cost. The latest nodes are significantly more expensive compared to previous years.

These factors, along with how likely the target market is to actually need that kind of bandwidth, would have been major considerations when they decided to reduce the lane count. Sharing the existing bandwidth in a flexible manner is enough even for prosumers. It’s not as simple as Intel and AMD wanting to nudge people into buying workstation equipment.

Take the case of motherboards for the Ryzen 7000 series. The complexities introduced by moving to DDR5 and PCIe 5.0 increased the price so much that people decided to buy last-gen parts or switch to Intel with cheaper boards, despite the fact that AM5 is more future-proof. They decided that the cost outweighs any benefit they might derive from the additional bandwidth and features of the new platform.

We would likely see a similar scenario play out if they had decided to provide more lanes and pass on the manufacturing cost to the users.

As time passes, manufacturing gets cheaper, and users become able to actually take advantage of these capabilities, things might change. But for now, it’s an unnecessary expense for both the companies and the end users.
 