Graphics Cards: 1080p vs 2K Resolution

Woonz

Apprentice
Why is the CPU the bottleneck at 1080p resolution vs the graphics card in 2K gaming?
At 1920x1080, why doesn't the CPU effectively hand the job over to the GPU?
 
Good question. The graphics card has to put in more effort to generate the extra pixels at higher resolutions like 2K, so it can't turn out frames as frequently, and the CPU doesn't have to keep up with as many of them as it does at 1080p.
 
Imagine the CPU and the GPU work at a sweets shop, and their goal is to ship out as many boxes of sweets (frames) as possible per unit of time (per second).
The GPU fills the boxes, and the CPU packs them and delivers them to the customers in line.

It takes some time to fill a box (the GPU rendering a frame), depending on the box size and the amount of sweets per box (frame resolution and graphics quality).
But packing and delivering a box takes about the same time regardless of its size or how many sweets are in it.

So for larger boxes (higher-resolution frames) or more sweets per box (higher graphics quality), the GPU takes longer per box and the CPU can relax.
But with smaller boxes (lower resolution), the GPU fills more boxes per unit of time (more frames per second), and the CPU has to work harder to keep up with the packing and delivering.
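
If it helps to see the analogy in numbers, here is a minimal sketch. All the timings are made-up placeholders, not measurements from real hardware; the point is just that the shop ships boxes at the rate of its slower worker.

```python
# Toy model of the sweets shop: throughput is set by whichever
# worker is slower per box. Timings are illustrative only.

def shop_throughput(gpu_fill_ms, cpu_pack_ms):
    """Boxes shipped per second = min of the two workers' rates."""
    gpu_rate = 1000.0 / gpu_fill_ms   # boxes the GPU can fill per second
    cpu_rate = 1000.0 / cpu_pack_ms   # boxes the CPU can pack/deliver per second
    limiter = "GPU" if gpu_rate < cpu_rate else "CPU"
    return min(gpu_rate, cpu_rate), limiter

# Small boxes (low resolution): the GPU fills them fast, the CPU can't keep up.
print(shop_throughput(gpu_fill_ms=4.0, cpu_pack_ms=6.0))   # ~167 boxes/s, CPU-limited

# Big boxes (high resolution): filling slows down, packing time is unchanged.
print(shop_throughput(gpu_fill_ms=9.0, cpu_pack_ms=6.0))   # ~111 boxes/s, GPU-limited
```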
 
Why is the CPU the bottleneck at 1080p resolution vs the graphics card in 2K gaming?
At 1920x1080, why doesn't the CPU effectively hand the job over to the GPU?
This is not necessarily true. Either can be the bottleneck at either resolution, depending on how much computational power each has and how much work the game demands of it.

But as a general guideline, in most modern games the GPU is the bottleneck, because most of the work in gaming goes into generating the graphics; most of the other calculations are handled by the CPU. The lower the resolution, the more frames the GPU generates per second, and the more per-frame calculations the CPU has to perform. So you are more likely to see a CPU bottleneck at lower resolutions, because the CPU has to work harder. Most of the CPU's calculations are independent of resolution, whereas the GPU's workload scales roughly in proportion to the number of pixels it has to draw.
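
A rough back-of-envelope version of that, using hypothetical frame times (a GPU needing 5 ms per frame at 1080p, a CPU needing a fixed 7 ms per frame): scale the GPU's cost with pixel count, keep the CPU's cost constant, and watch where the bottleneck moves.

```python
# Hypothetical frame times; GPU cost assumed proportional to pixel count,
# CPU cost assumed resolution-independent.

BASE_PIXELS = 1920 * 1080     # 2,073,600 pixels at 1080p
GPU_MS_AT_1080P = 5.0         # assumed GPU frame time at 1080p
CPU_MS_PER_FRAME = 7.0        # assumed CPU frame time at any resolution

for name, w, h in [("1080p", 1920, 1080), ("1440p (2K)", 2560, 1440), ("4K", 3840, 2160)]:
    gpu_ms = GPU_MS_AT_1080P * (w * h) / BASE_PIXELS  # scales with pixels
    fps = 1000.0 / max(gpu_ms, CPU_MS_PER_FRAME)      # slower component sets the pace
    limiter = "GPU" if gpu_ms > CPU_MS_PER_FRAME else "CPU"
    print(f"{name}: GPU {gpu_ms:.1f} ms, CPU {CPU_MS_PER_FRAME:.1f} ms -> {fps:.0f} fps, {limiter}-bound")
```

With these made-up numbers, the 1080p run is CPU-bound at ~143 fps, while 1440p (about 1.78x the pixels) and 4K are GPU-bound, which matches the pattern in the question.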

If the CPU could effectively do the work of the GPU and vice versa, we wouldn't need both. Since both exist and each is woefully inadequate at the other's job, it isn't worth offloading work between them for an extra couple of percent of performance. Moreover, a lot of data would have to be transferred between the GPU and the CPU, which causes slowdowns of its own, so it is not worth doing at all.
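
To put a rough number on that transfer cost, here's a quick estimate. It assumes an uncompressed 32-bit 1080p frame buffer and roughly 16 GB/s of usable PCIe 3.0 x16 bandwidth; both figures are ballpark assumptions, and real systems will vary.

```python
# Rough cost of shipping one uncompressed 1080p frame across the PCIe bus.
# Both figures below are ballpark assumptions, not measurements.

frame_bytes = 1920 * 1080 * 4    # 32-bit RGBA frame buffer, ~8.3 MB
pcie_bytes_per_s = 16e9          # ~16 GB/s usable on PCIe 3.0 x16

transfer_ms = frame_bytes / pcie_bytes_per_s * 1000
print(f"~{transfer_ms:.2f} ms per copy")  # ~0.52 ms each way

# At 144 fps a new frame is due every ~6.9 ms, so even one extra round
# trip per frame (~1 ms) eats a noticeable slice of the budget, before
# counting any of the synchronization stalls it would cause.
```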
 