CORSAIR Launches New 48GB, 96GB and 192GB DDR5 Memory Kits

It's great where we are headed, but there is nothing demanding enough among desktop applications that would need a 192GB kit.
Even 3D Studio won't need that much.
 
What do you think about using this in Machine learning and AI training models :angelic:
Those primarily require VRAM and not RAM. So not a huge effect, if much at all. For ML, you need a fast GPU with loads of VRAM. The amount of system RAM does not matter that much once you get past say 32GB or so, since the model is loaded on the GPU and not the system memory.
That said, AMD did advertise HBCC (High Bandwidth Cache Controller) technology for at least their Vega generation of cards (had a Vega 64 LC which was a very silent beast of a card) which could utilise system RAM as VRAM; but unfortunately it was never fully functional. I tried to make use of said Vega 64 for training deep learning models but even in 2022 drivers did not fully/properly implement HBCC support.
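To make that point concrete, here is a minimal PyTorch sketch (layer sizes and batch shapes are made up for illustration): once the model and a batch are moved to the GPU with `.to(device)`, training consumes VRAM, and system RAM mostly just stages data before it is copied over.

```python
# Minimal sketch: model and batch live in GPU VRAM during training;
# system RAM only holds the data before it is copied to the device.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
optimizer = torch.optim.Adam(model.parameters())
loss_fn = nn.CrossEntropyLoss()

# Dummy batch is created in system RAM, then copied to VRAM with .to(device)
x = torch.randn(64, 1024).to(device)
y = torch.randint(0, 10, (64,)).to(device)

optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()

# It is VRAM, not system RAM, that fills up as the model grows
if device.type == "cuda":
    print(f"GPU memory allocated: {torch.cuda.memory_allocated() / 1e6:.1f} MB")
```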
 
What do you think about using this in Machine learning and AI training models :angelic:
Most algorithms that run on CPUs are run on server/workstation boards, which use RDIMMs/LRDIMMs, and those RAM modules have been available in 256GB-per-stick configurations for a long time. They are cheap too (DDR3 is dead cheap, and even DDR4 256GB modules are around 150 USD/stick, I believe). So unless these new RAM modules can compete with those prices, it's a hard sell.

Those primarily require VRAM and not RAM. So not a huge effect, if much at all. For ML, you need a fast GPU with loads of VRAM. The amount of system RAM does not matter that much once you get past say 32GB or so, since the model is loaded on the GPU and not the system memory.
This is not entirely correct. AI/NN algorithms can broadly be classified into two types based on how well they parallelize. Those that cannot be parallelized don't need a GPU, as the number of cores provides no advantage; CPUs, with their higher per-core clock speed, actually have an edge here. If interested, take a look at P-complete problems (support vector machines, recurrent neural networks).
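As a toy illustration (NumPy only, not real training code), a vanilla RNN forward pass needs the previous hidden state at every step, so the time loop is inherently sequential and per-core speed matters more than core count:

```python
# Sketch of why a vanilla RNN is hard to parallelize across time steps:
# each hidden state depends on the one before it, so the loop runs serially.
import numpy as np

def rnn_forward(x_seq, W_xh, W_hh, b_h):
    """x_seq: (T, input_dim) sequence; returns the final hidden state."""
    h = np.zeros(W_hh.shape[0])
    for x_t in x_seq:                              # cannot be split across cores:
        h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)   # step t needs h from step t-1
    return h

rng = np.random.default_rng(0)
T, input_dim, hidden_dim = 100, 32, 64
h_final = rnn_forward(rng.standard_normal((T, input_dim)),
                      rng.standard_normal((hidden_dim, input_dim)),
                      rng.standard_normal((hidden_dim, hidden_dim)),
                      rng.standard_normal(hidden_dim))
print(h_final.shape)  # (64,)
```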
 
With 16 cores (7950X) and 128GB+ RAM, one can technically build a mini server without spending thousands of dollars. There are a lot of use cases.
Don't all desktop-class CPUs from AMD/Intel only support up to 128GB RAM? It's kind of frustrating that one has to move to the expensive Threadripper/Xeon series for anything more.
 
Those primarily require VRAM and not RAM. So not a huge effect, if much at all. For ML, you need a fast GPU with loads of VRAM. The amount of system RAM does not matter that much once you get past say 32GB or so, since the model is loaded on the GPU and not the system memory.
That said, AMD did advertise HBCC (High Bandwidth Cache Controller) technology for at least their Vega generation of cards (had a Vega 64 LC which was a very silent beast of a card) which could utilise system RAM as VRAM; but unfortunately it was never fully functional. I tried to make use of said Vega 64 for training deep learning models but even in 2022 drivers did not fully/properly implement HBCC support.
Were you training deep learning models on AMD GPUs? We had a terrible time with them and ended up exchanging them for Nvidia ones.
 