Would a Corsair RM1000x be enough for 2x 3080 Ti and a 3090?

Status: Not open for further replies.
I have a few questions, in no specific order:
- What's the motherboard you're using for this setup?
- What's your plan for dissipating the heat with these GPUs in close proximity?
- Was the choice of GPUs based solely on price-to-performance ratio? Wouldn't having different GPUs with varying amounts of VRAM impede your use case?
- What's the longest training time on your previous projects? Have you tried Colab Pro? It would be good for getting your feet wet.
- If you're considering inference as well, why not get an RTX 3090 and one of those 12 GB RTX 3060s, or a used 12 GB A2000 if you can get your hands on one? The RTX 3090 can be used for your learning, prototyping and training, and the RTX 3060 for testing.

Coming to the question asked: the RM1000x is a very good PSU, but 3x 350 W for the GPUs alone is too much. At full load you would be hitting OPP (around 1250 W on this unit) in no time.

Some PSUs I considered while searching were:
- 2x Antec SP1300 (you can run two of these together using the provided OC Link cable)
- Corsair AX1600i
- Cooler Master M2000 (a rebranded SilverStone HELA 2050)
- SilverStone DA1650
Also plan for an inverter if you don't have one, and for proper room ventilation (but if you have done mining/folding/DL before, you must have planned for this already).
 
That's around 800 W of power right there. Assuming another 100 W for the CPU, you will be walking a thin line. Get a 1200 W or 1600 W PSU. As already suggested, running two PSUs is also recommended.
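As a sanity check, here is a back-of-the-envelope budget in Python; the TDP, CPU, and transient figures are ballpark assumptions, not measurements:

```python
# Back-of-the-envelope PSU sizing; all figures are ballpark assumptions.
gpu_tdp_w = [350, 350, 350]   # 2x RTX 3080 Ti + RTX 3090 at stock power limits
cpu_and_rest_w = 150          # assumed: CPU, fans, drives, risers
transient_factor = 2.0        # Ampere cards can spike to roughly 2x TDP briefly

sustained_w = sum(gpu_tdp_w) + cpu_and_rest_w
worst_spike_w = sum(t * transient_factor for t in gpu_tdp_w) + cpu_and_rest_w

print(f"Sustained draw:  ~{sustained_w} W")            # ~1200 W
print(f"Transient peak:  ~{worst_spike_w:.0f} W")      # ~2250 W (all cards at once)
print(f"80% rule rating: ~{sustained_w / 0.8:.0f} W")  # ~1500 W of PSU capacity
```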
 

Hi

1) Motherboard is a Gigabyte AB350-Gaming

2) I have a relatively large room and I'll be using risers to space the GPUs apart

3) I have yet to see how TensorFlow handles multiple GPUs with different amounts of VRAM in distributed training. I already have 2x 3080 Ti and both have the same VRAM, but I think training would use the full VRAM of each GPU even if it's not the same across GPUs (see the sketch below)

4) The longest I've ever trained a model is 3 days: nanoGPT (an LLM), with power cuts.

5) I already have 2x 3080 Ti. I think by inferencing you mean LLMs, but I plan on doing standard DL (CV, object detection and some NLP). Mind you, I'm a beginner.
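Here is a minimal sketch of the setup I mean, assuming TF 2.x and tf.distribute.MirroredStrategy. Note that MirroredStrategy splits each global batch evenly across replicas, so the card with the least free VRAM effectively caps the per-GPU batch size:

```python
import tensorflow as tf

# Minimal data-parallel sketch (assumes TF 2.x and two or more visible GPUs).
# MirroredStrategy copies the model to every GPU and splits each global batch
# evenly, so the GPU with the least free VRAM caps the per-replica batch size.
strategy = tf.distribute.MirroredStrategy()  # or devices=["/gpu:0", "/gpu:1"]
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():  # variables created here are mirrored on all GPUs
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

(x, y), _ = tf.keras.datasets.mnist.load_data()
x = x.reshape(-1, 784).astype("float32") / 255.0
# Global batch = per-replica batch x number of replicas.
model.fit(x, y, batch_size=64 * strategy.num_replicas_in_sync, epochs=1)
```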

Thanks for the PSU suggestions.
Yes, I already have an RM850, but that's in my hometown; I will have to get it here.
 
Those raise more questions, but I will abstain from going off topic.
You can definitely use that 850 W with the existing 1000 W using one of those dual-PSU ATX cables (I personally haven't used them).
 
The 800 W estimate above also ignores transient spikes on the 3080 Ti and 3090 GPUs.
 
Don't go single PSU for all three; do two PSUs, and only ever load a PSU to about 80% of its rating. So if you use the 850 W and 1000 W PSUs, 80% of the combined 1850 W is 1480 W; divide the load between both units. The main problem can be the HPWR cables, as they tend to heat up if not seated properly.
So go with two PSUs.
One other thing: I would like to get some details on AI/ML from you, as I am trying to get into the same field and will be starting with a 3070 initially, since I am also still learning Python.
 
Cool, thanks for the heads-up.

Yes, sure. DM me and we can probably do a project together.
 
1000 W is not enough, for sure. I am running a 3090 on a B450 (1000 W PSU) and it is a beast of a GPU. I doubt your board can run two 3080 Tis and a 3090 at their normal speed: you will be seriously underutilizing them, as the B350 does not have enough bandwidth to drive these three GPUs. With this much heat and power draw, you may end up killing your CPU and motherboard as well, especially given that you will be running ML jobs for extended periods. My suggestion would be to go for cloud instances or use Google Colab. If local ML runs cannot be avoided, better to design a new rig around a 4090; that GPU is amazing when it comes to power efficiency and delivers 2x or more of the 3090's performance in many ML cases. See the PCIe configuration for your B350 board:
  1. 1 x PCI Express x16 slot, running at x16 (PCIEX16)
    * For optimum performance, if only one PCI Express graphics card is to be installed, be sure to install it in the PCIEX16 slot.
    * Actual support may vary by CPU.
    (The PCIEX16 slot conforms to PCI Express 3.0 standard.)
  2. 1 x PCI Express x16 slot, running at x4 (PCIEX4)
    * The PCIEX4 slot shares bandwidth with the PCIEX1_2 and PCIEX1_3 slots. The PCIEX4 slot operates at up to x2 mode when the PCIEX1_2/PCIEX1_3 slot is populated. The PCIEX4 slot operates at up to x4 mode when both of the PCIEX1_2 and PCIEX1_3 slots are empty.
    * Actual support may vary by CPU.
  3. 1 x PCI Express x16 slot, running at x1 (PCIEX1_3)
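To put those slot speeds in perspective, here is a rough comparison in Python (theoretical per-direction maxima; real-world throughput is lower, and the 1 GB batch size is just an assumed example):

```python
# Approximate theoretical PCIe bandwidth per lane, per direction (GB/s).
PER_LANE_GBPS = {1: 0.25, 2: 0.5, 3: 0.985}

# The three configurations described in the manual excerpt above.
slots = {
    "PCIEX16  (PCIe 3.0 x16)": (3, 16),
    "PCIEX4   (PCIe 2.0 x4)":  (2, 4),
    "PCIEX1_3 (PCIe 2.0 x1)":  (2, 1),
}

batch_gb = 1.0  # assumed: ~1 GB of tensors shipped to the GPU per step
for name, (gen, lanes) in slots.items():
    bw = PER_LANE_GBPS[gen] * lanes
    print(f"{name}: ~{bw:5.2f} GB/s -> ~{batch_gb / bw:.2f} s per {batch_gb:.0f} GB")
```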
 
  • Like
Reactions: draglord
Can I not use a riser? Would there be a bottleneck? I was under the impression that there's not a lot of IO. Thanks
 
It's not about what you have connected. The problem is that B350 is a very old chipset with a very limited number of PCIe lanes. Any card connected to the second x16 slot is throttled to PCIe 2.0 x4 speed, and any card connected to the third slot is throttled to PCIe 2.0 x1 speed.

I saw the images of your board and there are only two PCIe x16 slots. How are you planning to use 2x 3080 Ti and a 3090? If you are thinking of connecting one of the cards using an x1-to-x16 converter, don't: x1 is for Wi-Fi/audio cards etc. It will limit your GPU performance and may cause damage in the long term. That said, with ML or AI workloads you are not aggressively moving data back and forth; once the model is loaded into GPU memory, there is not much use of the PCIe lanes.
The attached chart compares PCIe 1.1 x8 (similar to PCIe 4.0 x1) against x16. Your board has PCIe 2.0 x1 slots (the PCIEX4 and PCI Express x1 slots conform to the PCI Express 2.0 standard). If you already have a 3080 Ti and a riser, do test your LLM by placing the GPU in the x16 slot, then in the second x16 slot that runs at x4 speed, and then in the x1 slot using the riser cable. That should give a pretty clear picture.
(Attachment: 1703301860111.png — PCIe x8 vs. x16 GPU performance comparison chart)
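If you want a quick number for each slot before a full training run, a host-to-device copy test along these lines should do. This is a minimal PyTorch sketch (using PyTorch here is my assumption; either framework works) and it measures pinned-memory transfer speed only, not training throughput:

```python
import time
import torch

# Minimal sketch: measure host-to-GPU copy bandwidth for whichever slot the
# card is currently in. Run once per slot (x16, x4, x1) and compare results.
assert torch.cuda.is_available()

size_gib = 1
x = torch.empty(size_gib * 1024**3 // 4, dtype=torch.float32).pin_memory()

torch.cuda.synchronize()
t0 = time.perf_counter()
y = x.to("cuda:0", non_blocking=True)
torch.cuda.synchronize()  # wait for the asynchronous copy to finish
dt = time.perf_counter() - t0

print(f"Host-to-device: ~{size_gib / dt:.2f} GiB/s")
```

Running `nvidia-smi --query-gpu=pcie.link.gen.current,pcie.link.width.current --format=csv` should also confirm what link each card actually negotiated.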
 
I have a 3090 Ti; it was running on a 1000 W Antec HCG with a 2600K, and power usage was around 600+ W. So to run three GPUs you will need at least a 1500 W SMPS, or 2000 W to be on the safer side.
 
I was wondering if you had considered a 4090 for the project :)
Oh, sorry, no. It's too costly. I can get my hands on a much cheaper 3090.
Ok will try it out. Thanks
 
Hi

I have bought a 3090 and now I'm in a fix. If I buy a 12 GB 3060 and train my models on all 4 GPUs (3090 + 2x 3080 Ti + 3060), would the 3060 cause a bottleneck? Would it be better if I didn't use the 3060 at all? I googled around and saw that 2x 3060 take twice as long as a 3090 to train a model.

Please let me know

Thanks
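For what it's worth: with synchronous data parallelism (e.g. tf.distribute.MirroredStrategy), every step waits for the slowest replica, so a 3060 in the pool would pace the faster cards down to its speed. A hedged sketch of excluding it by listing devices explicitly (the device indices below are assumptions; check yours first):

```python
import tensorflow as tf

# See which GPUs TensorFlow enumerates; the order/indices vary per machine.
for i, gpu in enumerate(tf.config.list_physical_devices("GPU")):
    print(i, gpu)

# Assumption: /gpu:0../gpu:2 are the 3090 + 2x 3080 Ti and /gpu:3 is the 3060.
# Synchronous training runs at the pace of the slowest replica, so leave the
# 3060 out of the training pool and keep it for evaluation or testing instead.
strategy = tf.distribute.MirroredStrategy(devices=["/gpu:0", "/gpu:1", "/gpu:2"])
print("Training replicas:", strategy.num_replicas_in_sync)
```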
 