High-Performance AI/LLM PC – Ryzen AM5, RTX 5090

Hello Everyone,
I’m building a new configuration.

I’m focusing on

  1. Superior performance for AI/LLM
  2. Reliable/excellent after-sales support, as I can’t afford a downtime of more than 3 days for my work,
  3. Budget: around 4 lakhs
  4. Preferences: No gaming, No AI training, No Intel
  5. Aesthetically nice (White)
Components:

  • Processor: AMD Ryzen 5 9600X 3.9 GHz 6-Core AM5 (6 cores, 12 threads)
  • Motherboard: Asus Prime X870-P WiFi-CSM AM5 ATX
  • Graphics Card: Zotac Gaming RTX 5090 Solid OC 32GB GDDR7 White Edition
  • RAM: G.Skill Trident Z5 RGB 96GB (48GB×2) DDR5 6000MHz
  • Internal SSD: XPG MARS 980 BLADE 2TB PCIe Gen 5 NVMe M.2
  • Internal SSD: XPG MARS 980 BLADE 4TB PCIe Gen 5 NVMe M.2
  • Cabinet: NZXT H9 Flow RGB 2025 EATX Mid Tower (White)
  • Cooler: NZXT Kraken Elite 420 RGB AIO Liquid (White)
  • Power Supply: Deepcool Gamer Storm PN1000M 1000 Watt ATX 3.1 Fully Modular

Questions

  • GPU: Has Zotac’s Indian RMA improved?
  • SSD: XPG is solid, but how is after-sales support in India?

PCPriceTracker.in

Any guidance to balance my needs?

Thanks in advance


SSD → WD or Samsung with DRAM
Power supply → Corsair 1000W Gold
CPU cooler → even an air cooler will do here, or look at some 240mm ones

Zotac is good; got my 3080 Ti replaced with a new one in ~2 weeks.


For the PSU, stick to A-rated units from this list.


Any particular reason to go with this mobo?
I would recommend this instead.

And why two Gen 5 SSDs? Only one slot supports Gen 5 on these boards.
If you need more Gen 5 slots, you'd need to go for boards above 30K.

https://computechstore.in/product/msi-mag-x870e-tomahawk-wifi/

Something like this


And as said above, you are overspending by a lot on the cooling.

AMD's IHS design is really bad: 420mm AIOs and air coolers get more or less the same performance.

Spend more on the processor and the board.

You can look at Arctic coolers if you are dead set on liquid cooling; they perform great and are priced well.

A case like the Lancool 207 Digital will be good enough for your use case; it has all the fans you need included. Just add the 360mm Arctic AIO and you're good.

Also, isn't a faster processor better for AI workloads? I'm not well informed, but I would have guessed that to be the case.

The 14900KS has come down in price considerably.

And Z790 boards are also quite cheap now.

If you don't care about upgrading every 1-2 years, Intel is much better suited for your use case.

The multi-core performance is way better on something like a 14900K, which only costs around 40K nowadays.

As other members pointed out about SSDs: don't get two Gen 5 SSDs, only one, and make sure it has DRAM.

AI performance depends on what you're using to run it. The CPU is generally much weaker at AI workloads than the GPU, so people just run inference fully on the GPU. In that case, CPU performance doesn't really matter.
So a 9600X would be fine, or even a 7600X.
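To see why the CPU matters so little once the model lives on the GPU, note that token generation is roughly memory-bandwidth bound: every generated token streams the full set of weights once. A minimal sketch with assumed (not measured) bandwidth figures:

```python
# Rough upper bound on decode speed: tokens/sec ≈ memory bandwidth / model size,
# since each token reads all weights once. All numbers below are illustrative
# assumptions, not benchmarks.

def decode_tokens_per_sec(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Bandwidth-bound upper limit on tokens generated per second."""
    return bandwidth_gb_s / model_size_gb

MODEL_GB = 18  # e.g. a ~32B model at 4-bit quantization (assumption)

gpu = decode_tokens_per_sec(MODEL_GB, 1792)  # assumed GDDR7 bandwidth of a 5090
cpu = decode_tokens_per_sec(MODEL_GB, 96)    # assumed dual-channel DDR5-6000

print(f"GPU-side upper bound: ~{gpu:.0f} tok/s")
print(f"CPU-side upper bound: ~{cpu:.0f} tok/s")
```

The order-of-magnitude gap is the whole story: as long as the model fits in VRAM, a 6-core 9600X won't be the bottleneck.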

Not sure if 1000W is enough for a 5090, but if you want to stay at 1000W, get this one.
It has an A+ rating on the tier list.
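A back-of-envelope power budget suggests 1000W is workable for sustained load; the line items below are nominal vendor TDP/TBP figures and are assumptions, not measurements:

```python
# Rough sustained-draw budget for this build against a 1000W PSU.
# All wattages are nominal vendor figures (assumptions).
draws_watts = {
    "RTX 5090 (TBP)": 575,
    "Ryzen 5 9600X (PPT)": 88,
    "Motherboard + RAM + 2x NVMe": 60,
    "AIO pump + fans + RGB": 30,
}

total = sum(draws_watts.values())
headroom = 1000 - total

print(f"Estimated sustained draw: {total} W")
print(f"Headroom on a 1000W unit: {headroom} W")
```

The caveat is transients, not sustained draw: high-end GPUs spike well above TBP for milliseconds, which is exactly what the ATX 3.1 excursion requirements are meant to handle, so an A-tier ATX 3.1 unit matters more than raw wattage here.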

Assuming the second 4TB SSD is for archiving/media storage, I'd recommend a large CMR HDD instead of an SSD; it's better for long-term storage.


For local LLM usage, GPU VRAM is usually the bottleneck. For your specific workload, check whether dual 3090s or a single 5090 would be better. The LocalLLaMA subreddit is helpful for that.
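A quick way to frame the "dual 3090 (48 GB) vs single 5090 (32 GB)" question is a VRAM-fit estimate: weights plus KV cache plus runtime overhead must fit. A minimal sketch, where the KV-cache and overhead constants are rough assumptions:

```python
# Rough VRAM-fit check for local LLM inference.
# kv_cache_gb and overhead_gb are ballpark assumptions; real values
# depend on context length, batch size, and runtime.

def vram_needed_gb(params_billions: float, bits_per_weight: int,
                   kv_cache_gb: float = 2.0, overhead_gb: float = 1.5) -> float:
    weights_gb = params_billions * bits_per_weight / 8  # N billion params -> GB
    return weights_gb + kv_cache_gb + overhead_gb

for params, bits in [(8, 8), (32, 4), (70, 4)]:
    need = vram_needed_gb(params, bits)
    on_5090 = need <= 32    # single 5090: 32 GB
    on_3090s = need <= 48   # 2x 3090: 48 GB pooled (requires model sharding)
    print(f"{params}B @ {bits}-bit: ~{need:.1f} GB | "
          f"fits 5090: {on_5090} | fits 2x3090: {on_3090s}")
```

Under these assumptions, ~70B models at 4-bit are where pooled 48 GB wins and a single 32 GB card needs CPU offload; anything around 32B at 4-bit is comfortable on the 5090 alone.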

I had a very good experience with Zotac RMA in Bangalore. No questions asked, they replaced the GPU, not once but twice, on different occasions.


I've heard many say HDDs are for archival and long-term storage, but HDDs are really sensitive to vibrations and shocks as well, right? Also, they now come in at almost the same price, at least at the smaller capacities.

Not really. We mostly run r6i.4xlarge VMs for AI workloads (16 vCPUs and 128GB RAM), and typical usage is 20% CPU and 75% RAM.

The majority of the processing is handled by the GPU, and in this case a 5090 should be good enough. The CPU does need to be fast too, but only for peak loads, like containers or some sort of rate limiting; a 9600X should be sufficient.

@bekz Where does the actual inference happen? You can use OpenAI's API and handle the logic and post-processing on your own infra. That's a more efficient approach for most AI workloads.

Thanks for sharing, very helpful! Let's consider moving it to GitHub and building a vibecoded site for easier access. The PN1000M rates as a B-, while the NZXT C-Series Gold ATX 3.1 is a strong A+ alternative.

Great catch: X870 boards offer only one Gen 5 M.2 slot; the rest are Gen 4. I'll swap one SSD to Gen 4. Thank you!

The Asus Prime X870-P WiFi-CSM was chosen primarily because it's ASUS. I loved that white ASRock X870 Pro RS WiFi :) Will consider changing to it.

The ASUS Prime series isn't that good.
If you want to go for ASUS, go for TUF or ROG instead.

I usually back everything up to Azure Blob Storage (archive tier) for long-term safety. I've faced several HDD failures, and although I've been putting off setting up a NAS, cloud backup has kept things covered so far.

Currently I'm focused on learning and experimenting with recursive, self-improving agent models (like the Darwin Gödel Machine) and using vibecoding techniques locally, mostly with LLMs, given the high cost of cloud credits and the underwhelming results from small language models (SLMs).

Managing a dual-3090 GPU setup brings a lot of complexity! A 5090 is simpler to set up, with less heat/power and noise, and is more future-proof. Plus, it's dedicated GPU VRAM (not falling back on shared memory).

I'll adjust the cooling and SSDs for sure. Thx :) As others mentioned, the GPU is where most things happen. I'm an AMD fan (across six assembled machines so far) and it has served me well. But I hear you; Intel has made great strides. I'm just picking the known devil!