AMD’s IHS design is really bad: 420mm AIOs and air coolers get more or less the same performance.
Spend more on the processor and the board.
You can look at Arctic coolers if you are dead set on liquid cooling; they perform great and are priced well.
A case like the Lancool 207 Digital will be good enough for your use case; it has all the fans you need included. Just add the 360mm Arctic AIO and you’re good.
Also, isn’t a faster processor better for AI workloads? I’m not well informed, but I would have guessed that to be the case.
The 14900KS has come down in price considerably.
And Z790 boards are also quite cheap now.
If you don’t care about upgrading every 1-2 years, Intel is much better suited for your use case.
The multi-core performance is way better on something like a 14900K, which only costs around 40k nowadays.
As other members pointed out about SSDs: don’t get two Gen 5 SSDs, just one, and make sure it’s one with DRAM.
AI performance depends on what you are using to run it. The CPU is generally weaker at AI workloads than the GPU, so people just run everything on the GPU. In those cases, CPU performance doesn’t matter much.
So a 9600X would be fine, or even a 7600X.
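To make that concrete, here’s a minimal sketch with PyTorch and Hugging Face transformers (the model name is just an example): the model lives entirely on the GPU, and the CPU just tokenizes and feeds it.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Example model; any causal LM from the Hub works the same way.
name = "Qwen/Qwen2.5-7B-Instruct"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16).to("cuda")

inputs = tok("Why is VRAM the bottleneck?", return_tensors="pt").to("cuda")
out = model.generate(**inputs, max_new_tokens=64)  # all the heavy lifting runs on the GPU
print(tok.decode(out[0], skip_special_tokens=True))
```

Once the weights are on the GPU, the CPU’s job is basically tokenization and shuffling a few KB of tensors around, which is why a 9600X is plenty.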
Don’t know if 1000W is enough for a 5090, but if you want to go with 1000W, get this one.
It has an A+ rating on the PSU tier list.
For local LLM usage, GPU VRAM is usually the bottleneck. For your specific workload, check whether dual 3090s or a single 5090 would be better. The LocalLLaMA subreddit is helpful for that.
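As a rough back-of-the-envelope (my own rule of thumb, not an exact formula): weights dominate, so it’s parameter count times bits per weight, plus some headroom for KV cache and activations.

```python
def vram_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """Weights-only VRAM estimate with ~20% headroom for KV cache/activations."""
    return params_b * bits / 8 * overhead

# A 70B model at 4-bit quantization:
print(f"{vram_gb(70, 4):.0f} GB")  # ~42 GB: fits dual 3090s (48 GB), not a single 5090 (32 GB)
```

That’s exactly the kind of trade-off the LocalLLaMA folks can sanity-check for your specific models.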
I’ve heard many say HDD for archival and long-term storage, but HDDs are really sensitive to vibrations and shocks as well, right? Also, they now come in at almost the same price, at least the smaller ones.
Not really. We mostly run r6i.4xlarge VMs for AI workloads (16 vCPUs and 128 GB RAM), and usage sits around 20% CPU and 75% RAM.
The majority of the processing is handled by the GPU; in this case, a 5090 should be good enough. The CPU does need to be fast too, but only for peak loads, like containers or some sort of rate limiting. A 9600X should be sufficient.
@bekz Where does the actual inference happen? You can use OpenAI’s API and handle the logic and post-processing on your own infra. That’s a more efficient approach for most AI workloads.
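Something like this, as a minimal sketch with the openai Python SDK (the model name and the post-processing step are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize(text: str) -> str:
    # Inference happens on OpenAI's side...
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # example model
        messages=[{"role": "user", "content": f"Summarize this:\n{text}"}],
    )
    # ...while validation/post-processing stays on your own infra.
    return resp.choices[0].message.content.strip()
```

You only pay per token instead of powering a 5090 around the clock.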
Thanks for sharing, very helpful! Let’s consider moving it to GitHub and building a vibecoded site for easier access. The PN1000M rates as a B-, while the NZXT C-Series Gold ATX 3.1 is a strong A+ alternative.
I usually back everything up to Azure Blob Storage (Archive tier) for long-term safety. I’ve faced several HDD failures, and although I’ve been putting off setting up a NAS, the cloud backup has kept things covered so far.
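For reference, the upload is only a few lines with the azure-storage-blob SDK (the container and file names here are made up):

```python
import os
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
blob = service.get_blob_client(container="backups", blob="photos-2024.tar.gz")

with open("photos-2024.tar.gz", "rb") as f:
    # Archive is the cheapest tier, but rehydration takes hours when you restore.
    blob.upload_blob(f, overwrite=True, standard_blob_tier="Archive")
```

Just keep the rehydration delay in mind; Archive is for data you hope to never touch.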
I’m currently focused on learning and experimenting with recursive, self-improving agent models (like the Darwin Gödel Machine) and using vibecoding techniques locally. Given the high cost of cloud credits and the underwhelming results from small language models (SLMs), I’m mostly running LLMs.
Managing a dual-3090 GPU setup brings a lot of complexity! A 5090 is simpler to set up, with less heat/power, less noise, and more future-proofing. +1 for dedicated GPU VRAM (don’t fall for shared GPU memory).
I’ll adjust the cooling & SSD for sure, thanks. As others mentioned, the GPU is where most things happen. I’m an AMD fan (across 6 assembled machines so far) and it has served me well. But I hear you; Intel has made great strides. It’s just that I’m picking the devil I know!