90K+ Powerful PC with Ryzen 9 + MB, 128 GB RAM, 4 TB NVMe, CPU, Cooler, PSU, UPS

Just an idea, nothing wrong with your plan, but from 4-5 feet away you may not be able to appreciate the details of the 4*3840 x 2*2160 resolution. And bezels, however small, will get in the way.

There are business projectors, which are kind of cheap because they don't have real HDR and their real response time isn't great, so they don't work well for home movie theatres. But for your use case, they will work. ViewSonic has one, the PX701, which should cost you under 1 lakh, though you'll have to import it. There may be more options in China.

So when you step back, just let the screen drop down and switch on the projector. And for the detail work, just 2 or 4 monitors of 4K resolution will then suffice.
Thanks for this new idea. I had never thought about it before.

Although to be clear, I need more screen space in terms of pixels, so that I can put more information onto a single screen. Just having a physically big screen does not solve my intended purpose. For example, if the screen resolution itself is just 1920*1080, then it does not make much difference to view this information on a huge screen, because that will simply make the files, folders, charts, graphs etc. look larger; I will not be able to display the same amount of information which I could easily show on a 4K UHD 3840*2160 pixel screen. I need more pixels, and not just a bigger screen size, in order to see the total amount of information which needs to be displayed at one time.
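Just to put rough numbers on it (a quick back-of-the-envelope Python calculation of my own, using the 4 x 2 layout of 4K monitors mentioned above):

```python
# Back-of-the-envelope pixel counts: single 1080p vs single 4K vs a 4 x 2 wall of 4K panels.
FHD = 1920 * 1080                # 2,073,600 pixels
UHD = 3840 * 2160                # 8,294,400 pixels
WALL = (4 * 3840) * (2 * 2160)   # 66,355,200 pixels across eight 4K monitors

print(f"Single 4K vs 1080p : {UHD / FHD:.0f}x the pixels")   # 4x
print(f"4 x 2 wall vs 1080p: {WALL / FHD:.0f}x the pixels")  # 32x
print(f"4 x 2 wall vs 4K   : {WALL / UHD:.0f}x the pixels")  # 8x
```

So a big 1080p image, no matter how large physically, still only carries about one-fourth of the information a single 4K screen can show at once.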

If a projector can meet this particular requirement of that many pixels' worth of screen space, then great; otherwise I will have to look at monitors only.

Anyway, thanks a lot for sharing this different perspective.
 
I don't think any reasonably priced projector satisfies that requirement.
Thank you so much for clarifying this point. So basically, a projector is not a viable solution for this particular requirement.




btw, a friend from Bangalore, @-NaniBot-, made an EPYC CPU-based homelab server just a few months ago and he has written a very nice blog post explaining his journey at this link -


His server configuration is as follows -
CPUs - 2 x EPYC 7601 processors (64 cores and 128 threads in total)
Motherboard - Supermicro H11DSI dual-socket AMD EPYC server motherboard, REV 2.0
RAM - 16 x 16 GB DDR4 2133 MHz (256 GB total) - 24,800
SSD/NVMe drive - Crucial P3 4 TB
Server Case - Phanteks Enthoo Pro
Power Supply - Corsair RM1000x
Coolers - 2 x Noctua NH-U14S TR4-SP3

He was able to build this server for around 1,60,000 rupees, which is very economical. Everything is working perfectly fine, he is very happy with his server's performance, and he says that it consumes much less power than his older servers.

He bought the motherboard from the seller "tugm4470" on eBay, at a much cheaper price than the current listings https://www.ebay.com/sch/i.html?_ssn=tugm4470
He said that the stuff this seller lists on eBay comes from datacenters, so it has been handled by professionals and the risk of parts not working is negligible.

The CPUs were bought from an Indian vendor named serversupply.in, who sells CPUs that are not vendor-locked. It is extremely important to confirm this aspect before buying any second-hand EPYC CPU from anywhere.

@-NaniBot- brother, thank you so much for sharing the valuable information on your blog post, and if you would like to share any more tips etc. then please let us know, so that all those guys like me, who want to build their own servers, can take inspiration and motivation from such an EPYC Adventure.

Best Regards
 
One thing I would like to add is that the motherboard (H11DSI rev 2.0) is also compatible with 2nd gen EPYC CPUs.

There is no reason for me to upgrade now, but in the future I'll definitely be upgrading to 2nd gen EPYCs. As of now, the 2nd gen EPYCs remain very expensive. Hoping the prices come down soon.

Also, you can cut down on the CPU cost if you're fine with lower clock speeds. Check out the below listing for a 32C/64T CPU for only $100!

 
One thing I would like to add is that the motherboard (H11DSI rev 2.0) is also compatible with 2nd gen EPYC CPUs.

There is no reason for me to upgrade now, but in the future I'll definitely be upgrading to 2nd gen EPYCs. As of now, the 2nd gen EPYCs remain very expensive. Hoping the prices come down soon.

Also, you can cut down on the CPU cost if you're fine with lower clock speeds. Check out the below listing for a 32C/64T CPU for only $100!

Thanks for the update.

Actually, just like you mentioned about going to the 2nd gen EPYC, I am also interested in those 2nd or even 3rd gen EPYC CPUs. I could make the compromise of going with a single CPU to begin with, but I would prefer the higher generation, because in my particular use case of heavy data crunching, this would be quite helpful.

The trouble is that these CPUs are very costly in the Indian market -

Therefore I will have to buy them either from the US or China and then get them into India through some friend, which will save on the customs duty and other charges. Since a CPU is so small, putting a couple of them into luggage is not going to be a big deal. I am just not sure about other issues that might crop up if the airport authorities object to these CPUs.

If someone has ever brought CPUs from the US or other countries, could they please tell how practical this approach is, or whether it could create a lot of trouble and it is better to follow the normal route of getting these CPUs shipped by post and paying all the extra charges?
 
Therefore I will have to buy them either from the US or China and then get them into India through some friend, which will save on the customs duty and other charges. Since a CPU is so small, putting a couple of them into luggage is not going to be a big deal. I am just not sure about other issues that might crop up if the airport authorities object to these CPUs.

If someone has ever brought CPUs from the US or other countries, could they please tell how practical this approach is, or whether it could create a lot of trouble and it is better to follow the normal route of getting these CPUs shipped by post and paying all the extra charges?
No issue with bringing stuff like processors, HDDs, SSDs, RAM etc. as long as they are in their original/compatible packaging with the seal torn/opened. I know people who often bring old used processors from the US this way. Just don't bring so many that any typical person would find it suspicious (like half a dozen processors/SSDs).
 
No issue with bringing stuff like processors, HDDs, SSDs, RAM etc. as long as they are in their original/compatible packaging with the seal torn/opened. I know people who often bring old used processors from the US this way. Just don't bring so many that any typical person would find it suspicious (like half a dozen processors/SSDs).
Thank you so much for confirming this. I will make sure that I do not get too greedy and do not ask them to bring more than 2 CPUs at one time.

Is it necessary to have the original CPU box along with it? If yes, then that would be a problem, because these eBay sellers who deal in second-hand server parts do not really provide the original CPU packaging, I guess.

Along with these 2 CPUs, how many RAM sticks and NVMe SSDs could be brought easily without causing any suspicion at the airport?
 
Is it necessary to have the original CPU box along with it? If yes, then that would be a problem, because these eBay sellers who deal in second-hand server parts do not really provide the original CPU packaging, I guess.
No issue. Basically, the customs rules allow eligible stuff to be brought back from abroad for personal use, meaning it should not be in brand-new, sealed, original packaging. Any cover will do as long as it does not look like new, sealed, original packaging.

Along with these 2 CPUs, how many RAM sticks and NVMe SSDs could be brought easily without causing any suspicion at the airport?
This is just my guess, but again 4-5 RAM sticks and 3-4 SSDs shouldn't be an issue as long as they are in used/clearly-not-new packaging.
 
As mentioned in the OP, I have prepared and posted a thread in the "WTB - Want to Buy" segment at this link -

If you know anyone who might be interested in selling any of the above parts, then please let me know. Thank you.
 
Minor item, but also add decent quality display cables of appropriate length to your list of things to get.

As well as power cords, extensions etc. if required.
Thanks for the heads-up. I completely forgot about that. I will keep that in mind before placing any orders for those items, in case the default cables do not suffice.
 
I specifically request those members who have some experience with building their own homelabs or servers to please point out my mistakes in trying to make the server cluster mentioned above, to which I plan to keep adding more individual servers as my budget allows in the future.

It's always exciting to follow along HEDT builds.

Do you know if your application scales linearly with more cores in a single system? There are very few that do, and almost all computing these days benefits more from having more worker machines than from having more cores in one machine.
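As a rough illustration of why linear scaling is rare, here is a tiny sketch of the textbook Amdahl's law formula; the 95% parallel fraction is just an assumed number, not anything measured from your workload:

```python
# Amdahl's law: speedup = 1 / (serial_fraction + parallel_fraction / cores)
# Illustrative only; the 95% parallel fraction below is an assumption, not a measurement.
def speedup(cores: int, parallel_fraction: float) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

for cores in (8, 16, 32, 64, 128):
    print(f"{cores:>3} cores -> {speedup(cores, 0.95):.1f}x speedup")
# Even with 95% parallel code, 128 cores gives only about 17x, not 128x.
```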

I started out with 16C/32T systems but I found myself being limited by memory (256GB for HEDT) well before I became CPU limited. Subsequently, I dropped down to 6C/12T with 128GB each and even then CPU usage didn't reach 50% before memory became a limiting factor.

For my use case (CPU crunching/web scraping) it made more sense to have a large number of smaller machines than a smaller number of high-core-count machines. Parts were cheaper, downtime was much lower, ROI was far quicker, and power consumption was much lower. With larger servers, more of your productivity goes down if any maintenance or troubleshooting needs to be done. I have about a dozen 128GB machines now and it's barely a blip in output if any one of them goes down for a reboot.

During the normal process, even a simple desktop CPU like a Ryzen 7 or Ryzen 9 is more than sufficient for this VM. But as soon as I have to do the heavy data crunching work, I need to boost the hardware configuration of this VM to the maximum hardware resources available to me. After this work is completed, I can scale the VM configuration back down to the smaller size.

I am hoping that what I have described above is quite possible with software like Proxmox, by changing the configuration of the VM accordingly. I hope to save money on energy bills etc. by using this method. Why keep the extra servers running all the time, when they are needed only for some specific duration? Why not physically turn off the servers and take them offline when that hardware is not needed?

With Proxmox, a VM is configurable when it's offline but not while running. However, I've never needed to scale back resources because Proxmox is very good at dealing with provisioning.
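If you do end up flipping a VM between a small day-to-day profile and a big crunching profile, it can be scripted around Proxmox's qm tool. A minimal sketch, assuming a VM with ID 100 and made-up core/memory numbers:

```python
# Minimal sketch: shut down a Proxmox VM, resize it, start it again via the qm CLI.
# The VM ID (100) and the resource numbers are placeholders.
import subprocess

def resize_vm(vmid: int, cores: int, memory_mb: int) -> None:
    subprocess.run(["qm", "shutdown", str(vmid)], check=True)  # graceful shutdown, VM must be offline
    subprocess.run(["qm", "set", str(vmid),
                    "--cores", str(cores),
                    "--memory", str(memory_mb)], check=True)   # apply the new size
    subprocess.run(["qm", "start", str(vmid)], check=True)     # bring it back up

resize_vm(100, cores=8, memory_mb=32 * 1024)      # small profile for routine work
# resize_vm(100, cores=64, memory_mb=224 * 1024)  # big profile for a backtesting run
```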

For example, I had 50 dual-core Windows VMs running off a single quad core system with 32GB of memory. These were used for downvoting my enemies on reddit, and they worked very well. CPU usage never crossed 70%, but the 32GB became a limiting factor. A fast SSD was also necessary, one with a very high number of IOPS.

Currently, this is my slowest server:

[attachment: Screen Shot 2024-02-29 at 11.25.27 AM.png]


KSM (Kernel Samepage Merging) is what makes over-provisioning possible; at peak times, I've seen it cross 50GB on a 128GB system.
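If you want to see how much KSM is actually saving on a node, the kernel exposes the counters under /sys/kernel/mm/ksm; a quick sketch to read them on the host:

```python
# Read Kernel Samepage Merging stats from sysfs on a Linux (e.g. Proxmox) host.
import os
from pathlib import Path

KSM = Path("/sys/kernel/mm/ksm")
page_size = os.sysconf("SC_PAGE_SIZE")  # usually 4096 bytes

pages_shared = int((KSM / "pages_shared").read_text())    # unique pages kept in memory
pages_sharing = int((KSM / "pages_sharing").read_text())  # duplicate pages merged away

print(f"KSM is saving roughly {pages_sharing * page_size / 1024**3:.1f} GiB "
      f"({pages_sharing} merged pages backed by {pages_shared} shared pages)")
```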

As for monitors, I have five and I want to add two more some time when my budget allows; most of those screens are used for real-time monitoring of resources across all of my servers. Eight screens would be a dream.

About ROI: the markets and trends have been very unpredictable over the last few years. I would say around ten months is where you should mark your budget, if this setup is meant to generate income.
 
It's always exciting to follow along HEDT builds.
Thank you so much for taking the time to share your views here. You are one of the most well-versed and expert Proxmox users on the forum, and your insights will greatly help me in this project.

Do you know if your application scales linearly with more cores in a single system? There are very few that do, and almost all computing these days benefits more from having more worker machines than from having more cores in one machine.
Yes, most of my applications are able to scale linearly with more cores. I will need to do some practical testing on machines with 64 or more cores to see the actual performance improvements, though.

One advantage that I have is that my data files have exactly the same schema throughout, so they can be broken down into smaller pieces and the workflow can be run in parallel on those individual files. For example, instead of running a single workflow on a 10 GB data file, I can first split this file into two parts of 5 GB each and then run two separate workflows on both of these 5 GB files, to get the output more quickly.

The workflow remains exactly the same, the input files also remain the same (just half the size), and the output remains the same in both cases. But the total running time is reduced, because I will be running both processes in parallel rather than waiting for the first workflow to complete.

This cannot be done in each and every case, but it can be done for more than 90% of my data files, which are all stock market historical data.
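Just to illustrate the idea in code, here is a toy sketch of the split-and-run-in-parallel approach; the file names and the per-chunk work are placeholders, not my actual workflow:

```python
# Toy sketch: run the same workflow on pre-split CSV chunks in parallel processes.
# The chunk file names and the per-chunk function are placeholders only.
from concurrent.futures import ProcessPoolExecutor
import csv

def run_workflow(chunk_path: str) -> int:
    """Stand-in for the real analysis: here it just counts rows in one chunk."""
    with open(chunk_path, newline="") as f:
        return sum(1 for _ in csv.reader(f))

chunks = ["nifty_2020_part1.csv", "nifty_2020_part2.csv"]  # e.g. two 5 GB halves

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=len(chunks)) as pool:
        results = list(pool.map(run_workflow, chunks))   # both halves processed in parallel
    print(f"Total rows processed: {sum(results)}")       # combined result is the same
```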

I started out with 16C/32T systems but I found myself being limited by memory (256GB for HEDT) well before I became CPU limited. Subsequently, I dropped down to 6C/12T with 128GB each and even then CPU usage didn't reach 50% before memory became a limiting factor.
I will need to do similar testing myself to become totally clear about the thresholds where my workflows would start to experience bottlenecks because of CPU limits, memory limits or SSD I/O limits etc.

For my use case (CPU crunching/web scraping) it made more sense to have a large number of smaller machines than a smaller number of high-core-count machines. Parts were cheaper, downtime was much lower, ROI was far quicker, and power consumption was much lower. With larger servers, more of your productivity goes down if any maintenance or troubleshooting needs to be done. I have about a dozen 128GB machines now and it's barely a blip in output if any one of them goes down for a reboot.
These points make so much sense. Having multiple machines with lower configurations definitely has its plus points. It just depends on the workload. If this approach suits my requirements better, then I will definitely explore the idea of running smaller virtual machines on different servers, instead of having one single huge virtual machine using up the hardware from multiple servers.

With Proxmox, a VM is configurable when it's offline but not while running. However, I've never needed to scale back resources because Proxmox is very good at dealing with provisioning.
I am totally fine with the requirement of turning off the server machine and taking it offline before reconfiguring the VM size, depending upon my requirement at that moment. I don't really need all these hardware resources when I am not running a backtest or some other heavy data analysis workflow. At those times, a simple 8-core VM is more than sufficient for all my routine activities. Therefore I will be physically turning these server machines on and off according to the processing power required, and I will reconfigure the Windows VM size accordingly from time to time. All this will be done for the sake of saving electricity costs etc., as there is no point in keeping all these servers running while they are not being used for data analysis.

For example, I had 50 dual-core Windows VMs running off a single quad core system with 32GB of memory. These were used for downvoting my enemies on reddit, and they worked very well.
You must be KIDDING ME, right?
But if you are serious, then I would say to your enemies on reddit that it is a very bad idea to mess with a guy who has so many powerful VMs at his disposal. :)


As for monitors, I have five and I want to add two more some time when my budget allows; most of those screens are used for real-time monitoring of resources across all of my servers. Eight screens would be a dream.
Thank God, I have finally found someone who understands the practical value of having a lot of screen real estate. Any type of real-time monitoring becomes so much easier and more effortless when you can spread that information across multiple screens. Otherwise, just the task of continuously flipping across multiple tabs/windows/screens creates additional stress.

Thanks again for your inputs. I request you to please keep visiting this thread in the future as well and keep on guiding people like me who want to build their home servers.

Thanks a lot.
 
I want to thank all the friends who have helped me here - @rsaeon, @Alucard1729, @Psycho_McCrazy, @rockyo27, @guest_999, @aasimenator, @-NaniBot-, @nRiTeCh, @Tobikage

I have decided to buy the combo from this thread for the time being as my temporary machine -

Processor - AMD Ryzen 7 7800X3D
Motherboard - Asus Strix B650E-F
RAM - XPG 64GB (32GB x 2) 6000 CL30
CPU Cooler - Deepcool LT720
SSD - Kingston KC3000 2TB
HDD - Seagate Barracuda Pro 10TB
PSU - Corsair RM1000x with ATX 3.0 cable


I am still looking at the possibility of buying a higher-end CPU than the AMD 7800X3D, in which case I will simply swap the CPU in that combo, so that I get higher computational processing power.

I have already got 64GB RAM (32GB x 2) but I am looking for one more kit of 64 GB, so that I can have 128 GB RAM total in this machine.

Also looking for the remaining items like second hand 4K UHD Monitors, GPU and UPS etc. as mentioned here -

Thanks a lot.
 
Looks like a good start! The KC3000 is a fantastic SSD for high IOPS, I saw a pretty significant difference when I upgraded from the 970 EVO. I routinely run 5000+ web scrapers from a single drive.

If this is an open-air system or a high airflow system, an air cooler will be better. For 12C/16C systems, I use dual towers from either Deepcool or Noctua and high end single towers (more than 4 heat pipes) for anything 6C/8C. Our climate shortens the lifespan of AIO liquid coolers, unless you're in an air-conditioned environment.

Parts shortages are something higher-end builds are plagued with; most of my systems were built after the lockdown and availability was terrible even with inflated prices. On average I had to pay 15k per 32GB stick of memory.

The 7800X3D will be easy to offload when the time comes to upgrade. Hopefully other members will have opinions on the other parts; I usually stay a generation behind since long-term stability is more important to me than raw performance, so I don't have any experience with AM5 motherboards.
 
Looks like a good start! The KC3000 is a fantastic SSD for high IOPS, I saw a pretty significant difference when I upgraded from the 970 EVO. I routinely run 5000+ web scrapers from a single drive.

If this is an open-air system or a high airflow system, an air cooler will be better. For 12C/16C systems, I use dual towers from either Deepcool or Noctua and high end single towers (more than 4 heat pipes) for anything 6C/8C. Our climate shortens the lifespan of AIO liquid coolers, unless you're in an air-conditioned environment.

Parts shortages are something higher-end builds are plagued with; most of my systems were built after the lockdown and availability was terrible even with inflated prices. On average I had to pay 15k per 32GB stick of memory.

The 7800X3D will be easy to offload when the time comes to upgrade. Hopefully other members will have opinions on the other parts; I usually stay a generation behind since long-term stability is more important to me than raw performance, so I don't have any experience with AM5 motherboards.
The general consensus seems to be that consumer SSDs are worse when it comes to sustained loads. I've never owned an enterprise SSD so I can't comment on this, but I'm going to switch to a couple of P4510s or similar soon.

 
That is true; I just swap out drives after they've failed. I'm running some drives at over 250% wearout. I'm able to do this because I rely on Ansible and backups to bring back a node quickly with a fresh drive after an unrecoverable failure.
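The wearout number comes from the drive's SMART data; if anyone wants to track it themselves, something along these lines (an assumed /dev/nvme0 device path, smartmontools installed, run as root) reads the NVMe "percentage used" counter:

```python
# Rough sketch: read NVMe wearout ("Percentage Used") from smartctl's JSON output.
# /dev/nvme0 is just an example device path; adjust for your drives.
import json
import subprocess

def nvme_wearout(device: str = "/dev/nvme0") -> int:
    out = subprocess.run(["smartctl", "-A", "--json", device],
                         capture_output=True, text=True, check=True)
    data = json.loads(out.stdout)
    return data["nvme_smart_health_information_log"]["percentage_used"]

print(f"Drive wearout: {nvme_wearout()}%")  # can exceed 100% on heavily used drives
```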

Somewhat surprisingly, the Silicon Power A55s are the oldest drives I have that still haven't died as of today. SN570s, Kioxias and 970 Evos all came and went, but these dinky 120GB units are still going, ha. I bought a ten pack for 11K during the lockdown from a local shop without bill/warranty; one was DOA and I sold one to someone here, but all the others are still operational.

Enterprise drives would be really nice; older 1.6TB Intel drives can be had for under 6K, but these days I usually only buy parts on EMI, which is difficult on OLX.