CPU/Mobo How many VMs can I run on my PC configuration?

KreeHunter

Disciple
So I've decided it's time to get back to studying, and I'm planning to set up a VM lab on my PC. It's been so long since I ran a VM lab that I wanted to check the maximum number of VMs my PC can handle. Here is my configuration:

i7 4790S
16 GB RAM (8 GB Samsung + 8 GB Hynix; my mobo only supports 16 GB)
GTX 1660 GPU
550W 80+ Bronze PSU

Note: I'm currently planning the following setup:

Hypervisor: Oracle VirtualBox / VMware Workstation / OpenStack
2x Windows Server 2019 Domain Controllers
1x DNS + reverse proxy server (optional)
1x firewall (optional)
at least 2x Windows clients (Win 10)
at least 3x Linux clients (RHEL/Ubuntu/CentOS)

I will keep adding VMs as needed. I am primarily a Windows Server admin, and my learning path is:

1. Learn and master Linux administration
2. Learn and master Kubernetes
3. Learn and master Kafka
4. Learn and master Nginx
5. Learn and master Docker

I'd also like to squeeze in VMware ESXi on the side if possible.
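As a rough sanity check on the plan above, here's a quick back-of-the-envelope memory budget in Python. The per-VM allocations are illustrative assumptions, not recommendations:

```python
# Rough memory budget for the planned lab on a 16 GB host.
# Per-VM allocations are assumptions for illustration only.
planned_vms = {
    "Server 2019 DC": (2, 2048),       # (count, MB each)
    "DNS + reverse proxy": (1, 1024),
    "Firewall": (1, 512),
    "Windows 10 client": (2, 2048),
    "Linux client": (3, 1024),
}

host_ram_mb = 16 * 1024
host_reserve_mb = 2048  # keep some RAM for the host OS / hypervisor itself

total_mb = sum(count * mb for count, mb in planned_vms.values())
free_mb = host_ram_mb - host_reserve_mb - total_mb

print(f"Total allocated to VMs: {total_mb} MB")
print(f"Headroom after host reserve: {free_mb} MB")
```

With those assumed allocations, the full lab fits in 16 GB with a little headroom left over; if it didn't, you'd trim the per-VM numbers or stagger which VMs run at once.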
 
Sounds like all of that is easily doable on that hardware.

I've run 20 Windows 10 VMs on a 16 GB machine in the past, each with 1 GB of RAM. I used special low-memory-footprint builds of Windows from a Blogspot site called TheWorldofPC.

But since you only need two Windows VMs, an older version of Windows 10 configured with 2 GB of memory should work just fine. I'm assuming all the other VMs are less resource-intensive.

I'd recommend getting an SSD for better responsiveness, at least a 240 GB one to use as the main drive for the hypervisor. In my experience, storage speed is the greatest limiting factor when running multiple VMs.

As a point of reference, I had two Debian 11 VMs running about 10 Docker containers (DNS and nginx being two of them) on an HP Thin Client T610 with 4 GB of memory, and I still had 1 GB available.
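If you want to put a number on your current drive before deciding, a crude sequential-write timing in Python gives a ballpark figure. The file and block sizes here are arbitrary assumptions, and results will vary with filesystem and caching:

```python
import os
import tempfile
import time

def sequential_write_mb_s(total_mb=64, block_mb=4):
    """Time a simple sequential write and return throughput in MB/s.

    Sizes are illustrative assumptions; fsync after each block forces
    data to disk so the page cache doesn't inflate the result.
    """
    block = b"\0" * (block_mb * 1024 * 1024)
    with tempfile.NamedTemporaryFile(delete=False) as f:
        path = f.name
        start = time.perf_counter()
        for _ in range(total_mb // block_mb):
            f.write(block)
            f.flush()
            os.fsync(f.fileno())  # flush through to the physical device
        elapsed = time.perf_counter() - start
    os.unlink(path)  # clean up the temporary file
    return total_mb / elapsed

print(f"Sequential write: {sequential_write_mb_s():.1f} MB/s")
```

A spinning HDD typically lands around 100 MB/s here, a SATA SSD several times that; random I/O (which VMs generate a lot of) favors the SSD far more heavily than this sequential test shows.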
 

I am an SME server admin (there was a time when the term SME actually meant something :cool:) and have reached a fork in the road: there's no future in pure Windows admin, so it's either cloud (AWS/Azure), VMware, or Linux. Of these, Linux looks the most promising.


Forgot to mention I have a 240 GB Kingston SATA SSD. Recently my mobo went kaput, so I got a cheap POwerX mobo which surprisingly has an M.2 slot on an LGA 1150 board. I was planning to borrow a spare SSD from friends and test it out (though I'm not that optimistic about the performance). Also, since I have a fairly decent GPU, I was wondering: would mapping the physical GPU to VMs with GPU passthrough on ESX improve performance?
 
I haven't used ESX or any hypervisor other than Proxmox and VirtualBox, but if your application requires GPU compute, then passing through the GPU will definitely help.

I did pass through NVMe drives and USB controllers during the Chia days, and it made a gigantic difference. Devices passed through on PCIe lanes wired directly to the processor were so much faster that interacting with devices passed through on chipset PCIe lanes felt like a slowdown by comparison. It was a completely new level of performance that I'd never seen before:


cheap POwerX mobo which surprisingly has a m.2 slot on a LGA 1150 Slot

Those boards are really intriguing and I've wanted to try one out for so long, but I decided I needed to move on from DDR3 systems for my own sanity, haha.
 
@rsaeon any thoughts on the below?

Licensing is indeed necessary, but there are potential workarounds. For instance, NVIDIA GRID used to offer a 90-day trial, although it appears they've since reduced this duration. Furthermore, VMware Workstation currently lacks support for GPU passthrough.

Drawing on nearly five years of experience with VMware technologies including ESXi, vCenter, Horizon, and Workspace ONE (covering installation, configuration, and management), it's worth noting that even ESXi, previously free, is undergoing changes under Broadcom's ownership and may no longer be offered for free (this was off topic).
 