I have a few Proxmox clusters at home. The highlight of Proxmox is its ability to cluster separate boxes for redundancy and high availability.
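For anyone who hasn't clustered before, it's only a couple of commands. A rough sketch (cluster name and IP below are placeholders, substitute your own):
[CODE]
# On the first node, create the cluster
pvecm create homelab

# On each additional node, join using the first node's IP
pvecm add 192.168.1.10

# Verify membership and quorum from any node
pvecm status
[/CODE]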
The homelab cluster has two nodes: an HP T610 Thin Client with 4GB of memory and a Pentium G4400 system with 8GB of memory. These run two DietPi VMs with Pi-hole as well as a general-purpose Windows VM, since my main computer is a 15" 2012 MacBook Pro. Eventually I'll add a Home Assistant VM along with a few NAS ones, whenever I discover how to squeeze more hours into a day.
[ATTACH=full]117585[/ATTACH]
I have two other clusters. One I can't share, as it's a Chia (XCH) farming cluster that I maintain on behalf of clients.
The other is a passive income cluster that's basically an extremely overbuilt and over-specced phone farm:
[ATTACH=full]117587[/ATTACH]
On my todo list, I need to set up a Proxmox Backup Server; everyone who uses Proxmox highly recommends it. I have no backup system in place, but the VMs are spread across enough nodes that I'm alright with the resiliency that brings.
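Until PBS is in place, even a plain vzdump to local storage beats nothing. Something like this should do it (the VMID and storage name are placeholders):
[CODE]
# Snapshot-mode backup of VM 100 to the 'local' storage, zstd-compressed
vzdump 100 --storage local --mode snapshot --compress zstd

# Or back up every guest on the node in one go
vzdump --all --storage local --mode snapshot --compress zstd
[/CODE]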
I have 32TB of writes over 150 days with the same drive as yours, but that's on a node with ~190 VMs, which averages out to just over 200GB per day. I try to keep swap usage below 1GB on that particular node.
On another node with a 120GB DRAM-less SATA SSD, I have 8TB written over 450 days. That appears to be about average compared to the other nodes with the same storage configuration. Swap is under 1% on all of them.
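If anyone wants to pull the same numbers from their own nodes, smartmontools is the usual route. Device paths here are placeholders, and the exact attribute name varies by drive:
[CODE]
# SATA: Total_LBAs_Written x sector size = bytes written (attribute name varies by vendor)
smartctl -A /dev/sda | grep -i total_lbas_written

# NVMe reports "Data Units Written" (1 unit = 512,000 bytes)
smartctl -a /dev/nvme0 | grep -i "data units written"

# Quick look at current swap usage
free -h
swapon --show
[/CODE]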
Proxmox really doesn't like it when a node is offline. At times I've been unable to create VMs, or even start VMs on other nodes, because a node going offline cost the cluster its quorum.
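That's the quorum protection at work: without a majority of votes, the cluster filesystem goes read-only and guest operations get blocked. If you know the missing node is genuinely down, you can temporarily lower the expected vote count, though be aware this defeats the split-brain protection:
[CODE]
# Check the current quorum state
pvecm status

# Temporarily tell corosync to expect only 1 vote (resets on restart)
pvecm expected 1
[/CODE]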
I can't imagine there being much cost savings at your scale. A 41.6W load consumes about 1kWh per day (41.6W × 24h ≈ 1,000Wh), and that's less than the cost of a cup of chai?
Generally, when I do need to restore a VM from backup, I first destroy/remove the VM it's replacing. When I remove a node from a cluster, I find it's best not to reuse the hostname or IP address. It can be done, but the SSH keys become a mess and you'd need to reset all of them on every node; it's just easier to pick a different hostname and IP for the new node.
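For reference, my restore/removal flow looks roughly like this (the VMID, archive path, and node name are placeholders; your dump filename will include a timestamp):
[CODE]
# Remove the old VM first so the VMID is free
qm stop 100
qm destroy 100

# Restore from a vzdump archive into the same VMID
qmrestore /var/lib/vz/dump/vzdump-qemu-100.vma.zst 100 --storage local-lvm

# Remove a dead node from the cluster (run from a surviving node,
# with the old node powered off and never brought back under that name)
pvecm delnode oldnode
[/CODE]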