> IIRC there is a module called log2ram - not sure if it works with Proxmox

Damn, time to return the drives to Amazon, I guess.
> Guys, what would you say is the ideal Proxmox setup? 1 SSD for Proxmox OS and 1 for VMs? Another for storage?

1 SSD for Proxmox OS and VMs together.
Found a thread in German, but it explains the same use case that you're looking at. I came across this thread - https://www.reddit.com/r/Proxmox/comments/ncg2xo/minimizing_ssd_wear_through_pve_configuration/
If you find more, do share. log2ram, I think, only handles the system logs; other Proxmox processes continue writing as usual.
> Found a thread in German, but it explains the same use case that you're looking at.

Depends on how the drive conks out. If the SMART data is readable and the drive shows usage beyond its endurance limit, then they won't replace it. But I've read threads where Samsung replaced drives for some people, so it comes down to luck. Don't rely on luck for production work if you use it for anything serious, though.
They used log2ram together with folder2ram. I came across log2ram while running a Pi cluster as an experiment, since log2ram is presented as a solution to reduce SD card wear.
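For roughly the same effect without log2ram, one option is mounting /var/log as a tmpfs via /etc/fstab. This is only a sketch (the size is my assumption), and the trade-off is real: logs vanish on every reboot, whereas log2ram syncs them back to disk on shutdown and on a timer.

```
# /etc/fstab entry (sketch, not log2ram itself): keep /var/log in RAM
# so routine logging never touches the SSD. Caveat: logs are lost on
# every reboot; log2ram avoids this by writing them back to disk.
tmpfs  /var/log  tmpfs  defaults,noatime,mode=0755,size=128M  0  0
```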
On a side note: if you buy a new SSD and run it to death before its warranty expires, you will still receive a replacement SSD, right? (Correct me if I'm wrong.)
> 1 SSD for Proxmox OS and VMs together.

I had read that with consumer drives, separating the VMs and the OS splits the disk usage roughly in half, and on SATA drives it gives better reads depending on the workload.
Storage can be on a separate regular drive.
> Anyone running TrueNAS/Unraid etc. under Proxmox? Or is it recommended to run NAS stuff bare metal?

There was a video on YouTube about running TrueNAS on Proxmox.
> Depends on how the drive conks out. If the SMART data is readable and the drive shows usage beyond its endurance limit, then they won't replace it. But I've read threads where Samsung replaced drives for some people, so it comes down to luck. Don't rely on luck for production work if you use it for anything serious, though.

This is what I get with a cheap NVMe drive hosting both the hypervisor and multiple VMs.
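For anyone wanting to check their own drive before worrying about warranty: a minimal sketch, assuming smartmontools is installed (`apt install smartmontools`). The device name `/dev/nvme0` is a placeholder; adjust it for your system. NVMe drives report wear directly as "Percentage Used" in the SMART health log.

```shell
# Print the NVMe wear figure (percentage of rated endurance consumed).
# /dev/nvme0 is an assumed device name; check `smartctl --scan` first.
smartctl -A /dev/nvme0 | awk -F: '/Percentage Used/ {gsub(/[ %]/, "", $2); print $2}'
```

The same attribute is what the Proxmox web UI surfaces as "Wearout" on the Disks page, so the two numbers should agree.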
> Don't overthink it - 2% wear in a year with very decent R/W speeds from a low-cost NVMe (with no tweaks) is more than acceptable if you ask me.

Yes, I agree with you; 2% can be ignored easily. But it's not 2% for everyone, and it varies in ways that don't make much sense.
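When the wear numbers vary in ways that don't make sense, it can help to see which processes are actually doing the writing. A rough sketch, assuming a Linux host with `/proc` (run as root to see all processes; without root you only see your own). On Proxmox, daemons like pmxcfs and rrdcached tend to rank high here.

```shell
# List the top 5 processes by cumulative bytes written to storage,
# using the write_bytes counter from /proc/<pid>/io.
for io in /proc/[0-9]*/io; do
    pid=${io%/io}; pid=${pid#/proc/}
    wb=$(awk '/^write_bytes/ {print $2}' "$io" 2>/dev/null)
    [ -n "$wb" ] && printf '%12s  %s\n' "$wb" "$(cat /proc/$pid/comm 2>/dev/null)"
done | sort -rn | head -5
```

Note the counters are cumulative since each process started, so long-running daemons dominate; for a live view, a tool like iotop gives rates instead.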
> Shifted to a NUC 11 i5, that is, a NUC11PAHi5. About a 30% jump in CPU power, plus higher boost speeds and single-thread performance in case one VM needs it.

Cost?
> Cost?

It's 32k in the market. Got it second-hand from a friend for a good deal, and added my own RAM and SSD.
> It's 32k in the market. Got it second-hand from a friend for a good deal, and added my own RAM and SSD.

I got the 8th-gen NUC i5 for less than 22k in December last year. The 11th gen looks like a good upgrade, but I'm waiting for the 12th-gen NUCs.
> I got the 8th-gen NUC i5 for less than 22k in December last year. The 11th gen looks like a good upgrade, but I'm waiting for the 12th-gen NUCs.

Yes, Amazon had an offer, but otherwise it was constantly above 30k. I had got mine second-hand with 32 GB RAM and a 500 GB SSD at 20k.
> Yes, Amazon had an offer, but otherwise it was constantly above 30k. I had got mine second-hand with 32 GB RAM and a 500 GB SSD at 20k.

Not from Amazon; I got it from Lamington Road. Even the cheapest Amazon deal was more than the Lamington Road price. Really nice deal for the used device with 32 GB RAM! My bro's 8th-gen NUC is constantly hitting its thermal limit; he is a heavy user, so I am looking for an upgrade for him. My dad is satisfied, as his use case is limited.
> I use BlueIris in Windows.

For that I'll have to pass the GPU through to a Windows VM, right? That's a whole bunch of headache. Any other Linux options? Because if I run it as a container, I can retain the GPU on the host and share it with other containers too.
> For that I'll have to pass the GPU through to a Windows VM, right? That's a whole bunch of headache. Any other Linux options? Because if I run it as a container, I can retain the GPU on the host and share it with other containers too.

IIRC, Blue Iris used the CPU, specifically Intel's Quick Sync Video technology, for encoding, and not the GPU.
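On the container route: a sketch of sharing the host iGPU (and with it Quick Sync) with an LXC guest by bind-mounting /dev/dri, instead of passing the whole GPU to one VM. The container ID 101 is a placeholder, and the device major number is an assumption; check `ls -l /dev/dri` on your host (major 226 is the standard DRM major on Linux).

```
# /etc/pve/lxc/101.conf (101 is a placeholder container ID)
# Allow the container to open the DRM render/card devices (major 226)
# and bind-mount /dev/dri from the Proxmox host into the guest.
# Several containers can share the same iGPU this way.
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```

Inside an unprivileged container you may additionally need to map the video/render group IDs so the software can actually open the devices.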