Proxmox Thread - Home Lab / Virtualization

I came across this thread: https://www.reddit.com/r/Proxmox/comments/ncg2xo/minimizing_ssd_wear_through_pve_configuration/
If you find more, do share. Log2ram, I think, only handles the system logs; other Proxmox processes keep writing as usual.
Found a thread in German, but it covers the same use case you're looking at.


They used log2ram along with folder2ram. I came across log2ram while running a Pi cluster as an experiment, since it is presented as a way to reduce SD card wear.

On a side note: if you buy a new SSD and write it to death before its warranty expires, you will still get a replacement, right? (Correct me if I'm wrong.)
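For reference, the wear-reduction guides that pair log2ram with folder2ram typically redirect the write-heavy Proxmox directories into tmpfs. The fragment below is a sketch of an `/etc/folder2ram/folder2ram.conf`; the directory list is what those guides commonly target, not something stated in this thread, so verify the syntax and paths against the folder2ram documentation before relying on it (redirecting `/var/lib/pve-cluster` in particular is risky, as it holds the cluster config database).

```
# /etc/folder2ram/folder2ram.conf -- sketch only, confirm against folder2ram docs.
# Each listed directory is mounted as tmpfs and synced back to disk on shutdown.
tmpfs   /var/log
tmpfs   /var/lib/pve-cluster
tmpfs   /var/lib/pve-manager
tmpfs   /var/lib/rrdcached
```

After editing the config, the tool's enable/mount commands apply the mounts; a power loss loses anything written since the last sync, which is the trade-off for fewer disk writes.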
 
Depends on how the drive dies. If the SMART data is readable and the drive shows usage beyond its endurance limit, they won't replace it. But I have read threads where Samsung replaced drives for some people, so it comes down to luck. Don't rely on luck for production work, though, if you use the drive for anything serious.
One SSD for the Proxmox OS and the VMs together; bulk storage can go on a separate regular drive.
I had read that with consumer drives, separating the VMs from the OS splits the disk usage roughly in half, and on SATA drives this gives better reads depending on the workload.
Anyone running TrueNAS/Unraid etc. under Proxmox? Or is it recommended to run NAS stuff bare metal?
There was a video on YouTube about running TrueNAS on Proxmox.
 
This is what I get with a cheap NVMe drive hosting both the hypervisor and multiple VMs.
Could it be better? Perhaps.
Does it matter for a VM, especially with 5 other VMs running full time? Not for me, at least.

Don't overthink it: 2% wear in a year, with very decent read/write speeds from a low-cost NVMe drive with no tweaks, is more than acceptable if you ask me.



[Attached screenshot: SMART wear statistics]
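If you want to check the same wear figure on your own node, `smartctl -A /dev/nvme0` (from smartmontools) prints a "Percentage Used" line in its NVMe health log. The helper below is a hypothetical sketch, not from the thread; the sample output is illustrative.

```python
import re

# Hypothetical helper: pull the NVMe "Percentage Used" value out of
# `smartctl -A /dev/nvme0` text output (smartmontools).
def parse_percentage_used(smart_text: str) -> int:
    m = re.search(r"Percentage Used:\s*(\d+)%", smart_text)
    if m is None:
        raise ValueError("no 'Percentage Used' line found")
    return int(m.group(1))

# Illustrative sample of smartctl NVMe health output, matching the ~2%
# wear figure discussed above.
SAMPLE = """\
SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        38 Celsius
Percentage Used:                    2%
Data Units Written:                 14,500,000 [7.42 TB]
"""

print(parse_percentage_used(SAMPLE))
```

On a live host you would feed it `subprocess.check_output(["smartctl", "-A", "/dev/nvme0"], text=True)` instead of the sample string.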
 
Yes, Amazon had an offer, but otherwise it was constantly above 30k. I got mine second-hand with 32 GB RAM and a 500 GB SSD at 20k :p
Not from Amazon; I got it from Lamington Road. Even the cheapest Amazon deal was more than the Lamington Road price. Really nice deal for a used device with 32 GB RAM! My bro's 8th-gen NUC is constantly hitting its thermal limit; he is a heavy user, and I am looking at an upgrade for him. My dad is satisfied, as his use case is limited.
 
I was using an HP DeskJet for AI face detection to start/stop recording on the IP cameras. As I added more cameras, I moved to a Dell 3080 SFF with an i5-10500T in February, with twice as much power. Works like a charm. It came with three years of on-site warranty; I do wonder how that would have worked during COVID.
 
Can anyone suggest a container/VM to use for recording surveillance video from multiple ONVIF/RTSP cameras?
It should be able to use the Intel integrated graphics/Quick Sync for the video-encoding work.
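Whichever NVR you pick, the Quick Sync recording path usually boils down to an ffmpeg invocation using the `h264_qsv` encoder. Here is a hedged sketch of such a command, built in Python so the pieces are easy to see; the camera URL, output path, segment length, and quality value are all placeholders I made up, not settings from the thread.

```python
import shlex

# Sketch: record one RTSP camera with Intel Quick Sync (ffmpeg's h264_qsv
# encoder), rolling the recording into 15-minute files. Adapt the URL,
# paths, and quality target to your own cameras and storage.
def build_record_cmd(rtsp_url: str, out_pattern: str) -> list[str]:
    return [
        "ffmpeg",
        "-rtsp_transport", "tcp",   # TCP is usually more reliable than UDP
        "-i", rtsp_url,
        "-c:v", "h264_qsv",         # Quick Sync H.264 encoder
        "-global_quality", "25",    # ICQ quality target (lower = better)
        "-f", "segment",            # split output into fixed-length files
        "-segment_time", "900",     # 900 s = 15-minute segments
        "-reset_timestamps", "1",
        out_pattern,
    ]

cmd = build_record_cmd("rtsp://192.0.2.10:554/stream1",
                       "/srv/cctv/cam1-%03d.mkv")
print(shlex.join(cmd))
```

Inside a container you would also need `/dev/dri` passed through so ffmpeg can reach the iGPU; that keeps the GPU shareable between containers, unlike full passthrough to a VM.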
 
For that I'll have to pass the GPU through to a Windows VM, right? That's a whole bunch of headache. Any other Linux options? Because if I run it as a container, I can keep the GPU shared with other containers too.
IIRC, Blue Iris used the CPU, specifically Intel's Quick Sync Video, for encoding, not the GPU.
 
Concerning SSD life: one of my nodes has a WD Green 120 GB M.2 drive that is starting to fail. Reads and writes take forever, and IO delay jumps past 90%. This WD Green has seen about 9 TBW over roughly 450 days. HD Sentinel shows that everything is fine, but Clonezilla failed repeatedly when I tried to image the drive onto another one.

This is unexpected, because I have DRAM-less SSDs across all my nodes, most of them Silicon Power S55 drives, and they haven't shown any of these signs; their IO delay is typically under 1%.

It could be the SATA controller, since the drive is installed in an older Z97 system, but I'll investigate that another day. Today it's easier to drop in another 120 GB drive and reinstall Proxmox.