Proxmox Thread - Home Lab / Virtualization

I won't go into a point-by-point response this time because

A) You are drawing a lot of assumptions about me which aren't necessarily true (e.g. the original assumption that I based my observations on a display/displays incapable of handling proper HDR).
Because that's what you gave out as examples? What do you think a layperson would assume when you say there isn't much difference between HDR and HDR10/DV and that you play HDR content on your HDR800 device?
This time around you have assumed that I have tested only with the ATV, not tested remuxes, and not tested physical disks,
And also assumed (probably) the use of a non-Atmos setup,
And also assumed I am comparing remuxes with 15 Mbps streams (when I had clearly stated 40).
You didn't? You just said, and I'll quote, "my personal take is that anything over 40mbps /hevc main10 /DV or HDR10+ gets visually indistinguishable from a physical disk." And btw, which stream have you seen with a 40 Mbps bitrate? At the peril of making another assumption and correcting what I said in my previous post: 40 Mbps is the max advertised bitrate on ATV+, and in general it can go anywhere from 15-25/26 Mbps for a 4K movie, but yeah, sure, you'll get a 40 Mbps stream on ATV+. The only streaming service that offers those kinds of bitrates is Sony's Bravia Core (BCore), at around 80 Mbps, and even that doesn't have much content at that bitrate. And while we're at it, let's move off of bitrate comparisons; bitrate is not a good way to compare different rips, it's a very rough and largely inaccurate comparison that depends on quite a few assumptions to work. I shouldn't have brought it up in the first place, and I think that's where this convo derailed.
On a side note, I may not have super discerning eyes, but the color variation in the links you sent is so stark that even a non-video-enthusiast would notice it instantly during actual playback.
Which is why I mentioned earlier as well that rather than believing a post from a random unknown tester on the interwebs (e.g. the page you linked to) or YouTube, I would rather trust my own tests, equipment, configuration and eyes (or take inputs from someone I know).

There is no way for me to know if the setup tested by this unknown individual (from your links) was configured or tested correctly, which I strongly believe is not the case here
(for instance, the ATV frame in the general comparison is almost comically bad/blurry/low-res, either intentionally or due to lack of attention to detail by the tester).
Did you perchance go through the other images? At the peril of another assumption, I'm going to assume you are talking about the first pic in the links I sent, which just shows the color space and not the actual image; if you went through the other images in the set you would have seen "fairer" comps. But yeah, my bad, I was too lazy to go digging around the forums on trackers to get other comparisons that are mapped on a frame-to-frame basis.
BTW, FWIW, the Shield is not capable of handling Profile 7 as intended; it processes only the base layer. In fact, as things stand, even the cheap Fire TV Stick 4K is arguably the better media player of the two, compared to the now hopelessly outdated Shield (and before you assume again: yes, I have used a Shield).
I never said the Shield is the best? I said it's better than the other devices listed, like the Fire TV Stick etc. The main USP of the Shield is that it has Atmos bitstreaming support; almost all players except for Blu-ray ones have trouble actually playing Atmos, where the Atmos is either ignored or transcoded. The Shield and Shield TV Pro have the best support for Atmos outside of traditional Blu-ray players or something like a Zidoo, but then you need to hack around Zidoo's incompatibility with Plex/Jellyfin (not sure if they got their shit together by now, it's been a while since I looked into Zidoo).
B) We (and I am equally guilty) are digressing massively from the core thread topic. If you do want to pursue this discussion further, I'm happy to continue on a separate thread.
Finally, something we can agree on. I did derail this thread quite a lot, but yeah, feel free to create another thread and I'll give you all the comps you want once I get around to digging up old threads in my forums and getting those comps.
 
Tangentially related to the thread title, I recently discovered the awesomeness that is PoE!

Any simple PoE switch paired with inexpensive 48V-to-12V converters sends 12 VDC anywhere an Ethernet cable reaches, perfect for far-off installations of ESP8266 modules/sensors and basic WiFi access points, ha. Power cycling can easily be done through a smart plug on the PoE switch. Ethernet is limited to 100 Mbps though; not sure how to get around that, but it's okay for now since all of the access points I have in use (7!) are 100 Mbit anyway.


I don't need to run AC wiring to the extremes of the house anymore, which is probably the biggest plus point.
 
Not an expert, but Proxmox Backup Server is usually the recommendation I see for this. It is also fairly easy to set up, and you can automate it to run every X days.

In my case, I do it the manual/stupid way: I turn off the server, plug in a live USB, boot into a Linux distro that supports ZFS out of the box (like CachyOS), then use dd to clone the drive. I do this once a week. The advantage is that if the server ever fails, I can get back up and running with a single dd command.
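For reference, the clone boils down to something like this; the device names below are just examples, so double-check with lsblk first since dd will happily overwrite the wrong disk:

lsblk -o NAME,SIZE,MODEL                                       # identify the boot disk and the backup target
dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=fsync    # clone the whole boot disk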
My server's 7+ year old boot SSD started failing, so I had the opportunity to test this dd backup solution. Long story short, it kind of works, but probably don't do it.

In my case, the server refused to boot after restoring the disk image to a new drive. I eventually discovered the reason after a few hours of troubleshooting: the new drive is NVMe (while the failed one was SATA), and adding it changed the addresses of devices on the PCI bus, which meant the GPU passthrough that was hardcoded to a particular address stopped working. There may have been some other steps required to get it to boot too, like regenerating GRUB, but I'm not sure if that was really necessary. In any case, it wasn't nearly as easy as I thought.
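If anyone hits the same thing, the hardcoded address lives in the VM config under /etc/pve/qemu-server/, and comparing it against what lspci reports on the new hardware should show the mismatch (VM ID 100 below is just an example):

lspci | grep -i vga                           # the address the GPU actually has now
grep hostpci /etc/pve/qemu-server/100.conf    # the address the VM is still pointing at
# e.g. a line like "hostpci0: 01:00,pcie=1,x-vga=1" needs its bus address updated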

As others have said, it's far easier to just use the built-in scheduled backups that Proxmox has (Web GUI > Datacenter > Backup > Add) and back up to a different drive/NAS, or use Proxmox Backup Server on another server, because that comes with deduplication support, i.e. incremental backups only take up additional space when a file changes. It's true that you don't get to back up the host OS with either of those, but that's fine; it's simpler to reinstall Proxmox and restore the backed-up VMs/CTs than to recover the entire system from a disk image.
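If you'd rather script it than click through the GUI, vzdump does the same job from the CLI; the VM ID and storage name here are placeholders:

vzdump 100 --storage backup-nas --mode snapshot --compress zstd   # one-off backup of VM 100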
 
Something I've been meaning to do is to back up the bash shell command history, since I usually forget how I configured something when it comes time to restore/reset.

What I'm thinking of is a daily cron job that uploads the current history as a datestamped file to a local FTP server, then a daily cron job on the FTP server that deletes all but the last 30 days of files plus one from each month (a rough sketch of the upload side is further below). I have something similar set up for other backups:

1 0 * * * /usr/bin/find /root/backups -type f -mtime +30 ! -name '*01-*' -exec /bin/rm {} \;

The backups are named YYYYMMDD-hostname, so all of the daily backups older than 30 days are deleted, except the ones done on the 1st of the month.
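For the history-upload side, I'm imagining something along these lines (the FTP host, paths and use of curl are assumptions, I haven't built this yet):

#!/bin/bash
# /root/upload-history.sh - push the current bash history to the FTP box as YYYYMMDD-hostname
HIST=/root/.bash_history
NAME="$(date +%Y%m%d)-$(hostname)"
curl --netrc-file /root/.netrc -T "$HIST" "ftp://ftp.lan/history/$NAME"

plus a matching crontab entry, e.g. 30 0 * * * /root/upload-history.sh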
 
I have a setup in mind. Please enlighten if there is a better way to do it or if I am missing something.

I have a P330 Tiny with 3 NICs (1x built in + 2 in the PCIe card).

I was running opnsense standalone, but would like to virtualize it.

I was thinking about proxmox + opnsense.

Opnsense would be firewall + router + DHCP server for my setup.

Can I use Dual WAN + LAN while having only 3 physical ports? I am new to proxmox, have never used it.
 
Same pinch @napstersquest. I am trying a similar setup on one of my Proxmox nodes. That node has 3x 2.5 Gbps NICs (2 onboard + 1 external), of which one is connected to the switch, while the remaining two will be put to use for the OPNsense VM. Haven't got around to setting up OPNsense yet; I hope to start by the coming weekend. :D
 
Can I use Dual WAN + LAN while having only 3 physical ports? I am new to proxmox, have never used it.

If I understand this correctly, yes.

You can pass through both WAN ports to the VM and create a bridge with the remaining port; that bridge will handle both Proxmox traffic for the host and network traffic to the VM.

Effectively, the bridge becomes a three-port switch: one (virtual) port to the VM, one (also virtual) port to Proxmox, and one port back out to your network (the actual physical port).

The bridge will need to be created after Proxmox is set up and you're logged in, but before you make the VM. You'll need to remove the IP configuration from the physical port and create a bridge with the same IP configuration, with the physical port specified as a 'bridge port'. Both changes need to be applied simultaneously so that you don't lose access to the web UI. This assumes you have a router with a gateway configured somewhere in your network and that the Proxmox machine is connected to it.
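On a default install the end result in /etc/network/interfaces looks roughly like this (interface names and addresses below are only examples, not your actual ones):

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

On recent Proxmox versions, ifreload -a applies both changes in one go, which is what keeps the web UI reachable.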
 
I have a setup in mind. Please enlighten if there is a better way to do it or if I am missing something.

I have a P330 Tiny with 3 NICs (1x built in + 2 in the PCIe card).

I was running opnsense standalone, but would like to virtualize it.

I was thinking about proxmox + opnsense.

Opnsense would be firewall + router + DHCP server for my setup.

Can I use Dual WAN + LAN while having only 3 physical ports? I am new to proxmox, have never used it.
I have it running this way for many years now, first pfSense and then OPNsense.
2 physical ports would be needed for your 2x WAN;
the 3rd port goes to your physical LAN switch.

All VMs (including OPNsense) will be linked to this 3rd (primary) port over a virtual bridge (think of the Proxmox primary port as connected to a physical switch with all the VMs/containers on Proxmox connected to it), and this 3rd port then hooks up physically to the rest of your LAN.
 
I have an old Dell 720d server as the Proxmox main node; the issue is power draw. I have it in a cluster with an i7 system (this one acts as the ML/AI machine, plus my NVR setup, as it has a 2080 Ti GPU).
So basically I have:
1. Raspberry Pi running as cluster manager, with Pi-hole
2. 720d with a few non-essential VMs running (it's ok if they go down) -> Not on the inverter
3. AI/ML PC with Plex / *arr stack and the camera monitoring stack (CodeProject AI Server, Home Assistant, Frigate)
4. Another Raspberry Pi (Home Assistant)
5. Logging node -> old laptop with Grafana, Prometheus
Planning on shitting my whole monitoring stack to a CM3588 once I get an adapter for my Coral Edge TPU (mini PCIe)

One thing that I have noticed is that with Proxmox VMs, on PC restart (we get a lot of power cuts, like at least 2 times a day, 20 min - 1.5 hr cuts), the images/filesystems are not getting corrupted compared to a regular bare-metal install. Not sure if anyone has seen that yet
 
I have an old Dell 720d server as the Proxmox main node; the issue is power draw. I have it in a cluster with an i7 system (this one acts as the ML/AI machine, plus my NVR setup, as it has a 2080 Ti GPU).
So basically I have:
1. Raspberry Pi running as cluster manager, with Pi-hole
2. 720d with a few non-essential VMs running (it's ok if they go down) -> Not on the inverter
3. AI/ML PC with Plex / *arr stack and the camera monitoring stack (CodeProject AI Server, Home Assistant, Frigate)
4. Another Raspberry Pi (Home Assistant)
5. Logging node -> old laptop with Grafana, Prometheus
Planning on shitting my whole monitoring stack to a CM3588 once I get an adapter for my Coral Edge TPU (mini PCIe)

One thing that I have noticed is that with Proxmox VMs, on PC restart (we get a lot of power cuts, like at least 2 times a day, 20 min - 1.5 hr cuts), the images/filesystems are not getting corrupted compared to a regular bare-metal install. Not sure if anyone has seen that yet
Nice. Although I think you meant to say shifting. Shitting your monitoring stack sounds like it would be rather painful ;)

On another note, ZFS and the data integrity features it has are probably the reason your Proxmox VMs aren't getting corrupted that quickly. I run mine with sync=disabled for a significant performance boost, at the risk of more corruption/data loss, but I do have it connected to an inverter.
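For reference, that sync setting is just a ZFS dataset property; the pool/dataset name below is only an example:

zfs get sync rpool/data            # check the current value
zfs set sync=disabled rpool/data   # trade write safety for speed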
 
Nice. Although I think you meant to say shifting. Shitting your monitoring stack sounds like it would be rather painful ;)

On another note, ZFS and the data integrity features it has are probably the reason your Proxmox VMs aren't getting corrupted that quickly. I run mine with sync=disabled for a significant performance boost, at the risk of more corruption/data loss, but I do have it connected to an inverter.
Haha, good catch. Definitely not shitting! I will look up how ZFS might be impacting it. An inverter isn't an option as the power draw for both is too much (there already is an inverter, but it won't last more than half an hour, plus the GPU machine is too power hungry).
 

Before I knew about docker:

fivehundred.jpg


Not sure if anyone has seen that yet

I've seen GRUB errors on Proxmox boot drives after unscheduled shutdowns (power outages). The problem and solution are in this thread/post: https://forum.proxmox.com/threads/s...error-disk-lvmid-not-found.98761/#post-495756

This happened on desktop-class hardware, with both DRAM-less and higher-end SSDs. With the 720d being enterprise-level hardware, it probably has some advanced write caching to prevent this.

The only way to prevent this has been to immediately issue a shutdown command when there's a power outage. I accomplish this by monitoring the battery levels on the inverter/UPS: as soon as they dip below the float voltage (indicating a power outage, since the inverter is no longer charging the batteries), a node-red flow triggers a shutdown of the cluster through Proxmox's HTTP API:

Screen Shot 2024-08-02 at 3.15.40 AM.png


I'll try and summarize:
  1. mqtt flow starting with battery status: logging battery voltages reported by tasmota into influxdb

  2. mqtt flow starting with pve netwatch: updating global arrays 'pve_nodes-offline' or 'pve_nodes-online' with whether a node is online or offline, as reported by a virtualized router

  3. mqtt flow starting with deb netwatch: this section basically watches my VMs; if they go offline and a shutdown isn't planned or ongoing, then they're started/restarted, depending on whether they crashed or hung. Status is pulled in from proxmox's metrics that are sent directly to influxdb; the mqtt trigger is from another virtualized router

  4. flow that's run every 30 seconds: get battery voltages from influxdb and start processing the data in the battery monitor node:

    • if there's a power outage, immediately shut down the workstation cluster

    • also if there's an outage, start logging battery voltages to telegram; all of the link-out nodes in this flow go to telegram

    • if the battery voltages fall too low, trigger a shutdown of the other cluster

    • five minutes after the shutdown signal, turn off all the sockets in the "pdu" (a bunch of tasmota smart plugs)

    • after power returns and has been available for 10 minutes, trigger start-up by turning the "pdu" back on (the machines then boot via the BIOS setting of power-on after AC power loss).

This was one of my earliest node-red flows, so it's probably in need of a lot of refinement.
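For anyone wanting to replicate just the shutdown piece without node-red, it boils down to a single POST against Proxmox's HTTP API; the host, node name and API token below are placeholders:

curl -k -X POST \
  -H "Authorization: PVEAPIToken=root@pam!nodered=xxxxxxxx" \
  --data "command=shutdown" \
  https://192.168.1.5:8006/api2/json/nodes/pve1/status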

edit: a partial clip of the staggered start-up sequence:

 
Problem is, I have an old inverter with no monitoring. Time to cook up an ESP32-CAM based approach, I guess; should be simple enough. I already use one ESP32 to monitor my power (reading the electricity meter through OCR).
 
Hi everyone

I am planning to set up my own home server on Proxmox by moving from a Raspberry Pi 4B to a mini PC. Planning on getting the Lenovo M920X with an i5-9500T, 16GB RAM and 512GB NVMe to begin with. I plan to set up the following services: Pi-hole, TrueNAS, Nextcloud, Immich, Jellyfin (maybe), with the primary focus on NAS, as I want an alternative to iCloud and Apple Photos. I am already running low on storage on all the Apple devices in my home and would want a self-hosted NAS solution.

I would want at least 1 TB of storage for photos from all the Apple devices. Since the M920X has 2 NVMe slots and 1 SATA slot:

I am planning to use a 512 GB NVMe as the boot drive for Proxmox. This will also store all ISOs etc.
Will use another 2 TB NVMe for storing all VMs, etc.
Will use a 2 TB 2.5-inch SATA HDD as backup.

I have the following questions:
  • Can I use all the drives in RAID mode, as I would want backups in case of disk failures?
  • Are the CPU and RAM sufficient to begin with?
  • A future storage upgrade would need a PCIe-to-NVMe adapter and an M.2 WiFi-to-NVMe adapter. Not sure if there would be enough space with the 2.5-inch HDD added.
  • Or, instead of jumping in directly, can I try Proxmox with the above setup on the Raspberry Pi 4B or in VirtualBox and then decide what I need? Not sure if the Raspberry Pi would be able to handle TrueNAS VMs in Proxmox.
My aim is to not pay monthly for iCloud+, Google Drive storage, or streaming services like Netflix, Prime Video, Hotstar, SonyLIV.
 
And how do you intend to replace these? High seas?
I am still new to this world. Can you share what you mean by high seas? I was thinking there might be plugins which allow us to download content directly to a local device. I am aware that torrents are a way to download content via Jellyfin + Jellyseerr + Sonarr + Radarr + qBittorrent.
 
1. RAID is not a replacement for a backup; it protects against disk failures, not against you messing up the data, for example. You're probably looking for software RAID, which can be done via mdadm but is also built into ZFS (see the sketch after this list). Not sure about the performance implications of doing a RAID 1 with an HDD and an NVMe SSD, but I would personally just do scheduled backups to the HDD, which you can do within Proxmox itself.

2. Processor and RAM should be fine for everything you mentioned.

3. Apparently there's something called Pimox, which is an unofficial Proxmox build for RPis, but Proxmox doesn't support ARM officially and support for anything unofficial is probably really poor. I would just get the mini PC. Also, I think the Pi would get absolutely trashed by the demands of running multiple VMs.
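If you do go the ZFS route with two drives, a mirror is a one-liner, though be aware that mirroring an HDD with an NVMe SSD means writes only complete at HDD speed (device names below are examples):

zpool create tank mirror /dev/nvme1n1 /dev/sda   # mirrored pool from two whole disks
zpool status tank                                # verify both devices show ONLINE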
 