HomeLab Upgrade

TL;DR

Looking for a suitable OS for setting up a home lab

My current home lab is a Raspberry Pi 4 (4 GB) running Pi-hole, Deluge, Samba, and Plex, with storage on a 1 TB 2.5-inch 5400 RPM disk in a Seagate HDD dock connected over USB 2.0.

Now I have decided to upgrade it and consolidate the disks that I have. The config of the new server is as follows:

* Processor: Intel i3-12100
* RAM: G.Skill 2x16 GB DDR5
* SSD1: Crucial T500 500 GB Gen4 NVMe
* SSD2: Samsung 970 1 TB Gen3 NVMe
* HDD1: 4 TB Seagate IronWolf NAS drive
* HDD2: 2 TB Seagate 5400 RPM 3.5-inch drive
* HDD3: 1 TB WD 5400 RPM 3.5-inch drive
* HDD4: 1 TB WD 5400 RPM 2.5-inch drive

All of the above will go on the Gigabyte M760 G AX motherboard, powered by a Cooler Master MWE 550 V3 PSU in a CM 690 II case.

The hardware side is settled; the software is the tricky part. I want all of the above services running, might try Jellyfin instead of Plex, and, if possible, would like a Windows desktop environment available in case I want to daily-drive it. ChatGPT has suggested Proxmox and produced the work plan below, but I would like experienced members' opinions and suggestions on it.

Awesome—let’s turn your box into a clean, reliable Proxmox stack with a Windows desktop VM + OMV NAS + Plex/Jellyfin + Deluge + Pi-hole. I’ll take you from zero → done, with concrete settings and copy-paste commands. You can follow this top-to-bottom, or jump by section.

---

# 0) What you’re building (at a glance)

* **Proxmox VE host** on **Crucial T500 500 GB** (fast TLC/DRAM): runs the hypervisor + main VM datastore.
* **Secondary VM datastore** on **Samsung 970 1 TB** (service VMs/containers + torrent staging).
* **OMV NAS VM** gets the **4 HDDs passed through raw**:

  * 4 TB IronWolf = SnapRAID **parity**
  * 2 TB + 1 TB (3.5") + 1 TB (2.5") = **data** (≈ 4 TB usable)
  * mergerfs (pool) + SnapRAID (parity/protection)
* **Windows 11 VM** for daily desktop.
* **LXC containers** for **Pi-hole** and **Deluge**.
* **Plex/Jellyfin** (container or VM) with **Intel iGPU hardware transcoding** (option A) or full **iGPU passthrough to Windows** (option B—mutually exclusive with A).

Your hardware:

* CPU: **Intel i3-12100 (4c/8t)**
* RAM: **32 GB DDR5**
* SSDs: **T500 500 GB (Gen4)**, **970 1 TB (Gen3)**
* HDDs: **4 TB IronWolf**, **2 TB Seagate 5400**, **1 TB WD 5400 (3.5")**, **1 TB WD 5400 (2.5")**

---

# 1) BIOS / firmware prep

1. Update motherboard BIOS if convenient.
2. Enable: **Intel VT-x** and **VT-d** (IOMMU), **UEFI boot**.
3. Disable (for simplicity): **Secure Boot**.
4. SATA mode: **AHCI**.
5. If you’ll passthrough the iGPU to a VM later, leave the **iGPU enabled**.
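
Once Proxmox is installed (next section), these settings can be sanity-checked from the host shell. A quick check, assuming an Intel CPU:

```bash
# VT-x available? (a non-zero count means yes)
grep -cw vmx /proc/cpuinfo

# VT-d / IOMMU active? (may stay empty until the kernel parameters from section 10, option B
# are set, depending on the kernel's defaults)
dmesg | grep -e DMAR -e IOMMU | head
```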

---

# 2) Install Proxmox VE (on T500)

1. Create a Proxmox USB installer (Rufus/Ventoy/Etcher) from the official ISO.
2. Boot from USB → Install Proxmox VE.
3. **Target disk**: choose **Crucial T500 (500 GB)**.
4. Set a strong root password + your email.
5. Set management IP (static or DHCP; you can change later).
6. Finish install, reboot, then visit the web UI:

   ```
   https://<your-proxmox-ip>:8006
   ```
7. Log in as `root` with the password you set.

**Initial host tweaks (Shell > root):**

```bash
# (Optional) disable the enterprise repo and switch to the "no-subscription" repo
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" > /etc/apt/sources.list.d/pve-no-subscription.list

apt update && apt -y dist-upgrade
timedatectl set-timezone Asia/Kolkata
reboot
```

---

# 3) Add the Samsung 970 (1 TB) as an LVM-thin datastore

We’ll dedicate the entire 970 to a thin pool for VM/LXC disks.

**Identify the device** (from the Proxmox shell):

```bash
lsblk -o NAME,SIZE,MODEL
# note the 1TB NVMe device, e.g. /dev/nvme1n1
```

**Partition & create LVM-thin:**

```bash
# replace /dev/nvme1n1 with your Samsung 970 device
sgdisk --zap-all /dev/nvme1n1
pvcreate /dev/nvme1n1
vgcreate vg_samsung970 /dev/nvme1n1
lvcreate -L 900G -T vg_samsung970/pool_samsung970   # leave some free VG space
# add the storage to Proxmox
pvesm add lvmthin samsung-970 --vgname vg_samsung970 --thinpool pool_samsung970 --content images,rootdir
```

You now have:

* `local` (T500) – directory storage for ISO/templates/snippets
* `local-lvm` (T500) – LVM-thin (default) for VM disks
* `samsung-970` (new) – LVM-thin for service VM/LXC disks
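
A quick way to confirm the new datastore is usable (names as created above):

```bash
lvs vg_samsung970    # pool_samsung970 should appear with a thin-pool attribute ("twi-...")
pvesm status         # "samsung-970" should be listed as active lvmthin storage
```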

---

# 4) Grab ISOs & templates

In the web UI:

* **Datacenter → Storage → local → ISO Images → Upload**

  * **Windows 11 ISO** (from Microsoft)
  * **virtio-win ISO** (drivers; from Fedora project)
  * **OMV installer ISO** (latest stable)

Get LXC templates:

* **local → CT Templates → Templates → Debian 12 standard → Download**

---

# 5) Create the Windows 11 VM (daily desktop)

**Sizing** (you can tweak later):

* VMID: **110**
* vCPU: **4**
* RAM: **16 GB** (ballooning: min 12 GB, max 16 GB)
* Disk: **150–200 GB** on **local-lvm** (T500 for snappiness)

**Steps (UI)**

1. **Create VM**

   * General: VMID `110`, Name `win11`
   * OS:

     * **Use UEFI (OVMF)**, add **TPM v2**
     * ISO: Windows 11 ISO
   * System:

     * Machine: `q35`, SCSI controller: **VirtIO SCSI single**
   * Disks:

     * **SCSI** disk on **local-lvm** (T500), 150–200 GB, Discard = on, SSD emulation = on
   * CPU: **host**, 4 cores
   * Memory: 16384 MB; Ballooning on; Min 12288 MB
   * Network: VirtIO (paravirt), Bridge **vmbr0**
2. Add **CD/DVD** → mount **virtio-win ISO**.
3. Start VM → install Windows:

   * When disk not visible → **Load Driver** from virtio ISO (`vioscsi`).
   * After first boot, install **virtio NIC** driver.
   * Install **QEMU Guest Agent** (from virtio ISO).
4. In Proxmox (VM → Options): **QEMU Agent = Enabled**.

Remote access: use **SPICE** console or **RDP** from your LAN.
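
If you prefer the CLI, the UI steps above map roughly onto a single `qm create`. This is only a sketch; the ISO filenames and the 180 GB disk size are placeholders to adjust:

```bash
# CLI sketch of the Windows 11 VM described above (adjust ISO names and disk size)
qm create 110 --name win11 --ostype win11 --machine q35 --bios ovmf \
  --cpu host --cores 4 --memory 16384 --balloon 12288 \
  --scsihw virtio-scsi-single \
  --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1 \
  --tpmstate0 local-lvm:1,version=v2.0 \
  --scsi0 local-lvm:180,discard=on,ssd=1,iothread=1 \
  --net0 virtio,bridge=vmbr0 \
  --ide2 local:iso/Win11.iso,media=cdrom \
  --ide0 local:iso/virtio-win.iso,media=cdrom \
  --agent enabled=1
```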

---

# 6) Create the OMV NAS VM + raw HDD passthrough

**VM sizing**

* VMID: **120**
* vCPU: **2**
* RAM: **6 GB** (4–8 GB fine)
* Disk: **40–60 GB** on **samsung-970** (service SSD)

**Create VM (UI)**

1. Create VM → Name `omv`

   * OS: ISO = **OMV installer** (or Debian + OMV later)
   * System: `q35`, SCSI controller **VirtIO SCSI single**
   * Disk: **SCSI** on **samsung-970** (40–60 GB), Discard on
   * CPU: host, 2 cores
   * Memory: 6144 MB
   * NIC: VirtIO, Bridge vmbr0
2. **Install OMV** in the VM normally.

**Attach raw HDDs to OMV (CLI—most reliable)**

1. On Proxmox host, list disk IDs:

   ```bash
   ls -l /dev/disk/by-id/ | grep -E 'ST|WD|Seagate|TOSHIBA|WDC|HGST' | sort
   ```

   You’ll see lines like:

   ```
   ata-ST4000VN006-2AA166_ZZZ...
   ata-ST2000DM00x_YYY...
   ata-WDC_WD10EZRX_...
   ata-WDC_WD10... (2.5")
   ```
2. Attach each as a **SCSI** disk to VMID **120**:

   ```bash
   qm set 120 -scsi1 /dev/disk/by-id/ata-ST4000VN006-2AA166_XXXXX,iothread=1
   qm set 120 -scsi2 /dev/disk/by-id/ata-ST2000XXXX_YYYYY,iothread=1
   qm set 120 -scsi3 /dev/disk/by-id/ata-WDC_WD10EZRX_ZZZZZ,iothread=1
   qm set 120 -scsi4 /dev/disk/by-id/ata-WDC_WD10XXXX_AAAAA,iothread=1   # the 2.5" drive
   ```

   (Use your exact IDs; **order doesn’t matter**.)
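
To double-check the attachments before booting OMV, you can inspect the VM config on the host:

```bash
qm config 120 | grep scsi
# inside the OMV VM these show up as additional disks (/dev/sdb, /dev/sdc, ...); verify with lsblk
```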

**Inside OMV (after boot)**

1. **Wipe** the 3 data disks (2 TB + 1 TB + 1 TB) and create **ext4** filesystems on them.
2. Leave the **4 TB** blank for **SnapRAID parity**.
3. Install **omv-extras**, then enable the **mergerfs** and **SnapRAID** plugins.
4. Create a **mergerfs pool** from the 3 data disks (balance=most-free). Mount at `/srv/pool-media`.
5. Configure **SnapRAID**:

   * Parity = 4 TB disk
   * Data = the 3 data disks
   * Content files = put on **at least two** data disks
   * Exclude patterns: `*.tmp`, `*.!qB`, `*.part`, etc.
6. Create **Shared Folders** on the pool:

   * `Media/` (for Plex/Jellyfin)
   * `Downloads/` (for Deluge)
7. Enable **SMB** (for Windows) and **NFS** (for Linux containers/VMs) exports for those folders.

> Tip: In **OMV → Scheduled Jobs**, set **SnapRAID**:
>
> * nightly `snapraid sync` (3–4 AM)
> * weekly `snapraid scrub -p 10 -o 7` (10% per run; older than 7 days)
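
For orientation, the SnapRAID layout from step 5 ends up in a config file roughly like the one below. This is only an illustrative sketch with placeholder mount paths; the OMV SnapRAID plugin generates the real `/etc/snapraid.conf` from the UI settings.

```
# illustrative snapraid.conf; OMV uses its own /srv/dev-disk-by-uuid-... mount paths
parity  /srv/dev-disk-by-uuid-PARITY-4TB/snapraid.parity

# content files on at least two data disks
content /srv/dev-disk-by-uuid-DATA-2TB/snapraid.content
content /srv/dev-disk-by-uuid-DATA-1TB-A/snapraid.content

data d1 /srv/dev-disk-by-uuid-DATA-2TB
data d2 /srv/dev-disk-by-uuid-DATA-1TB-A
data d3 /srv/dev-disk-by-uuid-DATA-1TB-B

exclude *.tmp
exclude *.part
exclude /lost+found/
```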

---

# 7) Startup order (so shares are ready before apps)

In **Proxmox UI → each guest → Options → Start/Shutdown order**:

* **OMV VM (120)**: order **1**, up delay **20s**
* **Plex/Jellyfin (131)**: order **2**, up delay **30s**
* **Deluge (132)**: order **3**, up delay **15s**
* **Pi-hole (130)**: order **4**, up delay **5s**
* **Windows 11 (110)**: order **5**, up delay **0–5s**

Enable **Start at boot** for all of the above.
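
The same can be done from the shell if you prefer; a sketch using the VMIDs from this guide:

```bash
qm set 120 --onboot 1 --startup order=1,up=20    # OMV
pct set 131 --onboot 1 --startup order=2,up=30   # Plex/Jellyfin
pct set 132 --onboot 1 --startup order=3,up=15   # Deluge
pct set 130 --onboot 1 --startup order=4,up=5    # Pi-hole
qm set 110 --onboot 1 --startup order=5          # Windows 11
```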

---

# 8) Pi-hole LXC (static IP)

**Create container (VMID 130)**

```bash
# Download template if not yet
pveam update
pveam available | grep debian-12
pveam download local debian-12-standard_12.*_amd64.tar.zst   # use the exact name shown by "pveam available" (the wildcard is not expanded)

# Create Pi-hole container
pct create 130 local:vztmpl/debian-12-standard_12.*_amd64.tar.zst \
  -arch amd64 -hostname pihole -cores 1 -memory 512 -swap 512 \
  -rootfs samsung-970:8 \
  -net0 name=eth0,bridge=vmbr0,ip=192.168.1.2/24,gw=192.168.1.1 \
  -onboot 1 -startup order=4,up=5
```

**Start & install Pi-hole (inside the container)**

```bash
pct start 130
pct enter 130
apt update && apt -y install curl
curl -sSL https://install.pi-hole.net | bash
# Choose eth0, set your upstream DNS (e.g., 1.1.1.1), etc.
```

Then point your **router’s DHCP** to use **Pi-hole IP (192.168.1.2)** as primary DNS. Keep a secondary (e.g., 9.9.9.9) as fallback.
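
Before switching the router over, it is worth confirming that the container actually answers queries. A quick test from any Linux box on the LAN (`dig` is in the `dnsutils` package on Debian):

```bash
dig @192.168.1.2 proxmox.com +short        # should return an address via Pi-hole
dig @192.168.1.2 doubleclick.net +short    # typically blocked; expect 0.0.0.0 or no answer if your blocklists cover it
```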

---

# 9) Deluge LXC (with OMV share mounted)

**Create container (VMID 132)**

```bash
pct create 132 local:vztmpl/debian-12-standard_12.*_amd64.tar.zst \
  -arch amd64 -hostname deluge -cores 2 -memory 2048 -swap 1024 \
  -rootfs samsung-970:16 \
  -net0 name=eth0,bridge=vmbr0,ip=dhcp \
  -onboot 1 -startup order=3,up=15
pct start 132
pct enter 132
```

**Inside container: install NFS + Deluge**

```bash
apt update && apt -y install nfs-common deluged deluge-web
mkdir -p /mnt/downloads
# Replace 192.168.1.10 with your OMV VM IP
echo "192.168.1.10:/export/Downloads  /mnt/downloads  nfs  defaults  0  0" >> /etc/fstab
mount -a
# Create systemd services
cat >/etc/systemd/system/deluged.service <<'EOF'
[Unit]
Description=Deluge Bittorrent Client Daemon
After=network-online.target

[Service]
Type=simple
User=root
Group=root
ExecStart=/usr/bin/deluged -d
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

cat >/etc/systemd/system/deluge-web.service <<'EOF'
[Unit]
Description=Deluge Web UI
After=deluged.service

[Service]
Type=simple
User=root
Group=root
ExecStart=/usr/bin/deluge-web -f
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now deluged deluge-web
```

Access **Deluge Web** on `http://<container-ip>:8112` (default password `deluge`).
Set **Download folder** to `/mnt/downloads`.
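
A couple of quick checks inside the container before moving on (adjust the OMV IP to yours):

```bash
showmount -e 192.168.1.10                        # the Downloads export should be listed
df -h /mnt/downloads                             # confirms the NFS mount is live
systemctl --no-pager status deluged deluge-web   # both services should be active (running)
```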

---

# 10) Plex or Jellyfin (LXC or VM)

## Option A (recommended): LXC + Intel iGPU Quick Sync

**Create container (VMID 131)**

```bash
pct create 131 local:vztmpl/debian-12-standard_12.*_amd64.tar.zst \
  -arch amd64 -hostname media -cores 4 -memory 8192 -swap 2048 \
  -rootfs samsung-970:32 \
  -net0 name=eth0,bridge=vmbr0,ip=dhcp \
  -onboot 1 -startup order=2,up=30
```

**Expose the iGPU to the container**
(For a **privileged** container—simplest path)

```bash
# Stop container if running
pct stop 131

# Add /dev/dri and allow DRM devices
echo "lxc.cgroup2.devices.allow: c 226:* rwm" >> /etc/pve/lxc/131.conf
echo "lxc.mount.entry: /dev/dri dev/dri none bind,create=dir 0 0" >> /etc/pve/lxc/131.conf

# (Optional) make it privileged for simplicity. Containers created via `pct create` are
# privileged by default; an existing unprivileged CT is best converted via backup/restore.
sed -i 's/^unprivileged: 1/unprivileged: 0/' /etc/pve/lxc/131.conf

# Mount OMV media via NFS inside the container later OR bind-mount from host if you prefer
pct start 131
pct enter 131
```

**Inside container (Debian): install NFS + Jellyfin (or Plex)**

```bash
apt update && apt -y install nfs-common vainfo intel-media-va-driver
# (for full Quick Sync support you may need intel-media-va-driver-non-free from Debian's non-free repo)

mkdir -p /mnt/media
echo "192.168.1.10:/export/Media  /mnt/media  nfs  defaults  0  0" >> /etc/fstab
mount -a

# Jellyfin example:
apt -y install apt-transport-https gnupg
mkdir -p /etc/apt/keyrings
curl -fsSL https://repo.jellyfin.org/debian/jellyfin_team.gpg.key | gpg --dearmor -o /etc/apt/keyrings/jellyfin.gpg
echo "deb [signed-by=/etc/apt/keyrings/jellyfin.gpg] https://repo.jellyfin.org/debian bookworm main" > /etc/apt/sources.list.d/jellyfin.list
apt update && apt -y install jellyfin

# Add jellyfin service user to video/render if needed
usermod -aG video,render jellyfin

systemctl enable --now jellyfin
```

In Jellyfin: **Dashboard → Playback → Hardware Acceleration** = Intel QSV.
(For **Plex**, install the Plex repo, then enable hardware acceleration in settings—requires Plex Pass.)

> Note: Containers can share `/dev/dri` (host + other containers) in practice. Reserve heavy transcoding to this one container.
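
Before configuring hardware acceleration, it's worth confirming the container can actually see the iGPU. A quick check inside it (`vainfo` was installed above):

```bash
ls -l /dev/dri    # expect card0 and renderD128 to be present
vainfo | head     # should list the Intel VA-API driver and supported profiles
```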

---

## Option B: Give the iGPU to **Windows VM** (GPU passthrough)

Use this only if you need GPU-accelerated Windows apps. Then **don’t** use the iGPU for media transcodes (Plex/Jellyfin will Direct Play or software transcode).

**Host setup**:

```bash
# Enable IOMMU (VT-d)
sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on iommu=pt /' /etc/default/grub
update-grub

# VFIO modules
cat >/etc/modules-load.d/vfio.conf <<EOF
vfio
vfio_iommu_type1
vfio_pci
EOF
# (vfio_virqfd was merged into the vfio module in kernel 6.2+, so it can be omitted on Proxmox 8)

update-initramfs -u
reboot
```

**Find the iGPU’s PCI ID**:

```bash
lspci -nn | grep -i vga
# e.g., 00:02.0 VGA compatible controller: Intel ... [8086:4680]
```

**Bind the iGPU to vfio-pci** (replace `8086:4680` with yours):

```bash
echo "options vfio-pci ids=8086:4680" > /etc/modprobe.d/vfio.conf
update-initramfs -u
reboot
```

**Attach to Windows VM (110)**:

```bash
qm set 110 -hostpci0 0000:00:02.0,pcie=1
```

Install Intel graphics driver inside Windows.
(If you later revert, remove the vfio binding and reboot.)
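
After the reboot, you can confirm the binding took effect before starting the VM (use your own PCI address):

```bash
lspci -nnk -s 00:02.0    # "Kernel driver in use: vfio-pci" means the iGPU is reserved for passthrough
```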

---

# 11) Point apps at OMV shares

* **Deluge**: set **Download** directory to `/mnt/downloads` (NFS).
* **Jellyfin/Plex**: add libraries from `/mnt/media`.
* **Windows VM**: map SMB shares: `\\omv\Media` and `\\omv\Downloads`.

---

# 12) Resource tuning

* **Windows VM (110)**: CPU weight ↑

  * UI: VM → Hardware → Processors → Advanced → **CPU units** (e.g. **2048**; a relative weight, not a hard limit)
* **Ballooning** already set for Windows; services can be fixed RAM.
* **Plex/Jellyfin**: 4 vCPU / 6–8 GB is plenty unless heavy transcodes.
* **Deluge**: 2 vCPU / 2 GB.
* **Pi-hole**: 1 vCPU / 512 MB.
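
If you'd rather script this, the same tuning maps onto the CLI. A sketch with the VMIDs from this guide (CPU units are a relative scheduling weight, not a hard limit):

```bash
qm set 110 --cpuunits 2048            # favour the Windows desktop under contention
pct set 131 --cores 4 --memory 8192   # Plex/Jellyfin
pct set 132 --cores 2 --memory 2048   # Deluge
pct set 130 --cores 1 --memory 512    # Pi-hole
```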

---

# 13) Backups & snapshots

**Back up VMs/CTs to OMV over NFS (recommended):**

1. In OMV: create an **NFS export** `Backups/`.
2. In Proxmox: **Datacenter → Storage → Add → NFS**

   * ID: `omv-backups`
   * Server: `192.168.1.10`
   * Export: `/export/Backups`
   * Content: **VZDump backup file**
3. **Datacenter → Backup → Add** a schedule (e.g., daily 02:30) → select VMs/CTs → target `omv-backups` → mode **Snapshot** → keep last **7–14**.

**Before upgrades**: take a **snapshot** of key guests (Windows, OMV).
**SnapRAID**: make sure sync runs nightly; scrub weekly.
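
Scheduled jobs cover the routine case; for a one-off backup before risky changes, `vzdump` can also be run manually (a sketch, assuming the storage ID above):

```bash
vzdump 110 120 --storage omv-backups --mode snapshot --compress zstd
```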

---

# 14) Networking hygiene & access

* **vmbr0** bridges your physical NIC → all VMs/CTs get LAN IPs.
* Give **static IPs** (or DHCP reservations) to:

  * OMV, Pi-hole, Deluge, Media server, Windows VM.
* **Router DNS** → set to **Pi-hole** IP. Keep a secondary resolver as fallback.

**Security quick wins**

* Proxmox: **Datacenter → Firewall** = on; Node/VM rules as needed.
* Create a non-root user + **TFA** for Proxmox UI.
* Consider **ACME/Let’s Encrypt** certificate for the Proxmox UI.

---

# 15) What to monitor

* **Disks** (SMART in OMV; `pveperf` / `iostat` on host).
* **CPU/RAM** in Proxmox graphs.
* **Transcoding load** (Plex/Jellyfin dashboard).
* **Pi-hole** queries and blocklists.

---

## That’s it!

If you want, I can:

* Generate **ready-to-paste `qm create`/`pct create` blocks** with your exact IPs and a guessed disk-by-id mapping (you’ll just replace the IDs), or
* Write a **checklist PDF** you can print and tick off as you go.

ChatGPT is accurate here. Go ahead with the plan. Since you have different-sized HDDs, SnapRAID + mergerfs makes sense. But be aware that you might have to get your hands dirty with the command line to some extent to configure that in OMV. OMV should replace your Samba, I guess. The rest of the services you can run as Proxmox LXC containers, which will give you flexibility.

You may want to run a Proxmox Backup Server, either on a separate device or on the same Proxmox machine, for backups. If you add it to this machine, make sure you don't back up the PBS container or VM itself.

Bonus: leverage your Raspberry Pi as a secondary backup target, or run monitoring/NUT (UPS) services for the primary machine.


That is a pretty neat plan :+1: Pi-hole can also be configured along with Unbound for managing DNS.

You could also use the RPi 4 as a backup Pi-hole.


Was gonna say that if OP's area has power issues, or power pricing is high, they might want to consider running stuff like DNS servers on the Pi itself and not on the beefier Proxmox machine.


Unbound has its own nags, and our ISPs have at times caused trouble with the DNS resolver.

So if you see any domain erroring out more than once, do check whether Unbound is the culprit (it can be, for any number of reasons).


Tagging the proxmox expert here @rsaeon


Proxmox is a fantastic piece of software for managing virtual machines; I attribute the entirety of my self-employed work-from-home income over the last five years directly to Proxmox.

However, I do not use it to manage storage — all of my clusters and nodes have a single host drive with no swap partition and an 8 GB root partition. Leftover space is used for VM storage.

Bulk storage is handled by passing physical drives through to the VMs (TrueNAS and OpenMediaVault). This lets me easily transplant drives between systems: all I need to do is migrate the lightweight VM and physically connect the drives to the new host.

So I don’t use any of the ZFS features in Proxmox. Honestly it’s mainly because I don’t understand anything about ZFS and haven’t gotten around to learning.

I run containers inside a Debian VM under Docker. This gives me a single backup file for each group of containers; I have two base Debian VMs in my home cluster, one for automation and one for networking. I give each container a unique network-accessible IP address so I don't need to worry about changing the default ports.

So I don't run containers directly under Proxmox — these layers afford greater compatibility. If need be, I could write the Debian VM's disk image to a bare drive and have it running on bare metal without much reconfiguring.

The performance overhead is impressively low; I'm able to do this on thin clients that don't pull more than 10 W.

I'd echo the suggestion to keep essential services on the Raspberry Pi; it's far easier to keep powered on than something pulling 30 W to 50 W at idle. I have thin clients for essential services, powered directly by a DC-DC voltage converter module connected to an inverter battery. This affords me about two days of battery backup for internet connectivity (ad blocker, router, Wi-Fi), though the longest I've needed was 16 hours in the past year.
