Proxmox Thread - Home Lab / Virtualization

For this, create an LXC container that will run Samba, AdGuard Home (also open source and significantly easier to set up than Pi-hole) and an NFS server. I prefer Arch (but then I prefer Arch for everything). Debian 11 and Arch are both good options for this.
For stuff like this (Jellyfin, torrents, ...) I'd create a separate LXC container and configure GPU passthrough for hardware transcoding.
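For the file-server container, a minimal sketch of the Samba/NFS setup might look like this (Debian syntax; the share path /srv/media and the subnet 192.168.1.0/24 are assumptions for illustration, and note the NFS kernel server generally needs a privileged container or extra LXC configuration):

```shell
# Inside the file-server container -- paths and subnet are placeholders.
apt update && apt install -y samba nfs-kernel-server

# NFS: export the media directory to the local subnet
echo '/srv/media 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# Samba: append a minimal share definition
cat >> /etc/samba/smb.conf <<'EOF'
[media]
   path = /srv/media
   read only = no
   guest ok = no
EOF
systemctl restart smbd nfs-server
```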
How about ubuntu? Have always used that. Will read about debian vs arch vs ubuntu.

I'm still not able to figure out why the SSD is being mounted as read-only in PVE host .. Also /dev/sda2 is not appearing when trying to create directory ... I must be doing something wrong
 
I don't like Ubuntu and will never recommend it to anyone. If you need a Debian-like OS, use Debian.
Looks like it may be time to retire that drive. SSDs (micro SD cards etc.) usually go into write-protected/read-only mode when they go bad. This is the time to copy whatever important data you have on that drive and RMA it or buy another one.
 

I've never added storage directly at the OS level with fstab; I generally use the Storage GUI in Datacenter, or Disks on the node. There are also command-line options with pvesm:
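The pvesm route might look like this (a sketch; the storage ID 'media' and the path /mnt/media are hypothetical, and the path must already be mounted on the host):

```shell
# Register an already-mounted directory as a Proxmox storage
pvesm add dir media --path /mnt/media --content images,iso,backup

# List all configured storages and their usage
pvesm status
```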
 
I don't like Ubuntu and will never recommend it to anyone. If you need a Debian-like OS, use Debian.
Looks like it may be time to retire that drive. SSDs (micro SD cards etc.) usually go into write-protected/read-only mode when they go bad. This is the time to copy whatever important data you have on that drive and RMA it or buy another one.

:( This is a new WD drive I got last week. It works OK in Windows and in an Ubuntu VM (via a SATA-USB connector).

1639492095853.png


I've never added storage directly at the OS level with fstab; I generally use the Storage GUI in Datacenter, or Disks on the node. There are also command-line options with pvesm:
I tried that. Can you provide some reference?

PVE > Disks shows this (I need to mount /dev/sda2):
1639492279562.png

OK, I refreshed the NUC as it was just a stalemate situation, plus I wanted to know the status of the SSD.
Got Win10 installed, plus WD Dashboard. It shows the disk is healthy with 0.02% wear. SATA transfer speeds are 450+ MB/s read/write.

Capture1.PNG
Capture3.PNG
Capture2.PNG


So now I need to go back to Proxmox basics on how to use this disk partially as NFS storage and partially as a media server source.
I appreciate the inputs I've received so far.
 
Can you provide some reference?

I remember now that it's not straightforward, so here is a step-by-step way that I mounted a USB flash drive:

Screen Shot 2021-12-14 at 10.13.10 PM.png

It's a 32GB drive, sdi. Here it is recognized in the Disks page of that node:

Screen Shot 2021-12-14 at 10.14.28 PM.png

At this point, you can attach it directly to a VM, like here as a USB device:

Screen Shot 2021-12-14 at 10.18.20 PM.png

And it'll show up as if it was physically plugged in:

Screen Shot 2021-12-14 at 10.20.14 PM.png

Or you could choose to pass the disk through directly by its UUID:

Screen Shot 2021-12-14 at 10.29.47 PM.png

The UUID is usually very long, here it's just eight characters separated into two with a hyphen.

I prefer the UUID approach over the serial number or any other designator. You can then attach it with a simple qm set command:

Screen Shot 2021-12-14 at 10.34.10 PM.png

999 is just the VMID.
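The command in the screenshot is roughly this shape (a sketch; the VMID 999 matches the post, but the UUID here is a placeholder for whatever `ls -l /dev/disk/by-uuid/` shows for your drive):

```shell
# Attach the disk to VM 999 by its filesystem UUID (placeholder value)
qm set 999 -ide1 /dev/disk/by-uuid/ABCD-1234

# For a Linux guest, a scsi bus is usually the better choice:
# qm set 999 -scsi1 /dev/disk/by-uuid/ABCD-1234
```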

I'm using ide1 as the designator because it's a Windows VM; for Linux, scsi is a better choice. The number can be any number that isn't already in use. Here, I've already got ide0 as the boot drive, so I chose ide1:

Screen Shot 2021-12-14 at 10.34.35 PM.png

This is how you'd normally attach drives permanently. If the drive is not present on the host, the VM won't start unless you remove it from the Hardware page. Here, for this USB drive that I attached by UUID, Windows sees it as a physical local disk:

Screen Shot 2021-12-14 at 10.40.27 PM.png

Previously, when it was attached as a USB device, this is how Windows saw it:

Screen Shot 2021-12-14 at 10.50.24 PM.png

If your goal is to set up a VM for media sharing, then this is the cleanest way to attach a drive that already has data, and to have that data accessible to the VM to be shared further on your home network. The VM would also be able to present the drive to your other computers as a network storage location.

There are other ways to attach storage, for example if you want it available across all nodes in a cluster. But that would be on the host side of Proxmox, and not available to any hosted VMs or containers. And if you want to use a single drive for multiple VMs, then you'll need to format the drive and have Proxmox manage it.


 
Thanks, nicely explained. I will go through it.


There are other ways to attach storage, for example if you want it available across all nodes in a cluster. But that would be on the host side of Proxmox, and not available to any hosted VMs or containers. And if you want to use a single drive for multiple VMs, then you'll need to format the drive and have Proxmox manage it.
I still do not know the reason why the filesystem is mounted read-only, but otherwise the scenario seems to be as above. I'm trying to mount the disk on the host and then make it available to VMs. I guess what I thought was that the disk would get 'added' as a storage disk in PVE, and then I would be able to map/bind folders to the respective VMs (media to Jellyfin, documents to NFS, etc.). Maybe that is feasible, but I'm learning as I go.
 
If it's an NTFS drive, you'll need to install the driver with apt install ntfs-3g. You may need to do an apt update first. You should get write access with ntfs-3g installed.
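On the PVE host that might look like the following (a sketch; /dev/sda2 is the partition from this thread, and /mnt/media is an assumed mountpoint):

```shell
# Install the FUSE NTFS driver on the host
apt update
apt install -y ntfs-3g

# Remount the partition read-write with the ntfs-3g driver
umount /mnt/media 2>/dev/null
mount -t ntfs-3g /dev/sda2 /mnt/media

# Verify: the mount line should now show 'rw'
mount | grep /mnt/media
```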

Proxmox is a hypervisor in that it gives block-level access to hardware for the virtual machines it hosts. Proxmox would have no idea what's inside a VM's disk image. The setup you're thinking of sounds more like a NAS, which you can do with Proxmox. I have Proxmox pass through a few drives to an OpenMediaVault VM, which then pools the drives, sets up redundancy, and provides network shares for different kinds of media.
 
If I pass the disk through to OMV and set up NAS folders, then what is a good approach to use them in the media server (say Jellyfin)? Do the folders need to be added to PVE > Disks or DC > Storage and then supplied to JF?

Also, does this kind of setup induce any latency (despite everything residing on the same hardware), and is there a better approach?

The intent is to have:
NVMe on NUC -> PVE + VMs + ISOs
SATA SSD on NUC -> OMV + Nextcloud shares + JF media shares + persistent storage folders for future VMs (e.g. Python code for a dev env, git, etc.)


EDIT:
Found the issue behind the SSD being mounted as read-only.
NTFS partitions are mounted read-only by default (if using the auto or ntfs type). In order to make them read-write, you need to install the fuse and ntfs-3g packages and then mount with type ntfs-3g.
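To make the fix survive reboots, one possible fstab entry (a sketch; /mnt/media is an assumed mountpoint):

```shell
# Persist the ntfs-3g mount across reboots
echo '/dev/sda2 /mnt/media ntfs-3g defaults 0 0' >> /etc/fstab

# Verify the new entry mounts cleanly
mount -a
```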
 
Once a VM has a drive passed through to it, Proxmox doesn't have access to the data on it. So you'll need to share the data over the network to the other VMs. The OMV VM can have a network share of the media folder for the JF VM to access.
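On the Jellyfin side, mounting such a share might look like this (a sketch; the OMV address 192.168.1.50 and export path /export/media are placeholders for your own values):

```shell
# On the Jellyfin VM: install the NFS client and mount the OMV export
apt install -y nfs-common
mkdir -p /mnt/media
mount -t nfs 192.168.1.50:/export/media /mnt/media
```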

Proxmox virtualizes a network interface for each VM with a unique MAC address (so each VM can be assigned a unique IP address by your router's DHCP server), and each virtual interface is bridged with the physical network interface of the host. This bridge doesn't appear to be speed constrained; here's a quick iperf between OMV and a DietPi virtual machine on the same host:

Screen Shot 2021-12-15 at 10.03.24 AM.png

So throughput isn't an issue, and neither is latency (ping):

Screen Shot 2021-12-15 at 10.07.04 AM.png

But of course, both VMs will be available at the actual link speed of the physical interface to any computer on the network:

Screen Shot 2021-12-15 at 10.13.55 AM.png

The host has a 1Gbit link to a switch, which then has a 100Mbit link to a wifi access point, to which a laptop is connected.

Here it is with a computer connected to the same switch at 1Gbit:

Screen Shot 2021-12-15 at 10.19.24 AM.png
 
Do you know where to buy an HDMI to CSI-2 bridge for cheap? Otherwise I would simply get a Pi 4 and an HDMI capture card.
I remember now that it's not straightforward, so here is a step-by-step way that I mounted a USB flash drive: [...]

Is there a similar way to attach SATA/internal drives via the GUI?
 
Found the issue behind the SSD mounted as read-only [...]
Back up the data and format the drive to ext. You don't want to use NTFS on Linux; there are reliability issues and speed issues too.
 
Is there a similar way to attach SATA/internal drives via the GUI?

There is, but the drives should be completely clean with no data or partitions; then you can manage them through the GUI. This way the disk will be available for Proxmox to use on the hypervisor/host side. You can create a Volume Group or Thinpool with the clean drive, which can then be used to store disk images used by VMs.

Screen Shot 2021-12-16 at 12.27.59 PM.png

Screen Shot 2021-12-16 at 12.28.19 PM.png

The 'Add Storage' checkbox makes the drive visible in the Storage page of Datacenter.
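The CLI equivalent of those GUI steps might look roughly like this (a sketch; /dev/sdb, the volume group name vmdata and the storage ID ssd-thin are assumptions, and the first command destroys any data on the disk):

```shell
# Wipe any old filesystem signatures from the blank disk (destructive!)
wipefs -a /dev/sdb

# Create a volume group and a thin pool spanning the disk
pvcreate /dev/sdb
vgcreate vmdata /dev/sdb
lvcreate -l 100%FREE -T vmdata/data

# Register it as storage for VM disk images and container rootfs
pvesm add lvmthin ssd-thin --vgname vmdata --thinpool data --content images,rootdir
```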
 
There is, but the drives should be completely clean with no data or partitions, then you can manage them through GUI. This way the disk will be available for Proxmox to use on the hyperviso/hostr side. You can create an Volume Group or Thinpool with the clean drive, which then can be used to store disk images used by VM's.

View attachment 122110

View attachment 122111

The 'Add Storage' checkbox makes the drive visible in Storage page of Datacenter.
Oh no, what I want to do is pass through the whole disk to a VM, like in ESXi or Hyper-V.
 
Oh no, what I want to do is pass through the whole disk to a VM, like in ESXi or Hyper-V.
I used this guide


@rsaeon, @ishanjain28, @tech.monk and others, thanks a lot for your inputs. Things are in better shape now.
1639658710808.png

I set up OMV and attached the SATA SSD to it via passthrough. Set up NFS and SMB shares, users, etc.
Next I set up Jellyfin in an LXC (TurnKey Core) and mounted the NFS media shares.

Overall this has been exciting so far. I've learnt a lot and have miles to go. My next stop is to set up a monitoring node (maybe a TIG stack), a Python dev env, etc.
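For reference, mounting an NFS share into an LXC is commonly done by mounting it on the PVE host and bind-mounting it into the container (a sketch; the CTID 105, the OMV address and both paths are placeholders):

```shell
# On the PVE host: mount the OMV export, then bind it into the container
mount -t nfs 192.168.1.50:/export/media /mnt/omv-media
pct set 105 -mp0 /mnt/omv-media,mp=/media
```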

Back up the data and format the drive to ext. You don't want to use NTFS on Linux; there are reliability issues and speed issues too.
If I do, will I be able to attach the disk (via USB/SATA) to Windows in order to copy the data? 200 GB will take a long time to transfer over the network.
Another query: will this format work with an SMB share?
 
If I do, will I be able to attach the disk (via USB/SATA) to Windows in order to copy the data? 200 GB will take a long time to transfer over the network.
Another query: will this format work with an SMB share?
You can even use something that installs on the Proxmox server itself, set it up as an SMB share, and just browse to it over the network from inside the VM. The Proxmox machine should be visible to the PCs on the network, with the drive shared through it. You will have to enable the network-sharing part. Seems like the easiest way to do it, considering there's no passing a disk to a VM involved.

Or you can just install OMV and pass the disk to that. Remember you might have to format the disk to ext3 in OMV for it to reach full speed in transfers. Then just take it out, copy the data, add it back and see. Share via SMB on OMV.

More ideas here -
 
Thanks @Party Monger

While I agree that technically we can have SMB/NFS installed on PVE itself, most of the guides advise against it. I'm trying to follow the recommended approach. What I'm finding as I learn is that the flexibility hypervisors provide is sufficient to achieve most use cases; it's just that the underlying concepts need to be understood.

If not Proxmox, OMV is also sufficient as a standalone server. With the Docker plugin, the expansion scenarios are endless.

I'm following /r/Proxmox and /r/homelab. This is a whole new world to me and presents many new concepts to learn.
 
Guys, I want to install NVR software on Proxmox. I'd prefer something modern and hassle-free that can install as a container, so it can use the Intel graphics on my NUC without the need for any passthrough. The cameras are 2 TAPO C100s and 1 TAPO C310. I need it to record 24/7. It will be recording to a 2TB WD Purple disk connected via a hard-disk dock to a USB 3 port.

1) Primary need - flawless recording (no need to even process it) and saving it to the drive; one month of footage or whatever space allows.
2) Secondary need - object detection (humans in particular). I don't mind not having this for the first month, as I don't have much time to wire up and set up everything.

The TAPO cams will also be recording to their SD cards, so I will use both the Tapo app and the NVR to view footage.

I was considering Shinobi. Any other easy and new options? Don't suggest Blue Iris, as I don't want to do this in a Windows VM; strictly a container, for the aforementioned reasons.

If you have installed Shinobi, any guides you used would be good to have.
 
Guys, I want to install NVR software on Proxmox... [...]
Take a look at this.

MotionEyeOS has Docker support too.