NFS Share in a Proxmox LXC

I’ve successfully mounted an NFS share (served by an Ubuntu Server) to an Ubuntu 20.04 LXC container in Proxmox :grinning:

Anyone know how to export an NFS share from within an Ubuntu 20.04 LXC container in Proxmox? I was unable to do it :disappointed_relieved:

Welcome to the forum!

Have you checked the Nested Container and NFS checkboxes in your CT properties?
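
For reference, this should be the CLI equivalent of those two checkboxes; the CT ID 200 here is just an example, and the container needs a restart for it to take effect:

```bash
# Allow nesting and NFS mounts inside the container (hypothetical CT ID)
pct set 200 --features nesting=1,mount=nfs
```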

Yes. Before I checked them, even the NFS client mount didn’t work. After I checked them, I was indeed able to successfully mount an NFS share into the container.

My challenge now is to export an NFS share from the container to another Proxmox VM.

I can’t really look it up right now. I believe you need to install nfs-kernel-server on the host and reboot (or modprobe nfs), then do the same on the client. If that doesn’t work, you may need to try getting the container unconfined.

But I know one thing that usually works: sshfs. You could install sshfs, fuse3, or fuse-sshfs (the package name depends on your distro) on the target client and just run an SSH server on the container you want to share from. Then it’s as easy as scp, except you mount a given path instead of copying files. Do note that sshfs can be slower than NFS, but I use it all the time and have never noticed; it’s good enough for streaming uncompressed video on the same LAN.
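
In case it helps, here is a rough sketch of that sshfs route; the hostname, user, and paths are placeholders, and it assumes the container already runs an SSH server:

```bash
# On the client that wants access to the container's files
apt install sshfs

# Mount a directory from the container onto a local path
mkdir -p /mnt/plex-media
sshfs user@plex-container:/srv/media /mnt/plex-media -o reconnect

# Unmount when done (fusermount3 -u on newer FUSE versions)
fusermount -u /mnt/plex-media
```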

I read (link) that Proxmox does not support NFS exports… Anyone know if this is a limitation of Proxmox VE?

It could be a limitation of LXC itself, not necessarily Proxmox. It wouldn’t surprise me, since NFS is dependent on the kernel. So, while insecure, you may be able to install nfs-kernel-server on Proxmox, reboot, then uncheck the “unprivileged” checkbox on the container to give it unlimited power :zap::zap: and thus allow it to hook into the kernel to export NFS shares.

But, as others pointed out in that link, if the NFS export hangs, it could lead to the whole Proxmox host hanging. So sshfs, while slower and maybe not as compatible, is a somewhat safer bet.

What exactly are you trying to achieve with this? If you don’t mind me asking.

I’ve migrated my Plex server from a VM to a container; it’s running well and with far fewer resources.

I’m now trying to export the media directory as an NFS share so it’s easy to load additional media from other devices on the LAN.

Ok, got it now. Seems reasonable, but I’m not entirely sure how to achieve that. I can think of two workarounds, both of which involve some obvious overhead, plus maybe a third, lighter alternative.

The first two involve mounting the local file system from the container to another device using sshfs and using that device to export the NFS share. But then you would be connecting to that secondary device, even though the media would actually be getting uploaded to the container.

The first option would be to install NFS on Proxmox, then use sshfs to mount a location from the container, like /home/user/media/, onto Proxmox at /mnt/plex. From there, you would export /mnt/plex/media from Proxmox as an NFS share. I would advise mounting the folder in one location and only NFS-exporting a subfolder within that mount point, so that if the sshfs mount ever drops, Proxmox won’t export its own (empty) directory.
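
Roughly, and completely untested, that first option could look like this on the Proxmox host; the user, hostname, paths, and subnet are placeholders, and re-exporting a FUSE mount over the kernel NFS server may need an explicit fsid:

```bash
# On the Proxmox host
apt install sshfs nfs-kernel-server

# Mount the container's home directory over sshfs
mkdir -p /mnt/plex
sshfs user@plex-container:/home/user /mnt/plex -o allow_other,reconnect

# Export only a subfolder inside the mount point, not the mount point itself
echo '/mnt/plex/media 192.168.0.0/24(rw,no_subtree_check,fsid=10)' >> /etc/exports
exportfs -ra
```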

The second option would be basically the same thing, but instead of doing it on Proxmox itself, you would make a small VM with 128 MB of RAM and 2 vCPUs running Alpine Linux (a super lightweight distro) and do it there.

The third option would be to skip NFS altogether and transfer files using rsync, scp, or SFTP, the latter probably being the easiest, as there are plenty of SFTP clients available for a multitude of OSes.
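
Day to day, the third option would look something like this; the host and paths are placeholders, and the container only needs an SSH server:

```bash
# Push new media to the container with rsync over SSH
rsync -avh --progress ./new-movie/ user@plex-container:/srv/media/movies/new-movie/

# Or interactively with SFTP
sftp user@plex-container
# sftp> put episode01.mkv /srv/media/tv/
```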

Here’s what I found on the Proxmox forums for Proxmox VE 5.1:

# Installing NFS inside LXC Container on Proxmox 5.1

## Host Setup:

Create LXC Container as usual, but do not start it yet.

```bash
# Install NFS-Kernel on Host
apt install nfs-kernel-server

# Create a new AppArmor file:
touch /etc/apparmor.d/lxc/lxc-default-with-nfsd

# Write Profile:
cat > /etc/apparmor.d/lxc/lxc-default-with-nfsd << 'EOF'
# Do not load this file. Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default-with-nfsd flags=(attach_disconnected,mediate_deleted) {
#include <abstractions/lxc/container-base>

# the container may never be allowed to mount devpts. If it does, it
# will remount the host's devpts. We could allow it to do it with
# the newinstance option (but, right now, we don't).
deny mount fstype=devpts,
mount fstype=nfsd,
mount fstype=rpc_pipefs,
mount fstype=cgroup -> /sys/fs/cgroup/**,
}
EOF

# Activate the new Profile:
apparmor_parser -r /etc/apparmor.d/lxc-containers

# Add Profile to Container:
# (in this case: id = 200)
echo 'lxc.apparmor.profile = lxc-container-default-with-nfsd' \
>> /etc/pve/nodes/sniebel/lxc/200.conf

# As well as to its config:
echo 'lxc.apparmor.profile = lxc-container-default-with-nfsd' \
>> /var/lib/lxc/200/config

# Also add your mountpoint to the container:
# If you have a cluster setup:
echo 'mp0: /mnt/host_storage,mp=/mnt/container_storage' \
>> /etc/pve/nodes/cluster_node/lxc/200.conf

# If you have a single node setup:
echo 'mp0: /mnt/host_storage,mp=/mnt/container_storage' \
>> /etc/pve/lxc/200.conf

# Finally, start the container:
lxc-start -n 200
```


## Container Setup:

ssh into the container or do a simple `lxc-attach -n 200` on your host (where 200 is the id).


```bash
# Install nfs
apt update
apt install nfs-kernel-server

# Edit Exports
nano /etc/exports

# or append like so (example):
echo '/mnt/container_storage 192.168.0.0/16(rw,async,insecure,no_subtree_check,all_squash,anonuid=501,anongid=100,fsid=1)' \
>> /etc/exports

# Disconnect from the container and restart it:
```


## Host again:

Back on the Host restart the container:

```bash
lxc-stop -n 200
lxc-start -n 200
```


Because the NFS kernel server runs on the host, the container cannot access its status.
`service nfsd status` therefore shows as 'not running' inside the container… this seems to be normal (?)


Further useful commands:

```bash
nfsstat   # list NFS statistics
```
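
If the export comes up, a quick sanity check from another machine on the LAN could look like this; the IP address is a placeholder and the path is the example value from the guide above:

```bash
# On an NFS client (another VM or PC on the LAN)
apt install nfs-common

# List what the container is exporting
showmount -e 192.168.0.50

# Mount the export
mkdir -p /mnt/media
mount -t nfs 192.168.0.50:/mnt/container_storage /mnt/media
```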

Unfortunately, I couldn’t find any other resources on this. I did find something on the LXD forums mentioning that you can probably only run a single NFS-exporting container on a host, due to how NFS works (which makes sense), so keep that in mind.

This all seems backward to me. Why wouldn’t you let your Plex container mount the media share from your fileserver, so that your data is stored, managed, and backed up on the fileserver? That’s how we have our network set up, and it makes managing things way easier and more efficient.


I didn’t feel comfortable with the first two. I’m not educated enough to fully understand why Proxmox LXCs support mounting NFS shares but not exporting them; if you have the patience, I’d love to learn.

The third path intuitively feels like a healthier scheme. It works great. Thank you :pray:, I really appreciate the help.


I don’t have a dedicated fileserver; I’ve always mounted the Plex media disk into the VM running the Plex server. New media is loaded and old media removed over the network, but not very frequently.

Sounds like you have a lot more data being stored on your fileserver; that certainly makes sense.

What fileserver do you use?

Ours are Synology ones, but it would be the same setup if running TrueNAS, QNAP, Unraid, or even NFS exports from a Linux PC with just a bunch of disks (JBOD).


Technically, without giving more permissions to the container, it supports neither. NFS is deeply ingrained in the kernel. There could be userland NFS servers out there that may work with LXC, but I don’t know of any. The default NFS package makes use of some kernel features.

LXC as a technology is just a virtualized OS, using features of the host’s kernel, in this case Proxmox. So to give access to NFS, you need to give your containers access to parts of your host’s kernel. If you aren’t working in an enterprise environment and you don’t go around plugging random USB sticks found on the street into your devices, you should be fine with disabling the unprivileged container option. But if SFTP works for you, then that removes a lot of headaches IMO.

Also, +1 for running media on a separate server, or even a separate VM. It means that if Plex ever dies, or if you want to migrate to Jellyfin, or move your media server to other hardware, you don’t need to move your data with it, which is pretty nifty. I should probably learn to ask more fundamental questions rather than just addressing what people are trying to achieve in their given situation, i.e. I should think outside the box more.


Do LXC containers have the ability to map local host directories? Docker can, which is how we have our Gitlab Docker container working (it’s hosted on one of our Synologys). That way both the Synology backup as well as Gitlab’s backup back up our repos, plus Gitlab’s backup also backs up the database, etc.

If LXC can, then that should work nicely, especially if your host can also NFS-share the same directory so you can add/edit/delete media files.


Quick question re SFTP: if I have the user:group plex:plex needed by the Plex server, how do I add the plex user to SFTP so it’s read/writable from a network SFTP client?


They do. You’re a genius!
https://en.opensuse.org/User:Tsu2/LXC_mount_shared_directory

Yeah, that could work. Again, if it’s nothing that requires insane tin-foil hat security, then giving the container the privilege to mount a certain directory from the host onto its own path should work.
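
For completeness, the bind mount can also be added from the Proxmox CLI, which is just another way of writing the mp0 lines from the guide earlier in the thread (the CT ID and paths are examples):

```bash
# Bind-mount a host directory into the container at /mnt/media
pct set 200 --mp0 /mnt/host_media,mp=/mnt/media
```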

I would assume it shouldn’t be any different than any other user.

`usermod -s /bin/bash plex`

That is to enable a shell for the user. Usually users like plex, mysql, postgres, gitlab, www and so on don’t have access to a shell, so their default shell is configured as /bin/false or /bin/nologin. Changing that will allow you to use said user to log in to the Linux box / VM / container.

Then, all you need to do is change its password and you’re all set.

`passwd plex`

Now you’ll be able to SSH, SCP, or SFTP using the plex user and the new password from any SFTP client.
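
For example, from another machine on the LAN (the IP is a placeholder):

```bash
sftp plex@192.168.1.50
# or point a graphical SFTP client (FileZilla, WinSCP, etc.) at the same address
```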

Why reinvent the wheel? Why not just use this LXC container? File Server | TurnKey GNU/Linux