NFS over VPN with Encrypted LUKS Volumes: Design, Build, and Operations

Audience: Linux systems engineers and SREs
Tested on: Ubuntu 22.04/24.04, NFSv4.2
Scope: Single NFS server exporting multiple encrypted volumes to VPN-attached clients

 

Why this design

I wanted a storage tier that is private by default, simple to operate day-to-day, and fast enough for workstation workflows.

The result is an NFSv4.2 stack that sits entirely behind a VPN. Data at rest is protected with LUKS, data in transit runs only inside the VPN, and clients mount over TCP with conservative NFS options that prioritise consistency and recoverability over benchmark numbers.

Topology

+-------------------------+             VPN (private CIDR)              +---------------------------+
|  NFS Server (Linux)     |-------------------------------------------->|   Clients (Linux)         |
|  - LUKS volumes (XFS)   |                                             |   - laptops / workstations|
|  - /srv/nfs4 export     |<-- ufw allow 2049/tcp (and optional) ------>|   - systemd automounts    |
|  - nfs-kernel-server    |                                             |   - helper scripts        |
+-------------------------+                                             +---------------------------+

Important: There is no WAN exposure. Only the VPN CIDR is allowed to talk to NFS.

Encrypted volumes layout (server)

Each dataset lives on a LUKS container that is unlocked at service bring-up:

# Example mapping - replace with your own device IDs
/dev/disk/by-uuid/<UUID_DATA>   → /dev/mapper/DATALUKS    → /media/storage/DATA
/dev/disk/by-uuid/<UUID_ARCH>   → /dev/mapper/ARCHLUKS    → /media/storage/ARCHIVE
/dev/disk/by-uuid/<UUID_ALBUM>  → /dev/mapper/ALBUMLUKS   → /media/storage/ALBUM
/dev/disk/by-uuid/<UUID_MEDIA1> → /dev/mapper/MEDIA1LUKS  → /media/storage/MEDIA1
/dev/disk/by-uuid/<UUID_MEDIA2> → /dev/mapper/MEDIA2LUKS  → /media/storage/MEDIA2
# Filesystem: XFS or ext4, mounted with noatime,nofail
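
Provisioning a new container follows the same shape. A destructive sketch with <DEV> as a placeholder for a raw block device; run it only on a disk you intend to wipe (LUKS2 and XFS are the assumed defaults here):

```shell
# DESTRUCTIVE: initialise a new LUKS2 container and put XFS on it
cryptsetup luksFormat --type luks2 /dev/disk/by-id/<DEV>   # prompts for a passphrase
cryptsetup open /dev/disk/by-id/<DEV> NEWLUKS
mkfs.xfs /dev/mapper/NEWLUKS
blkid -s UUID -o value /dev/disk/by-id/<DEV>               # UUID for the by-uuid mapping above
```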

Export root is /srv/nfs4 (NFSv4 requirement). Real datasets are bind-mounted into that root,
then exported. This keeps NFS paths stable while you manage disks underneath.

NFSv4 exports (server)

Bind the backing mounts into the NFS root and export with a single fsid root:

# /etc/fstab (bind mounts)
/media/storage/DATA     /srv/nfs4/DATANFS     none bind 0 0
/media/storage/ARCHIVE  /srv/nfs4/ARCHIVENFS  none bind 0 0
/media/storage/ALBUM    /srv/nfs4/ALBUMNFS    none bind 0 0
/media/storage/MEDIA1   /srv/nfs4/MEDIA1NFS   none bind 0 0
/media/storage/MEDIA2   /srv/nfs4/MEDIA2NFS   none bind 0 0

# /etc/exports (sanitised - replace CIDR/hosts)
/srv/nfs4              <VPN_CIDR>(rw,fsid=0,crossmnt,no_subtree_check,sync)
/srv/nfs4/DATANFS      <VPN_CIDR>(rw,no_subtree_check,sync)
/srv/nfs4/ARCHIVENFS   <VPN_CIDR>(rw,no_subtree_check,sync)
/srv/nfs4/ALBUMNFS     <VPN_CIDR>(rw,no_subtree_check,sync)
/srv/nfs4/MEDIA1NFS    <VPN_CIDR>(rw,no_subtree_check,sync)
/srv/nfs4/MEDIA2NFS    <VPN_CIDR>(rw,no_subtree_check,sync)

The default root_squash is safer. If you must run privileged rsync that preserves ownership, enable no_root_squash only for the minimal set of trusted clients inside the VPN, and document the exception.
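
If you do carve out that exception, scope it to a single host rather than the whole subnet. A hedged sketch, with <RSYNC_HOST_IP> standing in for the one trusted client; a single-host entry should take precedence over the subnet entry, so the rest of the VPN keeps root_squash:

```text
# /etc/exports - narrow no_root_squash exception (hypothetical host placeholder)
/srv/nfs4/ARCHIVENFS  <RSYNC_HOST_IP>(rw,no_subtree_check,sync,no_root_squash) <VPN_CIDR>(rw,no_subtree_check,sync)
```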

Networking and firewall

  • NFSv4 uses TCP 2049. That is usually all you need.
  • Pinning mountd/statd to fixed ports is optional. If you do, open those specifically, not “any”.
  • Restrict inbound rules to the exact VPN subnet or host IPs.
# ufw example (replace CIDR)
/usr/sbin/ufw allow in from <VPN_CIDR> to any port 2049 proto tcp
# If you fix mountd at a static port, allow that port too
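
If you do pin mountd (only relevant when you also serve NFSv3 clients; pure v4 traffic stays on 2049), /etc/nfs.conf is the place. A sketch with 20048 as an arbitrary example port; restart nfs-server afterwards and add a matching ufw rule:

```text
# /etc/nfs.conf
[mountd]
port = 20048
```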

Clients (VPN-attached)

Clients mount over NFSv4.2 with conservative defaults that tolerate link flaps and latency without corrupting I/O. Note that timeo is expressed in tenths of a second, so timeo=600 with retrans=2 means roughly a 60-second wait per retry:

# Example options
-o nfsvers=4.2,proto=tcp,_netdev,noatime,hard,timeo=600,retrans=2
# On a client
sudo mount -t nfs4 -o nfsvers=4.2,proto=tcp,_netdev,noatime,hard,timeo=600,retrans=2 \
  nfs-gateway.vpn:/DATANFS  /home/<user>/NFS/DATANFS

For persistent mounts, prefer systemd automounts to avoid blocking boot:

# /etc/fstab (client)
# NFS root not needed here - mount datasets directly
nfs-gateway.vpn:/DATANFS   /home/<user>/NFS/DATANFS  nfs4  _netdev,nofail,noatime,x-systemd.automount,x-systemd.idle-timeout=600,nfsvers=4.2,hard,timeo=600,retrans=2  0 0
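
After editing fstab, run systemctl daemon-reload so the automount unit exists, then check it by name. The unit name is the escaped mount path; a minimal sketch of that escaping in pure bash, assuming no special characters in the path (alice is a hypothetical user):

```shell
#!/usr/bin/env bash
# Sketch: derive the systemd automount unit name for a mount path, so it can
# be inspected with `systemctl status <unit>`. Approximates `systemd-escape -p`.
path_to_automount_unit() {
  local p="${1#/}"                      # drop the leading slash
  printf '%s.automount\n' "${p//\//-}"  # remaining slashes become dashes
}
path_to_automount_unit /home/alice/NFS/DATANFS
```

`systemctl status` on the resulting unit shows whether the automount trigger is armed.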

Operational helper scripts

The following helpers encapsulate the bring-up, teardown and client mounts. They are sanitised versions of my production scripts. Replace placeholders before use.

Server bring-up: unlock, mount, bind, export

#!/usr/bin/env bash
# nfs_bringup_server - unlock LUKS, mount filesystems, bind into /srv/nfs4, export shares
set -euo pipefail

log(){ printf "%s %s\n" "$(date -Is)" "$*" ;}
run(){ log "+ $*"; "$@"; }

# --- config: replace with your device IDs and mountpoints ---
declare -A MAP=(
  [DATALUKS]="/dev/disk/by-uuid/<UUID_DATA>:/media/storage/DATA:/srv/nfs4/DATANFS"
  [ARCHLUKS]="/dev/disk/by-uuid/<UUID_ARCH>:/media/storage/ARCHIVE:/srv/nfs4/ARCHIVENFS"
  [ALBUMLUKS]="/dev/disk/by-uuid/<UUID_ALBUM>:/media/storage/ALBUM:/srv/nfs4/ALBUMNFS"
  [MEDIA1LUKS]="/dev/disk/by-uuid/<UUID_MEDIA1>:/media/storage/MEDIA1:/srv/nfs4/MEDIA1NFS"
  [MEDIA2LUKS]="/dev/disk/by-uuid/<UUID_MEDIA2>:/media/storage/MEDIA2:/srv/nfs4/MEDIA2NFS"
)

run install -d -m 0755 /srv/nfs4

for name in "${!MAP[@]}"; do
  IFS=: read -r dev mp bindmp <<<"${MAP[$name]}"
  # Unlock if not mapped
  if [[ ! -e "/dev/mapper/$name" ]]; then
    # Prompts for passphrase or uses keyfile if configured at /root/keys/<name>.key
    if [[ -f "/root/keys/${name}.key" ]]; then
      run cryptsetup open --key-file "/root/keys/${name}.key" "$dev" "$name"
    else
      run cryptsetup open "$dev" "$name"
    fi
  fi
  run install -d -m 0755 "$mp" "$bindmp"
  # Mount backing FS
  mountpoint -q "$mp" || run mount -o noatime "/dev/mapper/$name" "$mp"
  # Bind into NFS root
  mountpoint -q "$bindmp" || run mount --bind "$mp" "$bindmp"
done

# Export and start NFS
run systemctl enable --now nfs-server
run exportfs -ra
log "OK - NFS bring-up complete"

Server teardown: unexport, unbind, umount, lock

#!/usr/bin/env bash
# nfs_teardown_server - unexport NFS, unbind mounts, close LUKS
set -euo pipefail
log(){ printf "%s %s\n" "$(date -Is)" "$*" ;}
run(){ log "+ $*"; "$@"; }

# Order matters: unexport, then unbind, then umount, then close
run exportfs -ua || true
run systemctl stop nfs-server || true

BIND_DIRS=(/srv/nfs4/DATANFS /srv/nfs4/ARCHIVENFS /srv/nfs4/ALBUMNFS /srv/nfs4/MEDIA1NFS /srv/nfs4/MEDIA2NFS)
for d in "${BIND_DIRS[@]}"; do mountpoint -q "$d" && run umount "$d" || true; done

BACKS=(/media/storage/DATA /media/storage/ARCHIVE /media/storage/ALBUM /media/storage/MEDIA1 /media/storage/MEDIA2)
for m in "${BACKS[@]}"; do mountpoint -q "$m" && run umount "$m" || true; done

for name in DATALUKS ARCHLUKS ALBUMLUKS MEDIA1LUKS MEDIA2LUKS; do
  [[ -e "/dev/mapper/$name" ]] && run cryptsetup close "$name" || true
done

log "OK - NFS teardown complete"

Client: mount a single share (laptop-only gate)

#!/usr/bin/env bash
# mount_vpn_nfs <SHARE>  where SHARE ∈ {DATANFS|ARCHIVENFS|ALBUMNFS|MEDIA1NFS|MEDIA2NFS}
set -euo pipefail

# --- host gate (optional sanity) ---
case "$(hostname -s)" in asus|hplaptop) ;; * ) echo "Refusing: laptop-only helper"; exit 10;; esac

SERVER="nfs-gateway.vpn"  # replace with your VPN DNS name
OPTS="nfsvers=4.2,proto=tcp,_netdev,noatime,hard,timeo=600,retrans=2"

SHARE="${1:-}"
case "$SHARE" in DATANFS|ARCHIVENFS|ALBUMNFS|MEDIA1NFS|MEDIA2NFS) ;; * ) echo "Usage: $0 <SHARE>"; exit 2;; esac

MP="$HOME/NFS/$SHARE"
install -d -m 0755 "$MP"

# Pre-checks: DNS + port 2049
getent ahostsv4 "$SERVER" >/dev/null || { echo "cannot resolve $SERVER" >&2; exit 1; }
( exec 3<>"/dev/tcp/$SERVER/2049" ) 2>/dev/null || echo "warn: 2049 not reachable - continuing"  # fd 3 closes with the subshell

mountpoint -q "$MP" || sudo mount -t nfs4 -o "$OPTS" "$SERVER:/$SHARE" "$MP"
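
One caveat with the bash /dev/tcp probe: on a silently dropped SYN (e.g. a firewall DROP rule) it can hang for minutes. A variant wrapped in timeout, with host and port as parameters; the function name is my own:

```shell
#!/usr/bin/env bash
# tcp_reachable HOST PORT - exit 0 if a TCP connect succeeds within 3 seconds
tcp_reachable() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}
# Example: port 1 on loopback is almost certainly closed
tcp_reachable 127.0.0.1 1 || echo "closed"
```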

Client: mount all shares

#!/usr/bin/env bash
# mount_all_nfs - iterate standard share set
set -euo pipefail
SHARES=(DATANFS ARCHIVENFS ALBUMNFS MEDIA1NFS MEDIA2NFS)
for s in "${SHARES[@]}"; do
  "$HOME/LOCAL/shellscripts/mount_vpn_nfs" "$s"
done

Client: unmount helpers

#!/usr/bin/env bash
# umount_all_nfs
set -euo pipefail
SHARES=(DATANFS ARCHIVENFS ALBUMNFS MEDIA1NFS MEDIA2NFS)
for s in "${SHARES[@]}"; do
  MP="$HOME/NFS/$s"
  mountpoint -q "$MP" && sudo umount "$MP" || true
done

Runbook

  1. Server bring-up: run nfs_bringup_server. Verify with exportfs -v.
  2. Client mounts: run mount_all_nfs or the single-share helper.
  3. Server teardown: run nfs_teardown_server when maintenance requires it.

Observability and verification

# Server
nfsstat -s
exportfs -v
showmount -e localhost   # uses the v3 MOUNT protocol; works while rpc.mountd is running

# Client
nfsstat -c
mount | grep nfs4

Troubleshooting

rsync chown fails with “operation not permitted”
That is root_squash doing its job. Either run rsync unprivileged with appropriate uid/gid mapping, or enable no_root_squash narrowly for the rsync host inside the VPN. Document the exception.
Clients hang at boot waiting for NFS
Use systemd automounts via x-systemd.automount and nofail. Do not hard-code static mounts that block boot if the VPN is not up.
NFS is up but mounts are empty after reboot
You likely forgot to restore the bind mounts. Ensure the bind lines are in /etc/fstab or run the bring-up helper so the bind phase reattaches /media/storage/* into /srv/nfs4/*.
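
A quick server-side check for this failure mode; it only reads mount state, so it is safe to run at any time:

```shell
#!/usr/bin/env bash
# Flag any export directory that is not actually a mountpoint
for d in /srv/nfs4/DATANFS /srv/nfs4/ARCHIVENFS /srv/nfs4/ALBUMNFS \
         /srv/nfs4/MEDIA1NFS /srv/nfs4/MEDIA2NFS; do
  mountpoint -q "$d" || echo "NOT MOUNTED: $d"
done
```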

Security posture

  • Traffic restricted to VPN only. No WAN exposure. Firewall scoped to VPN CIDR.
  • Data at rest under LUKS. Containers are locked during teardown or when the host is down.
  • NFS auth is POSIX uid/gid (sec=sys). For higher assurance in multi-tenant environments, consider krb5p.
  • Audit any no_root_squash exceptions and time-box them.
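
For the krb5p option above, the security flavour is pinned per export. A sketch assuming a working Kerberos realm with keytabs on server and clients, which this VPN-only setup does not otherwise require; clients would then mount with -o sec=krb5p:

```text
# /etc/exports - hypothetical krb5p variant (auth + integrity + encryption)
/srv/nfs4  <VPN_CIDR>(rw,fsid=0,crossmnt,no_subtree_check,sync,sec=krb5p)
```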

Appendix: Key config references

# Check NFS versions enabled
cat /proc/fs/nfsd/versions   # expect "-2 +3 +4 +4.1 +4.2"

# Service status
systemctl status nfs-server

# Re-export after /etc/exports change
exportfs -ra