
Using mount --bind

A bind mount provides an alternate view or mapping of a directory tree.

 

Usually mounting a device presents a view of that storage device in the form of a directory tree.
A bind mount on the other hand takes an existing directory tree and replicates it under a different mount point.
The directories and files in the bind mount are the same as the original directory tree from that point.
Any modification on one side will be reflected on the other side, since the two views depict the same data tree.
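Since any change made through one view is visible through the other, a small experiment makes this concrete. The sketch below uses a throwaway mount namespace so the system's real mount table stays untouched (this assumes unshare with unprivileged user namespaces is available; otherwise run the mount part as root):

```shell
# Create a source tree and an empty mount point for the alternate view:
mkdir -p /tmp/bind_demo/original /tmp/bind_demo/view
echo "hello" > /tmp/bind_demo/original/file.txt

# Inside a private mount namespace, bind the source onto the view and
# read and write through the bind mount:
unshare -rm sh -c '
  mount --bind /tmp/bind_demo/original /tmp/bind_demo/view
  cat /tmp/bind_demo/view/file.txt
  echo "added" > /tmp/bind_demo/view/new.txt
'

# The write made through the bind mount landed in the original tree:
cat /tmp/bind_demo/original/new.txt
```

The mount disappears with the namespace, but the file written through it remains in the original directory - both paths were the same tree all along.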
They can be especially useful if you want to allow only partial access to a section of a directory tree on a device.

 

Bind mounts are simple to implement.

 

Instead of mounting a device on a particular path, you are mounting a specific path of the file system from that device onto another path.
In Linux, bind mounts are available as a kernel feature. However, they can also be implemented using other methods, such as fusermount or bindfs (not covered here). The examples below cover solely the Linux kernel version (mount – – bind).
Example:

 

(note: the correct syntax is mount --bind, i.e. with *two* dashes. Some web pages render the double dash as a single en-dash, so copy it carefully)

 

mount --bind /media/kevin/PRIMARY_MEDIA /srv/nfs4/PRIMARY_MEDIA

 

this mounts /media/kevin/PRIMARY_MEDIA on /srv/nfs4/PRIMARY_MEDIA

 

in effect it is the same as mounting the underlying device (here /dev/sdc1) directly on the second mount point:

mount -t ext4 /dev/sdc1 /srv/nfs4/PRIMARY_MEDIA
we can also mount a directory on another directory. We do this by using the mount command with the --bind parameter. Think of the bind mount as an alias.

As with an ordinary mount, using mount --bind you can mount a specific path from that device’s file system to a specific new path:
root@asus:/# mount --bind /media/kevin/PRIMARY_MEDIA/MEDIA/IT/ /mnt
root@asus:/#
root@asus:/#
the command findmnt --real displays the actual mounts:

 

root@asus:/# findmnt --real
TARGET SOURCE FSTYPE OPTIONS
/ /dev/nvme0n1p4 ext4 rw,relatime,errors=remount-ro
├─/run/user/1000/doc portal fuse.porta rw,nosuid,nodev,relatime,user_id=1000,group_id=10
├─/boot/efi /dev/nvme0n1p1 vfat rw,relatime,fmask=0077,dmask=0077,codepage=437,io
├─/media/kevin/PRIMARY_BACKUP /dev/sdc2 ext4 rw,relatime,errors=remount-ro
├─/srv/nfs4/PRIMARY_MEDIA /dev/sda1 ext4 rw,relatime
├─/media/kevin/SECONDARY_MEDIA /dev/sdb1 ext4 rw,nosuid,nodev,relatime
├─/media/kevin/DATAVOLUMELUKS /dev/mapper/DATAVOLUMELUKS
│ ext4 rw,relatime
├─/mnt /dev/sdc1[/MEDIA/IT] ext4 rw,relatime,errors=remount-ro
├─/media/kevin/GEMINI geminivpn:/ nfs4 rw,relatime,vers=4.0,rsize=131072,wsize=131072,na
│ └─/media/kevin/GEMINI/DATA geminivpn:/DATA nfs4 rw,relatime,vers=4.0,rsize=131072,wsize=131072,na
└─/media/kevin/PRIMARY_MEDIA /dev/sdc1 ext4 rw,relatime,errors=remount-ro
root@asus:/#
this means that the file system under /mnt contains the contents of /media/kevin/PRIMARY_MEDIA/MEDIA/IT (in other words, using the device name: /dev/sdc1[/MEDIA/IT])
and not the entire /media/kevin/PRIMARY_MEDIA

 

(the entire device is in this case also mounted under /media/kevin/PRIMARY_MEDIA as /dev/sdc1, and additionally bind-mounted at /srv/nfs4/PRIMARY_MEDIA, where it is listed as /dev/sda1)

 

note the different device names sda1 and sdc1!

 

root@asus:/#
root@asus:/# mount | grep PRI
/dev/sda1 on /srv/nfs4/PRIMARY_MEDIA type ext4 (rw,relatime)
/dev/sdc2 on /media/kevin/PRIMARY_BACKUP type ext4 (rw,relatime,errors=remount-ro)
/dev/sdc1 on /media/kevin/PRIMARY_MEDIA type ext4 (rw,relatime,errors=remount-ro)
root@asus:/#

 

in this case the device sda1 does not really exist any more; the actual device is sdc1, and the old name is simply still recorded in the mount table:

 

root@asus:/# fdisk -l /dev/sda
fdisk: cannot open /dev/sda: No such file or directory
root@asus:/#

 

root@asus:/# fdisk -l /dev/sdc
Disk /dev/sdc: 4,55 TiB, 5000947302400 bytes, 9767475200 sectors
Disk model: My Passport 2627
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F8D14D49-BA2C-C244-A469-26B4B26E63D0

 

Device Start End Sectors Size Type
/dev/sdc1 2048 4194306047 4194304000 2T Linux filesystem
/dev/sdc2 4194306048 6291458047 2097152000 1000G Linux filesystem
root@asus:/#
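findmnt can also be asked about a single mount point, printing just the source and filesystem type it has recorded (a sketch, shown here for the root filesystem since the device paths above are specific to that machine):

```shell
# Print the recorded source device and filesystem type for one mount point:
findmnt -n -o SOURCE,FSTYPE /
```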
You can place bind mount entries in /etc/fstab.

Simply use the bind option in the options field, together with any other options you want to include.

The “device” is the existing tree. The filesystem column can be “none” or “bind” (the value is ignored, but using a real filesystem name would cause confusion). For example:

 

/somefolder/somewhere /readonly/somewhere none bind,ro 0 0

NFS mounts do not work: Error message: mount: bad option

NFS mounts do not work and you receive the error message: 

 

mount: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
Example of this Problem

 

root@len:/home/kevin# mount -v -t nfs -o proto=tcp,vers=4,nolock geminivpn:/home/kevin/DATA /media/kevin/DATA
mount: /media/kevin/DATA: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
Filesystem Size Used Avail Use% Mounted on
udev 3.8G 0 3.8G 0% /dev
tmpfs 783M 1.6M 781M 1% /run
/dev/sda12 156G 69G 79G 47% /
tmpfs 3.9G 120M 3.8G 4% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda10 512M 4.0K 512M 1% /boot/efi
tmpfs 783M 60K 783M 1% /run/user/1000
/dev/sda11 4.0G 1.8G 2.2G 45% /media/kevin/LENOVO
/dev/mapper/DATAVOLUMELUKS 12G 7.4G 3.8G 67% /media/kevin/DATAVOLUMELUKS
root@asusvpn:/home/kevin/LOCAL 395G 313G 63G 84% /mnt
Cause of the Problem and Solution

 

This problem occurs because the system requires the nfs-common package to be installed: it provides the /sbin/mount.nfs helper program that the error message refers to, along with its supporting services.
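For filesystem types like nfs and cifs, the generic mount command hands over to a helper program named /sbin/mount.&lt;type&gt;, which is exactly what the error message refers to. A quick check for the missing piece (a sketch; the nfs helper is shipped in the nfs-common package):

```shell
# If the helper is absent, fall back to a hint about the missing package:
ls -l /sbin/mount.nfs 2>/dev/null || echo "mount.nfs helper missing - install nfs-common"
```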

 

The solution is to install nfs-common; the package sets up and starts the required services automatically:

 

root@len:/home/kevin# apt install nfs-common
Reading package lists… Done
Building dependency tree
Reading state information… Done

 

Preparing to unpack …/5-nfs-common_1%3a1.3.4-2.5ubuntu3.4_amd64.deb …
Unpacking nfs-common (1:1.3.4-2.5ubuntu3.4) …
Setting up libtirpc-common (1.2.5-1) …
Setting up keyutils (1.6-6ubuntu1) …
Setting up libnfsidmap2:amd64 (0.25-5.1ubuntu1) …
Setting up libtirpc3:amd64 (1.2.5-1) …
Setting up rpcbind (1.2.5-8) …
Created symlink /etc/systemd/system/multi-user.target.wants/rpcbind.service → /lib/systemd/system/rpcbind.service.
Created symlink /etc/systemd/system/sockets.target.wants/rpcbind.socket → /lib/systemd/system/rpcbind.socket.
Setting up nfs-common (1:1.3.4-2.5ubuntu3.4) …
.. .. .. ..
.. .. .. ..
Creating config file /etc/idmapd.conf with new version
Adding system user `statd' (UID 125) …
Adding new user `statd' (UID 125) with group `nogroup' …
Not creating home directory `/var/lib/nfs’.
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-client.target → /lib/systemd/system/nfs-client.target.
Created symlink /etc/systemd/system/remote-fs.target.wants/nfs-client.target → /lib/systemd/system/nfs-client.target.
nfs-utils.service is a disabled or a static unit, not starting it.
Processing triggers for systemd (245.4-4ubuntu3.16) …
Processing triggers for man-db (2.9.1-1) …
Processing triggers for libc-bin (2.31-0ubuntu9.7) …
root@len:/home/kevin# mountnfsgemini
The NFS mounts then work OK.

 


Installing and Configuring NFS

How to Install NFS on Ubuntu

 

root@len:/#
root@len:/# apt install nfs-kernel-server
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following NEW packages will be installed
nfs-kernel-server
0 to upgrade, 1 to newly install, 0 to remove and 0 not to upgrade.
Need to get 98.9 kB of archives.
After this operation, 420 kB of additional disk space will be used.
Get:1 http://gb.archive.ubuntu.com/ubuntu focal-updates/main amd64 nfs-kernel-server amd64 1:1.3.4-2.5ubuntu3.4 [98.9 kB]
Fetched 98.9 kB in 0s (871 kB/s)
Selecting previously unselected package nfs-kernel-server.
(Reading database … 213177 files and directories currently installed.)
Preparing to unpack …/nfs-kernel-server_1%3a1.3.4-2.5ubuntu3.4_amd64.deb …
Unpacking nfs-kernel-server (1:1.3.4-2.5ubuntu3.4) …
Setting up nfs-kernel-server (1:1.3.4-2.5ubuntu3.4) …
Created symlink /etc/systemd/system/multi-user.target.wants/nfs-server.service → /lib/systemd/system/nfs-server.service.
Job for nfs-server.service canceled.

Creating config file /etc/exports with new version

Creating config file /etc/default/nfs-kernel-server with new version
Processing triggers for man-db (2.9.1-1) …
Processing triggers for systemd (245.4-4ubuntu3.16) …
root@len:/#
On Ubuntu 20.04, NFS version 2 is disabled. Versions 3 and 4 are enabled.

Verify by running:

 

sudo cat /proc/fs/nfsd/versions

 

root@len:~# cat /proc/fs/nfsd/versions
-2 +3 +4 +4.1 +4.2
root@len:~#
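The flags read as follows: a leading + means the version is enabled, a leading - means it is disabled. A small sketch that pulls the enabled versions out of such a line (run against the sample string, since /proc/fs/nfsd only exists where the NFS server is loaded):

```shell
# Extract only the enabled versions from an nfsd versions string:
versions="-2 +3 +4 +4.1 +4.2"
printf '%s\n' $versions | grep '^+' | tr -d '+'
```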

 

NFS server configuration is defined in /etc/default/nfs-kernel-server and /etc/default/nfs-common files.

The default settings are adequate for most environments.

 

NFS Version 4 uses a global root directory, where exported directories are relative to this directory.

 

You link the share mountpoint to the directories you want to export by using bind mounts.

 

For example:

 

First we set the /srv/nfs4 directory as the NFS root.

 

We will share two directories (/var/www and /opt/backups) with different settings.

 

/var/www/ is owned by user www-data,

 

while /opt/backups is owned by root.

 

First we create the root directory and the share mountpoints:

 

sudo mkdir -p /srv/nfs4/backups
sudo mkdir -p /srv/nfs4/www

 

Bind the NFS Mount Points
MAKE SURE YOU INCLUDE THE BIND MOUNT – AND ADD IT TO THE /etc/fstab if it should be automatically activated on reboots!

 

Next we bind mount the directories to the share mountpoints:

 

sudo mount --bind /opt/backups /srv/nfs4/backups
sudo mount --bind /var/www /srv/nfs4/www
To make the bind mounts permanent across reboots, add the following to the /etc/fstab file:

 

/etc/fstab
/opt/backups /srv/nfs4/backups none bind 0 0
/var/www /srv/nfs4/www none bind 0 0

 

This is important – otherwise after a reboot the bind mounts will be missing, and the NFS exports under /srv/nfs4 will no longer be connected to their respective source directories!
then export the file systems

 

We do this by adding the file systems to be exported and the clients to be permitted access to those shares to the /etc/exports file:

 

Each line for an exported file system looks like this:

 

export host(options)

 

for our example, we could have something like this, for various networks and client machines:

/srv/nfs4 192.168.10.0/24(rw,sync,no_subtree_check,crossmnt,fsid=0)
/srv/nfs4/backups 192.168.10.0/24(ro,sync,no_subtree_check) 192.168.20.5(rw,sync,no_subtree_check)
/srv/nfs4/www 192.168.10.30(rw,sync,no_subtree_check)

 

The first line contains the fsid=0 option to define the NFS root directory (here it is /srv/nfs4).

 

Access to this NFS volume is permitted solely to the clients from subnet 192.168.10.0/24.

 

The crossmnt option allows us to share directories that are sub-directories of an exported directory.

 

The second line demonstrates how to specify multiple export rules for one filesystem: read-only access is granted to the whole 192.168.10.0/24 subnet, while read and write access is granted only to the client machine 192.168.20.5.

 

Finally, the sync option tells NFS to write changes to disk before replying to the client.

 

After saving the file, export the shares by running:

 

exportfs -ar

 

Whenever you modify the /etc/exports file, this command must be executed so that the file is re-read by the NFS server.
Practical example:
root@len:/srv#
root@len:/srv#
root@len:/srv# mkdir nfs4
root@len:/srv# cd nfs4/
root@len:/srv/nfs4# ls
root@len:/srv/nfs4# mkdir PRIMARY_MEDIA
root@len:/srv/nfs4# mkdir PRIMARY_BACKUP

 

root@len:/srv/nfs4# mount --bind /media/kevin/PRIMARY_MEDIA /srv/nfs4/PRIMARY_MEDIA
root@len:/srv/nfs4# mount --bind /media/kevin/PRIMARY_BACKUP /srv/nfs4/PRIMARY_BACKUP
verify with:

 

df

 

/dev/sdb1 2063187344 1504043404 454269956 77% /srv/nfs4/PRIMARY_MEDIA
/dev/sdb2 1031069848 326633048 651991616 34% /srv/nfs4/PRIMARY_BACKUP
root@len:/srv/nfs4#
then enter the shares in the /etc/exports file:
root@len:/srv/nfs4# cat /etc/exports
# /etc/exports: the access control list for filesystems which may be exported
# to NFS clients. See exports(5).
#
# Example for NFSv2 and NFSv3:
# /srv/homes hostname1(rw,sync,no_subtree_check) hostname2(ro,sync,no_subtree_check)
#
# Example for NFSv4:
# /srv/nfs4 gss/krb5i(rw,sync,fsid=0,crossmnt,no_subtree_check)
# /srv/nfs4/homes gss/krb5i(rw,sync,no_subtree_check)
#

 

# allow only asusvpn to mount:

 

/srv/nfs4 10.147.18.14(rw,sync,fsid=0,crossmnt,no_subtree_check)
/srv/nfs4/PRIMARY_MEDIA 10.147.18.14(rw,sync,no_subtree_check)

/srv/nfs4/PRIMARY_BACKUP 10.147.18.14(rw,sync,no_subtree_check)

root@len:/srv/nfs4#
root@len:/srv/nfs4# exportfs -va
exporting 10.147.18.14:/srv/nfs4/PRIMARY_BACKUP
exporting 10.147.18.14:/srv/nfs4/PRIMARY_MEDIA
exporting 10.147.18.14:/srv/nfs4
root@len:/srv/nfs4#

 

The systemd service nfs-kernel-server has to be running:

 

root@len:/srv/nfs4# systemctl status nfs-kernel-server
● nfs-server.service – NFS server and services
Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2022-04-29 23:43:12 BST; 10min ago
Process: 272163 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Process: 272164 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Main PID: 272164 (code=exited, status=0/SUCCESS)

Apr 29 23:43:11 len systemd[1]: Starting NFS server and services…
Apr 29 23:43:12 len systemd[1]: Finished NFS server and services.
root@len:/srv/nfs4#
you can then mount on the client
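For example, from a client that is permitted in the exports above, the shares can be mounted relative to the NFS root (a sketch; &lt;server&gt; is a placeholder for the NFS server’s hostname or address):

```
mount -t nfs4 <server>:/ /mnt                 # mount the whole NFSv4 root
mount -t nfs4 <server>:/PRIMARY_MEDIA /mnt    # or a single export, relative to the root
```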
How To Display NFS Version
NFS Server version:

nfsstat -s

 

NFS Client version:

nfsstat -c
root@len:/srv/nfs4# nfsstat --help
Usage: nfsstat [OPTION]…

-m, --mounts Show statistics on mounted NFS filesystems
-c, --client Show NFS client statistics
-s, --server Show NFS server statistics
-2 Show NFS version 2 statistics
-3 Show NFS version 3 statistics
-4 Show NFS version 4 statistics
-o [facility] Show statistics on particular facilities.
nfs NFS protocol information
rpc General RPC information
net Network layer statistics
fh Usage information on the server's file handle cache
io Usage information on the server's io statistics
ra Usage information on the server's read ahead cache
rc Usage information on the server's request reply cache
all Select all of the above
-v, --verbose, --all Same as '-o all'
-r, --rpc Show RPC statistics
-n, --nfs Show NFS statistics
-Z[#], --sleep[=#] Collects stats until interrupted.
Cumulative stats are then printed
If # is provided, stats will be output every
# seconds.
-S, --since file Shows difference between current stats and those in 'file'
-l, --list Prints stats in list format
--version Show program version
--help What you just did

 

root@len:/srv/nfs4#
Firewalling for NFS

 

rpcinfo -p | grep nfs

 

Ports 111 (TCP and UDP) and 2049 (TCP and UDP) must be reachable on the NFS server.
This will give a list of all ports used by all NFS-related programs:

 

rpcinfo -p | awk '{print $3" "$4}' | sort -k2n | uniq

root@intel:/media/kevin# rpcinfo -p | awk '{print $3" "$4}' | sort -k2n | uniq
proto port
tcp 111
udp 111
tcp 2049
udp 2049
tcp 36705
tcp 39599
udp 39774
udp 40836
tcp 44743
udp 48795
tcp 49095
udp 58224
root@intel:/media/kevin#
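To show what each stage of that pipeline does, here is the same awk | sort | uniq chain run against a small captured sample of rpcinfo -p output (the sample data below is illustrative, not from the machine above):

```shell
# awk picks the proto and port columns, sort -k2n orders numerically by
# port (the header sorts first, as "port" is not a number), and uniq
# drops adjacent duplicate proto/port pairs:
printf '%s\n' \
  '   program vers proto   port  service' \
  '    100000    4   tcp    111  portmapper' \
  '    100000    3   udp    111  portmapper' \
  '    100000    4   udp    111  portmapper' \
  '    100003    3   tcp   2049  nfs' \
  '    100003    4   tcp   2049  nfs' |
awk '{print $3" "$4}' | sort -k2n | uniq
```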

 

NFS Ports

 

You need to open the following ports:

 

ufw allow in from 10.147.18.0/24 to any port 111
ufw allow in from 10.147.18.0/24 to any port 2049
ufw allow in from 10.147.18.0/24 to any port 33333

 

root@intel:/home/kevin# ufw allow in from 10.147.18.0/24 to any port 111 
Rule added
root@intel:/home/kevin# 
root@intel:/home/kevin# ufw allow in from 10.147.18.0/24 to any port 2049
Rule added
root@intel:/home/kevin# ufw allow in from 10.147.18.0/24 to any port 33333
Rule added
root@intel:/home/kevin#

 

then do:

 

root@intel:/home/kevin# iptables-save > /etc/iptables.rules
root@intel:/home/kevin#
Also make sure that exportfs -ra is run, otherwise there won't be any NFS volumes exported!

 

root@intel:/# cat /etc/exports

 

/media/kevin/PRIMARY_MEDIA 10.147.18.0/24(rw,insecure,sync,no_subtree_check,no_root_squash) 
/media/kevin/PRIMARY_BACKUP 10.147.18.0/24(rw,insecure,sync,no_subtree_check,no_root_squash)

 

and restart nfs-kernel-server:

 

systemctl restart nfs-kernel-server

 

root@intel:~# systemctl status nfs-kernel-server
● nfs-server.service - NFS server and services
Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Fri 2021-06-04 20:08:31 CEST; 1h 11min ago
Process: 25565 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Process: 25566 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Main PID: 25566 (code=exited, status=0/SUCCESS)

Jun 04 20:08:30 intel systemd[1]: Starting NFS server and services...
Jun 04 20:08:31 intel systemd[1]: Finished NFS server and services.
root@intel:~#
Error Message: chown: operation not permitted

 

By default the root_squash export option is set. This means NFS does not allow the root user on a connecting NFS client to perform operations as root on the NFS server.

 

rsync: [receiver] chown "/home/kevin/file.txt" failed: Operation not permitted (1)

To resolve this, set the no_root_squash option for the share in the /etc/exports file:

 

(rw,insecure,sync,no_subtree_check,no_root_squash)

 

root@intel:/# cat /etc/exports



/media/kevin/PRIMARY_MEDIA 10.147.18.0/24(rw,insecure,sync,no_subtree_check,no_root_squash) 
/media/kevin/PRIMARY_BACKUP 10.147.18.0/24(rw,insecure,sync,no_subtree_check,no_root_squash)
showmount -e

 

root@len:/srv/nfs4# showmount -e
Export list for len:
/srv/nfs4/PRIMARY_BACKUP 10.147.18.14
/srv/nfs4/PRIMARY_MEDIA 10.147.18.14
/srv/nfs4 10.147.18.14
root@len:/srv/nfs4#
root@gemini:~#
root@gemini:~# rpcinfo | egrep "service|nfs"
program version netid address service owner
100003 3 tcp 0.0.0.0.8.1 nfs superuser
100003 4 tcp 0.0.0.0.8.1 nfs superuser
100003 3 udp 0.0.0.0.8.1 nfs superuser
100003 3 tcp6 ::.8.1 nfs superuser
100003 4 tcp6 ::.8.1 nfs superuser
100003 3 udp6 ::.8.1 nfs superuser
root@gemini:~#
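The address column here is an RPC “universal address”: its last two dot-separated components encode the port as high*256 + low, so the .8.1 suffix means 8*256 + 1 = 2049, the standard NFS port. A one-liner to decode it (a sketch):

```shell
# Decode the port number from an RPC universal address such as 0.0.0.0.8.1:
addr="0.0.0.0.8.1"
echo "$addr" | awk -F. '{print $(NF-1) * 256 + $NF}'
```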
To export the Root NFS tree

 

For security reasons, NFS shares should be defined using the NFS root directory definition.
For example with the following definitions in /etc/exports:
/srv/nfs4 10.147.18.0/24(rw,fsid=0,insecure,no_subtree_check,async)
/srv/nfs4/Downloads 10.147.18.0/24(rw,nohide,insecure,no_subtree_check,async)

/srv/nfs4/DATA 10.147.18.0/24(rw,sync,no_subtree_check)
/srv/nfs4/NEXTCLOUD 10.147.18.0/24(rw,sync,no_subtree_check)

 

In this case the first line defines /srv/nfs4 as the NFS root (via fsid=0).

 

remember to run exportfs -ra after editing the /etc/exports file so that the directives are re-read by the NFS server.
Then, to mount the NFS root directory from the client, do:

 

mount -v -t nfs4 geminivpn:/ /media/kevin/nfs4

 

You can then access the shares under /media/kevin/nfs4 by simply cd’ing to the desired directory share.

 

e.g.

 

cd Downloads
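To make the client-side mount of the NFSv4 root permanent, an /etc/fstab entry on the client along these lines could be used (a sketch; the _netdev option, which delays mounting until the network is up, is an assumption rather than part of the setup above):

```
geminivpn:/  /media/kevin/nfs4  nfs4  defaults,_netdev  0  0
```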