ZFS is a file system created by Sun Microsystems. It first shipped with Solaris but is now available for Linux and other UNIX-like operating systems.
ZFS uses virtual storage pools known as zpools.
A conventional RAID array is an abstraction layer that sits between the filesystem and a set of disks. It presents the entire array as a single virtual “disk” device which, from the filesystem’s perspective, is indistinguishable from a real physical disk.
ZFS goes much further than this, incorporating functionality that would normally require two or three separate software layers on a Linux system.
ZFS is effectively a logical volume manager, a RAID system, and a filesystem combined into a single system.
ZFS is designed to handle very large amounts of storage and to prevent data corruption. A ZFS pool can address up to 256 quadrillion zettabytes of storage (the Z in ZFS stands for Zettabyte File System).
ZFS can in fact handle individual files up to 16 exabytes in size.
ZFS features
High storage capacity
Data integrity
Protection against data corruption
Efficient data protection
Data compression
A traditional RAID layer is separate from the filesystem, so in traditional systems one can mix and match RAID levels and filesystems.
Traditional RAID can be implemented in hardware; RAIDZ, by contrast, requires no hardware controller.
Note that RAIDZ is integrated with ZFS and cannot be used with any other filesystem.
Installing ZFS on CentOS 7
[root@centos7vm1 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
[root@centos7vm1 ~]#
yum install http://download.zfsonlinux.org/epel/zfs-release.el7_9.noarch.rpm
[root@centos7vm1 ~]# yum install http://download.zfsonlinux.org/epel/zfs-release.el7_9.noarch.rpm
Loaded plugins: fastestmirror, langpacks
zfs-release.el7_9.noarch.rpm | 5.3 kB 00:00:00
Examining /var/tmp/yum-root-JLgnzc/zfs-release.el7_9.noarch.rpm: zfs-release-1-7.9.noarch
Marking /var/tmp/yum-root-JLgnzc/zfs-release.el7_9.noarch.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package zfs-release.noarch 0:1-7.9 will be installed
--> Finished Dependency Resolution
base/7/x86_64
updates/7/x86_64/primary_db | 15 MB 00:00:09
Dependencies Resolved
=======================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================
Installing:
zfs-release noarch 1-7.9 /zfs-release.el7_9.noarch 2.9 k
Transaction Summary
=======================================================================================================================================
Install 1 Package
Total size: 2.9 k
Installed size: 2.9 k
Is this ok [y/d/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : zfs-release-1-7.9.noarch 1/1
Verifying : zfs-release-1-7.9.noarch 1/1
Installed:
zfs-release.noarch 0:1-7.9
Complete!
[root@centos7vm1 ~]#
The ZFS kernel module can be built and loaded in two ways: DKMS and kABI. The difference is that if you install the DKMS based module and then update the kernel of your operating system, the ZFS module must be recompiled against the new kernel, otherwise it won’t work.
The kABI based module has the upper hand here: it does not require recompilation when the kernel is updated.
In this lab I will install the kABI based ZFS kernel module.
When you install the ZFS repository on CentOS 7, the DKMS based repository is enabled by default, so you have to disable the DKMS based repository and enable the kABI based one.
To do this, open the yum repository file for ZFS in a text editor with the following command:
nano /etc/yum.repos.d/zfs.repo
For the DKMS based ZFS repository, change enabled=1 to enabled=0 to disable it.
For the kABI based ZFS repository, change enabled=0 to enabled=1 to enable it.
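After the edit, the relevant sections of zfs.repo should look roughly like this (an illustrative excerpt only; the exact section names, URLs and remaining keys may differ between releases):
[zfs]
name=ZFS on Linux for EL7 - dkms
baseurl=http://download.zfsonlinux.org/epel/7.9/$basearch/
enabled=0
[zfs-kmod]
name=ZFS on Linux for EL7 - kmod
baseurl=http://download.zfsonlinux.org/epel/7.9/kmod/$basearch/
enabled=1
Alternatively, if yum-utils is installed, yum-config-manager --disable zfs followed by yum-config-manager --enable zfs-kmod achieves the same thing without editing the file by hand.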
Now you can install ZFS on your CentOS 7 system with the following command:
yum install zfs
[root@centos7vm1 ~]# yum install zfs
Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
* base: mirror.pulsant.com
* centos-ceph-nautilus: mirror.bytemark.co.uk
* centos-nfs-ganesha28: mirror.netweaver.uk
* epel: ftp.nluug.nl
* extras: mirror.netweaver.uk
* updates: mirrors.vinters.com
zfs-kmod | 2.9 kB 00:00:00
zfs-kmod/x86_64/primary_db | 175 kB 00:00:01
ceph-noarch 184/184
Resolving Dependencies
--> Running transaction check
---> Package zfs.x86_64 0:2.0.7-1.el7 will be installed
--> Processing Dependency: zfs-kmod = 2.0.7 for package: zfs-2.0.7-1.el7.x86_64
--> Processing Dependency: libzpool4 = 2.0.7 for package: zfs-2.0.7-1.el7.x86_64
--> Processing Dependency: libzfs4 = 2.0.7 for package: zfs-2.0.7-1.el7.x86_64
--> Processing Dependency: libuutil3 = 2.0.7 for package: zfs-2.0.7-1.el7.x86_64
--> Processing Dependency: libnvpair3 = 2.0.7 for package: zfs-2.0.7-1.el7.x86_64
--> Processing Dependency: libzpool.so.4()(64bit) for package: zfs-2.0.7-1.el7.x86_64
--> Processing Dependency: libzfs_core.so.3()(64bit) for package: zfs-2.0.7-1.el7.x86_64
--> Processing Dependency: libzfs.so.4()(64bit) for package: zfs-2.0.7-1.el7.x86_64
--> Processing Dependency: libuutil.so.3()(64bit) for package: zfs-2.0.7-1.el7.x86_64
--> Processing Dependency: libnvpair.so.3()(64bit) for package: zfs-2.0.7-1.el7.x86_64
--> Running transaction check
---> Package kmod-zfs.x86_64 0:2.0.7-1.el7 will be installed
Downloading packages:
(1/6): libnvpair3-2.0.7-1.el7.x86_64.rpm | 32 kB 00:00:00
(2/6): libuutil3-2.0.7-1.el7.x86_64.rpm | 26 kB 00:00:00
(3/6): libzfs4-2.0.7-1.el7.x86_64.rpm | 219 kB 00:00:01
(4/6): kmod-zfs-2.0.7-1.el7.x86_64.rpm | 1.4 MB 00:00:02
(5/6): zfs-2.0.7-1.el7.x86_64.rpm | 595 kB 00:00:00
(6/6): libzpool4-2.0.7-1.el7.x86_64.rpm | 1.2 MB 00:00:02
-------------------------------------------------------------------------------------------------------------------------------
Total 861 kB/s | 3.5 MB 00:00:04
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : libnvpair3-2.0.7-1.el7.x86_64 1/6
Installing : libuutil3-2.0.7-1.el7.x86_64 2/6
Installing : libzfs4-2.0.7-1.el7.x86_64 3/6
Installing : libzpool4-2.0.7-1.el7.x86_64 4/6
Installing : kmod-zfs-2.0.7-1.el7.x86_64 5/6
Installing : zfs-2.0.7-1.el7.x86_64 6/6
Verifying : kmod-zfs-2.0.7-1.el7.x86_64 1/6
Verifying : zfs-2.0.7-1.el7.x86_64 2/6
Verifying : libuutil3-2.0.7-1.el7.x86_64 3/6
Verifying : libzpool4-2.0.7-1.el7.x86_64 4/6
Verifying : libzfs4-2.0.7-1.el7.x86_64 5/6
Verifying : libnvpair3-2.0.7-1.el7.x86_64 6/6
Installed:
zfs.x86_64 0:2.0.7-1.el7
Dependency Installed:
kmod-zfs.x86_64 0:2.0.7-1.el7 libnvpair3.x86_64 0:2.0.7-1.el7 libuutil3.x86_64 0:2.0.7-1.el7 libzfs4.x86_64 0:2.0.7-1.el7
libzpool4.x86_64 0:2.0.7-1.el7
Complete!
[root@centos7vm1 ~]#
Next, reboot and then run the following command to check whether the ZFS kernel module is loaded:
lsmod | grep zfs
If you don’t see any output, then the ZFS kernel module is not loaded. In that case, run the following command to load it manually:
modprobe zfs
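If the module still does not load at boot, one standard systemd mechanism (a suggestion on my part; the ZFS packages may already arrange this for you) is a modules-load.d entry:
echo zfs > /etc/modules-load.d/zfs.conf
systemd will then load the zfs module automatically on every boot.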
Next, add some disks.
I added a 5GB virtual SCSI disk (/dev/sda) to the VM; below I will use fdisk to partition it into 400MB partitions.
You can check what disks you have with the following command:
$ sudo lsblk
or with fdisk -l
Disk /dev/sda: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@centos7vm1 ~]# lsmod | grep zfs
[root@centos7vm1 ~]# modprobe zfs
[root@centos7vm1 ~]# lsmod | grep zfs
zfs 4224878 0
zunicode 331170 1 zfs
zzstd 460780 1 zfs
zlua 151526 1 zfs
zcommon 94285 1 zfs
znvpair 94388 2 zfs,zcommon
zavl 15698 1 zfs
icp 301775 1 zfs
spl 96750 6 icp,zfs,zavl,zzstd,zcommon,znvpair
[root@centos7vm1 ~]#
[root@centos7vm1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 5G 0 disk
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 9G 0 part
├─centos-root 253:0 0 8G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
[root@centos7vm1 ~]#
Next I partitioned the 5GB /dev/sda SCSI disk that I just added into eleven 400MB partitions (sda4 is the small extended partition containing the logical partitions):
[root@centos7vm1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 5G 0 disk
├─sda1 8:1 0 400M 0 part
├─sda2 8:2 0 400M 0 part
├─sda3 8:3 0 400M 0 part
├─sda4 8:4 0 1K 0 part
├─sda5 8:5 0 400M 0 part
├─sda6 8:6 0 400M 0 part
├─sda7 8:7 0 400M 0 part
├─sda8 8:8 0 400M 0 part
├─sda9 8:9 0 400M 0 part
├─sda10 8:10 0 400M 0 part
├─sda11 8:11 0 400M 0 part
└─sda12 8:12 0 400M 0 part
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 9G 0 part
├─centos-root 253:0 0 8G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
[root@centos7vm1 ~]#
fdisk -l /dev/sda
[root@centos7vm1 ~]# fdisk -l /dev/sda
Disk /dev/sda: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x7157d815
Device Boot Start End Blocks Id System
/dev/sda1 2048 821247 409600 83 Linux
/dev/sda2 821248 1640447 409600 83 Linux
/dev/sda3 1640448 2459647 409600 83 Linux
/dev/sda4 2459648 10485759 4013056 5 Extended
/dev/sda5 2461696 3280895 409600 83 Linux
/dev/sda6 3282944 4102143 409600 83 Linux
/dev/sda7 4104192 4923391 409600 83 Linux
/dev/sda8 4925440 5744639 409600 83 Linux
/dev/sda9 5746688 6565887 409600 83 Linux
/dev/sda10 6567936 7387135 409600 83 Linux
/dev/sda11 7389184 8208383 409600 83 Linux
/dev/sda12 8210432 9029631 409600 83 Linux
[root@centos7vm1 ~]#
Create a ZFS Pool
A ZFS pool combines drives into a single unit of storage. Pools should always be created on disks (or partitions) which are not currently in use.
When the storage needs to be expanded, simply add drives to the pool to increase the overall storage capacity.
You can create a ZFS pool using different devices:
using whole disks
using disk slices
using files
You can name your ZFS pool anything you wish.
A new directory with the same name as the pool is created in the / directory, and the pool is mounted there.
You also specify your storage devices or disk drives when you create the pool.
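If you want the pool mounted somewhere other than /<poolname>, zpool create accepts a -m option; the pool name and mountpoint below are purely illustrative:
zpool create -m /mnt/mypool MYPOOL /dev/sda1 /dev/sda2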
Let’s create an initial pool with the name FILES:
zpool create FILES /dev/sda1 /dev/sda2
[root@centos7vm1 ~]# zpool create FILES /dev/sda1 /dev/sda2
[root@centos7vm1 ~]# cd /
[root@centos7vm1 /]# ls -l
drwxr-xr-x 2 root root 2 Apr 12 13:35 FILES
You can run the following command to list all the ZFS pools on your system:
zpool list
[root@centos7vm1 /]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
FILES 768M 105K 768M - - 0% 0% 1.00x ONLINE -
[root@centos7vm1 /]#
I then copied some files across to /FILES from another machine using rsync:
[root@centos7vm1 IT_BOOKS]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
FILES 768M 581M 187M - - 52% 75% 1.00x ONLINE -
[root@centos7vm1 IT_BOOKS]#
By default, a ZFS pool is writeable only by the root user. If an ordinary user needs to write to the ZFS pool, you have to change the permissions of the pool.
You can run the following command to change the permissions of your ZFS pool (here for a user kevin):
chown -Rfv kevin:kevin /FILES
3 Types of ZFS Storage Pools
There are three types of pools that can be created in ZFS:
Striped Pool
Mirrored Pool
Raid Pool
Each offers its own set of advantages and disadvantages.
It is important to decide which type of pool to use up front, because once a pool has been created its type cannot be changed.
To change pool type, a new pool has to be created, all data migrated from the old pool to the new pool, and the old pool then destroyed.
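As a sketch of such a migration (the pool names here are hypothetical), a recursive snapshot combined with zfs send/receive is one common approach:
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs receive -F newpool
zpool destroy oldpool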
Creating a Striped Pool
This is the basic ZFS storage pool, where incoming data is dynamically striped across all disks in the pool. Although this offers maximum write performance, it comes at a price: any single failed drive will make the pool completely unusable and data loss will occur.
Besides the performance, the biggest advantage of striped pools is that total storage capacity equals the total size of all the disks. We can use the following command to create a ZFS striped pool:
$ zpool create <pool name> /dev/sdX /dev/sdY
To increase the size of the striped pool, we can simply add a drive using the following command:
$ zpool add <pool name> /dev/sdX
It is important to note here that when a new disk is added to a striped pool, ZFS will not redistribute existing data onto the new disk, but will favour the newly added disk for new incoming data. The only way to redistribute existing data is to delete it and then recopy it, in which case the data will be striped across all disks.
Creating a Mirrored Pool
As the name suggests, this pool consists of mirrored disks.
There are no restrictions on how the mirror can be formed. The main caveat when using a mirrored pool is that we lose 50% of total disk capacity to the mirroring (in a two-way mirror).
To create a mirror pool of just two disks:
zpool create <pool name> mirror /dev/sda /dev/sdb
To expand a mirror pool, we simply add another group of mirrored disks:
zpool add <pool name> mirror /dev/sdd /dev/sde
When another mirror group is added, incoming data is striped onto the new mirrored group of disks. Although it is rare, it is also possible to create a mirror of more than two disks, as in the 3-way mirror example later in this lab.
Dynamic stripe – this is the most basic pool, which can be created from a single disk or a concatenation of disks. We have already seen pool creation using single devices; let’s see how to create a concatenated (striped) pool:
zpool create FILES /dev/sda1 /dev/sda2
zpool list
[root@centos7vm1 ~]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
FILES 768M 111K 768M - - 0% 0% 1.00x ONLINE -
[root@centos7vm1 ~]#
After copying some files to the pool /FILES, it now looks like this:
[root@centos7vm1 ~]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
FILES 768M 581M 187M - - 65% 75% 1.00x ONLINE -
[root@centos7vm1 ~]#
This striped configuration does not provide any redundancy, so any disk failure will result in data loss.
Also note that once a disk has been added to a ZFS pool in this fashion, it cannot be removed from the pool again.
The only way to free the disk is to destroy the entire pool. This is due to the dynamic striping nature of the pool, which uses all the disks to store the data.
The next step up from a striped pool is a mirrored pool.
Mirrored pool – 2-way mirror
A mirrored pool provides redundancy by storing multiple copies of the data on different disks.
Here you can also detach a disk from the pool, since the data remains available on the other disks.
zpool create MIRROREDPOOL mirror /dev/sda5 /dev/sda6
ls -l
drwxr-xr-x 2 root root 2 Apr 12 14:08 MIRROREDPOOL
[root@centos7vm1 /]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
FILES 768M 581M 187M - - 65% 75% 1.00x ONLINE -
MIRROREDPOOL 384M 100K 384M - - 0% 0% 1.00x ONLINE -
[root@centos7vm1 /]#
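Because the data is mirrored, a disk can later be detached from this pool without losing data, for example (illustrative command; note that doing this leaves the pool with no redundancy):
zpool detach MIRROREDPOOL sda6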
3-way mirrored pool
This has 3 disks:
zpool create MIRROR3DISKPOOL mirror /dev/sda7 /dev/sda8 /dev/sda9
[root@centos7vm1 /]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
FILES 768M 581M 187M - - 64% 75% 1.00x ONLINE -
MIRROR3DISKPOOL 384M 102K 384M - - 0% 0% 1.00x ONLINE -
MIRROREDPOOL 384M 130M 254M - - 3% 33% 1.00x ONLINE -
[root@centos7vm1 /]#
Creating a ZFS File System
ZFS filesystems are created inside ZFS storage pools using the zfs create command. The create subcommand takes a single argument: the name of the filesystem to be created.
The filesystem name is specified as a path name beginning from the name of the pool:
pool-name/[filesystem-name/]filesystem-name
The pool name and initial filesystem names in the path define the location where the new filesystem will be created. The last name in the path specifies the name of the filesystem to be created.
example:
zfs create RAIDZ1/filesystem1
df
RAIDZ1 256M 130M 127M 51% /RAIDZ1
MIRROR3DISKPOOL 256M 130M 127M 51% /MIRROR3DISKPOOL
RAIDZ2 255M 130M 126M 51% /RAIDZ2
RAIDZ3 256M 130M 127M 51% /RAIDZ3
tmpfs 150M 0 150M 0% /run/user/0
RAIDZ1/filesystem1 127M 128K 127M 1% /RAIDZ1/filesystem1
[root@centos7vm1 RAIDZ1]#
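Note that the parent filesystems in the path must already exist. As with mkdir -p, the -p option will create any missing intermediate filesystems, for example (hypothetical names):
zfs create -p RAIDZ1/projects/web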
Destroying Filesystems and Pools
To destroy a ZFS filesystem, use the zfs destroy command. The specified filesystem is then automatically unmounted, unshared and deleted.
NOTE: if the filesystem to be destroyed is busy and can’t be unmounted, the zfs destroy command will fail; to destroy an active filesystem, use the -f option. A whole pool, including its filesystems, is destroyed with zpool destroy:
[root@centos7vm1 MIRROR3DISKPOOL]# zpool destroy FILES
[root@centos7vm1 MIRROR3DISKPOOL]# zpool destroy MIRROREDPOOL
[root@centos7vm1 MIRROR3DISKPOOL]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MIRROR3DISKPOOL 384M 130M 254M - - 2% 33% 1.00x ONLINE -
[root@centos7vm1 MIRROR3DISKPOOL]#
[root@centos7vm1 RAIDZ1]# zfs destroy RAIDZ1/filesystem1
[root@centos7vm1 RAIDZ1]#
df
RAIDZ1 262016 132736 129280 51% /RAIDZ1
MIRROR3DISKPOOL 262016 132736 129280 51% /MIRROR3DISKPOOL
RAIDZ2 260480 132224 128256 51% /RAIDZ2
RAIDZ3 262016 132736 129280 51% /RAIDZ3
tmpfs 153076 0 153076 0% /run/user/0
[root@centos7vm1 RAIDZ1]#
Creating RAID-Z pools
Now we can also have a pool similar to a RAID-5 configuration, called RAID-Z. RAID-Z comes in 3 types: raidz1 (single parity), raidz2 (double parity) and raidz3 (triple parity). Let’s see how we can configure each type.
Minimum disks required for each type of RAID-Z:
raidz1 – 2 disks
raidz2 – 3 disks
raidz3 – 4 disks
Creating a Raidz1 Pool
to create a raidz1:
zpool create <pool name> raidz disk1 disk2
eg
(sda7, sda8 and sda9 are in use for MIRROR3DISKPOOL)
zpool create RAIDZ1 raidz /dev/sda1 /dev/sda2
Creating a raidz2 Pool
eg
zpool create RAIDZ2 raidz2 /dev/sda3 /dev/sda5 /dev/sda6
[root@centos7vm1 /]# zpool destroy RAIDZ1
[root@centos7vm1 /]# zpool destroy RAIDZ2
[root@centos7vm1 /]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MIRROR3DISKPOOL 384M 130M 254M - - 2% 33% 1.00x ONLINE -
[root@centos7vm1 /]# zpool create RAIDZ1 raidz /dev/sda1 /dev/sda2
[root@centos7vm1 /]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MIRROR3DISKPOOL 384M 130M 254M - - 2% 33% 1.00x ONLINE -
RAIDZ1 768M 240K 768M - - 0% 0% 1.00x ONLINE -
[root@centos7vm1 /]# zpool create RAIDZ2 raidz2 /dev/sda3 /dev/sda5 /dev/sda6
[root@centos7vm1 /]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MIRROR3DISKPOOL 384M 130M 254M - - 2% 33% 1.00x ONLINE -
RAIDZ1 768M 201K 768M - - 0% 0% 1.00x ONLINE -
RAIDZ2 1.12G 360K 1.12G - - 0% 0% 1.00x ONLINE -
[root@centos7vm1 /]#
Creating a raidz3 Pool
raidz3 needs at least four disks, so a further 400MB partition (sda13) was added in the same way as the others:
zpool create RAIDZ3 raidz3 /dev/sda10 /dev/sda11 /dev/sda12 /dev/sda13
[root@centos7vm1 ~]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MIRROR3DISKPOOL 384M 130M 254M - - 2% 33% 1.00x ONLINE -
RAIDZ1 768M 228K 768M - - 0% 0% 1.00x ONLINE -
RAIDZ2 1.12G 405K 1.12G - - 0% 0% 1.00x ONLINE -
[root@centos7vm1 ~]# zpool create RAIDZ3 raidz3 /dev/sda10 /dev/sda11 /dev/sda12 /dev/sda13
[root@centos7vm1 ~]#
[root@centos7vm1 /]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 736M 0 736M 0% /dev
tmpfs 748M 0 748M 0% /dev/shm
tmpfs 748M 8.7M 739M 2% /run
tmpfs 748M 0 748M 0% /sys/fs/cgroup
/dev/mapper/centos-root 8.0G 2.8G 5.3G 35% /
/dev/vda1 1014M 202M 813M 20% /boot
MIRROR3DISKPOOL 256M 130M 127M 51% /MIRROR3DISKPOOL
RAIDZ1 256M 130M 127M 51% /RAIDZ1
RAIDZ2 255M 130M 126M 51% /RAIDZ2
RAIDZ3 256M 130M 127M 51% /RAIDZ3
tmpfs 150M 0 150M 0% /run/user/0
RAIDZ1/fs1 127M 128K 127M 1% /RAIDZ1/fs1
[root@centos7vm1 /]# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
MIRROR3DISKPOOL 384M 130M 254M - - 2% 33% 1.00x ONLINE -
RAIDZ1 768M 259M 509M - - 0% 33% 1.00x ONLINE -
RAIDZ2 1.12G 389M 763M - - 0% 33% 1.00x ONLINE -
RAIDZ3 1.50G 519M 1017M - - 0% 33% 1.00x ONLINE -
[root@centos7vm1 /]#
Basic ZFS Commands
zfs list
[root@centos7vm1 RAIDZ1]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
MIRROR3DISKPOOL 130M 126M 130M /MIRROR3DISKPOOL
RAIDZ1 130M 126M 130M /RAIDZ1
RAIDZ2 129M 125M 129M /RAIDZ2
RAIDZ3 130M 126M 130M /RAIDZ3
[root@centos7vm1 RAIDZ1]#
[root@centos7vm1 RAIDZ1]# zfs list -o name,sharenfs,mountpoint
NAME SHARENFS MOUNTPOINT
MIRROR3DISKPOOL off /MIRROR3DISKPOOL
RAIDZ1 off /RAIDZ1
RAIDZ2 off /RAIDZ2
RAIDZ3 off /RAIDZ3
[root@centos7vm1 RAIDZ1]#
[root@centos7vm1 RAIDZ1]# zfs get mountpoint RAIDZ1
NAME PROPERTY VALUE SOURCE
RAIDZ1 mountpoint /RAIDZ1 default
[root@centos7vm1 RAIDZ1]#
zpool status
[root@centos7vm1 RAIDZ1]# zpool status
pool: MIRROR3DISKPOOL
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
MIRROR3DISKPOOL ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sda7 ONLINE 0 0 0
sda8 ONLINE 0 0 0
sda9 ONLINE 0 0 0
errors: No known data errors
pool: RAIDZ1
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
RAIDZ1 ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
sda1 ONLINE 0 0 0
sda2 ONLINE 0 0 0
errors: No known data errors
pool: RAIDZ2
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
RAIDZ2 ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
sda3 ONLINE 0 0 0
sda5 ONLINE 0 0 0
sda6 ONLINE 0 0 0
errors: No known data errors
pool: RAIDZ3
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
RAIDZ3 ONLINE 0 0 0
raidz3-0 ONLINE 0 0 0
sda10 ONLINE 0 0 0
sda11 ONLINE 0 0 0
sda12 ONLINE 0 0 0
sda13 ONLINE 0 0 0
errors: No known data errors
[root@centos7vm1 RAIDZ1]#
Mounting ZFS Filesystems
ZFS automatically mounts filesystems when they are created and when the system boots.
The zfs mount command is only needed when you want to change mount options, or to explicitly mount or unmount filesystems.
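For example (illustrative commands, not part of the transcript below), a filesystem can be explicitly unmounted, or given a different mountpoint:
zfs unmount RAIDZ1/filesystem1
zfs set mountpoint=/mnt/data RAIDZ1/filesystem1
In the transcript below, RAIDZ1/filesystem1 starts out unmounted and is remounted with zfs mount: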
[root@centos7vm1 RAIDZ1]# df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 753628 0 753628 0% /dev
tmpfs 765380 0 765380 0% /dev/shm
tmpfs 765380 8912 756468 2% /run
tmpfs 765380 0 765380 0% /sys/fs/cgroup
/dev/mapper/centos-root 8374272 3127172 5247100 38% /
/dev/vda1 1038336 240956 797380 24% /boot
RAIDZ1 261888 132736 129152 51% /RAIDZ1
MIRROR3DISKPOOL 262016 132736 129280 51% /MIRROR3DISKPOOL
RAIDZ2 260480 132224 128256 51% /RAIDZ2
RAIDZ3 262016 132736 129280 51% /RAIDZ3
tmpfs 153076 0 153076 0% /run/user/0
[root@centos7vm1 RAIDZ1]# zfs mount RAIDZ1/filesystem1
[root@centos7vm1 RAIDZ1]# df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 753628 0 753628 0% /dev
tmpfs 765380 0 765380 0% /dev/shm
tmpfs 765380 8912 756468 2% /run
tmpfs 765380 0 765380 0% /sys/fs/cgroup
/dev/mapper/centos-root 8374272 3127172 5247100 38% /
/dev/vda1 1038336 240956 797380 24% /boot
RAIDZ1 261888 132736 129152 51% /RAIDZ1
MIRROR3DISKPOOL 262016 132736 129280 51% /MIRROR3DISKPOOL
RAIDZ2 260480 132224 128256 51% /RAIDZ2
RAIDZ3 262016 132736 129280 51% /RAIDZ3
tmpfs 153076 0 153076 0% /run/user/0
RAIDZ1/filesystem1 129280 128 129152 1% /RAIDZ1/filesystem1
[root@centos7vm1 RAIDZ1]#
Note: zfs list will still show a filesystem even if it is unmounted; it merely shows that the filesystem exists, not whether it is mounted. Use df to check whether it is actually mounted.
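Alternatively, the mounted property reports this directly as a yes/no value:
zfs get mounted RAIDZ1/filesystem1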
Sharing ZFS Filesystems
By default, all ZFS filesystems are unshared. To share a new filesystem via NFS, you must first set the sharenfs property, as follows:
zfs set sharenfs=on RAIDZ1/filesystem1
or, for read-write access to an entire pool:
zfs set sharenfs='rw' RAIDZ1
You can also share all ZFS file systems on the system by using the -a option.
zfs share -a
You can also restrict sharing to a specific host or network:
zfs set sharenfs='rw=@192.168.122.0/24' RAIDZ1
You can use the colon (:) separator to allow access from multiple network subnets or IP addresses, for example:
zfs set sharenfs='rw=@192.168.122.0/24:@192.168.132.0/24' RAIDZ1
You can verify whether the sharenfs property is correctly set on the ZFS pool RAIDZ1:
zfs get sharenfs RAIDZ1
[root@centos7vm1 RAIDZ1]# zfs set sharenfs='rw' RAIDZ1
[root@centos7vm1 RAIDZ1]# zfs get sharenfs RAIDZ1
NAME PROPERTY VALUE SOURCE
RAIDZ1 sharenfs rw local
[root@centos7vm1 RAIDZ1]#
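An NFS client can then mount the shared filesystem in the usual way, for example (client-side command; the hostname and mountpoint are hypothetical, and the NFS server services must be running on the ZFS host):
mount -t nfs centos7vm1:/RAIDZ1 /mnt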
Unsharing ZFS File Systems
ZFS file systems are automatically shared or unshared during boot, creation, and destruction, but file systems sometimes need to be explicitly unshared.
For this use the zfs unshare command. For example:
zfs unshare RAIDZ1/filesystem1
Failing a ZFS Drive
For this test, we are going to fail a drive in the two-disk pool RAIDZ1. First, check its current status:
# zpool status RAIDZ1
pool: RAIDZ1
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
RAIDZ1 ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
sda1 ONLINE 0 0 0
sda2 ONLINE 0 0 0
errors: No known data errors
[root@centos7vm1 RAIDZ1]#
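To simulate a failure, a disk can be taken offline and later brought back online (or replaced with zpool replace if it had really died). A sketch, assuming the layout above:
zpool offline RAIDZ1 sda1
zpool status RAIDZ1
zpool online RAIDZ1 sda1
While sda1 is offline, zpool status reports the pool as DEGRADED but it remains usable; once the disk is back online, ZFS resilvers any missing data automatically.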
Listing ZFS Properties
[root@centos7vm1 ~]# zpool get all RAIDZ1
NAME PROPERTY VALUE SOURCE
RAIDZ1 size 768M -
RAIDZ1 capacity 33% -
RAIDZ1 altroot - default
RAIDZ1 health ONLINE -
RAIDZ1 guid 17775474607569600445 -
RAIDZ1 version - default
RAIDZ1 bootfs - default
RAIDZ1 delegation on default
RAIDZ1 autoreplace off default
RAIDZ1 cachefile - default
RAIDZ1 failmode wait default
RAIDZ1 listsnapshots off default
RAIDZ1 autoexpand off default
RAIDZ1 dedupratio 1.00x -
RAIDZ1 free 508M -
RAIDZ1 allocated 260M -
RAIDZ1 readonly off -
RAIDZ1 ashift 0 default
RAIDZ1 comment - default
RAIDZ1 expandsize - -
RAIDZ1 freeing 0 -
RAIDZ1 fragmentation 0% -
RAIDZ1 leaked 0 -
RAIDZ1 multihost off default
RAIDZ1 checkpoint - -
RAIDZ1 load_guid 4470113783087180867 -
RAIDZ1 autotrim off default
…this is just a short extract of the output from the command.
[root@centos7vm1 ~]#