
GlusterFS Lab on CentOS 7

Replicated GlusterFS Cluster with 3 Nodes

 

First, we have a 3-node Gluster cluster consisting of:

 

glusterfs1
glusterfs2
glusterfs3

 

 

# GlusterFS VMs
192.168.122.70 glusterfs1
192.168.122.71 glusterfs2
192.168.122.72 glusterfs3

 

A brick is the basic storage unit (a directory) on a server in the trusted storage pool.

 

A volume is a logical collection of bricks.

 

Most Gluster operations, such as reading and writing, are performed on the volume rather than on individual bricks.

 

 

GlusterFS supports different volume types, for scaling storage capacity, improving performance, or both.

 

 

In this lab we will configure a replicated GlusterFS volume on CentOS 7.

 

A replicated GlusterFS volume is similar to RAID 1: the volume maintains exact copies of the data on all bricks.

 

You can set the number of replicas when creating the volume.

 

 

You need at least two bricks to create a volume with two replicas, or three bricks to create a volume with three replicas.

 

 

I created a local 200 MB disk, /dev/vdb, on each of the three machines, with a single partition (vdb1) spanning 100% of the disk.

 

Then I created /STORAGE/BRICK1 on each machine as the local mount point,

 

and ran

 

mkfs.ext4 /dev/vdb1 on each node.

 

Then I added the brick mount to /etc/fstab:

 

[root@glusterfs1 STORAGE]# echo '/dev/vdb1 /STORAGE/BRICK1 ext4 defaults 1 2' >> /etc/fstab
[root@glusterfs1 STORAGE]#
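Putting the disk preparation together, a per-node sketch of those steps might look like this (assuming parted for the partitioning; device and directory names as used in this lab):

# run on each node (glusterfs1, glusterfs2, glusterfs3)
parted -s /dev/vdb mklabel msdos              # new partition table
parted -s /dev/vdb mkpart primary 0% 100%     # single partition vdb1 spanning the whole disk
mkfs.ext4 /dev/vdb1                           # format the brick filesystem
mkdir -p /STORAGE/BRICK1                      # local mount point for the brick
echo '/dev/vdb1 /STORAGE/BRICK1 ext4 defaults 1 2' >> /etc/fstab
mount -a                                      # mount it now as well as at boot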

 

 

Next, firewalling…

 

The Gluster processes on the nodes need to be able to communicate with each other. To simplify this setup, configure the firewall on each node to accept all traffic from the other nodes.

 

# iptables -I INPUT -p all -s <ip-address> -j ACCEPT

 

where <ip-address> is the address of each of the other nodes (run the rule once per peer).
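On CentOS 7 the default firewall is firewalld rather than raw iptables, so an equivalent approach (a sketch, using this lab's peer addresses) would be:

# on glusterfs1; repeat on each node with the addresses of its peers
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.122.71" accept'
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.122.72" accept'
firewall-cmd --reload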

 

 

Then configure the trusted pool

 

From “server1”

 

# gluster peer probe server2
# gluster peer probe server3

 

Note: When using hostnames, the first server needs to be probed from one other server to set its hostname.

 

From “server2”

 

# gluster peer probe server1

Note: Once this pool has been established, only trusted members may probe new servers into the pool. A new server cannot probe the pool, it must be probed from the pool.

 

 

so in our case we do:

 

 

[root@glusterfs1 etc]# gluster peer probe glusterfs2
peer probe: success
[root@glusterfs1 etc]# gluster peer probe glusterfs3
peer probe: success
[root@glusterfs1 etc]#

 

[root@glusterfs2 STORAGE]# gluster peer probe glusterfs1
peer probe: Host glusterfs1 port 24007 already in peer list
[root@glusterfs2 STORAGE]# gluster peer probe glusterfs2
peer probe: Probe on localhost not needed
[root@glusterfs2 STORAGE]#

 

[root@glusterfs3 STORAGE]# gluster peer probe glusterfs1
peer probe: Host glusterfs1 port 24007 already in peer list
[root@glusterfs3 STORAGE]# gluster peer probe glusterfs2
peer probe: Host glusterfs2 port 24007 already in peer list
[root@glusterfs3 STORAGE]#

 

 


 

Check the peer status on server1

 

# gluster peer status

 

[root@glusterfs1 etc]# gluster peer status
Number of Peers: 2

 

Hostname: glusterfs2
Uuid: 5fd324e4-9415-441c-afea-4df61141c896
State: Peer in Cluster (Connected)

 

Hostname: glusterfs3
Uuid: 28a7bf8e-e2b9-4509-a45f-a95198139a24
State: Peer in Cluster (Connected)
[root@glusterfs1 etc]#

 

 

next, we set up a GlusterFS volume

 

 

On all servers do:

 

# mkdir -p /data/brick1/gv0

From any single server:

 

# gluster volume create gv0 replica 3 server1:/data/brick1/gv0 server2:/data/brick1/gv0 server3:/data/brick1/gv0
volume create: gv0: success: please start the volume to access data
# gluster volume start gv0
volume start: gv0: success

 

Confirm that the volume shows “Started”:

 

# gluster volume info

 

So in our case, on each machine:

 

 

mkdir -p /STORAGE/BRICK1/GV0

 

 

then on ONE gluster node ONLY:

 

 

gluster volume create GV0 replica 3 glusterfs1:/STORAGE/BRICK1/GV0 glusterfs2:/STORAGE/BRICK1/GV0 glusterfs3:/STORAGE/BRICK1/GV0

 

 

[root@glusterfs1 etc]# gluster volume create GV0 replica 3 glusterfs1:/STORAGE/BRICK1/GV0 glusterfs2:/STORAGE/BRICK1/GV0 glusterfs3:/STORAGE/BRICK1/GV0
volume create: GV0: success: please start the volume to access data
[root@glusterfs1 etc]# gluster volume info

 

Volume Name: GV0
Type: Replicate
Volume ID: c0dc91d5-05da-4451-ba5e-91df44f21057
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: glusterfs1:/STORAGE/BRICK1/GV0
Brick2: glusterfs2:/STORAGE/BRICK1/GV0
Brick3: glusterfs3:/STORAGE/BRICK1/GV0
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@glusterfs1 etc]#

 

Note: If the volume does not show “Started”, check /var/log/glusterfs/glusterd.log in order to debug and diagnose the situation. These logs can be checked on one or all of the configured servers.
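For example, a quick way to look for recent problems in that log (a simple sketch) is:

tail -n 50 /var/log/glusterfs/glusterd.log
grep -iE 'error|failed' /var/log/glusterfs/glusterd.log | tail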

 

 

# gluster volume start gv0
volume start: gv0: success

 

 

gluster volume start GV0

 

 


 

 

 

[root@glusterfs1 glusterfs]# gluster volume start GV0
volume start: GV0: success
[root@glusterfs1 glusterfs]# gluster volume status
Status of volume: GV0
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs1:/STORAGE/BRICK1/GV0 49152 0 Y 1933
Brick glusterfs2:/STORAGE/BRICK1/GV0 49152 0 Y 1820
Brick glusterfs3:/STORAGE/BRICK1/GV0 49152 0 Y 1523
Self-heal Daemon on localhost N/A N/A Y 1950
Self-heal Daemon on glusterfs2 N/A N/A Y 1837
Self-heal Daemon on glusterfs3 N/A N/A Y 1540

Task Status of Volume GV0
——————————————————————————
There are no active volume tasks

 

[root@glusterfs1 glusterfs]#

 

 

[root@glusterfs2 /]# gluster volume status
Status of volume: GV0
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs1:/STORAGE/BRICK1/GV0 49152 0 Y 1933
Brick glusterfs2:/STORAGE/BRICK1/GV0 49152 0 Y 1820
Brick glusterfs3:/STORAGE/BRICK1/GV0 49152 0 Y 1523
Self-heal Daemon on localhost N/A N/A Y 1837
Self-heal Daemon on glusterfs1 N/A N/A Y 1950
Self-heal Daemon on glusterfs3 N/A N/A Y 1540

Task Status of Volume GV0
——————————————————————————
There are no active volume tasks

[root@glusterfs2 /]#

 

[root@glusterfs3 STORAGE]# gluster volume status
Status of volume: GV0
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs1:/STORAGE/BRICK1/GV0 49152 0 Y 1933
Brick glusterfs2:/STORAGE/BRICK1/GV0 49152 0 Y 1820
Brick glusterfs3:/STORAGE/BRICK1/GV0 49152 0 Y 1523
Self-heal Daemon on localhost N/A N/A Y 1540
Self-heal Daemon on glusterfs2 N/A N/A Y 1837
Self-heal Daemon on glusterfs1 N/A N/A Y 1950

Task Status of Volume GV0
——————————————————————————
There are no active volume tasks

 

[root@glusterfs3 STORAGE]#
[root@glusterfs3 STORAGE]#
[root@glusterfs3 STORAGE]#
[root@glusterfs3 STORAGE]#

 

 

You only need to run the gluster volume start command from ONE node!

 

 

The volume's brick processes then start automatically on each node.

 

 

Testing the GlusterFS volume

 

We will use one of the servers to mount the volume. Typically you would do this from an external machine, ie a “client”. Since using this method requires additional packages to be installed on the client machine, we will instead use one of the servers to test, as if it were an actual separate client machine.

 

 

[root@glusterfs1 glusterfs]# mount -t glusterfs glusterfs2:/GV0 /mnt
[root@glusterfs1 glusterfs]#

 

 

# mount -t glusterfs server1:/gv0 /mnt

# for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done

First, check the client mount point:

 

# ls -lA /mnt/copy* | wc -l

 

You should see 100 files returned. Next, check the GlusterFS brick mount points on each server:

 

# ls -lA /data/brick1/gv0/copy*

 

You should see 100 files on each server using the method above.  Without replication, with a distribute-only volume (not detailed here), you would instead see about 33 files on each machine.

 

 

kevin@asus:~$ sudo su
root@asus:/home/kevin# ssh glusterfs1
^C

 

glusterfs1 is not yet booted, so let's have a look at the GlusterFS system before we boot the third machine:

 

root@asus:/home/kevin# ssh glusterfs2
Last login: Wed May 4 18:04:05 2022 from asus
[root@glusterfs2 ~]#
[root@glusterfs2 ~]#
[root@glusterfs2 ~]#
[root@glusterfs2 ~]# gluster volume status
Status of volume: GV0
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs2:/STORAGE/BRICK1/GV0 49152 0 Y 1114
Brick glusterfs3:/STORAGE/BRICK1/GV0 49152 0 Y 1227
Self-heal Daemon on localhost N/A N/A Y 1129
Self-heal Daemon on glusterfs3 N/A N/A Y 1238

Task Status of Volume GV0
——————————————————————————
There are no active volume tasks

 

The third machine, glusterfs1, is now booted and live (though its brick is not yet back online):

 

[root@glusterfs2 ~]# gluster volume status
Status of volume: GV0
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs1:/STORAGE/BRICK1/GV0 N/A N/A N N/A
Brick glusterfs2:/STORAGE/BRICK1/GV0 49152 0 Y 1114
Brick glusterfs3:/STORAGE/BRICK1/GV0 49152 0 Y 1227
Self-heal Daemon on localhost N/A N/A Y 1129
Self-heal Daemon on glusterfs1 N/A N/A Y 1122
Self-heal Daemon on glusterfs3 N/A N/A Y 1238

Task Status of Volume GV0
——————————————————————————
There are no active volume tasks

 

[root@glusterfs2 ~]#

 

 

a little while later….

[root@glusterfs2 ~]# gluster volume status
Status of volume: GV0
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs1:/STORAGE/BRICK1/GV0 49152 0 Y 1106
Brick glusterfs2:/STORAGE/BRICK1/GV0 49152 0 Y 1114
Brick glusterfs3:/STORAGE/BRICK1/GV0 49152 0 Y 1227
Self-heal Daemon on localhost N/A N/A Y 1129
Self-heal Daemon on glusterfs3 N/A N/A Y 1238
Self-heal Daemon on glusterfs1 N/A N/A Y 1122

Task Status of Volume GV0
——————————————————————————
There are no active volume tasks

[root@glusterfs2 ~]#

 

 

testing…

 

[root@glusterfs2 ~]# mount -t glusterfs glusterfs2:/GV0 /mnt
[root@glusterfs2 ~]#
[root@glusterfs2 ~]#
[root@glusterfs2 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 753612 0 753612 0% /dev
tmpfs 765380 0 765380 0% /dev/shm
tmpfs 765380 8860 756520 2% /run
tmpfs 765380 0 765380 0% /sys/fs/cgroup
/dev/mapper/centos-root 8374272 2421908 5952364 29% /
/dev/vda1 1038336 269012 769324 26% /boot
/dev/vdb1 197996 2084 181382 2% /STORAGE/BRICK1
tmpfs 153076 0 153076 0% /run/user/0
glusterfs2:/GV0 197996 4064 181382 3% /mnt
[root@glusterfs2 ~]# cd /mnt
[root@glusterfs2 mnt]# ls
[root@glusterfs2 mnt]#
[root@glusterfs2 mnt]# for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
[root@glusterfs2 mnt]#
[root@glusterfs2 mnt]#
[root@glusterfs2 mnt]# ls -l
total 30800
-rw------- 1 root root 315122 May 4 19:41 copy-test-001
-rw------- 1 root root 315122 May 4 19:41 copy-test-002
-rw------- 1 root root 315122 May 4 19:41 copy-test-003
-rw------- 1 root root 315122 May 4 19:41 copy-test-004
-rw------- 1 root root 315122 May 4 19:41 copy-test-005

.. .. ..
.. .. ..

-rw------- 1 root root 315122 May 4 19:41 copy-test-098
-rw------- 1 root root 315122 May 4 19:41 copy-test-099
-rw------- 1 root root 315122 May 4 19:41 copy-test-100
[root@glusterfs2 mnt]#

You should see 100 files returned.

 

Next, check the GlusterFS brick mount points on each server:

 

ls -lA /STORAGE/BRICK1/GV0/copy*

 

You should see 100 files on each server using the method we listed here. Without replication, in a distribute-only volume (not detailed here), you would see about 33 files on each one.

 

Sure enough, we have 100 files on each server.
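One way to confirm that from a single node is a small loop over the peers (a sketch, assuming password-less root SSH between the nodes):

for h in glusterfs1 glusterfs2 glusterfs3; do
  echo -n "$h: "
  ssh $h 'ls /STORAGE/BRICK1/GV0/copy* | wc -l'
done
# each node should report 100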

 

 

Adding a New Brick To Gluster 

 

I then added a new disk, /dev/vdc (about 200 MB), on just one node, glusterfs1, and created a single partition on it:

Device Boot Start End Blocks Id System
/dev/vdc1 2048 419431 208692 83 Linux

 

 

[root@glusterfs1 ~]# mkfs.ext4 /dev/vdc1
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
52208 inodes, 208692 blocks
10434 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=33816576
26 block groups
8192 blocks per group, 8192 fragments per group
2008 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

[root@glusterfs1 ~]#

 

Then create the mount point and add it to /etc/fstab:

 

mkdir -p /STORAGE/BRICK2

and then add the fstab entry:

 

[root@glusterfs1 STORAGE]# echo '/dev/vdc1 /STORAGE/BRICK2 ext4 defaults 1 2' >> /etc/fstab

[root@glusterfs1 etc]# cat fstab

#
# /etc/fstab
# Created by anaconda on Mon Apr 26 14:28:43 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=e8756f1e-4d97-4a5b-bac2-f61a9d49d0f6 /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
/dev/vdb1 /STORAGE/BRICK1 ext4 defaults 1 2
/dev/vdc1 /STORAGE/BRICK2 ext4 defaults 1 2
[root@glusterfs1 etc]#

 

 

Next, you need to mount the new brick filesystem manually for this session (unless you reboot):

 

 

mount -a

 

 

the filesystem is now mounted:

 

[root@glusterfs1 etc]# df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 753612 0 753612 0% /dev
tmpfs 765380 0 765380 0% /dev/shm
tmpfs 765380 8908 756472 2% /run
tmpfs 765380 0 765380 0% /sys/fs/cgroup
/dev/mapper/centos-root 8374272 2422224 5952048 29% /
/dev/vda1 1038336 269012 769324 26% /boot
/dev/vdb1 197996 27225 156241 15% /STORAGE/BRICK1
tmpfs 153076 0 153076 0% /run/user/0
/dev/vdc1 197996 1806 181660 1% /STORAGE/BRICK2
[root@glusterfs1 etc]#

 

 

Next we need to add the brick to the Gluster volume:

 

volume add-brick <VOLNAME> <NEW-BRICK> …

Add the specified brick to the specified volume.

 

gluster volume add-brick GV0 /STORAGE/BRICK2

 

[root@glusterfs1 etc]# gluster volume add-brick GV0 /STORAGE/BRICK2
Wrong brick type: /STORAGE/BRICK2, use <HOSTNAME>:<export-dir-abs-path>

 

Usage:
volume add-brick <VOLNAME> [<stripe|replica> <COUNT> [arbiter <COUNT>]] <NEW-BRICK> … [force]

[root@glusterfs1 etc]#

 

gluster volume add-brick GV0 replica 4 glusterfs1:/STORAGE/BRICK2

 

 

[root@glusterfs1 BRICK1]# gluster volume add-brick GV0 replica 4 glusterfs1:/STORAGE/BRICK2/
volume add-brick: failed: The brick glusterfs1:/STORAGE/BRICK2 is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use ‘force’ at the end of the command if you want to override this behavior.
[root@glusterfs1 BRICK1]#

 

 

[root@glusterfs1 BRICK2]# mkdir GV0
[root@glusterfs1 BRICK2]#
[root@glusterfs1 BRICK2]#
[root@glusterfs1 BRICK2]# gluster volume add-brick GV0 replica 4 glusterfs1:/STORAGE/BRICK2/
volume add-brick: failed: The brick glusterfs1:/STORAGE/BRICK2 is a mount point. Please create a sub-directory under the mount point and use that as the brick directory. Or use ‘force’ at the end of the command if you want to override this behavior.
[root@glusterfs1 BRICK2]#
[root@glusterfs1 BRICK2]# gluster volume add-brick GV0 replica 4 glusterfs1:/STORAGE/BRICK2/GV0
volume add-brick: success
[root@glusterfs1 BRICK2]#

 

 

we now have four bricks in the volume GV0:

 

[root@glusterfs2 mnt]# gluster volume info

Volume Name: GV0
Type: Replicate
Volume ID: c0dc91d5-05da-4451-ba5e-91df44f21057
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: glusterfs1:/STORAGE/BRICK1/GV0
Brick2: glusterfs2:/STORAGE/BRICK1/GV0
Brick3: glusterfs3:/STORAGE/BRICK1/GV0
Brick4: glusterfs1:/STORAGE/BRICK2/GV0
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
[root@glusterfs2 mnt]#

 

[root@glusterfs1 etc]# gluster volume status
Status of volume: GV0
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs1:/STORAGE/BRICK1/GV0 49152 0 Y 1221
Brick glusterfs2:/STORAGE/BRICK1/GV0 49152 0 Y 1298
Brick glusterfs3:/STORAGE/BRICK1/GV0 49152 0 Y 1220
Brick glusterfs1:/STORAGE/BRICK2/GV0 49153 0 Y 1598
Self-heal Daemon on localhost N/A N/A Y 1615
Self-heal Daemon on glusterfs3 N/A N/A Y 1498
Self-heal Daemon on glusterfs2 N/A N/A Y 1717

Task Status of Volume GV0
——————————————————————————
There are no active volume tasks

[root@glusterfs1 etc]#

 

 

You can't unmount the brick filesystems while they belong to the Gluster volume:

 

[root@glusterfs1 etc]# cd ..
[root@glusterfs1 /]# umount /STORAGE/BRICK1
umount: /STORAGE/BRICK1: target is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
[root@glusterfs1 /]# umount /STORAGE/BRICK2
umount: /STORAGE/BRICK2: target is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
[root@glusterfs1 /]#
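To actually free a brick you would first have to take it out of the volume (or stop the volume) and only then unmount it. For example, if we wanted to drop the newly added fourth brick again, a sketch (using the remove-brick command covered further below) would be:

gluster volume remove-brick GV0 replica 3 glusterfs1:/STORAGE/BRICK2/GV0 force
umount /STORAGE/BRICK2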

 

 

Another example of adding a new brick to gluster:

 

 

gluster volume add-brick REPVOL replica 4 glusterfs4:/DISK2/BRICK

[root@glusterfs2 DISK2]# gluster volume add-brick REPVOL replica 4 glusterfs4:/DISK2/BRICK
volume add-brick: success
[root@glusterfs2 DISK2]#

[root@glusterfs2 DISK2]# gluster volume status
Status of volume: DDVOL
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs1:/DISK1/EXPORT1 49152 0 Y 1239
Brick glusterfs2:/DISK1/EXPORT1 49152 0 Y 1022
Brick glusterfs3:/DISK1/EXPORT1 49152 0 Y 1097
Self-heal Daemon on localhost N/A N/A Y 1039
Self-heal Daemon on glusterfs4 N/A N/A Y 1307
Self-heal Daemon on glusterfs3 N/A N/A Y 1123
Self-heal Daemon on glusterfs1 N/A N/A Y 1261

Task Status of Volume DDVOL
——————————————————————————
There are no active volume tasks

Status of volume: REPVOL
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs1:/DISK2/BRICK 49153 0 Y 1250
Brick glusterfs2:/DISK2/BRICK 49153 0 Y 1029
Brick glusterfs3:/DISK2/BRICK 49153 0 Y 1108
Brick glusterfs4:/DISK2/BRICK 49152 0 Y 1446
Self-heal Daemon on localhost N/A N/A Y 1039
Self-heal Daemon on glusterfs4 N/A N/A Y 1307
Self-heal Daemon on glusterfs3 N/A N/A Y 1123
Self-heal Daemon on glusterfs1 N/A N/A Y 1261

Task Status of Volume REPVOL
——————————————————————————
There are no active volume tasks

[root@glusterfs2 DISK2]#

 

 

Detaching a Peer From Gluster

 

 

[root@glusterfs3 ~]# gluster peer help

 

gluster peer commands
======================

 

peer detach { <HOSTNAME> | <IP-address> } [force] - detach peer specified by <HOSTNAME>
peer help - display help for peer commands
peer probe { <HOSTNAME> | <IP-address> } - probe peer specified by <HOSTNAME>
peer status - list status of peers
pool list - list all the nodes in the pool (including localhost)

 

 

[root@glusterfs2 ~]#
[root@glusterfs2 ~]# gluster pool list
UUID Hostname State
02855654-335a-4be3-b80f-c1863006c31d glusterfs1 Connected
28a7bf8e-e2b9-4509-a45f-a95198139a24 glusterfs3 Connected
5fd324e4-9415-441c-afea-4df61141c896 localhost Connected
[root@glusterfs2 ~]#

 

peer detach <HOSTNAME>
Detach the specified peer.

 

gluster peer detach glusterfs1

 

[root@glusterfs2 ~]# gluster peer detach glusterfs1

 

All clients mounted through the peer which is getting detached need to be remounted using one of the other active peers in the trusted storage pool to ensure client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y

 

peer detach: failed: Peer glusterfs1 hosts one or more bricks. If the peer is in not recoverable state then use either replace-brick or remove-brick command with force to remove all bricks from the peer and attempt the peer detach again.

 

[root@glusterfs2 ~]#

 

 

[root@glusterfs3 ~]# gluster peer detach glusterfs4
All clients mounted through the peer which is getting detached need to be remounted using one of the other active peers in the trusted storage pool to ensure client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y
peer detach: success
[root@glusterfs3 ~]#

 

 

[root@glusterfs3 ~]# gluster peer status
Number of Peers: 2

 

Hostname: glusterfs1
Uuid: 02855654-335a-4be3-b80f-c1863006c31d
State: Peer in Cluster (Connected)

 

Hostname: glusterfs2
Uuid: 5fd324e4-9415-441c-afea-4df61141c896
State: Peer in Cluster (Connected)

[root@glusterfs3 ~]#

 

[root@glusterfs3 ~]# gluster pool list
UUID Hostname State
02855654-335a-4be3-b80f-c1863006c31d glusterfs1 Connected
5fd324e4-9415-441c-afea-4df61141c896 glusterfs2 Connected
28a7bf8e-e2b9-4509-a45f-a95198139a24 localhost Connected
[root@glusterfs3 ~]#

 

 

 

Adding a Node to a Trusted Storage Pool

 

 

[root@glusterfs3 ~]#
[root@glusterfs3 ~]# gluster peer probe glusterfs4
peer probe: success

[root@glusterfs3 ~]#

[root@glusterfs3 ~]# gluster pool list
UUID Hostname State
02855654-335a-4be3-b80f-c1863006c31d glusterfs1 Connected
5fd324e4-9415-441c-afea-4df61141c896 glusterfs2 Connected
2bfe642f-7dfe-4072-ac48-238859599564 glusterfs4 Connected
28a7bf8e-e2b9-4509-a45f-a95198139a24 localhost Connected

[root@glusterfs3 ~]#

[root@glusterfs3 ~]# gluster peer status
Number of Peers: 3

 

Hostname: glusterfs1
Uuid: 02855654-335a-4be3-b80f-c1863006c31d
State: Peer in Cluster (Connected)

 

Hostname: glusterfs2
Uuid: 5fd324e4-9415-441c-afea-4df61141c896
State: Peer in Cluster (Connected)

 

Hostname: glusterfs4
Uuid: 2bfe642f-7dfe-4072-ac48-238859599564
State: Peer in Cluster (Connected)
[root@glusterfs3 ~]#

 

 

 

 

Removing a Brick

 

 

 

volume remove-brick <VOLNAME> <BRICK> …

 

 

[root@glusterfs1 etc]# gluster volume remove-brick DRVOL 1 glusterfs1:/STORAGE/EXPORT1 stop
wrong brick type: 1, use <HOSTNAME>:<export-dir-abs-path>

 

Usage:
volume remove-brick <VOLNAME> [replica <COUNT>] <BRICK> … <start|stop|status|commit|force>

 

[root@glusterfs1 etc]# gluster volume remove-brick DRVOL glusterfs1:/STORAGE/EXPORT1 stop
volume remove-brick stop: failed: Volume DRVOL needs to be started to perform rebalance
[root@glusterfs1 etc]#

 

 

[root@glusterfs1 etc]# gluster volume remove-brick DRVOL glusterfs1:/STORAGE/EXPORT1 force
Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
Do you want to continue? (y/n) n
[root@glusterfs1 etc]# gluster volume rebalance

 

Usage:
volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}}

 

[root@glusterfs1 etc]# gluster volume rebalance start

 

Usage:
volume rebalance <VOLNAME> {{fix-layout start} | {start [force]|stop|status}}

 

[root@glusterfs1 etc]#
[root@glusterfs1 etc]# gluster volume rebalance DRVOL start
volume rebalance: DRVOL: success: Rebalance on DRVOL has been started successfully. Use rebalance status command to check status of the rebalance process.
ID: 939c3ec2-7634-46b4-a1ad-9e99e6da7bf2
[root@glusterfs1 etc]#
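For completeness, the graceful way to drain a brick, rather than using force, is the start/status/commit sequence shown in the usage text above; data is migrated off the brick before it is removed. A sketch (for a purely distributed volume; for replicated volumes you also pass the new replica count and remove a whole replica set):

gluster volume remove-brick <VOLNAME> <HOSTNAME>:<brick-path> start    # begin migrating data off the brick
gluster volume remove-brick <VOLNAME> <HOSTNAME>:<brick-path> status   # watch until the migration completes
gluster volume remove-brick <VOLNAME> <HOSTNAME>:<brick-path> commit   # then remove the brick for good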

 

 

 

 

I then shut down the glusterfs1 and glusterfs2 nodes.

 

[root@glusterfs3 ~]#
[root@glusterfs3 ~]# gluster peer status
Number of Peers: 2

Hostname: glusterfs1
Uuid: 02855654-335a-4be3-b80f-c1863006c31d
State: Peer in Cluster (Disconnected)

Hostname: glusterfs2
Uuid: 5fd324e4-9415-441c-afea-4df61141c896
State: Peer in Cluster (Disconnected)
[root@glusterfs3 ~]#

 

 

 

This means we now have just one brick left online:

 

[root@glusterfs3 ~]# gluster volume status
Status of volume: GV0
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs3:/STORAGE/BRICK1/GV0 49152 0 Y 1220
Self-heal Daemon on localhost N/A N/A Y 1498

Task Status of Volume GV0
——————————————————————————
There are no active volume tasks

[root@glusterfs3 ~]#

 

 

 

I then tried to mount the volume GV0 on glusterfs3:

 

 

[root@glusterfs3 ~]# mount -t glusterfs glusterfs3:/GV0 /mnt
Mount failed. Check the log file for more details.
[root@glusterfs3 ~]#
[root@glusterfs3 ~]#

 

 

I then restarted just one more node, i.e. glusterfs1:
[root@glusterfs3 ~]# gluster peer status
Number of Peers: 2

Hostname: glusterfs1
Uuid: 02855654-335a-4be3-b80f-c1863006c31d
State: Peer in Cluster (Connected)

Hostname: glusterfs2
Uuid: 5fd324e4-9415-441c-afea-4df61141c896
State: Peer in Cluster (Disconnected)
[root@glusterfs3 ~]# gluster volume info

Volume Name: GV0
Type: Replicate
Volume ID: c0dc91d5-05da-4451-ba5e-91df44f21057
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 4 = 4
Transport-type: tcp
Bricks:
Brick1: glusterfs1:/STORAGE/BRICK1/GV0
Brick2: glusterfs2:/STORAGE/BRICK1/GV0
Brick3: glusterfs3:/STORAGE/BRICK1/GV0
Brick4: glusterfs1:/STORAGE/BRICK2/GV0
Options Reconfigured:
performance.client-io-threads: off
nfs.disable: on
transport.address-family: inet
storage.fips-mode-rchecksum: on
cluster.granular-entry-heal: on
[root@glusterfs3 ~]# mount -t glusterfs glusterfs3:/GV0 /mnt
[root@glusterfs3 ~]#

 

 

I was then able to mount the glusterfs volume:

 

glusterfs3:/GV0 197996 29211 156235 16% /mnt
[root@glusterfs3 ~]#

 

[root@glusterfs3 ~]# gluster volume status
Status of volume: GV0
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs1:/STORAGE/BRICK1/GV0 49152 0 Y 1235
Brick glusterfs3:/STORAGE/BRICK1/GV0 49152 0 Y 1220
Brick glusterfs1:/STORAGE/BRICK2/GV0 49153 0 Y 1243
Self-heal Daemon on localhost N/A N/A Y 1498
Self-heal Daemon on glusterfs1 N/A N/A Y 1256

Task Status of Volume GV0
——————————————————————————
There are no active volume tasks

[root@glusterfs3 ~]#

 

 

I then shut down glusterfs1, as it has two bricks, and started up glusterfs2, which has only one brick:

 

[root@glusterfs3 ~]# gluster peer status
Number of Peers: 2

Hostname: glusterfs1
Uuid: 02855654-335a-4be3-b80f-c1863006c31d
State: Peer in Cluster (Disconnected)

Hostname: glusterfs2
Uuid: 5fd324e4-9415-441c-afea-4df61141c896
State: Peer in Cluster (Connected)
[root@glusterfs3 ~]#

 

 

[root@glusterfs3 ~]# gluster volume status
Status of volume: GV0
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs2:/STORAGE/BRICK1/GV0 49152 0 Y 1093
Brick glusterfs3:/STORAGE/BRICK1/GV0 49152 0 Y 1220
Self-heal Daemon on localhost N/A N/A Y 1498
Self-heal Daemon on glusterfs2 N/A N/A Y 1108

Task Status of Volume GV0
——————————————————————————
There are no active volume tasks

[root@glusterfs3 ~]#
[root@glusterfs3 ~]#

 

I then removed one brick from glusterfs1 (which had two bricks):

 

[root@glusterfs1 /]# gluster volume remove-brick GV0 replica 3 glusterfs1:/STORAGE/BRICK1/GV0 force
Remove-brick force will not migrate files from the removed bricks, so they will no longer be available on the volume.
Do you want to continue? (y/n) y
volume remove-brick commit force: success
[root@glusterfs1 /]#

 

 

it now looks like this:

 

[root@glusterfs1 /]# gluster volume status
Status of volume: GV0
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs2:/STORAGE/BRICK1/GV0 49152 0 Y 1018
Brick glusterfs3:/STORAGE/BRICK1/GV0 49152 0 Y 1098
Brick glusterfs1:/STORAGE/BRICK2/GV0 49153 0 Y 1249
Self-heal Daemon on localhost N/A N/A Y 1262
Self-heal Daemon on glusterfs3 N/A N/A Y 1114
Self-heal Daemon on glusterfs2 N/A N/A Y 1028

Task Status of Volume GV0
——————————————————————————
There are no active volume tasks

[root@glusterfs1 /]#

 

 

Note that you have to include the full brick path, i.e. /STORAGE/BRICK1/GV0 and not just /STORAGE/BRICK1, otherwise it won't work.

 

You also have to specify the new replica count, in this case 3 instead of the previous 4.

 

 

 

 

To Stop and Start a Gluster Volume

 

To stop a volume:

 

gluster volume stop GV0

 

[root@glusterfs1 /]# gluster volume stop GV0
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: GV0: success
[root@glusterfs1 /]#

 

[root@glusterfs2 /]# gluster volume status
Volume GV0 is not started

[root@glusterfs2 /]#

 

 

To start a volume:

[root@glusterfs1 /]# gluster volume start GV0
volume start: GV0: success
[root@glusterfs1 /]#

 

[root@glusterfs2 /]# gluster volume status
Status of volume: GV0
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs2:/STORAGE/BRICK1/GV0 49152 0 Y 1730
Brick glusterfs3:/STORAGE/BRICK1/GV0 49152 0 Y 1788
Brick glusterfs1:/STORAGE/BRICK2/GV0 49152 0 Y 2532
Self-heal Daemon on localhost N/A N/A Y 1747
Self-heal Daemon on glusterfs1 N/A N/A Y 2549
Self-heal Daemon on glusterfs3 N/A N/A Y 1805

Task Status of Volume GV0
——————————————————————————
There are no active volume tasks

[root@glusterfs2 /]#

 

Deleting a Gluster Volume 

 

To delete a volume:

 

[root@glusterfs1 etc]#
[root@glusterfs1 etc]# gluster volume delete GV0
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: GV0: failed: Volume GV0 has been started.Volume needs to be stopped before deletion.
[root@glusterfs1 etc]#

 

 

[root@glusterfs1 etc]# gluster volume stop GV0
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
volume stop: GV0: success
[root@glusterfs1 etc]#

 

[root@glusterfs1 etc]# gluster volume delete GV0
Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y
volume delete: GV0: success
[root@glusterfs1 etc]#

[root@glusterfs1 etc]#
[root@glusterfs1 etc]# gluster volume status
No volumes present
[root@glusterfs1 etc]#

 

 

Note that we still have our Gluster cluster with three nodes, but no Gluster volume any more:

 

[root@glusterfs1 etc]# gluster peer status
Number of Peers: 2

Hostname: glusterfs2
Uuid: 5fd324e4-9415-441c-afea-4df61141c896
State: Peer in Cluster (Connected)

Hostname: glusterfs3
Uuid: 28a7bf8e-e2b9-4509-a45f-a95198139a24
State: Peer in Cluster (Connected)
[root@glusterfs1 etc]#

 

 

 

Creating a Distributed Replicated Gluster Volume

 

 

Next, we want to build a distributed replicated volume:

 

First we will add another virtual machine to the Gluster cluster:

 

glusterfs4

 

To make this process quicker we will clone glusterfs1 in KVM:

 

First we switch off glusterfs1, then clone it as glusterfs4 with the same hardware configuration as glusterfs1,

 

and then switch on glusterfs4.

 

glusterfs4 needs to be given its own IP address, and a host entry for it needs to be added to the /etc/hosts files on all machines and distributed, e.g. with scp /etc/hosts <machine>:/etc/hosts.
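A sketch of that step (192.168.122.73 is just an assumed example address for the new VM):

echo '192.168.122.73 glusterfs4' >> /etc/hosts
for h in glusterfs1 glusterfs2 glusterfs3 glusterfs4; do scp /etc/hosts $h:/etc/hosts; done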

 

[root@glusterfs4 ~]# gluster pool list
UUID Hostname State
5fd324e4-9415-441c-afea-4df61141c896 glusterfs2 Connected
28a7bf8e-e2b9-4509-a45f-a95198139a24 glusterfs3 Connected
02855654-335a-4be3-b80f-c1863006c31d localhost Connected
[root@glusterfs4 ~]#

 

We first have to get this machine to join the Gluster pool, i.e. the cluster.

 

BUT we have a problem: because of the cloning, the UUID is the same as that of glusterfs1!

 

[root@glusterfs1 ~]# gluster system:: uuid get
UUID: 02855654-335a-4be3-b80f-c1863006c31d
[root@glusterfs1 ~]#

 

[root@glusterfs4 /]# gluster system:: uuid get
UUID: 02855654-335a-4be3-b80f-c1863006c31d
[root@glusterfs4 /]#

 

 

So first we have to change this and generate a new UUID for glusterfs4:

 

Use the 'gluster system:: uuid reset' command to reset the UUID of the local glusterd on the machine, after which 'peer probe' will run OK.

 

 

[root@glusterfs4 /]# gluster system:: uuid reset
Resetting uuid changes the uuid of local glusterd. Do you want to continue? (y/n) y
trusted storage pool has been already formed. Please detach this peer from the pool and reset its uuid.
[root@glusterfs4 /]#

 

 

This was a bit complicated, because the new machine glusterfs4 had the same UUID as glusterfs1. We had to detach it from the Gluster pool, but we could only do that while it was temporarily renamed to glusterfs1, which also meant temporarily editing the /etc/hosts files on all Gluster nodes so that the name glusterfs1 pointed to the new machine. Then we could go to another node and detach "glusterfs1" from the cluster (in reality, of course, our new glusterfs4 machine).

 

see below

5fd324e4-9415-441c-afea-4df61141c896 localhost Connected
[root@glusterfs2 etc]# gluster peer detach glusterfs1
All clients mounted through the peer which is getting detached need to be remounted using one of the other active peers in the trusted storage pool to ensure client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y
peer detach: success
[root@glusterfs2 etc]#
[root@glusterfs2 etc]#
[root@glusterfs2 etc]#

 

 

Then, having done that, we generate a new UUID for the node:

 

[root@glusterfs1 ~]# gluster system:: uuid reset
Resetting uuid changes the uuid of local glusterd. Do you want to continue? (y/n) y
resetting the peer uuid has been successful
[root@glusterfs1 ~]#

 

We now have a new unique UUID for this machine:

 

[root@glusterfs1 ~]# cat /var/lib/glusterd/glusterd.info
UUID=2bfe642f-7dfe-4072-ac48-238859599564
operating-version=90000
[root@glusterfs1 ~]#

 

 

Then we can switch the hostname and the /etc/hosts definitions back to glusterfs4 for this machine,

 

 

 

and then we can do:

 

[root@glusterfs2 etc]#
[root@glusterfs2 etc]# gluster peer probe glusterfs1
peer probe: success
[root@glusterfs2 etc]# gluster peer probe glusterfs4
peer probe: success
[root@glusterfs2 etc]# gluster peer probe glusterfs3
peer probe: Host glusterfs3 port 24007 already in peer list
[root@glusterfs2 etc]#

 

[root@glusterfs2 etc]# gluster pool list
UUID Hostname State
28a7bf8e-e2b9-4509-a45f-a95198139a24 glusterfs3 Connected
02855654-335a-4be3-b80f-c1863006c31d glusterfs1 Connected
2bfe642f-7dfe-4072-ac48-238859599564 glusterfs4 Connected
5fd324e4-9415-441c-afea-4df61141c896 localhost Connected
[root@glusterfs2 etc]#

 

and we now have a 4-node gluster cluster.

 

Note from Red Hat:

 

Support for two-way replication is planned for deprecation and removal in future versions of Red Hat Gluster Storage. This will affect both replicated and distributed-replicated volumes.

 

Support is being removed because two-way replication does not provide adequate protection from split-brain conditions. While a dummy node can be used as an interim solution for this problem, Red Hat recommends that all volumes that currently use two-way replication are migrated to use either arbitrated replication or three-way replication.

 

 

NOTE:  Make sure you start your volumes before you try to mount them or else client operations after the mount will hang.

 

GlusterFS will fail to create a replicated volume if more than one brick of a replica set is present on the same peer, e.g. a four-node replicated volume where more than one brick of a replica set is on the same peer.

 

BUT NOTE: you can use an "arbiter brick"…

 

Arbiter configuration for replica volumes

Arbiter volumes are replica 3 volumes where the 3rd brick acts as the arbiter brick. This configuration has mechanisms that prevent occurrence of split-brains.

 

It can be created with the following command:

 

`# gluster volume create <VOLNAME> replica 2 arbiter 1 host1:brick1 host2:brick2 host3:brick3`

 

 

 

Note: The number of bricks for a distributed-replicated Gluster volume should be a multiple of the replica count.

 

Also, the order in which bricks are specified has an effect on data protection.

 

Each replica_count consecutive bricks in the list you give will form a replica set, with all replica sets combined into a volume-wide distribute set.

 

To make sure that replica-set members are not placed on the same node, list the first brick on every server, then the second brick on every server in the same order, and so on.

 

 

example

 

# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2 server3:/exp3 server4:/exp4
Creation of test-volume has been successful
Please start the volume to access data.

 

 

compared with ordinary replicated:

 

# gluster volume create test-volume replica 2 transport tcp server1:/exp1 server2:/exp2
Creation of test-volume has been successful
Please start the volume to access data.

 

 

[root@glusterfs3 mnt]# gluster volume status
No volumes present
[root@glusterfs3 mnt]#

 

 

So now we add two more peers to the trusted pool:

 

glusterfs1 and glusterfs2

 

[root@glusterfs3 mnt]#
[root@glusterfs3 mnt]# gluster peer probe glusterfs1
peer probe: success
[root@glusterfs3 mnt]# gluster peer probe glusterfs2
peer probe: success
[root@glusterfs3 mnt]# gluster peer status
Number of Peers: 3

 

Hostname: glusterfs4
Uuid: 2bfe642f-7dfe-4072-ac48-238859599564
State: Peer in Cluster (Connected)

 

Hostname: glusterfs1
Uuid: 02855654-335a-4be3-b80f-c1863006c31d
State: Peer in Cluster (Connected)

 

Hostname: glusterfs2
Uuid: 5fd324e4-9415-441c-afea-4df61141c896
State: Peer in Cluster (Connected)
[root@glusterfs3 mnt]#

 

So we now have a four-node trusted pool consisting of glusterfs1, 2, 3 and 4.

 

 

Next, we can create our distributed replicated volume across the 4 nodes:

 

 

gluster volume create DRVOL replica 2 transport tcp glusterfs1:/STORAGE/EXPORT1 glusterfs2:/STORAGE/EXPORT2 glusterfs3:/STORAGE/EXPORT3 glusterfs4:/STORAGE/EXPORT4

 

[root@glusterfs1 ~]# gluster volume create DRVOL replica 2 transport tcp glusterfs1:/STORAGE/EXPORT1 glusterfs2:/STORAGE/EXPORT2 glusterfs3:/STORAGE/EXPORT3 glusterfs4:/STORAGE/EXPORT4
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: DRVOL: failed: /STORAGE/EXPORT1 is already part of a volume
[root@glusterfs1 ~]# gluster volume status
No volumes present
[root@glusterfs1 ~]#

 

The REASON for this error is that the brick directories already existed before the volume create command was run (left over from our earlier lab exercises). These directories contain a .glusterfs subdirectory, and this blocks the creation of bricks with those names.

 

Solution: remove the old export subdirectories under /STORAGE/ on each node, i.e. the EXPORTn directories containing the .glusterfs subdirectory.

 

e.g. (on all machines!):

 

[root@glusterfs3 STORAGE]# rm -r -f EXPORT3/
[root@glusterfs3 STORAGE]#

 

then run the command again:

 

[root@glusterfs1 ~]# gluster volume create DRVOL replica 2 transport tcp glusterfs1:/STORAGE/EXPORT1 glusterfs2:/STORAGE/EXPORT2 glusterfs3:/STORAGE/EXPORT3 glusterfs4:/STORAGE/EXPORT4
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: DRVOL: success: please start the volume to access data
[root@glusterfs1 ~]#

 

 

(Ideally you should have at least six nodes, i.e. a 3-way distributed-replicated setup, to avoid split-brain, but we will just go with four nodes for this example.)

 

 

So, now successfully created:

 

[root@glusterfs3 STORAGE]# gluster volume status
Status of volume: DRVOL
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs1:/STORAGE/EXPORT1 49152 0 Y 1719
Brick glusterfs2:/STORAGE/EXPORT2 49152 0 Y 1645
Brick glusterfs3:/STORAGE/EXPORT3 49152 0 Y 2054
Brick glusterfs4:/STORAGE/EXPORT4 49152 0 Y 2014
Self-heal Daemon on localhost N/A N/A Y 2071
Self-heal Daemon on glusterfs4 N/A N/A Y 2031
Self-heal Daemon on glusterfs1 N/A N/A Y 1736
Self-heal Daemon on glusterfs2 N/A N/A Y 1662

Task Status of Volume DRVOL
——————————————————————————
There are no active volume tasks

[root@glusterfs3 STORAGE]#

 

 

[root@glusterfs3 STORAGE]# gluster volume info

Volume Name: DRVOL
Type: Distributed-Replicate
Volume ID: 570cdad3-39c3-4fb4-bce6-cc8030fe8a65
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: glusterfs1:/STORAGE/EXPORT1
Brick2: glusterfs2:/STORAGE/EXPORT2
Brick3: glusterfs3:/STORAGE/EXPORT3
Brick4: glusterfs4:/STORAGE/EXPORT4
Options Reconfigured:
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@glusterfs3 STORAGE]#

 

 

 

Mounting Gluster Volumes on Clients

The volume must first be started on the Gluster cluster.

 

(and of course the respective brick filesystems must also be mounted on all participating server nodes in the Gluster cluster).

 

For this example we can use one of our Gluster servers to mount the volume.

 

Usually you would mount on a Gluster client machine. Since using this method requires additional packages to be installed on the client machine, we will instead use one of the servers to test, as if it were an actual separate client machine.

 

For our example, we will mount the volume via glusterfs1 on glusterfs1 itself (but we could equally point the mount at glusterfs2, 3 or 4):

 

mount -t glusterfs glusterfs1:/DRVOL /mnt

 

Note that we mount the volume by its Gluster volume name, NOT by the underlying brick directory!

 

 

[root@glusterfs1 /]# mount -t glusterfs glusterfs1:/DRVOL /mnt
[root@glusterfs1 /]#
[root@glusterfs1 /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 753612 0 753612 0% /dev
tmpfs 765380 0 765380 0% /dev/shm
tmpfs 765380 8912 756468 2% /run
tmpfs 765380 0 765380 0% /sys/fs/cgroup
/dev/mapper/centos-root 8374272 2424712 5949560 29% /
/dev/vda1 1038336 269012 769324 26% /boot
/dev/vdb1 197996 2084 181382 2% /STORAGE
tmpfs 153076 0 153076 0% /run/user/0
glusterfs1:/DRVOL 395992 8128 362764 3% /mnt
[root@glusterfs1 /]#
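If you want the client mount to persist across reboots, the usual approach is an /etc/fstab entry of this form (a sketch; the _netdev option makes sure the network is up before the mount is attempted):

glusterfs1:/DRVOL /mnt glusterfs defaults,_netdev 0 0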

 

 

Checking, Stopping and Starting Gluster Volumes and the glusterd Service

 

check volume status with:

 

gluster volume status

 

list available volumes with:

 

gluster volume info

 

 

[root@glusterfs1 ~]# gluster volume info all
 
 
Volume Name: DDVOL
Type: Disperse
Volume ID: 37d79a1a-3d24-4086-952e-2342c8744aa4
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: glusterfs1:/DISK1/EXPORT1
Brick2: glusterfs2:/DISK1/EXPORT1
Brick3: glusterfs3:/DISK1/EXPORT1
Options Reconfigured:
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
[root@glusterfs1 ~]# 

 

 

 

check the peers with:

 

gluster peer status

 

[root@glusterfs1 ~]# gluster peer status
Number of Peers: 3

 

Hostname: glusterfs3
Uuid: 28a7bf8e-e2b9-4509-a45f-a95198139a24
State: Peer in Cluster (Connected)

 

Hostname: glusterfs4
Uuid: 2bfe642f-7dfe-4072-ac48-238859599564
State: Peer in Cluster (Disconnected)

 

Hostname: glusterfs2
Uuid: 5fd324e4-9415-441c-afea-4df61141c896
State: Peer in Cluster (Connected)
[root@glusterfs1 ~]#

 

 

 

gluster volume status all

 

[root@glusterfs1 ~]# gluster volume status all
Status of volume: DDVOL
Gluster process TCP Port RDMA Port Online Pid
——————————————————————————
Brick glusterfs1:/DISK1/EXPORT1 49152 0 Y 1403
Brick glusterfs2:/DISK1/EXPORT1 49152 0 Y 1298
Brick glusterfs3:/DISK1/EXPORT1 49152 0 Y 1299
Self-heal Daemon on localhost N/A N/A Y 1420
Self-heal Daemon on glusterfs2 N/A N/A Y 1315
Self-heal Daemon on glusterfs3 N/A N/A Y 1316

Task Status of Volume DDVOL
——————————————————————————
There are no active volume tasks

[root@glusterfs1 ~]#

 

 

 

to stop a gluster volume:

 

gluster volume stop <volname>

 

to start a gluster volume:

 

gluster volume start <volname>

 

 

To stop the Gluster service (glusterd) itself:

 

systemctl stop glusterd

 

 

[root@glusterfs1 ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2022-05-13 18:11:19 CEST; 13min ago
Docs: man:glusterd(8)
Process: 967 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 974 (glusterd)
CGroup: /system.slice/glusterd.service
└─974 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

 

May 13 18:11:18 glusterfs1 systemd[1]: Starting GlusterFS, a clustered file-system server…
May 13 18:11:19 glusterfs1 systemd[1]: Started GlusterFS, a clustered file-system server.
[root@glusterfs1 ~]#

 

 

 

 

[root@glusterfs1 ~]# systemctl stop glusterd
[root@glusterfs1 ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
Active: inactive (dead) since Fri 2022-05-13 18:24:59 CEST; 2s ago
Docs: man:glusterd(8)
Process: 967 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 974 (code=exited, status=15)

 

May 13 18:11:18 glusterfs1 systemd[1]: Starting GlusterFS, a clustered file-system server…
May 13 18:11:19 glusterfs1 systemd[1]: Started GlusterFS, a clustered file-system server.
May 13 18:24:59 glusterfs1 systemd[1]: Stopping GlusterFS, a clustered file-system server…
May 13 18:24:59 glusterfs1 systemd[1]: Stopped GlusterFS, a clustered file-system server.
[root@glusterfs1 ~]#

 

 

If there are still problems (e.g. stale peer or volume state), a last-resort step is to reset the local glusterd configuration while keeping the node's UUID; note that this removes the local peer and volume definitions, which will then need to be re-synced from another node:

 

systemctl stop glusterd

 

mv /var/lib/glusterd/glusterd.info /tmp/.
rm -rf /var/lib/glusterd/*
mv /tmp/glusterd.info /var/lib/glusterd/.

systemctl start glusterd

 

 

 


LPIC3 DIPLOMA Linux Clustering – LAB NOTES: GlusterFS Configuration on Centos

How To Install GlusterFS on Centos7

 

Choose a package source: either the CentOS Storage SIG or Gluster.org

 

Using CentOS Storage SIG Packages

 

 

yum search centos-release-gluster

 

yum install centos-release-gluster37

 


 

yum install glusterfs gluster-cli glusterfs-libs glusterfs-server

 

 

 

[root@glusterfs1 ~]# yum search centos-release-gluster
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.xtom.de
* centos-ceph-nautilus: mirror1.hs-esslingen.de
* centos-nfs-ganesha28: ftp.agdsn.de
* epel: mirrors.xtom.de
* extras: mirror.netcologne.de
* updates: mirrors.xtom.de
================================================= N/S matched: centos-release-gluster =================================================
centos-release-gluster-legacy.noarch : Disable unmaintained Gluster repositories from the CentOS Storage SIG
centos-release-gluster40.x86_64 : Gluster 4.0 (Short Term Stable) packages from the CentOS Storage SIG repository
centos-release-gluster41.noarch : Gluster 4.1 (Long Term Stable) packages from the CentOS Storage SIG repository
centos-release-gluster5.noarch : Gluster 5 packages from the CentOS Storage SIG repository
centos-release-gluster6.noarch : Gluster 6 packages from the CentOS Storage SIG repository
centos-release-gluster7.noarch : Gluster 7 packages from the CentOS Storage SIG repository
centos-release-gluster8.noarch : Gluster 8 packages from the CentOS Storage SIG repository
centos-release-gluster9.noarch : Gluster 9 packages from the CentOS Storage SIG repository

Name and summary matches only, use “search all” for everything.
[root@glusterfs1 ~]#

 

 

Alternatively, using Gluster.org Packages

 

# yum update -y

 

 

Download the latest glusterfs-epel repository from gluster.org:

 

yum install wget -y

 

 

[root@glusterfs1 ~]# yum install wget -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.xtom.de
* centos-ceph-nautilus: mirror1.hs-esslingen.de
* centos-nfs-ganesha28: ftp.agdsn.de
* epel: mirrors.xtom.de
* extras: mirror.netcologne.de
* updates: mirrors.xtom.de
Package wget-1.14-18.el7_6.1.x86_64 already installed and latest version
Nothing to do
[root@glusterfs1 ~]#

 

 

 

wget -P /etc/yum.repos.d/ http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

 

Also install the latest EPEL repository from fedoraproject.org to resolve all dependencies:

 

yum install http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

 

 

[root@glusterfs1 ~]# yum repolist
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.xtom.de
* centos-ceph-nautilus: mirror1.hs-esslingen.de
* centos-nfs-ganesha28: ftp.agdsn.de
* epel: mirrors.xtom.de
* extras: mirror.netcologne.de
* updates: mirrors.xtom.de
repo id repo name status
base/7/x86_64 CentOS-7 – Base 10,072
centos-ceph-nautilus/7/x86_64 CentOS-7 – Ceph Nautilus 609
centos-nfs-ganesha28/7/x86_64 CentOS-7 – NFS Ganesha 2.8 153
ceph-noarch Ceph noarch packages 184
epel/x86_64 Extra Packages for Enterprise Linux 7 – x86_64 13,638
extras/7/x86_64 CentOS-7 – Extras 498
updates/7/x86_64 CentOS-7 – Updates 2,579
repolist: 27,733
[root@glusterfs1 ~]#

 

 

Then install the GlusterFS server packages on all GlusterFS storage cluster nodes. (Note: the glusterfs-server package is not in the base CentOS repositories; it comes from the Storage SIG or gluster.org repositories set up above, which is why the plain yum run in the transcript below reports it as unavailable.)

[root@glusterfs1 ~]# yum install glusterfs gluster-cli glusterfs-libs glusterfs-server

 

Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.xtom.de
* centos-ceph-nautilus: mirror1.hs-esslingen.de
* centos-nfs-ganesha28: ftp.agdsn.de
* epel: mirrors.xtom.de
* extras: mirror.netcologne.de
* updates: mirrors.xtom.de
No package gluster-cli available.
No package glusterfs-server available.
Resolving Dependencies
–> Running transaction check
—> Package glusterfs.x86_64 0:6.0-49.1.el7 will be installed
—> Package glusterfs-libs.x86_64 0:6.0-49.1.el7 will be installed
–> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================
Installing:
glusterfs x86_64 6.0-49.1.el7 updates 622 k
glusterfs-libs x86_64 6.0-49.1.el7 updates 398 k

Transaction Summary
=======================================================================================================================================
Install 2 Packages

Total download size: 1.0 M
Installed size: 4.3 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): glusterfs-libs-6.0-49.1.el7.x86_64.rpm | 398 kB 00:00:00
(2/2): glusterfs-6.0-49.1.el7.x86_64.rpm | 622 kB 00:00:00
—————————————————————————————————————————————
Total 2.8 MB/s | 1.0 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : glusterfs-libs-6.0-49.1.el7.x86_64 1/2
Installing : glusterfs-6.0-49.1.el7.x86_64 2/2
Verifying : glusterfs-6.0-49.1.el7.x86_64 1/2
Verifying : glusterfs-libs-6.0-49.1.el7.x86_64 2/2

Installed:
glusterfs.x86_64 0:6.0-49.1.el7 glusterfs-libs.x86_64 0:6.0-49.1.el7

Complete!
[root@glusterfs1 ~]#
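Once the glusterfs-server package is installed on each node, the glusterd service needs to be enabled and started (just as is done in the Ubuntu lab further below):

systemctl enable glusterd
systemctl start glusterd
systemctl status glusterd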

 

 

 

 

 


LPIC3-306 COURSE NOTES: GlusterFS

 

These are my notes on GlusterFS made as part of my LPIC3 Diploma course in Linux Clustering.  They are in “rough format”, presented as they were written.

 

 

GlusterFS

 

GlusterFS is a free, GNU GPL-licensed, scalable network-attached storage file system now offered and developed by Red Hat.

 

Storage is organised as "bricks", which can be added, removed and migrated, usually without interrupting service provision. Each GlusterFS server or node runs a glusterfsd daemon which exports a local file system as a brick of a Gluster volume.

 

GlusterFS provides file-based mirroring and replication, file-based distribution and striping, file-based load balancing, geo-replication, storage quotas, volume failover, disk caching and volume snapshots.

 

Types of Glusterfs Volumes

 

 

Gluster provides different types of file storage volumes which can be deployed according to the requirements of the environment.

 

Broadly these are:

 

Distributed Volumes: (suitable for scalable storage but has no data redundancy)
Replicated Volumes: (offers better reliability and also data redundancy)
Distributed-Replicated Volumes: (High availability of data through redundancy and scalable storage)

 

Distributed Glusterfs Volume

 

Distributed is the default in Gluster if no volume type is specified. Files are distributed across the bricks in the volume, which means that e.g. file1 would be stored on brick1 or brick2, but not on both.

 

Thus there is no data redundancy.

 

The sole advantage of a distributed volume is that it provides a cheaper and simpler way to increase the total storage capacity; it does not protect against data loss.

 

In the majority of cases, this volume type is NOT advisable. Replicated volumes are much safer.
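For reference, since distributed is the default type, a plain distributed volume is created simply by omitting the replica option, e.g. (a sketch with example server and brick names):

gluster volume create dist-volume transport tcp server1:/exp1 server2:/exp2
gluster volume start dist-volume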

 

Replicated Glusterfs Volume

 

Replicated reduces the risk of data loss that exists with distributed volumes.

 

Copies of the data are maintained on all bricks. The number of replicas in the volume are set when creating the volume.

 

The number of bricks must be equal to the replica count for a replicated volume. In order to protect against server and disk failures, the bricks of the volume should be located on different servers.

 

A major advantage of replication is that if one brick fails on a two-node cluster, or if two bricks fail on a three-node cluster, the data can still be accessed from the remaining replicated brick(s).

 

It is possible to create a replicated GlusterFS volume with two nodes, but this is not recommended because a split-brain situation can develop. For this reason, a replicated volume should be used with at least three nodes.

 

Distributed Striped Glusterfs Volumes

 

Distributed striped volumes stripe files across two or more Gluster server nodes. They should be deployed where scalable storage is important and where access to very large files is required.

 

The number of bricks must be a multiple of the stripe count for a distributed striped volume.

 

Creating Distributed Replicated Glusterfs Volumes

 

Distributed Replicated distributes files across replicated bricks in the volume.

 

It should be deployed in environments which require both highly scalable storage and high-reliability. Distributed replicated volumes can also provide for better file read performance.

 

The number of bricks deployed needs to be a multiple of the replica count for a distributed replicated volume.

 

The order bricks are specified also affects data protection.

 

Each replica_count consecutive bricks in the list forms a replica set. All the replica sets are then combined into a volume-wide distributed set.

 

To ensure that replica-set members are not located on the same node, you should list the first brick on each server, then the second brick on each server, continuing in the same order.

 

Gluster also provides for:

 

Dispersed Glusterfs Volumes

 

Dispersed volumes are based on erasure codes. Erasure code (EC) stripes the encoded data of files, adds redundancy information and saves the blocks across multiple bricks in the volume.

 

This is especially suitable where a high level of reliability is required with minimal wasted space.

 

The number of redundant bricks in the volume determines how many bricks can be lost without any interruption in the operation of the volume.
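A dispersed volume is created with the disperse and redundancy options. For example, the DDVOL volume shown earlier in these notes (1 x (2 + 1) = 3 bricks) corresponds to something like this sketch:

gluster volume create DDVOL disperse 3 redundancy 1 glusterfs1:/DISK1/EXPORT1 glusterfs2:/DISK1/EXPORT1 glusterfs3:/DISK1/EXPORT1
gluster volume start DDVOL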

 

Distributed Dispersed Glusterfs Volumes

 

Distributed dispersed volumes are the equivalent of distributed replicated volumes, but they use dispersed subvolumes instead of replicated ones. The purpose is to easily scale the volume size and distribute the data across multiple bricks.

 

The number of bricks must be a multiple of the disperse count (the size of the first dispersed subvolume).

 

 

Which GlusterFS Volume System Is Best?

 

Before installing GlusterFS, you first need to decide what type of volume is best for your environment.  

 

Replicated volume

 

This type of volume provides file replication across multiple bricks in a cluster.

 

This is the best choice for environments which require high availability, high reliability, and also scalable storage.

 

It is especially suited to cases where you want to self-mount the GlusterFS volume, for example a web server document root at /var/www/ where all files need to be accessible locally on that node.

 

The value passed to replica will be the same as the number of nodes in the volume.

 

Files are copied to each GlusterFS brick in the volume, rather like with RAID 1.

 

However, you can also have three or more bricks in the cluster. The usable space will be equivalent to the size of one brick, with all files written to one brick being replicated to all the other bricks in the cluster. 

 

Replicated volumes offer improved read performance for most environments and they are the most common type of volume used when clients accessing the cluster are external to the GlusterFS nodes.

 

Distributed-replicated volume

As with RAID 10, the bricks are grouped: the number of GlusterFS bricks must be a multiple of the replica count. The usable space is the total capacity of all bricks divided by the replica count.

 

As an example, with four bricks of 20 GB each and replica 2, files are distributed across two replica sets (40 GB usable), and each file is replicated to two bricks.

 

With six bricks of 20 GB each and replica 3, you get two replica sets of three bricks; files are distributed across the two sets (40 GB usable) and each file is replicated to three bricks.

 

And if you used replica 2 with those six bricks instead, you would get three replica sets of two bricks; files are distributed across the three sets (60 GB usable) and each file is replicated to a pair of bricks.
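
To make the arithmetic concrete, a hedged sketch with four hypothetical servers, each contributing one 20 GB brick, and replica 2:

gluster volume create distrepvol replica 2 transport tcp server1:/bricks/b1 server2:/bricks/b1 server3:/bricks/b1 server4:/bricks/b1
gluster volume info distrepvol

gluster volume info should report Number of Bricks: 2 x 2 = 4, i.e. two replica pairs giving roughly 40 GB of usable space. Note that the CLI will warn that replica 2 volumes are prone to split-brain and ask for confirmation.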

 

This distribution and replication scheme is useful when your clients are external to the cluster, i.e. the volume is not locally self-mounted on the gluster nodes.

 

 

 

An Overview of How to Install and Configure Gluster

 

 

First you require a server cluster, preferably with three nodes.

 

Install the gluster system on each node.

 

Decide which kind of gluster volume system you wish to implement – distributed, replicated, distributed-replicated etc.

 

Usually replicated will be preferred as a minimum. Plain distributed volumes are not generally recommended for production environments due to the higher risk of data loss compared to the other options.

 

Next create a trusted pool. This needs to be done on just one of the nodes.

 

Then add the disk(s) for the gluster storage on each node. These constitute the storage “bricks”.

 

Format and mount the storage bricks.

 

Finally, create and start the gluster volume.
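
A condensed, hedged sketch of this flow on CentOS 7 – the hostnames node1-3, the brick disk /dev/vdb1, the mount point /bricks/brick1 and the volume name labvol1 are all hypothetical, and the Storage SIG repository package is assumed:

# on every node: install and start gluster, prepare the brick filesystem
yum install -y centos-release-gluster
yum install -y glusterfs-server
systemctl enable --now glusterd
mkfs.xfs /dev/vdb1
mkdir -p /bricks/brick1
echo '/dev/vdb1 /bricks/brick1 xfs defaults 1 2' >> /etc/fstab
mount -a
mkdir -p /bricks/brick1/data

# on node1 only: build the trusted pool, then create and start the volume
gluster peer probe node2
gluster peer probe node3
gluster volume create labvol1 replica 3 transport tcp node1:/bricks/brick1/data node2:/bricks/brick1/data node3:/bricks/brick1/data
gluster volume start labvol1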

 

For a detailed explanation and examples of the gluster installation and configuration process, refer to my LAB page on gluster.

 

Also see https://docs.gluster.org/en/v3/Quick-Start-Guide/Quickstart/

 

Continue Reading

LPIC3 DIPLOMA Linux Clustering – LAB NOTES: GlusterFS Configuration on Ubuntu

LAB for installing and configuring GlusterFS on Ubuntu

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

 

Overview

 

The cluster comprises three nodes (ubuntu31, ubuntu32, ubuntu33) running Ubuntu 20.04 LTS, hosted as KVM virtual machines on a Linux Ubuntu host.

 

Each node has a 1 GB SCSI disk, /dev/sda (the root system disk is /dev/vda). The gluster bricks on the three nodes are:
 
brick1
brick2
brick3
 
respectively (these are NOT host definitions, just gluster identities)
 

on each machine:
 

wget -O- https://download.gluster.org/pub/gluster/glusterfs/3.12/rsa.pub | apt-key add -
sudo add-apt-repository ppa:gluster/glusterfs-3.12
apt install glusterfs-server -y
systemctl start glusterd
systemctl enable glusterd
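
Before probing peers, it is worth confirming on each node that the package installed and the service is running (standard checks, nothing lab-specific):

gluster --version
systemctl status glusterd --no-pager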

 
 
Create a trusted pool. This is done on ubuntu31 with the command:
 
gluster peer probe ubuntu32
 
You should immediately see peer probe: success.

 

root@ubuntu31:/home/kevin# gluster peer probe ubuntu32
 

You can check the status of peers with the command:
 
gluster peer status

 

We want the trusted pool to include all three nodes, so we do:

 
root@ubuntu31:/home/kevin# gluster peer probe ubuntu32
peer probe: success.
root@ubuntu31:/home/kevin# gluster peer probe ubuntu33
peer probe: success.
root@ubuntu31:/home/kevin# gluster peer status
Number of Peers: 2
 
Hostname: ubuntu32
Uuid: 6b4ca918-e77c-40d9-821c-e24fe7130afa
State: Peer in Cluster (Connected)
 
Hostname: ubuntu33
Uuid: e3b02490-9a14-45a3-ad0d-fcc66dd1c731
State: Peer in Cluster (Connected)
root@ubuntu31:/home/kevin#

 

Add the disk for the gluster storage on each machine:

 
Disk /dev/sda: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xffb101f9
 
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 2097151 2095104 1023M 83 Linux
 
NOTE: on these ubuntu cluster nodes the root system partition is on /dev/vda – hence the next free scsi disk is sda!
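
For reference, the single partition shown above could have been created with parted – a hedged sketch, to be run on each node before formatting:

parted -s /dev/sda mklabel msdos
parted -s /dev/sda mkpart primary 2048s 100%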

 
Format and mount the bricks
 
Perform this step on all the nodes
 
Note: We are going to use the XFS filesystem for the backend bricks.
 
However, Gluster is designed to work on top of any filesystem that supports extended attributes.
 
The following examples assume that the brick will be residing on /dev/sda1.

 

mkfs.xfs -i size=512 /dev/sda1
mkdir -p /gluster
echo '/dev/sda1 /gluster xfs defaults 1 2' >> /etc/fstab ; mount -a && mount
 

You should now see sda1 mounted at /gluster

 

root@ubuntu31:/home/kevin# mkfs.xfs -i size=512 /dev/sda1
meta-data=/dev/sda1 isize=512 agcount=4, agsize=65472 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=261888, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=1566, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
root@ubuntu31:/home/kevin#

 

Do the same on the other two nodes, mounting /dev/sda1 at /gluster on each:
 
echo '/dev/sda1 /gluster xfs defaults 1 2' >> /etc/fstab ; mount -a && mount

 

/dev/sda1 on /gluster type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
root@ubuntu31:/home/kevin# df /gluster
/dev/sda1 1041288 40296 1000992 4% /gluster
root@ubuntu31:/home/kevin#
 

root@ubuntu31:/home/kevin# gluster pool list
UUID Hostname State
6b4ca918-e77c-40d9-821c-e24fe7130afa ubuntu32 Connected
e3b02490-9a14-45a3-ad0d-fcc66dd1c731 ubuntu33 Connected
2eb4eca2-11e4-40ef-9b70-43bfa551121c localhost Connected
root@ubuntu31:/home/kevin#

 
On ubuntu31, ubuntu32 and ubuntu33, create the brick directory inside the mounted filesystem:
 
mkdir -p /gluster/brick

 

replica n is the number of replicas in the volume – here one brick per node, so replica 3. The general syntax is:
 
gluster volume create <volname> replica <n> transport tcp <server1>:<brick-path> <server2>:<brick-path> <server3>:<brick-path>
 
In our case:

 

gluster volume create glustervol1 replica 3 transport tcp ubuntu31:/gluster/brick ubuntu32:/gluster/brick ubuntu33:/gluster/brick

 
root@ubuntu31:/home/kevin# gluster volume create glustervol1 replica 3 transport tcp ubuntu31:/gluster/brick ubuntu32:/gluster/brick ubuntu33:/gluster/brick
volume create: glustervol1: success: please start the volume to access data
root@ubuntu31:/home/kevin#
 

Now we’ve created the replicated volume ‘glustervol1’ – start it and check the volume info.
 
gluster volume start glustervol1
gluster volume info glustervol1

 
root@ubuntu31:/home/kevin# gluster volume start glustervol1
volume start: glustervol1: success
root@ubuntu31:/home/kevin#

 
root@ubuntu31:/home/kevin# gluster volume info glustervol1
 
Volume Name: glustervol1
Type: Replicate
Volume ID: 9335962f-342e-423e-aefc-a87777a5b081
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ubuntu31:/gluster/brick
Brick2: ubuntu32:/gluster/brick
Brick3: ubuntu33:/gluster/brick
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
root@ubuntu31:/home/kevin#
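
It is also worth confirming that all three bricks and the self-heal daemons are online; the exact output will vary:

gluster volume status glustervol1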

 

on the client machines:

 

Install glusterfs-client on the Ubuntu client system using the apt command:
 
sudo apt install glusterfs-client -y
 
Once the glusterfs-client installation is complete, create a new directory ‘/mnt/glusterfs’:
 
mkdir -p /mnt/glusterfs

Then mount the replicated glusterfs volume on the ‘/mnt/glusterfs’ directory:

 

mount -t glusterfs ubuntu31:/glustervol1 /mnt/glusterfs

 

ubuntu31:/glustervol1 1041288 50808 990480 5% /mnt/glusterfs
root@yoga:/home/kevin#
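
To make the client mount persistent across reboots, and to sanity-check replication, something like the following can be used (the test filenames are arbitrary):

# persistent mount on the client
echo 'ubuntu31:/glustervol1 /mnt/glusterfs glusterfs defaults,_netdev 0 0' >> /etc/fstab

# quick replication check: create a few files on the client mount...
touch /mnt/glusterfs/testfile{1..5}

# ...then on each gluster node the same files should appear in the brick directory
ls /gluster/brick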

 

Continue Reading
