How to Install Cluster Fencing Using Libvirt on KVM Virtual Machines


These are my practical notes on installing libvirt fencing on CentOS cluster nodes running as virtual machines on the KVM hypervisor platform.

 

 

NOTE: If a local firewall is enabled, open the chosen TCP port (in this example, the default of 1229) to the host.

 

Alternatively, if you are using a testing or training environment, you can disable the firewall entirely. Do not do the latter in production environments!
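For example, with firewalld, opening the default port looks roughly like this (assuming the default zone; adjust if your setup uses a different zone):

firewall-cmd --permanent --add-port=1229/tcp
firewall-cmd --permanent --add-port=1229/udp
firewall-cmd --reload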

 

1. On the KVM host machine, install the fence-virtd, fence-virtd-libvirt, and fence-virtd-multicast packages. These packages provide the virtual machine fencing daemon, libvirt integration, and multicast listener, respectively.

yum -y install fence-virtd fence-virtd-libvirt fence-virtd-multicast

 

2. On the KVM host, create a shared secret key called /etc/cluster/fence_xvm.key. The target directory /etc/cluster needs to be created manually on the nodes and the KVM host.

 

mkdir -p /etc/cluster

 

dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=1k count=4

 

3. Distribute the shared secret key /etc/cluster/fence_xvm.key from the KVM host to all cluster nodes, keeping the name and the path the same as on the KVM host.

 

scp /etc/cluster/fence_xvm.key centos1vm:/etc/cluster/

 

and likewise copy it to the other nodes.
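If you have several nodes, a small shell loop saves typing; the hostnames centos2vm and centos3vm below are placeholders for your own node names:

for node in centos2vm centos3vm; do scp /etc/cluster/fence_xvm.key ${node}:/etc/cluster/; done

Afterwards you can run cksum /etc/cluster/fence_xvm.key on the host and on each node to confirm the copies are identical.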

4. On the KVM host, configure the fence_virtd daemon. Defaults can be used for most options, but make sure to select the libvirt back end and the multicast listener. Also make sure you give the correct path to the shared key you just created (here /etc/cluster/fence_xvm.key):

 

fence_virtd -c
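The wizard writes its answers to /etc/fence_virt.conf. For reference, the relevant parts of that file should end up looking roughly like the sketch below; the interface virbr0 and the multicast defaults shown here are assumptions and may differ on your host:

fence_virtd {
    listener = "multicast";
    backend = "libvirt";
}

listeners {
    multicast {
        key_file = "/etc/cluster/fence_xvm.key";
        address = "225.0.0.12";
        port = "1229";
        interface = "virbr0";
    }
}

backends {
    libvirt {
        uri = "qemu:///system";
    }
}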

5. Enable and start the fence_virtd daemon on the hypervisor.

 

systemctl enable fence_virtd
systemctl start fence_virtd

6. Install fence_virtd on the cluster nodes as well, then enable and start it there too:

 

root@yoga:/etc# systemctl enable fence_virtd
Synchronizing state of fence_virtd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable fence_virtd
root@yoga:/etc# systemctl start fence_virtd
root@yoga:/etc# systemctl status fence_virtd
● fence_virtd.service - Fence-Virt system host daemon
Loaded: loaded (/lib/systemd/system/fence_virtd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-02-23 14:13:20 CET; 6min ago
Docs: man:fence_virtd(8)
man:fence_virt.conf(5)
Main PID: 49779 (fence_virtd)
Tasks: 1 (limit: 18806)
Memory: 3.2M
CGroup: /system.slice/fence_virtd.service
└─49779 /usr/sbin/fence_virtd -w

 

Feb 23 14:13:20 yoga systemd[1]: Starting Fence-Virt system host daemon…
root@yoga:/etc#

 

7. Test the KVM host multicast connectivity with:

 

fence_xvm -o list

root@yoga:/etc# fence_xvm -o list
centos-base c023d3d6-b2b9-4dc2-b0c7-06a27ddf5e1d off
centos1 2daf2c38-b9bf-43ab-8a96-af124549d5c1 on
centos2 3c571551-8fa2-4499-95b5-c5a8e82eb6d5 on
centos3 2969e454-b569-4ff3-b88a-0f8ae26e22c1 on
centosstorage 501a3dbb-1088-48df-8090-adcf490393fe off
suse-base 0b360ee5-3600-456d-9eb3-d43c1ee4b701 off
suse1 646ce77a-da14-4782-858e-6bf03753e4b5 off
suse2 d9ae8fd2-eede-4bd6-8d4a-f2d7d8c90296 off
suse3 7ad89ea7-44ae-4965-82ba-d29c446a0607 off
root@yoga:/etc#
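If you want to go one step further than a listing and confirm that fencing really works, you can query or (on a test VM only!) reboot a guest directly; centos2 here is just an example domain name taken from the list above:

fence_xvm -o status -H centos2
fence_xvm -o reboot -H centos2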

 

 

8. Create your fencing devices, one for each node:

 

pcs stonith create <fence device name> fence_xvm port="<KVM VM name>" pcmk_host_list="<FQDN of the cluster node>"

 

Run it once per node, with the values set accordingly for each host. It will look like this:

 

MAKE SURE YOU SET ALL THE NAMES CORRECTLY!

 

On ONE of the nodes, create all the following fence devices; usually this is done on the DC (current designated coordinator) node:

 

[root@centos1 etc]# pcs stonith create fence_centos1 fence_xvm port="centos1" pcmk_host_list="centos1.localdomain"
[root@centos1 etc]# pcs stonith create fence_centos2 fence_xvm port="centos2" pcmk_host_list="centos2.localdomain"
[root@centos1 etc]# pcs stonith create fence_centos3 fence_xvm port="centos3" pcmk_host_list="centos3.localdomain"
[root@centos1 etc]#
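To double-check what you have just created, display the stonith device definitions; depending on your pcs version, one of the following should work:

pcs stonith config
pcs stonith show --full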

 

9. Next, enable fencing on the cluster nodes.

 

Make sure the stonith-enabled property is set to true.

Check with:

 

pcs -f stonith_cfg property

 

If the stonith-enabled property is set to false, then you can manually set it to true from any one of the cluster nodes:

 

pcs -f stonith_cfg property set stonith-enabled=true

 

[root@centos1 ~]# pcs -f stonith_cfg property
Cluster Properties:
stonith-enabled: true
[root@centos1 ~]#

 

You can also clean up the fence devices, for example:

pcs stonith cleanup fence_centos1

and likewise for fence_centos2 and fence_centos3.

 

[root@centos1 ~]# pcs stonith cleanup fence_centos1
Cleaned up fence_centos1 on centos3.localdomain
Cleaned up fence_centos1 on centos2.localdomain
Cleaned up fence_centos1 on centos1.localdomain
Waiting for 3 replies from the controller
… got reply
… got reply
… got reply (done)
[root@centos1 ~]#

 

 

If a stonith id or node is not specified, then all stonith resources and devices will be cleaned:

pcs stonith cleanup

 

Then check the fencing status with:

 

pcs stonith status

 

[root@centos1 ~]# pcs stonith status
* fence_centos1 (stonith:fence_xvm): Started centos3.localdomain
* fence_centos2 (stonith:fence_xvm): Started centos3.localdomain
* fence_centos3 (stonith:fence_xvm): Started centos3.localdomain
[root@centos1 ~]#

 

 

Some other stonith fencing commands:

 

To list the available fence agents, execute the command below on any of the cluster nodes:

 

# pcs stonith list

 

(This can take several seconds; don't kill it!)

 

root@ubuntu1:~# pcs stonith list
apcmaster - APC MasterSwitch
apcmastersnmp - APC MasterSwitch (SNMP)
apcsmart - APCSmart
baytech - BayTech power switch
bladehpi - IBM BladeCenter (OpenHPI)
cyclades - Cyclades AlterPath PM
external/drac5 - DRAC5 STONITH device
.. .. .. list truncated…

 

 

To get more details about a particular fence agent, you can use:

 

root@ubuntu1:~# pcs stonith describe fence_xvm
fence_xvm - Fence agent for virtual machines

 

fence_xvm is an I/O Fencing agent which can be used with virtual machines.

 

Stonith options:
debug: Specify (stdin) or increment (command line) debug level
ip_family: IP Family ([auto], ipv4, ipv6)
multicast_address: Multicast address (default=225.0.0.12 / ff05::3:1)
ipport: TCP, Multicast, VMChannel, or VM socket port (default=1229)
.. .. .. list truncated . ..

 
