Tag Archives: KVM

How To Configure KVM Virtualization on Ubuntu 20.04 Hosts


 

KVM component packages:

 

qemu-kvm – The main package
libvirt – Includes the libvirtd server exporting the virtualization support
libvirt-client – This package contains virsh and other client-side utilities
virt-install – Utility to install virtual machines
virt-viewer – Utility to display graphical console for a virtual machine

 

 

Check for Virtualization Support on Ubuntu 20.04

 

Before installing KVM, check if your CPU supports hardware virtualization:

 

egrep -c '(vmx|svm)' /proc/cpuinfo

 

Check the number given in the output:

 

root@asus:~# egrep -c '(vmx|svm)' /proc/cpuinfo

8
root@asus:~#

 

 

If the command returns 0, your CPU does not support hardware virtualization and cannot run KVM guests. If it returns 1 or more, you can proceed with the installation.
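
To see which of the two flags your CPU exposes (vmx indicates Intel VT-x, svm indicates AMD-V), a quick complementary check is:

egrep -o '(vmx|svm)' /proc/cpuinfo | sort -u

lscpu also reports this on its "Virtualization:" line:

lscpu | grep -i virtualization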

 

Next check if your system can use KVM acceleration:

 

root@asus:~# kvm-ok

 

INFO: /dev/kvm exists
KVM acceleration can be used
root@asus:~#

 

The kvm-ok utility is provided by the cpu-checker package; if the command is not found, install that package first:

 

sudo apt install cpu-checker

 

Then restart the terminal and run kvm-ok again.

 

You can now start installing KVM.

 

Install KVM on Ubuntu 20.04

Overview of the steps involved:

 

Install related packages using apt
Authorize users to run VMs

Verify that the installation was successful

Step 1: Install KVM Packages

 

First, update the repositories:

 

sudo apt update

 

then:

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils

 

root@asus:~# apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils

 

Reading package lists… Done
Building dependency tree
Reading state information… Done

 

bridge-utils is already the newest version (1.6-3ubuntu1).
libvirt-clients is already the newest version (6.6.0-1ubuntu3.5).
libvirt-daemon-system is already the newest version (6.6.0-1ubuntu3.5).
qemu-kvm is already the newest version (1:5.0-5ubuntu9.9).

 

0 upgraded, 0 newly installed, 0 to remove and 9 not upgraded.

 

root@asus:~#

 

Step 2: Authorize Users

 

1. Only members of the libvirt and kvm user groups can run virtual machines. Add a user to the libvirt group:

 

sudo adduser 'username' libvirt

 

Replace username with the actual username; in this case:

 

adduser kevin libvirt

 

root@asus:~# adduser kevin libvirt
The user `kevin' is already a member of `libvirt'.
root@asus:~#

 

 

Adding a user to the libvirt usergroup

 

Next do the same for the kvm group:

 

sudo adduser 'username' kvm

 

Adding user to the kvm usergroup

 

adduser kevin kvm

root@asus:~# adduser kevin kvm
The user `kevin' is already a member of `kvm'.
root@asus:~#

 

 

(NOTE: I had already added this information and installation during a previous session)

 

To remove a user from the libvirt or kvm group, replace adduser with deluser using the above syntax.
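
To confirm the group memberships have taken effect (you may need to log out and back in first), check with the id command, for example:

id kevin

The output should list both libvirt and kvm among the user's groups.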

 

Step 3: Verify the Installation:

 

virsh list --all

 

 

root@asus:~# virsh list --all
 Id   Name                   State
------------------------------------------
 -    centos-base centos8    shut off
 -    ceph-base centos7      shut off
 -    ceph-mon               shut off
 -    ceph-osd0              shut off
 -    ceph-osd1              shut off
 -    ceph-osd2              shut off
 -    router1 10.0.8.100     shut off
 -    router2 10.0.9.100     shut off

 

root@asus:~#

 

The above list shows the virtual machines that already exist on this system.

 

 

 

Then make sure that the needed kernel modules have been loaded:

 

 

root@asus:~# lsmod | grep kvm
kvm_amd 102400 0
kvm 724992 1 kvm_amd
ccp 102400 1 kvm_amd
root@asus:~#

 

If your host machine has an Intel CPU, you will see kvm_intel listed. In my case the host has an AMD processor, so kvm_amd is shown.

 

If the modules are not loaded automatically, you can load them manually with the modprobe command (use kvm_amd instead on AMD hosts):

 

# modprobe kvm_intel

 

Finally, start the libvirtd daemon. The following command both enables it at boot time and starts it immediately:

 

systemctl enable --now libvirtd

 

root@asus:~# systemctl enable --now libvirtd
root@asus:~#

 

 

Use the systemctl command to check the status of libvirtd:

 

systemctl status libvirtd

root@asus:~# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-08-18 22:17:51 CEST; 21h ago
TriggeredBy: ● libvirtd-ro.socket
● libvirtd.socket
● libvirtd-admin.socket
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 1140 (libvirtd)
Tasks: 21 (limit: 32768)
Memory: 29.3M
CGroup: /system.slice/libvirtd.service
├─1140 /usr/sbin/libvirtd
├─1435 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/>
├─1436 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/>
├─1499 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/10.0.8.0.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt>
└─1612 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/10.0.9.0.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt>

Aug 19 19:50:39 asus dnsmasq[1435]: reading /etc/resolv.conf
Aug 19 19:50:39 asus dnsmasq[1612]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1499]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1435]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1499]: reading /etc/resolv.conf
Aug 19 19:50:39 asus dnsmasq[1435]: reading /etc/resolv.conf
Aug 19 19:50:39 asus dnsmasq[1499]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1435]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1612]: reading /etc/resolv.conf
Aug 19 19:50:39 asus dnsmasq[1612]: using nameserver 127.0.0.53#53

 

Next, install virt-manager, a GUI tool for creating and managing VMs:

 

sudo apt install virt-manager

 

root@asus:~# apt install virt-manager
Reading package lists… Done
Building dependency tree
Reading state information… Done
virt-manager is already the newest version (1:2.2.1-4ubuntu2).
0 upgraded, 0 newly installed, 0 to remove and 9 not upgraded.
root@asus:~#

 

 

To use the Virt Manager GUI

 

1. Start virt-manager with:

 

sudo virt-manager

 

 

Alternatively, use the virt-install command-line tool to create a VM from the terminal.

 

The syntax is:

 

 

virt-install --option1=value --option2=value ...

 

The options passed to the command define the parameters of the installation.

 

Here is what each of them means:

 

Option         Description
--name         The name you give to the VM
--description  A short description of the VM
--ram          The amount of RAM (in MiB) to allocate to the VM
--vcpus        The number of virtual CPUs to allocate to the VM
--disk         The location of the VM's disk (if you specify a qcow2 disk file that does not exist, it will be created automatically)
--cdrom        The location of the ISO file you downloaded
--graphics     Specifies the display type
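
As an illustration of how these options fit together, an Ubuntu 20.04 guest could be created roughly as follows. The VM name, RAM and disk sizes, ISO path, and os-variant are placeholder values for this sketch; if ubuntu20.04 is not known to your osinfo database, pick a variant listed by osinfo-query os or drop the option:

virt-install \
--name ubuntu-guest \
--description "Test Ubuntu 20.04 VM" \
--ram 2048 \
--vcpus 2 \
--disk path=/var/lib/libvirt/images/ubuntu-guest.qcow2,size=10 \
--cdrom /path/to/ubuntu-20.04-live-server-amd64.iso \
--os-variant ubuntu20.04 \
--graphics vnc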

 

 

For the full list of available options, run:

 

virt-install --help

 

Here are some examples of creating a virtual machine with the virt-install CLI instead of the virt-manager GUI:

 

Installing a virtual machine from an ISO image

# virt-install \
--name guest1-rhel7 \
--memory 2048 \
--vcpus 2 \
--disk size=8 \
--cdrom /path/to/rhel7.iso \
--os-variant rhel7

 

The --cdrom /path/to/rhel7.iso option specifies that the VM will be installed from the CD or DVD image at the given location.

 

Importing a virtual machine from an existing virtual disk image:

 

# virt-install \
--name guest1-rhel7 \
--memory 2048 \
--vcpus 2 \
--disk /path/to/imported/disk.qcow \
--import \
--os-variant rhel7

 

The --import option specifies that the virtual machine will be imported from the existing disk image given by the --disk /path/to/imported/disk.qcow option.

 

Installing a virtual machine from a network location:

# virt-install \
--name guest1-rhel7 \
--memory 2048 \
--vcpus 2 \
--disk size=8 \
--location http://example.com/path/to/os \
--os-variant rhel7

The --location http://example.com/path/to/os option specifies that the installation tree is at the specified network location.

 

Installing a virtual machine using a Kickstart file:

 

# virt-install \
--name guest1-rhel7 \
--memory 2048 \
--vcpus 2 \
--disk size=8 \
--location http://example.com/path/to/os \
--os-variant rhel7 \
--initrd-inject /path/to/ks.cfg \
--extra-args="ks=file:/ks.cfg console=tty0 console=ttyS0,115200n8"

 

The --initrd-inject and --extra-args options specify that the virtual machine will be installed using a Kickstart file.

 

 

To change VM parameters you can use virsh as an alternative to the virt-manager GUI. For example:

 

virsh edit linuxconfig-vm

 

This opens the XML configuration file of the specified VM in your default editor.

 

Finally, reboot the VM:

 

virsh reboot linuxconfig-vm

 

 

 

To autostart a virtual machine on host boot-up using virsh:

 

virsh autostart linuxconfig-vm

 

To disable this option:

 

virsh autostart --disable linuxconfig-vm
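
For reference, a few other day-to-day virsh commands, shown here against the same example VM name:

virsh start linuxconfig-vm      # start the VM
virsh shutdown linuxconfig-vm   # graceful (ACPI) shutdown
virsh destroy linuxconfig-vm    # forced power-off
virsh dominfo linuxconfig-vm    # basic information about the VM
virsh undefine linuxconfig-vm   # remove the VM definition (its disk images are not deleted)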

 

 

 

 


How To Install Cluster Fencing Using libvirt on KVM Virtual Machines

These are my practical notes on installing libvirt fencing on CentOS cluster nodes running as virtual machines on the KVM hypervisor platform.

 

 

NOTE: If a local firewall is enabled, open the chosen TCP port (in this example, the default of 1229) to the host.

 

Alternatively, if you are using a testing or training environment you can disable the firewall. Do not do this in production environments!

 

1. On the KVM host machine, install the fence-virtd, fence-virtd-libvirt, and fence-virtd-multicast packages. These packages provide the virtual machine fencing daemon, libvirt integration, and multicast listener, respectively.

yum -y install fence-virtd fence-virtd-libvirt fence-virtd-multicast

 

2. On the KVM host, create a shared secret key called /etc/cluster/fence_xvm.key. The target directory /etc/cluster needs to be created manually on the nodes and the KVM host.

 

mkdir -p /etc/cluster

 

dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=1k count=4

 

then distribute the key from the KVM host to all the nodes:

3. Distribute the shared secret key /etc/cluster/fence_xvm.key to all cluster nodes, keeping the name and the path the same as on the KVM host.

 

scp /etc/cluster/fence_xvm.key centos1vm:/etc/cluster/

 

and copy it to the other nodes as well, as sketched below.
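
For example, assuming the other two node VMs in this setup are reachable as centos2vm and centos3vm (adjust the hostnames to your own environment):

for node in centos2vm centos3vm; do
    ssh $node mkdir -p /etc/cluster
    scp /etc/cluster/fence_xvm.key $node:/etc/cluster/
done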

4. On the KVM host, configure the fence_virtd daemon. Defaults can be used for most options, but make sure to select the libvirt backend and the multicast listener. Also make sure you give the correct location of the shared key you just created (here /etc/cluster/fence_xvm.key):

 

fence_virtd -c
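
Running fence_virtd -c writes /etc/fence_virt.conf on the host. With the defaults plus the libvirt backend, the multicast listener, and the key file created above, the file ends up looking roughly like the sketch below (the generated file will also contain a distro-specific module_path entry; treat the values as illustrative rather than as a template to copy verbatim):

fence_virtd {
    listener = "multicast";
    backend = "libvirt";
}

listeners {
    multicast {
        key_file = "/etc/cluster/fence_xvm.key";
        address = "225.0.0.12";
        port = "1229";
        interface = "virbr0";
        family = "ipv4";
    }
}

backends {
    libvirt {
        uri = "qemu:///system";
    }
}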

5. Enable and start the fence_virtd daemon on the hypervisor.

 

systemctl enable fence_virtd
systemctl start fence_virtd

6. Also install, enable, and start fence_virtd on the cluster nodes:

 

root@yoga:/etc# systemctl enable fence_virtd
Synchronizing state of fence_virtd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable fence_virtd
root@yoga:/etc# systemctl start fence_virtd
root@yoga:/etc# systemctl status fence_virtd
● fence_virtd.service – Fence-Virt system host daemon
Loaded: loaded (/lib/systemd/system/fence_virtd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-02-23 14:13:20 CET; 6min ago
Docs: man:fence_virtd(8)
man:fence_virt.con(5)
Main PID: 49779 (fence_virtd)
Tasks: 1 (limit: 18806)
Memory: 3.2M
CGroup: /system.slice/fence_virtd.service
└─49779 /usr/sbin/fence_virtd -w

 

Feb 23 14:13:20 yoga systemd[1]: Starting Fence-Virt system host daemon…
root@yoga:/etc#

 

7. Test the KVM host multicast connectivity with:

 

fence_xvm -o list

root@yoga:/etc# fence_xvm -o list
centos-base c023d3d6-b2b9-4dc2-b0c7-06a27ddf5e1d off
centos1 2daf2c38-b9bf-43ab-8a96-af124549d5c1 on
centos2 3c571551-8fa2-4499-95b5-c5a8e82eb6d5 on
centos3 2969e454-b569-4ff3-b88a-0f8ae26e22c1 on
centosstorage 501a3dbb-1088-48df-8090-adcf490393fe off
suse-base 0b360ee5-3600-456d-9eb3-d43c1ee4b701 off
suse1 646ce77a-da14-4782-858e-6bf03753e4b5 off
suse2 d9ae8fd2-eede-4bd6-8d4a-f2d7d8c90296 off
suse3 7ad89ea7-44ae-4965-82ba-d29c446a0607 off
root@yoga:/etc#

 

 

8. Create your fencing devices, one for each node:

 

pcs stonith create <name for our fencing device for this vm cluster host> fence_xvm port="<the KVM vm name>" pcmk_host_list="<FQDN of the cluster host>"

 

Run this once for each node, with the values set accordingly for each host. So it will look like this:

 

MAKE SURE YOU SET ALL THE NAMES CORRECTLY!

 

On ONE of the nodes, create all of the following fence devices; usually one does this on the DC (current designated co-ordinator) node:

 

[root@centos1 etc]# pcs stonith create fence_centos1 fence_xvm port="centos1" pcmk_host_list="centos1.localdomain"
[root@centos1 etc]# pcs stonith create fence_centos2 fence_xvm port="centos2" pcmk_host_list="centos2.localdomain"
[root@centos1 etc]# pcs stonith create fence_centos3 fence_xvm port="centos3" pcmk_host_list="centos3.localdomain"
[root@centos1 etc]#

 

9. Next, enable fencing on the cluster nodes.

 

Make sure the stonith-enabled property is set to true.

 

Check with:

 

pcs -f stonith_cfg property

 

If the stonith-enabled property is set to false, you can manually set it to true for the cluster:

 

pcs -f stonith_cfg property set stonith-enabled=true

 

[root@centos1 ~]# pcs -f stonith_cfg property
Cluster Properties:
stonith-enabled: true
[root@centos1 ~]#

 

You can also clean up the fencing resources, here fence_centos1 (and likewise fence_centos2 and fence_centos3):

pcs stonith cleanup fence_centos1

 

[root@centos1 ~]# pcs stonith cleanup fence_centos1
Cleaned up fence_centos1 on centos3.localdomain
Cleaned up fence_centos1 on centos2.localdomain
Cleaned up fence_centos1 on centos1.localdomain
Waiting for 3 replies from the controller
… got reply
… got reply
… got reply (done)
[root@centos1 ~]#

 

 

If a stonith id or node is not specified then all stonith resources and devices will be cleaned.

pcs stonith cleanup

 

Then check the fencing status:

 

pcs stonith status

 

[root@centos1 ~]# pcs stonith status
* fence_centos1 (stonith:fence_xvm): Started centos3.localdomain
* fence_centos2 (stonith:fence_xvm): Started centos3.localdomain
* fence_centos3 (stonith:fence_xvm): Started centos3.localdomain
[root@centos1 ~]#

 

 

Some other stonith fencing commands:

 

To list the available fence agents, execute the command below on any of the cluster nodes:

 

# pcs stonith list

 

(This can take several seconds; don't kill it!)

 

root@ubuntu1:~# pcs stonith list
apcmaster – APC MasterSwitch
apcmastersnmp – APC MasterSwitch (SNMP)
apcsmart – APCSmart
baytech – BayTech power switch
bladehpi – IBM BladeCenter (OpenHPI)
cyclades – Cyclades AlterPath PM
external/drac5 – DRAC5 STONITH device
.. .. .. list truncated…

 

 

To get more details about the respective fence agent you can use:

 

root@ubuntu1:~# pcs stonith describe fence_xvm
fence_xvm - Fence agent for virtual machines

 

fence_xvm is an I/O Fencing agent which can be used with virtual machines.

 

Stonith options:
debug: Specify (stdin) or increment (command line) debug level
ip_family: IP Family ([auto], ipv4, ipv6)
multicast_address: Multicast address (default=225.0.0.12 / ff05::3:1)
ipport: TCP, Multicast, VMChannel, or VM socket port (default=1229)
.. .. .. list truncated . ..

 


Cluster Fencing Overview

There are two main types of cluster fencing:  power fencing and fabric fencing.

 

Both of these fencing methods require a fencing device to be implemented, such as a power switch or the virtual fencing daemon and fencing agent software to take care of communication between the cluster and the fencing device.

 

Power fencing

 

Cuts ELECTRIC POWER to the node. Known as STONITH ("Shoot The Other Node In The Head"). Make sure ALL the power supplies to a node are cut off.

 

Two different kinds of power fencing devices exist:

 

External fencing hardware: for example, a network-controlled power socket block which cuts off power.

 

Internal fencing hardware: for example ILO (Integrated Lights-Out from HP), DRAC, IPMI (Intelligent Platform Management Interface), or virtual machine fencing. These also power off the hardware of the node.

 

Power fencing can be configured to turn the target machine off and keep it off, or to turn it off and then on again. Turning a machine back on has the added benefit that the machine should come back up cleanly and rejoin the cluster if the cluster services have been enabled.

 

BUT: It is best NOT to permit an automatic rejoin to the cluster. If a node has failed, there will be a cause, and this needs to be investigated and remedied first.

 

Power fencing for a node with multiple power supplies must be configured to ensure ALL power supplies are turned off before being turned on again.

 

If this is not done, the node to be fenced never actually gets properly fenced because it still has power, defeating the point of the fencing operation.

 

It is important to bear in mind that you should NOT use an IPMI device which shares power or network access with the host, because a power or network failure would then cause both the host AND its fencing device to fail.

 

Fabric fencing

 

disconnects a node from STORAGE. This is done either by closing ports on an FC (Fibre Channel) switch or by using SCSI reservations.

 

The node will not automatically rejoin.

 

If a node is fenced only with fabric fencing and not in combination with power fencing, then the system administrator must ensure the machine will be ready to rejoin the cluster. Usually this will be done by rebooting the failed node.

 

There are a variety of different fencing agents available to implement cluster node fencing.

 

Multiple fencing

 

Fencing methods can be combined; this is sometimes referred to as "nested fencing".

 

For example, a first-level fence device can cut off Fibre Channel access by blocking ports on the FC switch, with a second-level device, such as an ILO interface, powering down the offending machine.

 

TIP: Don’t run production environment clusters without fencing enabled!

 

If a node fails, you cannot admit it back into the cluster unless it has been fenced.

 

There are a number of different ways of implementing these fencing systems. The notes below give an overview of some of these systems.

 

SCSI fencing

 

SCSI fencing does not require any physical fencing hardware.

 

SCSI Reservation is a mechanism which allows SCSI clients or initiators to reserve a LUN for their exclusive access only and prevents other initiators from accessing the device.

 

SCSI reservations are used to control access to a shared SCSI device such as a hard drive.

 

An initiator configures a reservation on a LUN to prevent another initiator or SCSI client from making changes to the LUN. This is similar in concept to file locking.

 

SCSI reservations are defined and released by the SCSI initiator.
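
For illustration only, a SCSI fencing device in a pcs-managed cluster is typically created with the fence_scsi agent along the following lines. The device path and host list here are placeholders for this sketch, and fence_scsi requires the provides=unfencing meta attribute so that nodes are unfenced before resources start on them again:

pcs stonith create fence_scsi_dev fence_scsi \
pcmk_host_list="centos1.localdomain centos2.localdomain centos3.localdomain" \
devices="/dev/disk/by-id/<shared-lun-id>" \
meta provides=unfencing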

 

SBD fencing

 

SBD stands for STONITH Block Device, sometimes also called "Storage-Based Death".

 

The SBD daemon together with the STONITH agent, provides a means of enabling STONITH and fencing in clusters through the means of shared storage, rather than requiring external power switching.

The SBD daemon runs on all cluster nodes and monitors the shared storage. SBD uses its own small shared disk partition for its administrative purposes. Each node has a small storage slot on the partition.

 

When a node loses access to the majority of SBD devices, or notices that another node has written a fencing request to its SBD storage slot, SBD ensures that the node immediately fences itself.

 

Virtual machine fencing

Cluster nodes which run as virtual machines on KVM can be fenced using the KVM software interface libvirt and KVM software fencing device fence-virtd running on the KVM hypervisor host.

 

KVM Virtual machine fencing works using multicast mode by sending a fencing request signed with a shared secret key to the libvirt fencing multicast group.

 

This means that the node virtual machines can even be running on different hypervisor systems, provided that all the hypervisors have fence-virtd configured for the same multicast group, and are also using the same shared secret.

 

A note about monitoring STONITH resources

 

Fencing devices are a vital part of high-availability clusters, but they involve system and traffic overhead. Power management devices can be adversely impacted by high levels of broadcast traffic.

 

Also, some devices cannot process more than ten or so connections per minute.  Most cannot handle more than one connection session at any one moment and can become confused if two clients are attempting to connect at the same time.

 

For most fencing devices a monitoring interval of around 1800 seconds (30 minutes) and a status check on the power fencing devices every couple of hours should generally be sufficient.
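
With pcs, the monitor interval of an existing fence device can be tuned through its monitor operation. As a sketch only (using the fence_centos1 device from the earlier example, and assuming your pcs version accepts op settings on stonith update; otherwise pcs resource update handles a stonith resource id in the same way):

pcs stonith update fence_centos1 op monitor interval=1800s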

 

Redundant Fencing

 

Redundant or multiple fencing is where fencing methods are combined. This is sometimes also referred to as “nested fencing”.
 

For example, as first level fencing, one fence device can cut off Fibre Channel by blocking ports on the FC switch, and a second level fencing in which an ILO interface powers down the offending machine.
 

You add different fencing levels by using pcs stonith level.
 

All level 1 fencing methods are tried first; if none succeed, the level 2 devices are tried.
 

Set with:
 

pcs stonith level add <level> <node> <devices>

e.g.
 
pcs stonith level add 1 centos1 fence_centos1_ilo
 

pcs stonith level add 2 centos1 fence_centos1_apc

 

To remove a level use:
 

pcs stonith level remove <level> [node] [devices]
 

To view the fence level configuration use:
 

pcs stonith level

 


How To Install Pacemaker and Corosync on CentOS

This article sets out how to install the cluster management software Pacemaker and the cluster membership software Corosync on CentOS version 8.

 

For this example, we are setting up a three node cluster using virtual machines on the Linux KVM hypervisor platform.

 

The virtual machines have the KVM names and hostnames centos1, centos2, and centos3.

 

Each node has two network interfaces: one for the KVM bridged NAT network (KVM network name: default, via eth0) and the other for the cluster subnet 10.0.8.0 (KVM network name: network-10.0.8.0, via eth1). DHCP is not used for either of these interfaces; Pacemaker and Corosync require static IP addresses.

 

The machine centos1 will be our current designated co-ordinator (DC) cluster node.

 

First, make sure you have created an SSH key for root on the first node:

 

[root@centos1 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:********** root@centos1.localdomain

 

then copy the ssh key to the other nodes:

 

ssh-copy-id centos2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

 

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
(if you think this is a mistake, you may want to use -f option)

 

[root@centos1 .ssh]#

Next, you need to enable the HighAvailability repository:

 

[root@centos1 ~]# yum repolist all | grep -i HighAvailability
ha CentOS Stream 8 – HighAvailability disabled
[root@centos1 ~]# dnf config-manager --set-enabled ha
[root@centos1 ~]# yum repolist all | grep -i HighAvailability
ha CentOS Stream 8 – HighAvailability enabled
[root@centos1 ~]#

 

Next, install the following packages:

 

[root@centos1 ~]# yum install epel-release

 

[root@centos1 ~]# yum install pcs fence-agents-all

 

Next, STOP and DISABLE the firewall for lab-testing convenience (do not do this in production):

 

[root@centos1 ~]# systemctl stop firewalld
[root@centos1 ~]#
[root@centos1 ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@centos1 ~]#

 

then check with:

 

[root@centos1 ~]# systemctl status firewalld
● firewalld.service – firewalld – dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)

 

Next we enable pcsd, the pcs cluster configuration daemon:

 

[root@centos1 ~]# systemctl enable --now pcsd
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
[root@centos1 ~]#

 

Then change the default password for the hacluster user (do this on every node):

 

echo "<new-password>" | passwd --stdin hacluster

 

Changing password for user hacluster.

passwd: all authentication tokens updated successfully.
[root@centos2 ~]#
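
Assuming root SSH access to the other nodes is already in place (as set up earlier), and using a placeholder for the password, the same can be done for the remaining nodes in one go:

for node in centos2 centos3; do
    ssh $node 'echo "<new-password>" | passwd --stdin hacluster'
done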

 

Then, on only ONE of the nodes (I am doing it on centos1, as this will be the default DC for the cluster), authenticate the cluster hosts:

 

pcs host auth centos1.localdomain centos2.localdomain centos3.localdomain

 

NOTE: the correct command is pcs host auth, not pcs cluster auth as given in some older instruction material; the syntax has since changed.

 

[root@centos1 .ssh]# pcs host auth centos1.localdomain centos2.localdomain centos3.localdomain
Username: hacluster
Password:
centos1.localdomain: Authorized
centos2.localdomain: Authorized
centos3.localdomain: Authorized
[root@centos1 .ssh]#

 

Next, on centos1, as this will be our default DC (designated coordinator node) we create a corosync secret key:

 

[root@centos1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 2048 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
[root@centos1 corosync]#

 

Then copy the key to the other two nodes:

 

scp /etc/corosync/authkey centos2:/etc/corosync/
scp /etc/corosync/authkey centos3:/etc/corosync/

 

[root@centos1 corosync]# pcs cluster setup hacluster centos1.localdomain addr=10.0.8.11 centos2.localdomain addr=10.0.8.12 centos3.localdomain addr=10.0.8.13
Sending 'corosync authkey', 'pacemaker authkey' to 'centos1.localdomain', 'centos2.localdomain', 'centos3.localdomain'
centos1.localdomain: successful distribution of the file 'corosync authkey'
centos1.localdomain: successful distribution of the file 'pacemaker authkey'
centos2.localdomain: successful distribution of the file 'corosync authkey'
centos2.localdomain: successful distribution of the file 'pacemaker authkey'
centos3.localdomain: successful distribution of the file 'corosync authkey'
centos3.localdomain: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'centos1.localdomain', 'centos2.localdomain', 'centos3.localdomain'
centos1.localdomain: successful distribution of the file 'corosync.conf'
centos2.localdomain: successful distribution of the file 'corosync.conf'
centos3.localdomain: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
[root@centos1 corosync]#

 

Note that I had to specify the IP addresses for the nodes. This is because these nodes each have TWO network interfaces with separate IP addresses. If the nodes had only one network interface, you could leave out the addr= setting.

 

Next you can start the cluster:

 

[root@centos1 corosync]# pcs cluster start
Starting Cluster…
[root@centos1 corosync]#
[root@centos1 corosync]#
[root@centos1 corosync]# pcs cluster status
Cluster Status:
Cluster Summary:
* Stack: unknown
* Current DC: NONE
* Last updated: Mon Feb 22 12:57:37 2021
* Last change: Mon Feb 22 12:57:35 2021 by hacluster via crmd on centos1.localdomain
* 3 nodes configured
* 0 resource instances configured
Node List:
* Node centos1.localdomain: UNCLEAN (offline)
* Node centos2.localdomain: UNCLEAN (offline)
* Node centos3.localdomain: UNCLEAN (offline)

 

PCSD Status:
centos1.localdomain: Online
centos3.localdomain: Online
centos2.localdomain: Online
[root@centos1 corosync]#

 

 

The Node List says “UNCLEAN”.

 

So I did:

 

pcs cluster start centos1.localdomain
pcs cluster start centos2.localdomain
pcs cluster start centos3.localdomain
pcs cluster status

 

Then the cluster started in a clean running state:

 

[root@centos1 cluster]# pcs cluster status
Cluster Status:
Cluster Summary:
* Stack: corosync
* Current DC: centos1.localdomain (version 2.0.5-7.el8-ba59be7122) – partition with quorum
* Last updated: Mon Feb 22 13:22:29 2021
* Last change: Mon Feb 22 13:17:44 2021 by hacluster via crmd on centos1.localdomain
* 3 nodes configured
* 0 resource instances configured
Node List:
* Online: [ centos1.localdomain centos2.localdomain centos3.localdomain ]

 

PCSD Status:
centos1.localdomain: Online
centos2.localdomain: Online
centos3.localdomain: Online
[root@centos1 cluster]#
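
Two optional follow-up steps at this point, both standard pcs/corosync commands: enable the cluster services to start automatically when a node boots, and check the corosync link status on the local node:

pcs cluster enable --all

corosync-cfgtool -s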


How To Set Static or Permanent IP Addresses for Virtual Machines in KVM

Default KVM behaviour is to issue temporary DHCP IP addresses to its virtual machines. You can suppress this behaviour for newly defined subnets by simply unticking the "Enable DHCP" option for the defined subnet in the Virtual Networks section of the KVM dashboard (virt-manager).
  
However, the NAT bridged network interface is set to automatically issue DHCP IPs. This can be inconvenient when you want to log in to the machine from a shell terminal on your PC or laptop rather than accessing it via the KVM console terminal.

  
To change these IPs from DHCP to Static, you need to carry out the following steps, using my current environment as an example:
  
Let’s say I want to change the IP of a machine called suse1 from DHCP to Static IP.
    
1. On the KVM host machine, display the list of current KVM networks:
  
virsh net-list
  
root@yoga:/etc# virsh net-list
 Name               State    Autostart   Persistent
-----------------------------------------------------
 default            active   yes         yes
 network-10.0.7.0   active   yes         yes
 network-10.0.8.0   active   yes         yes
  
The interface of the machine I want to set is located on network “default”.
  
2. Find the MAC address or addresses of the virtual machine whose IP address you want to set:
  
Note the machine name is the name used to define the machine in KVM. It need not be the same as the OS hostname of the machine.
  
virsh dumpxml <machine name> | grep -i '<mac'

root@yoga:/home/kevin# virsh dumpxml suse1 | grep -i '<mac'
<mac address='52:54:00:b4:0c:8d'/>
<mac address='52:54:00:e9:97:91'/>
  
So the machine has two network interfaces.
  
I know from ifconfig (or ip a) that the interface I want to set is the first one, eth0, with mac address: 52:54:00:b4:0c:8d.
  
This is the one that is using the network called “default”.
  
suse1:~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:b4:0c:8d brd ff:ff:ff:ff:ff:ff
inet 192.168.122.179/24 brd 192.168.122.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:feb4:c8d/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:e9:97:91 brd ff:ff:ff:ff:ff:ff
inet 10.0.7.11/24 brd 10.0.7.255 scope global eth1
valid_lft forever preferred_lft forever
inet 10.0.7.100/24 brd 10.0.7.255 scope global secondary eth1
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fee9:9791/64 scope link
valid_lft forever pre
  
3. Edit the network configuration file:
  
virsh net-edit <network name>
  
So in this example I do:
  
virsh net-edit default
  

Add the following entry between <dhcp> and </dhcp> as follows:
  
<host mac='xx:xx:xx:xx:xx:xx' name='virtual_machine' ip='xxx.xxx.xxx.xxx'/>
  
whereby
  
mac = mac address of the virtual machine
  
name = KVM virtual machine name
  
IP = IP address you want to set for this interface
  
So for this example I add:
  
<host mac='52:54:00:b4:0c:8d' name='suse1' ip='192.168.122.11'/>
  
then save and close the file.
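
After the edit, the <dhcp> section of the "default" network definition will look roughly like this (the range line shows libvirt's stock default subnet; your values may differ):

<ip address='192.168.122.1' netmask='255.255.255.0'>
  <dhcp>
    <range start='192.168.122.2' end='192.168.122.254'/>
    <host mac='52:54:00:b4:0c:8d' name='suse1' ip='192.168.122.11'/>
  </dhcp>
</ip>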
  
4. Then restart the KVM DHCP service:
  
virsh net-destroy <network name>
  
virsh net-destroy default
  
virsh net-start <network name>
  
virsh net-start default
  
5. Shutdown the virtual machine:
  
virsh shutdown <machine name>
  
virsh shutdown suse1
  
6. Stop the network service:
  
virsh net-destroy default
  
7. Restart the libvirtd service:
  
systemctl restart virtlogd.socket
systemctl restart libvirtd
  
8. Restart the network:
  
virsh net-start <network name>
  
virsh net-start default
  
9. Then restart the KVM desktop virt-manager
  
virt-manager
  
10. Then restart the virtual machine again, either on the KVM desktop or else using the command:
  
virsh start <virtual machine>
  
virsh start suse1
  
If the steps have all been performed correctly, the network interface on the machine should now have the static IP address you defined instead of a DHCP address from KVM.
  
Verify on the guest machine with ifconfig or ip a

 
