
How To Configure KVM Virtualization on Ubuntu v20.04 Hosts

To start the KVM virt-manager GUI, enter:

 

virt-manager

 

Alternatively, VMs can be created, started, and modified from the Linux terminal using the virt-install command.

 

The syntax is:

 

virt-install --option1=value --option2=value ...

 

 

The options following the command define the parameters of the installation:

 

Option – Description
--name – The name you give to the VM
--description – A short description of the VM
--ram – The amount of RAM you wish to allocate to the VM
--vcpus – The number of virtual CPUs you wish to allocate to the VM
--disk – The location of the VM on your disk (if you specify a qcow2 disk file that does not exist, it will be automatically created)
--cdrom – The location of the ISO file you downloaded
--graphics – Specifies the display type
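
For example, a single invocation combining these options might look like the following sketch (the VM name, ISO path, and disk path are illustrative, not taken from this setup):

virt-install \
--name ubuntu-guest \
--description "Test Ubuntu 20.04 VM" \
--ram 2048 \
--vcpus 2 \
--disk path=/var/lib/libvirt/images/ubuntu-guest.qcow2,size=10 \
--cdrom /path/to/ubuntu-20.04.iso \
--graphics vnc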

 

KVM component packages:

 

qemu-kvm – The main package
libvirt – Includes the libvirtd server exporting the virtualization support
libvirt-client – This package contains virsh and other client-side utilities
virt-install – Utility to install virtual machines
virt-viewer – Utility to display graphical console for a virtual machine

 

 

Check for Virtualization Support on Ubuntu 20.04

 

Before installing KVM, check if your CPU supports hardware virtualization:

 

egrep -c '(vmx|svm)' /proc/cpuinfo

 

Check the number given in the output:

 

root@asus:~# egrep -c '(vmx|svm)' /proc/cpuinfo

8
root@asus:~#

 

 

If the command returns 0, the CPU does not support hardware virtualization and cannot run KVM. Any other number means you can proceed with the installation.
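
If you want to script this check, a minimal sketch using the same test might look like this:

if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    echo "No hardware virtualization support detected" >&2
    exit 1
fi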

 

Next check if your system can use KVM acceleration:

 

root@asus:~# kvm-ok

 

INFO: /dev/kvm exists
KVM acceleration can be used
root@asus:~#

 

If the kvm-ok command is not available, install the cpu-checker package, which provides it:

 

sudo apt install cpu-checker

 

Then restart the terminal and run kvm-ok again.

 

You can now start installing KVM.

 

Install KVM on Ubuntu 20.04

Overview of the steps involved:

 

Install related packages using apt
Authorize users to run VMs

Verify that the installation was successful

Step 1: Install KVM Packages

 

First, update the repositories:

 

sudo apt update

 

Then install the KVM packages:

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils

 

root@asus:~# apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils

 

Reading package lists… Done
Building dependency tree
Reading state information… Done

 

bridge-utils is already the newest version (1.6-3ubuntu1).
libvirt-clients is already the newest version (6.6.0-1ubuntu3.5).
libvirt-daemon-system is already the newest version (6.6.0-1ubuntu3.5).
qemu-kvm is already the newest version (1:5.0-5ubuntu9.9).

 

0 upgraded, 0 newly installed, 0 to remove and 9 not upgraded.

 

root@asus:~#

 

Step 2: Authorize Users

 

1. Only members of the libvirt and kvm user groups can run virtual machines. Add a user to the libvirt group:

 

sudo adduser [username] libvirt

 

Replace [username] with the actual username; in this case:

 

adduser kevin libvirt

 

root@asus:~# adduser kevin libvirt
The user `kevin' is already a member of `libvirt'.
root@asus:~#

 

 

Adding a user to the libvirt usergroup

 

Next do the same for the kvm group:

 

sudo adduser [username] kvm

 

Adding user to the kvm usergroup

 

adduser kevin kvm

root@asus:~# adduser kevin kvm
The user `kevin' is already a member of `kvm'.
root@asus:~#
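
To confirm the group memberships (a user may need to log out and back in before new group membership takes effect), you can check with the standard id command:

id kevin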

 

 

(NOTE: I had already performed this installation and added the user during a previous session, hence the "already a member" messages.)

 

To remove a user from the libvirt or kvm group, replace adduser with deluser using the above syntax.
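
For example, to remove the user kevin from the libvirt group:

sudo deluser kevin libvirt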

 

Step 3: Verify the Installation

 

virsh list --all

 

 

root@asus:~# virsh list --all
Id Name State
--------------------------------------
- centos-base centos8 shut off
- ceph-base centos7 shut off
- ceph-mon shut off
- ceph-osd0 shut off
- ceph-osd1 shut off
- ceph-osd2 shut off
- router1 10.0.8.100 shut off
- router2 10.0.9.100 shut off

 

root@asus:~#

 

The above list shows the virtual machines that already exist on this system.

 

 

 

Then make sure that the needed kernel modules have been loaded:

 

 

root@asus:~# lsmod | grep kvm
kvm_amd 102400 0
kvm 724992 1 kvm_amd
ccp 102400 1 kvm_amd
root@asus:~#

 

If your host machine has an Intel CPU, you will see kvm_intel instead. In my case I am using an AMD processor, so kvm_amd is displayed.

 

If the modules are not loaded automatically, you can load them manually using the modprobe command:

 

# modprobe kvm_intel
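
Since this host has an AMD CPU, the module to load here would be kvm_amd instead. A small sketch that loads whichever module matches the CPU flags checked earlier:

if grep -qw vmx /proc/cpuinfo; then
    sudo modprobe kvm_intel
elif grep -qw svm /proc/cpuinfo; then
    sudo modprobe kvm_amd
fi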

 

Finally, start the libvirtd daemon. The following command both enables it at boot time and starts it immediately:

 

systemctl enable --now libvirtd

 

root@asus:~# systemctl enable --now libvirtd
root@asus:~#

 

 

Use the systemctl command to check the status of libvirtd:

 

systemctl status libvirtd

root@asus:~# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-08-18 22:17:51 CEST; 21h ago
TriggeredBy: ● libvirtd-ro.socket
● libvirtd.socket
● libvirtd-admin.socket
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 1140 (libvirtd)
Tasks: 21 (limit: 32768)
Memory: 29.3M
CGroup: /system.slice/libvirtd.service
├─1140 /usr/sbin/libvirtd
├─1435 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/>
├─1436 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/>
├─1499 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/10.0.8.0.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt>
└─1612 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/10.0.9.0.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt>

Aug 19 19:50:39 asus dnsmasq[1435]: reading /etc/resolv.conf
Aug 19 19:50:39 asus dnsmasq[1612]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1499]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1435]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1499]: reading /etc/resolv.conf
Aug 19 19:50:39 asus dnsmasq[1435]: reading /etc/resolv.conf
Aug 19 19:50:39 asus dnsmasq[1499]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1435]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1612]: reading /etc/resolv.conf
Aug 19 19:50:39 asus dnsmasq[1612]: using nameserver 127.0.0.53#53
lines 1-28/28 (END)

 

Next, install virt-manager, a GUI tool for creating and managing VMs:

 

sudo apt install virt-manager

 

root@asus:~# apt install virt-manager
Reading package lists… Done
Building dependency tree
Reading state information… Done
virt-manager is already the newest version (1:2.2.1-4ubuntu2).
0 upgraded, 0 newly installed, 0 to remove and 9 not upgraded.
root@asus:~#

 

 

To use the Virt Manager GUI

 

1. Start virt-manager with:

 

sudo virt-manager

 

 

Alternatively, using the virt-install command-line tool:

 

Use the virt-install command to create a VM via Linux terminal. The syntax is:

 

 

virt-install --option1=value --option2=value ...

 

The options following the command define the parameters of the installation, as described in the table earlier in this post.

 

 

For the full list of available options, run:

virt-install --help

 

To create a virtual machine using the virt-install CLI instead of the virt-manager GUI:

 

Installing a virtual machine from an ISO image

# virt-install \
--name guest1-rhel7 \
--memory 2048 \
--vcpus 2 \
--disk size=8 \
--cdrom /path/to/rhel7.iso \
--os-variant rhel7

 

The --cdrom /path/to/rhel7.iso option specifies that the VM will be installed from the CD or DVD image at the given location.

 

Importing a virtual machine from an existing virtual disk image:

 

# virt-install \
--name guest1-rhel7 \
--memory 2048 \
--vcpus 2 \
--disk /path/to/imported/disk.qcow \
--import \
--os-variant rhel7

 

The --import option specifies that the virtual machine will be imported from the virtual disk image given by the --disk /path/to/imported/disk.qcow option.

 

Installing a virtual machine from a network location:

# virt-install \
--name guest1-rhel7 \
--memory 2048 \
--vcpus 2 \
--disk size=8 \
--location http://example.com/path/to/os \
--os-variant rhel7

The --location http://example.com/path/to/os option specifies that the installation tree is at the specified network location.

 

Installing a virtual machine using a Kickstart file:

 

# virt-install \
--name guest1-rhel7 \
--memory 2048 \
--vcpus 2 \
--disk size=8 \
--location http://example.com/path/to/os \
--os-variant rhel7 \
--initrd-inject /path/to/ks.cfg \
--extra-args="ks=file:/ks.cfg console=tty0 console=ttyS0,115200n8"

 

The --initrd-inject and --extra-args options specify that the virtual machine will be installed using a Kickstart file.

 

 

To change VM parameters you can use virsh as an alternative to the virt-manager GUI. For example:

 

virsh edit linuxconfig-vm

 

This opens the libvirt XML configuration file for the specified VM in your default editor.
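
Some parameters can also be changed with dedicated virsh subcommands instead of editing the XML; for example (values are illustrative and are applied to the persistent configuration, not the running instance):

virsh setvcpus linuxconfig-vm 2 --config
virsh setmem linuxconfig-vm 2048M --config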

 

Finally, reboot the VM:

 

virsh reboot linuxconfig-vm

 

 

 

To autostart a virtual machine on host boot-up using virsh:

 

virsh autostart linuxconfig-vm

 

To disable this option:

 

virsh autostart --disable linuxconfig-vm
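
To check whether autostart is currently enabled for a VM, the virsh dominfo output includes an Autostart field:

virsh dominfo linuxconfig-vm | grep -i autostart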

 

 

 

 


How To Install Cluster Fencing Using Libvirt on KVM Virtual Machines

These are my practical notes on installing libvirt fencing on CentOS cluster nodes running as virtual machines on the KVM hypervisor platform.

 

 

NOTE: If a local firewall is enabled, open the chosen TCP port (in this example, the default of 1229) to the host.

 

Alternatively, if you are using a testing or training environment, you can disable the firewall. Do not do this in production environments!
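
For example, with firewalld on the KVM host, opening the port might look like this (a sketch assuming the default port 1229 mentioned above; depending on your setup, the TCP and/or UDP variants may also be needed on the cluster nodes):

firewall-cmd --permanent --add-port=1229/tcp
firewall-cmd --permanent --add-port=1229/udp
firewall-cmd --reload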

 

1. On the KVM host machine, install the fence-virtd, fence-virtd-libvirt, and fence-virtd-multicast packages. These packages provide the virtual machine fencing daemon, libvirt integration, and multicast listener, respectively.

yum -y install fence-virtd fence-virtd-libvirt fence-virtd-multicast

 

2. On the KVM host, create a shared secret key called /etc/cluster/fence_xvm.key. The target directory /etc/cluster needs to be created manually on the nodes and the KVM host.

 

mkdir -p /etc/cluster

 

dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=1k count=4

 

3. Distribute the shared secret key /etc/cluster/fence_xvm.key from the KVM host to all cluster nodes, keeping the name and the path the same as on the KVM host.

 

scp /etc/cluster/fence_xvm.key centos1vm:/etc/cluster/

 

and then copy it to the other nodes as well; a loop like the one below can help.
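
A minimal sketch, assuming hypothetical node hostnames centos1vm, centos2vm, and centos3vm:

for node in centos1vm centos2vm centos3vm; do
    scp /etc/cluster/fence_xvm.key ${node}:/etc/cluster/
done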

4. On the KVM host, configure the fence_virtd daemon. Defaults can be used for most options, but make sure to select the libvirt back end and the multicast listener. Also make sure you give the correct path for the shared key you just created (here /etc/cluster/fence_xvm.key):

 

fence_virtd -c
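
The interactive configuration is written to /etc/fence_virt.conf; you can review the resulting settings with:

cat /etc/fence_virt.conf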

5. Enable and start the fence_virtd daemon on the hypervisor.

 

systemctl enable fence_virtd
systemctl start fence_virtd

6. Also install fence_virtd on the cluster nodes, then enable and start it there as well:

 

root@yoga:/etc# systemctl enable fence_virtd
Synchronizing state of fence_virtd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable fence_virtd
root@yoga:/etc# systemctl start fence_virtd
root@yoga:/etc# systemctl status fence_virtd
● fence_virtd.service - Fence-Virt system host daemon
Loaded: loaded (/lib/systemd/system/fence_virtd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-02-23 14:13:20 CET; 6min ago
Docs: man:fence_virtd(8)
man:fence_virt.conf(5)
Main PID: 49779 (fence_virtd)
Tasks: 1 (limit: 18806)
Memory: 3.2M
CGroup: /system.slice/fence_virtd.service
└─49779 /usr/sbin/fence_virtd -w

 

Feb 23 14:13:20 yoga systemd[1]: Starting Fence-Virt system host daemon…
root@yoga:/etc#

 

7. Test the KVM host multicast connectivity with:

 

fence_xvm -o list

root@yoga:/etc# fence_xvm -o list
centos-base c023d3d6-b2b9-4dc2-b0c7-06a27ddf5e1d off
centos1 2daf2c38-b9bf-43ab-8a96-af124549d5c1 on
centos2 3c571551-8fa2-4499-95b5-c5a8e82eb6d5 on
centos3 2969e454-b569-4ff3-b88a-0f8ae26e22c1 on
centosstorage 501a3dbb-1088-48df-8090-adcf490393fe off
suse-base 0b360ee5-3600-456d-9eb3-d43c1ee4b701 off
suse1 646ce77a-da14-4782-858e-6bf03753e4b5 off
suse2 d9ae8fd2-eede-4bd6-8d4a-f2d7d8c90296 off
suse3 7ad89ea7-44ae-4965-82ba-d29c446a0607 off
root@yoga:/etc#

 

 

8. Create your fencing devices, one for each node:

 

pcs stonith create <name for our fencing device for this vm cluster host> fence_xvm port="<the KVM vm name>" pcmk_host_list="<FQDN of the cluster host>"

 

Run this once for each node, with the values set accordingly for each host. MAKE SURE YOU SET ALL THE NAMES CORRECTLY!

On ONE of the nodes (usually the current DC, the designated co-ordinator node), create all of the following fence devices:

 

[root@centos1 etc]# pcs stonith create fence_centos1 fence_xvm port="centos1" pcmk_host_list="centos1.localdomain"
[root@centos1 etc]# pcs stonith create fence_centos2 fence_xvm port="centos2" pcmk_host_list="centos2.localdomain"
[root@centos1 etc]# pcs stonith create fence_centos3 fence_xvm port="centos3" pcmk_host_list="centos3.localdomain"
[root@centos1 etc]#

 

9. Next, enable fencing on the cluster nodes.

 

Make sure the stonith-enabled cluster property is set to true.

 

Check with:

 

pcs -f stonith_cfg property

 

If the stonith-enabled property is set to false, you can set it to true manually on the cluster:

 

pcs -f stonith_cfg property set stonith-enabled=true

 

[root@centos1 ~]# pcs -f stonith_cfg property
Cluster Properties:
stonith-enabled: true
[root@centos1 ~]#

 

You can also run pcs stonith cleanup for each fence device (fence_centos1, fence_centos2, and fence_centos3), for example:

 

[root@centos1 ~]# pcs stonith cleanup fence_centos1
Cleaned up fence_centos1 on centos3.localdomain
Cleaned up fence_centos1 on centos2.localdomain
Cleaned up fence_centos1 on centos1.localdomain
Waiting for 3 replies from the controller
… got reply
… got reply
… got reply (done)
[root@centos1 ~]#

 

 

If a stonith id or node is not specified, then all stonith resources and devices will be cleaned:

pcs stonith cleanup

 

Then check the status of the fence devices:

 

pcs stonith status

 

[root@centos1 ~]# pcs stonith status
* fence_centos1 (stonith:fence_xvm): Started centos3.localdomain
* fence_centos2 (stonith:fence_xvm): Started centos3.localdomain
* fence_centos3 (stonith:fence_xvm): Started centos3.localdomain
[root@centos1 ~]#
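
Once the fence devices show as Started, you can test fencing by deliberately fencing one of the nodes, for example (be aware that this forcibly reboots the target node, so only do it on a test cluster):

pcs stonith fence centos2.localdomain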

 

 

Some other stonith fencing commands:

 

To list the available fence agents, run the following command on any of the cluster nodes:

 

# pcs stonith list

 

(This can take several seconds to complete; don't interrupt it.)

 

root@ubuntu1:~# pcs stonith list
apcmaster – APC MasterSwitch
apcmastersnmp – APC MasterSwitch (SNMP)
apcsmart – APCSmart
baytech – BayTech power switch
bladehpi – IBM BladeCenter (OpenHPI)
cyclades – Cyclades AlterPath PM
external/drac5 – DRAC5 STONITH device
.. .. .. list truncated…

 

 

To get more details about a particular fence agent, you can use:

 

root@ubuntu1:~# pcs stonith describe fence_xvm
fence_xvm – Fence agent for virtual machines

 

fence_xvm is an I/O Fencing agent which can be used with virtual machines.

 

Stonith options:
debug: Specify (stdin) or increment (command line) debug level
ip_family: IP Family ([auto], ipv4, ipv6)
multicast_address: Multicast address (default=225.0.0.12 / ff05::3:1)
ipport: TCP, Multicast, VMChannel, or VM socket port (default=1229)
.. .. .. list truncated . ..

 
