
How to Install and Use KVM libvirt on Ubuntu

Virtualization Compatibility Check

 

Run the command below to install the cpu-checker package.

 

apt install -y cpu-checker

 

Then check if your CPU supports virtualization by running this command.

 

kvm-ok

 

If you get the following result, your CPU supports virtualization with KVM:

 

root@asus:~# kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
root@asus:~#

 

 

Installing KVM on Ubuntu

 

Provided your CPU supports virtualization, you can install KVM on your machine.

 

To install KVM on Ubuntu, run the apt command below.

 

apt install qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils virt-manager -y

 

These are the packages that will be installed:

 

qemu-kvm – This package provides accelerated KVM support. This is a userspace component for QEMU, responsible for accelerating KVM guest machines.

 

libvirt-daemon-system – This is the libvirt daemon and related management tools used for creating, managing, and destroying virtual machines on the host machine.

 

libvirt-clients – This provides the libvirt command-line interface for managing virtual machines.

 

bridge-utils – This package provides the userland tools used for configuring virtual network bridges. On a hosted virtualization system like KVM, the network bridge connects the virtual guest VM network with the real host machine network.

 

virt-manager – This provides a graphical user interface (GUI) for managing your virtual machines should you wish to use it.

 

If you want the libvirt daemon to start automatically at boot then enable libvirtd:

 

systemctl enable libvirtd

 

If libvirtd is not already running, do:

 

systemctl start libvirtd

 

Check if KVM is loaded in the kernel with:

 

lsmod | grep -i kvm

 

 

 

Overview of basic libvirt commands

 

 

To launch the libvirt KVM GUI Virtual Machine Manager run:

 

 

virt-manager

 

Alternatively, you can use the virt-install command to launch machines from the CLI.

 

Example:

 

virt-install --name fedora1 --vcpus 1 --memory 2048 --cdrom /root/Fedora-Workstation-Live-x86_64-36-1.5.iso --disk size=12 --check disk_size=off

 

 

This creates a Fedora VM named fedora1 with 1 vCPU, 2 GB RAM, and a 12 GB virtual hard drive.

 

 

To list all VMs:

 

virsh list --all

 

To shutdown the machine:

 

 

virsh shutdown fedora1

 

 

To start the machine:

 

 

virsh start fedora1

 

 

To display the storage allocated to the machine:

 

 

virsh domblklist fedora1

 

To destroy the machine:

 

virsh destroy fedora1

 

To delete the machine and its disk image use virsh undefine. This deletes the VM, and the --storage option takes a comma-separated list of the storage volumes you wish to remove. Example:

 

virsh undefine fedora1 --storage /var/lib/libvirt/images/fedora1-1.qcow2

 

 


How To Get Started With Vagrant

What is Vagrant

 

Vagrant is a simple open-source virtual machine manager, originally developed by HashiCorp, which allows you to easily create and run a minimal pre-built virtual machine from a virtual machine image source and SSH straight into it without any further configuration.

It’s ideal for developers who require a test machine for their application development.

 

Vagrant itself only manages your virtual machines; it can use VirtualBox or other VM platforms such as libvirt by means of plug-ins.

Vagrant acts as a wrapper around virtual machines, communicating with the underlying hypervisors via provider plug-ins. The default provider for Vagrant is VirtualBox.

 

Vagrant is available in the official Ubuntu repository and can be installed with apt, apt-get, or aptitude.

 

 

How to setup your Vagrant environment

 

Create a directory called ~/Vagrant. This is where your Vagrantfiles will be stored.

 

mkdir ~/Vagrant

 

In this directory, create a subdirectory for the distribution you want to download.

 

For instance, for a CentOS test server, create a CentOS directory:

 

mkdir ~/Vagrant/centos

 

cd ~/Vagrant/centos

 

Next you need to create a Vagrantfile:

 

vagrant init

 

You should now see the following Vagrantfile in the Vagrant directory:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.
  config.vm.box = "base"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # NOTE: This will enable public access to the opened port
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine and only allow access
  # via 127.0.0.1 to disable public access
  # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Enable provisioning with a shell script. Additional provisioners such as
  # Ansible, Chef, Docker, Puppet and Salt are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   apt-get update
  #   apt-get install -y apache2
  # SHELL
end

 

 

To define the virtual machine image (known in Vagrant as a "box") in the Vagrantfile, edit the following line:

 

config.vm.box = "[box_name]"
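
For example, to use the CentOS 8 box that is referenced later in this article:

config.vm.box = "generic/centos8"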

 

To validate the Vagrantfile use:

 

vagrant validate

 

 

After making changes to the Vagrantfile, apply them with:

 

vagrant reload

 

 

To show the status of the currently running VM:

 

vagrant status

 

 

To get debug info on deployment:

 

vagrant --debug up

 

To display full list of guest ports mapped to the host machine ports:

 

vagrant port [vm_name]

 

 

 

 

Selecting a Vagrant virtual machine to run

 

Vagrant boxes are sourced from three different places: HashiCorp (the maintainers of Vagrant), distribution maintainers, and other third parties.

 

You can browse through the images at app.vagrantup.com/boxes/search.

 

vagrant init generic/centos8

 

The init subcommand creates the Vagrantfile configuration file in your current directory and turns that directory into a Vagrant environment.

 

You can view a list of current known Vagrant environments by means of the global-status subcommand:

 

vagrant global-status

id       name     provider  state    directory
------------------------------------------------
49c797f  default  libvirt   running  /home/tux/Vagrant/centos8

Starting a virtual machine with Vagrant

 

You can then start your virtual machine by entering:

 

vagrant up

 

This causes Vagrant to download the virtual machine image if it doesn’t already exist locally, set up a virtual network, and configure your box.

 

Entering a Vagrant virtual machine

 

Once your virtual machine is up and running, you can log in to it with vagrant ssh:

 

vagrant ssh
box$

 

You connect to the box by means of ssh. You can run all the commands native to that host OS. It’s a virtual machine with its own kernel, emulated hardware and all common Linux software.

 

Leaving a Vagrant virtual machine

 

To leave your Vagrant virtual machine, log out of the guest as you would normally exit a Linux computer:

 

box$ exit

 

Alternatively, you can power the virtual machine down:

 

box$ sudo poweroff

 

You can also stop the machine from running using the vagrant command, run on the host from the project directory:

vagrant halt

 

 

Destroying a Vagrant virtual machine

 

When finished with a Vagrant virtual machine, you can destroy it:

 

vagrant destroy

 

Alternatively, you can remove the downloaded box image itself by running the global box remove subcommand:

vagrant box remove generic/centos8

 

What is libvirt

 

The libvirt project is a toolkit for managing virtualization, with support for KVM, QEMU, LXC, and more. It's rather like a virtual machine API, allowing developers to test and run applications on virtual machines with minimal overhead.

 

On some distributions you may need to first start the libvirt daemon:

 

systemctl start libvirtd

 

Install vagrant-libvirt plugin in Linux

 

In order to run Vagrant virtual machines on KVM, you need to install the vagrant-libvirt plugin. This plugin adds the Libvirt provider to Vagrant and allows Vagrant to control and provision machines via Libvirt.

 

Install the necessary dependencies for vagrant-libvirt plugin.

 

On Ubuntu:

 

$ sudo apt install qemu libvirt-daemon-system libvirt-clients libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev ruby-libvirt ebtables dnsmasq-base

 

root@asus:/home/kevin# vagrant --version
Vagrant 2.3.4
root@asus:/home/kevin#

 

Now we can install the plug-in:

 

root@asus:/home/kevin# vagrant plugin install vagrant-libvirt
==> vagrant: A new version of Vagrant is available: 2.3.5 (installed version: 2.3.4)!
==> vagrant: To upgrade visit: https://www.vagrantup.com/downloads.html

 

Installing the ‘vagrant-libvirt’ plugin. This can take a few minutes…

 

Fetching formatador-1.1.0.gem
Fetching fog-core-2.3.0.gem
Fetching fog-json-1.2.0.gem
Fetching nokogiri-1.15.0-x86_64-linux.gem
Fetching fog-xml-0.1.4.gem
Fetching ruby-libvirt-0.8.0.gem
Building native extensions. This could take a while…
Fetching fog-libvirt-0.11.0.gem
Fetching xml-simple-1.1.9.gem
Fetching diffy-3.4.2.gem
Fetching vagrant-libvirt-0.12.0.gem
Installed the plugin ‘vagrant-libvirt (0.12.0)’!
root@asus:/home/kevin#

 

 

Testing Vagrant Box

First let’s download a Vagrant box that supports libvirt.

 

 

see https://vagrantcloud.com/generic/ubuntu2004

 

vagrant box add generic/ubuntu2204 --provider libvirt

 

Create a small configuration file to use this new Vagrant box:

 

cat <<-VAGRANTFILE > Vagrantfile

 

Vagrant.configure("2") do |config|
config.vm.box = "generic/ubuntu2204"
end
VAGRANTFILE

 

 

root@asus:/home/kevin# cat <<-VAGRANTFILE > Vagrantfile
> Vagrant.configure("2") do |config|
config.vm.box = "generic/ubuntu2204"
end
VAGRANTFILE
root@asus:/home/kevin#

 

 

And now bring up the system (this typically takes under 20 seconds):

 

vagrant up --provider libvirt
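
If you do not want to pass --provider every time, Vagrant also honours the VAGRANT_DEFAULT_PROVIDER environment variable; a minimal sketch for a bash-like shell:

export VAGRANT_DEFAULT_PROVIDER=libvirt
vagrant up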

 

You can now log onto the virtual machine guest with:

 

vagrant ssh

 

Check the list of boxes present locally.

 

$ vagrant box list

 

root@asus:/home/kevin# vagrant box list
generic/ubuntu2204 (libvirt, 4.2.16)
root@asus:/home/kevin#

 

 

 

Vagrant will create a Linux bridge on the host system.

 

$ brctl show virbr1

 

root@asus:/home/kevin# brctl show virbr1
bridge name     bridge id               STP enabled     interfaces
virbr1          8000.525400075114       yes
root@asus:/home/kevin#

 

root@asus:/home/kevin# vagrant ssh
vagrant@ubuntu2204:~$

 

vagrant@ubuntu2204:~$ df -h
Filesystem                         Size  Used Avail Use% Mounted on
tmpfs                              198M  956K  197M   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   62G  4.9G   54G   9% /
tmpfs                              988M     0  988M   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
/dev/vda2                          2.0G  130M  1.7G   8% /boot
tmpfs                              198M  4.0K  198M   1% /run/user/1000
vagrant@ubuntu2204:~$

 

 

 

Run virsh list on the host to see the list of running VMs; the Vagrant machine should appear there.

 

virsh list

 

To ssh to the VM, use the vagrant ssh command.

 

vagrant ssh

 

 

To output valid .ssh/config syntax for connecting to this environment via ssh, run the ssh-config subcommand.

 

You then need to add the provided output to your ~/.ssh/config file to be able to connect with plain ssh.

 

$ vagrant ssh-config
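
As a usage sketch (the host entry name is typically default, but check the generated output), you can append the output to your SSH config and then connect with plain ssh:

vagrant ssh-config >> ~/.ssh/config
ssh default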

 

 

root@asus:/home/kevin# vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'generic/ubuntu2204' version '4.2.16' is up to date...
==> default: Uploading base box image as volume into Libvirt storage...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              kevin_default
==> default:  -- Description:       Source: /home/kevin/Vagrantfile
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              2
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Clock offset:      utc
==> default:  -- Memory:            2048M
==> default:  -- Base box:          generic/ubuntu2204
==> default:  -- Storage pool:      default
==> default:  -- Image(vda):        /var/lib/libvirt/images/kevin_default.img, virtio, 128G
==> default:  -- Disk driver opts:  cache='default'
==> default:  -- Graphics Type:     vnc
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        256
==> default:  -- Video 3D accel:    false
==> default:  -- Keymap:            en-us
==> default:  -- TPM Backend:       passthrough
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Domain launching with graphics connection settings...
==> default:  -- Graphics Port:      5900
==> default:  -- Graphics IP:        127.0.0.1
==> default:  -- Graphics Password:  Not defined
==> default:  -- Graphics Websocket: 5700
==> default: Waiting for domain to get an IP address...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 192.168.121.218:22
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection refused. Retrying...
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
root@asus:/home/kevin#

 

 


 

 

To shut down the VM, run:

 

$ vagrant halt

 

 

To remove the VM entirely, deleting all its data, use vagrant destroy:

 

$ vagrant destroy

 

 

At any time, you can view a list of known Vagrant environments using the global-status subcommand:

 

$ vagrant global-status

 

root@asus:/home/kevin# vagrant global-status
id       name     provider  state    directory
------------------------------------------------
0466da3  default  libvirt   running  /home/kevin

The above shows information about all known Vagrant environments
on this machine. This data is cached and may not be completely
up-to-date (use "vagrant global-status --prune" to prune invalid
entries). To interact with any of the machines, you can go to that
directory and run Vagrant, or you can use the ID directly with
Vagrant commands from any directory. For example:
"vagrant destroy 1a2b3c4d"
root@asus:/home/kevin#

 

vagrant@ubuntu2204:~$ sudo poweroff
Connection to 192.168.121.218 closed by remote host.
root@asus:/home/kevin#
root@asus:/home/kevin#

 

root@asus:/home/kevin# vagrant box list
generic/ubuntu2204 (libvirt, 4.2.16)
root@asus:/home/kevin#

 

 

CHEATSHEET FOR VAGRANT

Typing vagrant from the command line will display a list of all available commands.

 

Be sure that you are in the same directory as the Vagrantfile when running these commands!

 

Creating a VM

vagrant init — Initialize Vagrant with a Vagrantfile and ./.vagrant directory, using no specified base image. Before you can do vagrant up, you’ll need to specify a base image in the Vagrantfile.

vagrant init <boxpath> — Initialize Vagrant with a specific box. To find a box, go to the public Vagrant box catalog. When you find one you like, replace boxpath with its name. For example, vagrant init ubuntu/trusty64.

Starting a VM
vagrant up — starts vagrant environment (also provisions only on the FIRST vagrant up)
vagrant resume — resume a suspended machine (vagrant up works just fine for this as well)
vagrant provision — forces reprovisioning of the vagrant machine
vagrant reload — restarts vagrant machine, loads new Vagrantfile configuration
vagrant reload --provision — restart the virtual machine and force provisioning

Getting into a VM
vagrant ssh — connects to machine via SSH
vagrant ssh <boxname> — If you give your box a name in your Vagrantfile, you can ssh into it with boxname. Works from any directory.

Stopping a VM
vagrant halt — stops the vagrant machine
vagrant suspend — suspends a virtual machine (remembers state)

 

Cleaning Up a VM
vagrant destroy — stops and deletes all traces of the vagrant machine
vagrant destroy -f — same as above, without confirmation

Boxes
vagrant box list — see a list of all installed boxes on your computer
vagrant box add <name> <url> — download a box image to your computer
vagrant box outdated — check whether the installed box is outdated (update it with vagrant box update)
vagrant box remove <name> — deletes a box from the machine
vagrant package — packages a running virtualbox env in a reusable box

Saving Progress
vagrant snapshot save [options] [vm-name] <name> — vm-name is often default. Allows us to save the VM state so that we can roll back at a later time
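
A minimal usage sketch (the snapshot name before-upgrade is illustrative):

vagrant snapshot save default before-upgrade
vagrant snapshot restore default before-upgrade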

 

Tips
vagrant -v — get the vagrant version
vagrant status — outputs status of the vagrant machine
vagrant global-status — outputs status of all vagrant machines
vagrant global-status --prune — same as above, but prunes invalid entries
vagrant provision --debug — use the debug flag to increase the verbosity of the output
vagrant push — yes, vagrant can be configured to deploy code!
vagrant up --provision | tee provision.log — Runs vagrant up, forces provisioning and logs all output to a file

Plugins
vagrant-hostsupdater : $ vagrant plugin install vagrant-hostsupdater to update your /etc/hosts file automatically each time you start/stop your vagrant box.

 

 

Multi-Machine
Vagrant can be used to run and control multiple guest machines via a Vagrantfile. This is called a “multi-machine” environment.

 

 

Defining Multiple Machines

 

Multiple machines are defined in the Vagrantfile using the config.vm.define statement.

 

This configuration directive effectively creates a Vagrant configuration within a configuration, for example:

 

 

Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: "echo Hello"

  config.vm.define "web" do |web|
    web.vm.box = "apache"
  end

  config.vm.define "db" do |db|
    db.vm.box = "mysql"
  end
end

 

Controlling Multiple Machines

 

Commands that target a specific virtual machine, such as vagrant ssh, require the name of the machine to be specified.

 

For example, here you would specify vagrant ssh web or vagrant ssh db.

 

Other commands, such as vagrant up, apply to all VMs by default.

 

Alternatively, you can specify only specific machines, such as

 

vagrant up web

 

or

 

vagrant up db.

 

Autostart Machines

 

By default in a multi-machine environment, vagrant up will start all the defined VMs.

 

The autostart setting enables you to instruct Vagrant NOT to start specific machines.

 

For example:

 

config.vm.define "web"
config.vm.define "db"
config.vm.define "db_follower", autostart: false

 

If you then run vagrant up with the above settings, Vagrant will automatically start the "web" and "db" machines but will not launch the "db_follower" VM.

 

 

 

 


How To Configure KVM Virtualization on Ubuntu v20.04 Hosts

To start the KVM virt manager GUI enter:

 

virt-manager

 

Alternatively, VMs can be created, started, and modified from the Linux terminal using the virt-install command.

 

The syntax is:

 

virt-install --option1=value --option2=value ...

 

 

The options passed to the command define the parameters of the installation (see the worked example after the table below):

 

Option          Description
--name          The name you give to the VM
--description   A short description of the VM
--ram           The amount of RAM you wish to allocate to the VM
--vcpus         The number of virtual CPUs you wish to allocate to the VM
--disk          The location of the VM on your disk (if you specify a qcow2 disk file that does not exist, it will be automatically created)
--cdrom         The location of the ISO file you downloaded
--graphics      Specifies the display type
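
As an illustrative sketch combining these options (the VM name, ISO path, and sizes here are assumptions, not values from this article):

virt-install \
  --name ubuntu-guest \
  --description "Ubuntu test VM" \
  --ram 2048 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/ubuntu-guest.qcow2,size=10 \
  --cdrom /root/ubuntu-20.04-live-server-amd64.iso \
  --graphics vnc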

 

KVM component packages:

 

qemu-kvm – The main package
libvirt – Includes the libvirtd server exporting the virtualization support
libvirt-client – This package contains virsh and other client-side utilities
virt-install – Utility to install virtual machines
virt-viewer – Utility to display graphical console for a virtual machine

 

 

Check for Virtualization Support on Ubuntu 20.04

 

Before installing KVM, check if your CPU supports hardware virtualization:

 

egrep -c '(vmx|svm)' /proc/cpuinfo

 

Check the number given in the output:

 

root@asus:~# egrep -c '(vmx|svm)' /proc/cpuinfo
8
root@asus:~#

 

 

If the command returns 0, your CPU does not support hardware virtualization and cannot run KVM. If it returns any other number, you can proceed with the installation.
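
As an additional cross-check, lscpu also reports the virtualization extension (VT-x on Intel, AMD-V on AMD):

lscpu | grep -i virtualization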

 

Next check if your system can use KVM acceleration:

 

root@asus:~# kvm-ok

 

INFO: /dev/kvm exists
KVM acceleration can be used
root@asus:~#

 

If kvm-ok is not found or returns an error stating KVM acceleration cannot be used, make sure the cpu-checker package is installed:

 

sudo apt install cpu-checker

 

Then restart the terminal.

 

You can now start installing KVM.

 

Install KVM on Ubuntu 20.04

Overview of the steps involved:

 

Install related packages using apt
Authorize users to run VMs

Verify that the installation was successful

Step 1: Install KVM Packages

 

First, update the repositories:

 

sudo apt update

 

then:

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils

 

root@asus:~# apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils

 

Reading package lists… Done
Building dependency tree
Reading state information… Done

 

bridge-utils is already the newest version (1.6-3ubuntu1).
libvirt-clients is already the newest version (6.6.0-1ubuntu3.5).
libvirt-daemon-system is already the newest version (6.6.0-1ubuntu3.5).
qemu-kvm is already the newest version (1:5.0-5ubuntu9.9).

 

0 upgraded, 0 newly installed, 0 to remove and 9 not upgraded.

 

root@asus:~#

 

Step 2: Authorize Users

 

1. Only members of the libvirt and kvm user groups can run virtual machines. Add a user to the libvirt group:

 

sudo adduser 'username' libvirt

 

Replace username with the actual username; in this case:

 

adduser kevin libvirt

 

root@asus:~# adduser kevin libvirt
The user `kevin’ is already a member of `libvirt’.
root@asus:~#

 

 

Adding a user to the libvirt usergroup

 

Next do the same for the kvm group:

 

sudo adduser '[username]' kvm

 

Adding user to the kvm usergroup

 

adduser kevin kvm

root@asus:~# adduser kevin kvm
The user `kevin’ is already a member of `kvm’.
root@asus:~#

 

 

(NOTE: I had already added this information and installation during a previous session)

 

To remove a user from the libvirt or kvm group, replace adduser with deluser using the above syntax.
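
For example, to remove the user kevin from the libvirt group again:

sudo deluser kevin libvirt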

 

Step 3: Verify the Installation:

 

virsh list --all

 

 

root@asus:~# virsh list --all
 Id   Name                   State
----------------------------------------
 -    centos-base centos8    shut off
 -    ceph-base centos7      shut off
 -    ceph-mon               shut off
 -    ceph-osd0              shut off
 -    ceph-osd1              shut off
 -    ceph-osd2              shut off
 -    router1 10.0.8.100     shut off
 -    router2 10.0.9.100     shut off

 

root@asus:~#

 

The above list shows the virtual machines that already exist on this system.

 

 

 

Then make sure that the needed kernel modules have been loaded:

 

 

root@asus:~# lsmod | grep kvm
kvm_amd 102400 0
kvm 724992 1 kvm_amd
ccp 102400 1 kvm_amd
root@asus:~#

 

If your host machine is running an Intel CPU, then you will see kvm_intel displayed. In my case I am using an AMD processor, so kvm_amd is displayed.

 

If the modules are not loaded automatically, you can load them manually using the modprobe command:

 

# modprobe kvm_intel
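
On an AMD host the equivalent module is kvm_amd:

# modprobe kvm_amd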

 

Finally, start the libvirtd daemon. The following command both enables it at boot time and starts it immediately:

 

systemctl enable --now libvirtd

 

root@asus:~# systemctl enable --now libvirtd
root@asus:~#

 

 

Use the systemctl command to check the status of libvirtd:

 

systemctl status libvirtd

root@asus:~# systemctl status libvirtd
● libvirtd.service – Virtualization daemon
Loaded: loaded (/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-08-18 22:17:51 CEST; 21h ago
TriggeredBy: ● libvirtd-ro.socket
● libvirtd.socket
● libvirtd-admin.socket
Docs: man:libvirtd(8)
https://libvirt.org
Main PID: 1140 (libvirtd)
Tasks: 21 (limit: 32768)
Memory: 29.3M
CGroup: /system.slice/libvirtd.service
├─1140 /usr/sbin/libvirtd
├─1435 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/>
├─1436 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt/>
├─1499 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/10.0.8.0.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt>
└─1612 /usr/sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/10.0.9.0.conf --leasefile-ro --dhcp-script=/usr/lib/libvirt>

Aug 19 19:50:39 asus dnsmasq[1435]: reading /etc/resolv.conf
Aug 19 19:50:39 asus dnsmasq[1612]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1499]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1435]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1499]: reading /etc/resolv.conf
Aug 19 19:50:39 asus dnsmasq[1435]: reading /etc/resolv.conf
Aug 19 19:50:39 asus dnsmasq[1499]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1435]: using nameserver 127.0.0.53#53
Aug 19 19:50:39 asus dnsmasq[1612]: reading /etc/resolv.conf
Aug 19 19:50:39 asus dnsmasq[1612]: using nameserver 127.0.0.53#53

 

Next, install virt-manager, a GUI tool for creating and managing VMs:

 

sudo apt install virt-manager

 

root@asus:~# apt install virt-manager
Reading package lists… Done
Building dependency tree
Reading state information… Done
virt-manager is already the newest version (1:2.2.1-4ubuntu2).
0 upgraded, 0 newly installed, 0 to remove and 9 not upgraded.
root@asus:~#

 

 

To use the Virt Manager GUI

 

1. Start virt-manager with:

 

sudo virt-manager

 

 

Alternatively, you can use the virt-install command-line tool:

 

Use the virt-install command to create a VM via Linux terminal. The syntax is:

 

 

virt-install --option1=value --option2=value ...

 

Options behind the command serve to define the parameters of the installation.

 

Here is what each of them means:

 

Option          Description
--name          The name you give to the VM
--description   A short description of the VM
--ram           The amount of RAM you wish to allocate to the VM
--vcpus         The number of virtual CPUs you wish to allocate to the VM
--disk          The location of the VM on your disk (if you specify a qcow2 disk file that does not exist, it will be automatically created)
--cdrom         The location of the ISO file you downloaded
--graphics      Specifies the display type

 

 

To see the full list of available virt-install options:

 

 

virt-install --help

 

To create a virtual machine using the virt-install CLI command instead of using virt-manager GUI:

 

Installing a virtual machine from an ISO image

# virt-install \
  --name guest1-rhel7 \
  --memory 2048 \
  --vcpus 2 \
  --disk size=8 \
  --cdrom /path/to/rhel7.iso \
  --os-variant rhel7

 

The --cdrom /path/to/rhel7.iso option specifies that the VM will be installed from the CD or DVD image at the specified location.

 

Importing a virtual machine image from virtual disk image:

 

# virt-install \
  --name guest1-rhel7 \
  --memory 2048 \
  --vcpus 2 \
  --disk /path/to/imported/disk.qcow \
  --import \
  --os-variant rhel7

 

The --import option specifies that the virtual machine will be imported from the virtual disk image given by the --disk /path/to/imported/disk.qcow option.

 

Installing a virtual machine from a network location:

# virt-install \
  --name guest1-rhel7 \
  --memory 2048 \
  --vcpus 2 \
  --disk size=8 \
  --location http://example.com/path/to/os \
  --os-variant rhel7

The --location http://example.com/path/to/os option specifies that the installation tree is at the specified network location.

 

Installing a virtual machine with Kickstart by using a kickstart file:

 

# virt-install \
  --name guest1-rhel7 \
  --memory 2048 \
  --vcpus 2 \
  --disk size=8 \
  --location http://example.com/path/to/os \
  --os-variant rhel7 \
  --initrd-inject /path/to/ks.cfg \
  --extra-args="ks=file:/ks.cfg console=tty0 console=ttyS0,115200n8"

 

The --initrd-inject and --extra-args options specify that the virtual machine will be installed using a Kickstart file.

 

 

To change some VM parameters, you can use virsh as an alternative to the virt-manager GUI. For example:

 

virsh edit linuxconfig-vm

 

This opens the XML configuration file for the specified VM in an editor.
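
If you only need to change the memory or vCPU settings, virsh also offers dedicated subcommands as an alternative to editing the XML by hand; a sketch (the sizes and counts are examples):

virsh setmaxmem linuxconfig-vm 4G --config
virsh setmem linuxconfig-vm 4G --config
virsh setvcpus linuxconfig-vm 2 --config --maximum
virsh setvcpus linuxconfig-vm 2 --config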

 

Finally, reboot the VM:

 

virsh reboot linuxconfig-vm

 

 

 

To autostart a virtual machine on host boot-up using virsh:

 

virsh autostart linuxconfig-vm

 

To disable this option:

 

virsh autostart --disable linuxconfig-vm

 

 

 

 


How To Install Cluster Fencing Using Libvirt on KVM Virtual Machines

These are my practical notes on installing libvirt fencing on CentOS cluster nodes running as virtual machines on the KVM hypervisor platform.

 

 

NOTE: If a local firewall is enabled, open the chosen TCP port (in this example, the default of 1229) to the host.

 

Alternatively if you are using a testing or training environment you can disable the firewall. Do not do the latter on production environments!

 

1. On the KVM host machine, install the fence-virtd, fence-virtd-libvirt, and fence-virtd-multicast packages. These packages provide the virtual machine fencing daemon, libvirt integration, and multicast listener, respectively.

yum -y install fence-virtd fence-virtd-libvirt fence-virtd-multicast

 

2. On the KVM host, create a shared secret key called /etc/cluster/fence_xvm.key. The target directory /etc/cluster needs to be created manually on the nodes and the KVM host.

 

mkdir -p /etc/cluster

 

dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=1k count=4

 

then distribute the key from the KVM host to all the nodes:

3. Distribute the shared secret key /etc/cluster/fence_xvm.key to all cluster nodes, keeping the name and the path the same as on the KVM host.

 

scp /etc/cluster/fence_xvm.key centos1vm:/etc/cluster/

 

and copy it to the other nodes as well:
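
For example, assuming the other node VMs are reachable from the KVM host as centos2vm and centos3vm (substitute your own hostnames):

scp /etc/cluster/fence_xvm.key centos2vm:/etc/cluster/
scp /etc/cluster/fence_xvm.key centos3vm:/etc/cluster/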

4. On the KVM host, configure the fence_virtd daemon. Defaults can be used for most options, but make sure to select the libvirt back end and the multicast listener. Also make sure you give the correct location of the shared key you just created (here /etc/cluster/fence_xvm.key):

 

fence_virtd -c

5. Enable and start the fence_virtd daemon on the hypervisor.

 

systemctl enable fence_virtd
systemctl start fence_virtd

6. Also install fence_virtd on the nodes, then enable and start it there as well.

 

root@yoga:/etc# systemctl enable fence_virtd
Synchronizing state of fence_virtd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable fence_virtd
root@yoga:/etc# systemctl start fence_virtd
root@yoga:/etc# systemctl status fence_virtd
● fence_virtd.service – Fence-Virt system host daemon
Loaded: loaded (/lib/systemd/system/fence_virtd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-02-23 14:13:20 CET; 6min ago
Docs: man:fence_virtd(8)
man:fence_virt.con(5)
Main PID: 49779 (fence_virtd)
Tasks: 1 (limit: 18806)
Memory: 3.2M
CGroup: /system.slice/fence_virtd.service
└─49779 /usr/sbin/fence_virtd -w

 

Feb 23 14:13:20 yoga systemd[1]: Starting Fence-Virt system host daemon…
root@yoga:/etc#

 

7. Test the KVM host multicast connectivity with:

 

fence_xvm -o list

root@yoga:/etc# fence_xvm -o list
centos-base c023d3d6-b2b9-4dc2-b0c7-06a27ddf5e1d off
centos1 2daf2c38-b9bf-43ab-8a96-af124549d5c1 on
centos2 3c571551-8fa2-4499-95b5-c5a8e82eb6d5 on
centos3 2969e454-b569-4ff3-b88a-0f8ae26e22c1 on
centosstorage 501a3dbb-1088-48df-8090-adcf490393fe off
suse-base 0b360ee5-3600-456d-9eb3-d43c1ee4b701 off
suse1 646ce77a-da14-4782-858e-6bf03753e4b5 off
suse2 d9ae8fd2-eede-4bd6-8d4a-f2d7d8c90296 off
suse3 7ad89ea7-44ae-4965-82ba-d29c446a0607 off
root@yoga:/etc#

 

 

8. Create your fencing devices, one for each node:

 

pcs stonith create <name for our fencing device for this vm cluster host> fence_xvm port="<the KVM vm name>" pcmk_host_list="<FQDN of the cluster host>"

 

Run one such command for each node, with the values set accordingly for each host. It will look like this:

 

MAKE SURE YOU SET ALL THE NAMES CORRECTLY!

 

On ONE of the nodes, create all the following fence devices; usually one does this on the DC (current designated coordinator) node:

 

[root@centos1 etc]# pcs stonith create fence_centos1 fence_xvm port="centos1" pcmk_host_list="centos1.localdomain"
[root@centos1 etc]# pcs stonith create fence_centos2 fence_xvm port="centos2" pcmk_host_list="centos2.localdomain"
[root@centos1 etc]# pcs stonith create fence_centos3 fence_xvm port="centos3" pcmk_host_list="centos3.localdomain"
[root@centos1 etc]#

 

9. Next, enable fencing on the cluster nodes.

 

Make sure the property is set to TRUE

 

check with

 

pcs -f stonith_cfg property

 

If the cluster fencing stonith property is set to FALSE then you can manually set it to TRUE on all the Cluster nodes:

 

pcs -f stonith_cfg property set stonith-enabled=true

 

[root@centos1 ~]# pcs -f stonith_cfg property
Cluster Properties:
stonith-enabled: true
[root@centos1 ~]#

 

You can also run pcs stonith cleanup fence_centos1, and do the same for the fence devices of the other hosts centos2 and centos3.

 

[root@centos1 ~]# pcs stonith cleanup fence_centos1
Cleaned up fence_centos1 on centos3.localdomain
Cleaned up fence_centos1 on centos2.localdomain
Cleaned up fence_centos1 on centos1.localdomain
Waiting for 3 replies from the controller
… got reply
… got reply
… got reply (done)
[root@centos1 ~]#

 

 

If a stonith id or node is not specified then all stonith resources and devices will be cleaned.

pcs stonith cleanup

 

then do

 

pcs stonith status

 

[root@centos1 ~]# pcs stonith status
* fence_centos1 (stonith:fence_xvm): Started centos3.localdomain
* fence_centos2 (stonith:fence_xvm): Started centos3.localdomain
* fence_centos3 (stonith:fence_xvm): Started centos3.localdomain
[root@centos1 ~]#

 

 

Some other stonith fencing commands:

 

To list the available fence agents, execute the command below on any of the cluster nodes:

 

# pcs stonith list

 

(This can take several seconds; don't kill it!)

 

root@ubuntu1:~# pcs stonith list
apcmaster - APC MasterSwitch
apcmastersnmp - APC MasterSwitch (SNMP)
apcsmart - APCSmart
baytech - BayTech power switch
bladehpi - IBM BladeCenter (OpenHPI)
cyclades - Cyclades AlterPath PM
external/drac5 - DRAC5 STONITH device
.. .. .. list truncated ...

 

 

To get more details about the respective fence agent you can use:

 

root@ubuntu1:~# pcs stonith describe fence_xvm
fence_xvm - Fence agent for virtual machines

 

fence_xvm is an I/O fencing agent which can be used with virtual machines.

 

Stonith options:
debug: Specify (stdin) or increment (command line) debug level
ip_family: IP Family ([auto], ipv4, ipv6)
multicast_address: Multicast address (default=225.0.0.12 / ff05::3:1)
ipport: TCP, Multicast, VMChannel, or VM socket port (default=1229)
.. .. .. list truncated . ..

 


Cluster Fencing Overview

There are two main types of cluster fencing:  power fencing and fabric fencing.

 

Both of these fencing methods require a fencing device to be implemented, such as a power switch or the virtual fencing daemon and fencing agent software to take care of communication between the cluster and the fencing device.

 

Power fencing

 

Power fencing cuts ELECTRIC POWER to the node and is known as STONITH ("Shoot The Other Node In The Head"). Make sure ALL the power supplies to a node are cut off.

 

Two different kinds of power fencing devices exist:

 

External fencing hardware: for example, a network-controlled power socket block which cuts off power.

 

Internal fencing hardware: for example iLO (Integrated Lights-Out, from HP), DRAC (Dell Remote Access Controller), IPMI (Intelligent Platform Management Interface), or virtual machine fencing. These also power off the hardware of the node.

 

Power fencing can be configured to turn the target machine off and keep it off, or to turn it off and then on again. Turning a machine back on has the added benefit that the machine should come back up cleanly and rejoin the cluster if the cluster services have been enabled.

 

BUT: It is best NOT to permit an automatic rejoin to the cluster. This is because if a node has failed, there will be a reason and a cause and this needs to be investigated first and remedied.

 

Power fencing for a node with multiple power supplies must be configured to ensure ALL power supplies are turned off before being turned on again.

 

If this is not done, the node to be fenced never actually gets properly fenced because it still has power, defeating the point of the fencing operation.

 

Bear in mind that you should NOT use an IPMI device which shares power or network access with the host, because a power or network failure would then cause both the host AND its fencing device to fail.

 

Fabric fencing

 

disconnects a node from STORAGE. This is done either by closing ports on an FC (Fibre Channel) switch or by using SCSI reservations.

 

The node will not automatically rejoin.

 

If a node is fenced only with fabric fencing and not in combination with power fencing, then the system administrator must ensure the machine will be ready to rejoin the cluster. Usually this will be done by rebooting the failed node.

 

There are a variety of different fencing agents available to implement cluster node fencing.

 

Multiple fencing

 

Fencing methods can be combined; this is sometimes referred to as "nested fencing".

 

For example, a first level of fencing might cut off Fibre Channel by blocking ports on the FC switch, with a second level in which an iLO interface powers down the offending machine.

 

TIP: Don’t run production environment clusters without fencing enabled!

 

If a node fails, you cannot admit it back into the cluster unless it has been fenced.

 

There are a number of different ways of implementing these fencing systems. The notes below give an overview of some of these systems.

 

SCSI fencing

 

SCSI fencing does not require any physical fencing hardware.

 

SCSI Reservation is a mechanism which allows SCSI clients or initiators to reserve a LUN for their exclusive access only and prevents other initiators from accessing the device.

 

SCSI reservations are used to control access to a shared SCSI device such as a hard drive.

 

An initiator configures a reservation on a LUN to prevent another initiator or SCSI client from making changes to the LUN. This is a similar concept to the file-locking concept.

 

SCSI reservations are defined and released by the SCSI initiator.

 

SBD fencing

 

SBD (Storage-Based Device, sometimes called "Storage-Based Death")

 

The SBD daemon together with the STONITH agent, provides a means of enabling STONITH and fencing in clusters through the means of shared storage, rather than requiring external power switching.

The SBD daemon runs on all cluster nodes and monitors the shared storage. SBD uses its own small shared disk partition for its administrative purposes. Each node has a small storage slot on the partition.

 

When it loses access to the majority of SBD devices, or notices another node has written a fencing request to its SBD storage slot, SBD will ensure the node will immediately fence itself.

 

Virtual machine fencing

Cluster nodes which run as virtual machines on KVM can be fenced using the KVM software interface libvirt and KVM software fencing device fence-virtd running on the KVM hypervisor host.

 

KVM Virtual machine fencing works using multicast mode by sending a fencing request signed with a shared secret key to the libvirt fencing multicast group.

 

This means that the node virtual machines can even be running on different hypervisor systems, provided that all the hypervisors have fence-virtd configured for the same multicast group, and are also using the same shared secret.

 

A note about monitoring STONITH resources

 

Fencing devices are a vital part of high-availability clusters, but they involve system and traffic overhead. Power management devices can be adversely impacted by high levels of broadcast traffic.

 

Also, some devices cannot process more than ten or so connections per minute.  Most cannot handle more than one connection session at any one moment and can become confused if two clients are attempting to connect at the same time.

 

For most fencing devices a monitoring interval of around 1800 seconds (30 minutes) and a status check on the power fencing devices every couple of hours should generally be sufficient.
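
As a sketch of how such an interval might be set with pcs (the resource name and option values here are illustrative):

pcs stonith create fence_centos1 fence_xvm port="centos1" pcmk_host_list="centos1.localdomain" op monitor interval=1800s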

 

Redundant Fencing

 

Redundant or multiple fencing is where fencing methods are combined. This is sometimes also referred to as “nested fencing”.
 

For example, a first level of fencing might cut off Fibre Channel by blocking ports on the FC switch, with a second level in which an iLO interface powers down the offending machine.
 

You add different fencing levels by using pcs stonith level.
 

All level 1 device methods are tried first, then if no success it will try the level 2 devices.
 

Set with:
 

pcs stonith level add <level> <node> <devices>

e.g.
 
pcs stonith level add 1 centos1 fence_centos1_ilo
 

pcs stonith level add 2 centos1 fence_centos1_apc

 

to remove a level use:
 

pcs stonith level remove
 

to view the fence level configurations use
 

pcs stonith level

 


How To Install Pacemaker and Corosync on Centos

This article sets out how to install the cluster management software Pacemaker and the cluster membership software Corosync on CentOS 8.

 

For this example, we are setting up a three node cluster using virtual machines on the Linux KVM hypervisor platform.

 

The virtual machines have the KVM names and hostnames centos1, centos2, and centos3.

 

Each node has two network interfaces: one for the KVM bridged NAT network (KVM network name: default, via eth0) and the other for the cluster subnet 10.0.8.0 (KVM network name: network-10.0.8.0, via eth1). DHCP is not used for either of these interfaces, as Pacemaker and Corosync require static IP addresses.

 

The machine centos1 will be our current designated co-ordinator (DC) cluster node.

 

First, make sure you have first created an ssh-key for root on the first node:

 

[root@centos1 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:********** root@centos1.localdomain

 

then copy the ssh key to the other nodes:

 

ssh-copy-id centos2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: “/root/.ssh/id_rsa.pub”
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

 

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
(if you think this is a mistake, you may want to use -f option)

 

[root@centos1 .ssh]#

Next, you need to enable the HighAvailability repository:

 

[root@centos1 ~]# yum repolist all | grep -i HighAvailability
ha    CentOS Stream 8 - HighAvailability    disabled
[root@centos1 ~]# dnf config-manager --set-enabled ha
[root@centos1 ~]# yum repolist all | grep -i HighAvailability
ha    CentOS Stream 8 - HighAvailability    enabled
[root@centos1 ~]#

 

Next, install the following packages:

 

[root@centos1 ~]# yum install epel-release

 

[root@centos1 ~]# yum install pcs fence-agents-all

 

Next, STOP and DISABLE Firewall for lab testing convenience:

 

[root@centos1 ~]# systemctl stop firewalld
[root@centos1 ~]#
[root@centos1 ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@centos1 ~]#

 

then check with:

 

[root@centos1 ~]# systemctl status firewalld
● firewalld.service – firewalld – dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)

 

Next we enable pcsd. This is the Pacemaker/Corosync configuration daemon service:

 

[root@centos1 ~]# systemctl enable --now pcsd
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
[root@centos1 ~]#

 

then change the default password for user hacluster:

 

echo | passwd --stdin hacluster

 

Changing password for user hacluster.

passwd: all authentication tokens updated successfully.
[root@centos2 ~]#

 

Then, on only ONE of the nodes (I am doing it on centos1, as this will be the default DC for the cluster), run:

 

pcs host auth centos1.localdomain centos2.localdomain centos3.localdomain

 

NOTE: the correct command is pcs host auth, not pcs cluster auth as given in some instruction material; the syntax has since changed.

 

[root@centos1 .ssh]# pcs host auth centos1.localdomain centos2.localdomain centos3.localdomain
Username: hacluster
Password:
centos1.localdomain: Authorized
centos2.localdomain: Authorized
centos3.localdomain: Authorized
[root@centos1 .ssh]#

 

Next, on centos1, as this will be our default DC (designated coordinator node) we create a corosync secret key:

 

[root@centos1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 2048 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
[root@centos1 corosync]#

 

Then copy the key to the other 2 nodes:

 

scp /etc/corosync/authkey centos2:/etc/corosync/
scp /etc/corosync/authkey centos3:/etc/corosync/

 

[root@centos1 corosync]# pcs cluster setup hacluster centos1.localdomain addr=10.0.8.11 centos2.localdomain addr=10.0.8.12 centos3.localdomain addr=10.0.8.13
Sending 'corosync authkey', 'pacemaker authkey' to 'centos1.localdomain', 'centos2.localdomain', 'centos3.localdomain'
centos1.localdomain: successful distribution of the file 'corosync authkey'
centos1.localdomain: successful distribution of the file 'pacemaker authkey'
centos2.localdomain: successful distribution of the file 'corosync authkey'
centos2.localdomain: successful distribution of the file 'pacemaker authkey'
centos3.localdomain: successful distribution of the file 'corosync authkey'
centos3.localdomain: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'centos1.localdomain', 'centos2.localdomain', 'centos3.localdomain'
centos1.localdomain: successful distribution of the file 'corosync.conf'
centos2.localdomain: successful distribution of the file 'corosync.conf'
centos3.localdomain: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
[root@centos1 corosync]#

 

Note I had to specify the IP addresses for the nodes. This is because these nodes each have TWO network interfaces with separate IP addresses. If the nodes only had one network interface, then you can leave out the addr= setting.

 

Next you can start the cluster:

 

[root@centos1 corosync]# pcs cluster start
Starting Cluster…
[root@centos1 corosync]#
[root@centos1 corosync]#
[root@centos1 corosync]# pcs cluster status
Cluster Status:
Cluster Summary:
* Stack: unknown
* Current DC: NONE
* Last updated: Mon Feb 22 12:57:37 2021
* Last change: Mon Feb 22 12:57:35 2021 by hacluster via crmd on centos1.localdomain
* 3 nodes configured
* 0 resource instances configured
Node List:
* Node centos1.localdomain: UNCLEAN (offline)
* Node centos2.localdomain: UNCLEAN (offline)
* Node centos3.localdomain: UNCLEAN (offline)

 

PCSD Status:
centos1.localdomain: Online
centos3.localdomain: Online
centos2.localdomain: Online
[root@centos1 corosync]#

 

 

The Node List says “UNCLEAN”.

 

So I did:

 

pcs cluster start centos1.localdomain
pcs cluster start centos2.localdomain
pcs cluster start centos3.localdomain
pcs cluster status

 

then the cluster was started in clean running state:

 

[root@centos1 cluster]# pcs cluster status
Cluster Status:
Cluster Summary:
* Stack: corosync
* Current DC: centos1.localdomain (version 2.0.5-7.el8-ba59be7122) – partition with quorum
* Last updated: Mon Feb 22 13:22:29 2021
* Last change: Mon Feb 22 13:17:44 2021 by hacluster via crmd on centos1.localdomain
* 3 nodes configured
* 0 resource instances configured
Node List:
* Online: [ centos1.localdomain centos2.localdomain centos3.localdomain ]

 

PCSD Status:
centos1.localdomain: Online
centos2.localdomain: Online
centos3.localdomain: Online
[root@centos1 cluster]#


How To Set Static or Permanent IP Addresses for Virtual Machines in KVM

Default KVM behaviour is to issue temporary DHCP IP addresses to its virtual machines. You can suppress this behaviour for newly defined subnets by simply unticking the "Enable DHCP" option for the defined subnet in the Virtual Networks section of the KVM dashboard.
  
However, the NAT bridged network interface is set to automatically issue DHCP IPs. This can be inconvenient when you want to log in to the machine from a shell terminal on your PC or laptop rather than accessing the machine via the KVM console terminal.

  
To change these IPs from DHCP to Static, you need to carry out the following steps, using my current environment as an example:
  
Let’s say I want to change the IP of a machine called suse1 from DHCP to Static IP.
    
1. On the KVM host machine, display the list of current KVM networks:
  
virsh net-list
  
root@yoga:/etc# virsh net-list
 Name               State    Autostart   Persistent
-----------------------------------------------------
 default            active   yes         yes
 network-10.0.7.0   active   yes         yes
 network-10.0.8.0   active   yes         yes
  
The interface of the machine I want to set is located on network “default”.
  
2. Find the MAC address or addresses of the virtual machine whose IP address you want to set:
  
Note the machine name is the name used to define the machine in KVM. It need not be the same as the OS hostname of the machine.
  
virsh dumpxml <machine name> | grep -i '<mac'
  
root@yoga:/home/kevin# virsh dumpxml suse1 | grep -i '<mac'
<mac address='52:54:00:b4:0c:8d'/>
<mac address='52:54:00:e9:97:91'/>
  
So the machine has two network interfaces.
  
I know from ifconfig (or ip a) that the interface I want to set is the first one, eth0, with mac address: 52:54:00:b4:0c:8d.
  
This is the one that is using the network called “default”.
  
suse1:~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:b4:0c:8d brd ff:ff:ff:ff:ff:ff
inet 192.168.122.179/24 brd 192.168.122.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:feb4:c8d/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:e9:97:91 brd ff:ff:ff:ff:ff:ff
inet 10.0.7.11/24 brd 10.0.7.255 scope global eth1
valid_lft forever preferred_lft forever
inet 10.0.7.100/24 brd 10.0.7.255 scope global secondary eth1
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fee9:9791/64 scope link
valid_lft forever pre
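  
You can also confirm from the KVM host which libvirt network each of the machine's interfaces is attached to:
  
virsh domiflist suse1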
  
3. Edit the network configuration file:
  
virsh net-edit <network name>
  
So in this example I do:
  
virsh net-edit default
  

Add the following entry between <dhcp> and </dhcp> as follows:
  
<host mac='xx:xx:xx:xx:xx:xx' name='virtual_machine' ip='xxx.xxx.xxx.xxx'/>
  
whereby
  
mac = mac address of the virtual machine
  
name = KVM virtual machine name
  
IP = IP address you want to set for this interface
  
So for this example I add:
  
<host mac='52:54:00:b4:0c:8d' name='suse1' ip='192.168.122.11'/>
  
then save and close the file.
  
4. Then restart the KVM DHCP service:
  
virsh net-destroy <network name>
  
virsh net-destroy default
  
virsh net-start <network name>
  
virsh net-start default
  
5. Shutdown the virtual machine:
  
virsh shutdown <machine name>
  
virsh shutdown suse1
  
6. Stop the network service:
  
virsh net-destroy default
  
7. Restart the libvirtd service:
  
systemctl restart virtlogd.socket
systemctl restart libvirtd
  
8. Restart the network:
  
virsh net-start <network name>
  
virsh net-start default
  
9. Then restart the KVM desktop virt-manager
  
virt-manager
  
10. Then restart the virtual machine again, either on the KVM desktop or else using the command:
  
virsh start <virtual machine>
  
virsh start suse1
  
If the steps have all been performed correctly, the network interface on the machine should now have the static IP address you defined instead of a DHCP address from KVM.
  
Verify on the guest machine with ifconfig or ip a
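  
Assuming the guest still picks up its (now reserved) address from the libvirt DHCP server, you can also check the assignment from the KVM host with:
  
virsh net-dhcp-leases default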

 
