How To Get Started With Vagrant

What is Vagrant

 

Vagrant is a simple open-source virtual machine manager, developed by HashiCorp, that lets you easily create and run a minimal pre-built virtual machine from a box image and SSH straight into it, with no further configuration required.

It’s ideal for developers who require a test machine for their application development.

 

Vagrant itself only manages your virtual machines; it drives VirtualBox out of the box and can use other platforms, such as libvirt, by means of plug-ins.

Vagrant acts as a wrapper around virtual machines, communicating with providers (hypervisors) via their APIs. The default provider for Vagrant is VirtualBox.

 

Vagrant is available in the official Ubuntu repository and can be installed with apt, apt-get, or aptitude.
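For example, with apt:

sudo apt update
sudo apt install vagrant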

 

 

How to set up your Vagrant environment

 

Create a directory called ~/Vagrant. This is where your Vagrantfiles will be stored.

 

mkdir ~/Vagrant

 

In this directory, create a subdirectory for the distribution you want to download.

 

For instance, for a CentOS test server, create a CentOS directory:

 

mkdir ~/Vagrant/centos

 

cd ~/Vagrant/centos

 

Next you need to create a Vagrantfile:

 

vagrant init

 

You should now see the following Vagrantfile in the current directory:

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://vagrantcloud.com/search.
  config.vm.box = "base"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # NOTE: This will enable public access to the opened port
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine and only allow access
  # via 127.0.0.1 to disable public access
  # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matches to bridged networking.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Enable provisioning with a shell script. Additional provisioners such as
  # Ansible, Chef, Docker, Puppet and Salt are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   apt-get update
  #   apt-get install -y apache2
  # SHELL
end

 

 

To define the virtual machine image (known in Vagrant as a “box”), edit the following line in the Vagrantfile:

 

config.vm.box = "[box_name]"
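For instance, to use the CentOS 8 box referenced later in this article:

config.vm.box = "generic/centos8"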

 

To validate the Vagrantfile use:

 

vagrant validate

 

 

After changing the Vagrantfile, apply the changes with:

 

vagrant reload

 

 

To list the status of the current running VM:

 

vagrant status

 

 

To get debug info on deployment:

 

vagrant --debug up

 

To display the full list of guest ports mapped to host machine ports:

 

vagrant port [vm_name]

 

 

 

 

Selecting a Vagrant virtual machine to run

 

Vagrant boxes are sourced from three different places: HashiCorp (the maintainers of Vagrant), distribution maintainers, and third parties.

 

You can browse through the images at app.vagrantup.com/boxes/search.

 

vagrant init generic/centos8

 

The init subcommand creates the Vagrantfile configuration file in your current directory, turning that directory into a Vagrant environment.

 

You can view a list of current known Vagrant environments by means of the global-status subcommand:

 

vagrant global-status

id       name     provider  state    directory
-----------------------------------------------------
49c797f  default  libvirt   running  /home/tux/Vagrant/centos8

Starting a virtual machine with Vagrant

 

You can then start your virtual machine by entering:

 

vagrant up

 

This causes Vagrant to download the virtual machine image if it doesn’t already exist locally, set up a virtual network, and configure your box.

 

Entering a Vagrant virtual machine

 

Once your virtual machine is up and running, you can log in to it with vagrant ssh:

 

vagrant ssh
box$

 

You are connected to the box over SSH and can run any command native to the guest OS. It’s a full virtual machine with its own kernel, emulated hardware, and all the usual Linux software.

 

Leaving a Vagrant virtual machine

 

To leave your Vagrant virtual machine, log out of the guest as you normally would on a Linux machine:

 

box$ exit

 

Alternatively, you can power the virtual machine down:

 

box$ sudo poweroff

 

You can also stop the machine using the vagrant command on the host, after exiting the guest:

vagrant halt

 

 

Destroying a Vagrant virtual machine

 

When finished with a Vagrant virtual machine, you can destroy it:

 

vagrant destroy

 

Alternatively, you can remove a downloaded box image with the box remove subcommand:

vagrant box remove generic/centos8

 

What is libvirt

 

The libvirt project is a toolkit for managing virtualization, with support for KVM, QEMU, LXC, and more. It’s rather like a virtual machine API, allowing developers to test and run applications on virtual machines with minimal overhead.

 

On some distributions you may need to first start the libvirt daemon:

 

systemctl start libvirtd
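On systemd-based distributions you can also enable the daemon at boot in the same step:

sudo systemctl enable --now libvirtd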

 

Install vagrant-libvirt plugin in Linux

 

In order to run Vagrant virtual machines on KVM, you need to install the vagrant-libvirt plugin. This plugin adds the Libvirt provider to Vagrant and allows Vagrant to control and provision machines via Libvirt.

 

Install the necessary dependencies for the vagrant-libvirt plugin.

 

On Ubuntu:

 

$ sudo apt install qemu libvirt-daemon-system libvirt-clients libxslt-dev libxml2-dev libvirt-dev zlib1g-dev ruby-dev ruby-libvirt ebtables dnsmasq-base

 

Check the installed Vagrant version:

root@asus:/home/kevin# vagrant --version
Vagrant 2.3.4
root@asus:/home/kevin#

 

Now we can install the plug-in:

 

root@asus:/home/kevin# vagrant plugin install vagrant-libvirt
==> vagrant: A new version of Vagrant is available: 2.3.5 (installed version: 2.3.4)!
==> vagrant: To upgrade visit: https://www.vagrantup.com/downloads.html
Installing the 'vagrant-libvirt' plugin. This can take a few minutes...

 

Fetching formatador-1.1.0.gem
Fetching fog-core-2.3.0.gem
Fetching fog-json-1.2.0.gem
Fetching nokogiri-1.15.0-x86_64-linux.gem
Fetching fog-xml-0.1.4.gem
Fetching ruby-libvirt-0.8.0.gem
Building native extensions. This could take a while…
Fetching fog-libvirt-0.11.0.gem
Fetching xml-simple-1.1.9.gem
Fetching diffy-3.4.2.gem
Fetching vagrant-libvirt-0.12.0.gem
Installed the plugin 'vagrant-libvirt (0.12.0)'!
root@asus:/home/kevin#
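Optionally, you can make libvirt the default provider so that you don't have to pass --provider to every command; VAGRANT_DEFAULT_PROVIDER is a standard Vagrant environment variable:

export VAGRANT_DEFAULT_PROVIDER=libvirt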

 

 

Testing a Vagrant Box

First let’s download a Vagrant box that supports libvirt.

 

 

See https://vagrantcloud.com/generic/ubuntu2204

 

vagrant box add generic/ubuntu2204 --provider libvirt

 

Create a small configuration file to use this new Vagrant box:

 

cat <<-VAGRANTFILE > Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu2204"
end
VAGRANTFILE
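If you want to size the machine yourself, the same Vagrantfile can carry provider-specific settings. A minimal sketch, using the vagrant-libvirt provider's memory and cpus options:

cat <<-VAGRANTFILE > Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "generic/ubuntu2204"
  # vagrant-libvirt provider options: RAM in MB and vCPU count
  config.vm.provider "libvirt" do |lv|
    lv.memory = 2048
    lv.cpus = 2
  end
end
VAGRANTFILE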

 

 


 

 

And now bring up the system (this typically takes under 20 seconds):

 

vagrant up --provider libvirt

 

You can now log onto the virtual machine guest with:

 

vagrant ssh

 

Check the list of boxes present locally.

 

$ vagrant box list

 

root@asus:/home/kevin# vagrant box list
generic/ubuntu2204 (libvirt, 4.2.16)
root@asus:/home/kevin#

 

 

 

Vagrant will create a Linux bridge on the host system.

 

$ brctl show virbr1

root@asus:/home/kevin# brctl show virbr1
bridge name  bridge id          STP enabled  interfaces
virbr1       8000.525400075114  yes
root@asus:/home/kevin#
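Note that brctl is part of the bridge-utils package; on newer systems the iproute2 equivalent shows the same bridges:

ip -br link show type bridge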

 

root@asus:/home/kevin# vagrant ssh
vagrant@ubuntu2204:~$ df -h
Filesystem                         Size  Used  Avail  Use%  Mounted on
tmpfs                              198M  956K  197M     1%  /run
/dev/mapper/ubuntu--vg-ubuntu--lv   62G  4.9G   54G     9%  /
tmpfs                              988M     0  988M     0%  /dev/shm
tmpfs                              5.0M     0  5.0M     0%  /run/lock
/dev/vda2                          2.0G  130M  1.7G     8%  /boot
tmpfs                              198M  4.0K  198M     1%  /run/user/1000
vagrant@ubuntu2204:~$

 

 

 

Run virsh list on the host to see the Vagrant-managed VM listed:

 

virsh list

 

To SSH into the VM, use the vagrant ssh command.

 

vagrant ssh

 

 

To output valid .ssh/config syntax for connecting to this environment via ssh, run the ssh-config subcommand. You can append the output to your ~/.ssh/config file to connect with plain ssh.

 

$ vagrant ssh-config
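For example, since the machine here is named default, you can append the output to ~/.ssh/config and then connect with plain ssh:

vagrant ssh-config >> ~/.ssh/config
ssh default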

 

 

A complete vagrant up run looks like this:

root@asus:/home/kevin# vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'generic/ubuntu2204' version '4.2.16' is up to date...
==> default: Uploading base box image as volume into Libvirt storage...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default: -- Name: kevin_default
==> default: -- Description: Source: /home/kevin/Vagrantfile
==> default: -- Domain type: kvm
==> default: -- Cpus: 2
==> default: -- Feature: acpi
==> default: -- Feature: apic
==> default: -- Feature: pae
==> default: -- Clock offset: utc
==> default: -- Memory: 2048M
==> default: -- Base box: generic/ubuntu2204
==> default: -- Storage pool: default
==> default: -- Image(vda): /var/lib/libvirt/images/kevin_default.img, virtio, 128G
==> default: -- Disk driver opts: cache='default'
==> default: -- Graphics Type: vnc
==> default: -- Video Type: cirrus
==> default: -- Video VRAM: 256
==> default: -- Video 3D accel: false
==> default: -- Keymap: en-us
==> default: -- TPM Backend: passthrough
==> default: -- INPUT: type=mouse, bus=ps2
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Domain launching with graphics connection settings...
==> default: -- Graphics Port: 5900
==> default: -- Graphics IP: 127.0.0.1
==> default: -- Graphics Password: Not defined
==> default: -- Graphics Websocket: 5700
==> default: Waiting for domain to get an IP address...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 192.168.121.218:22
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Connection refused. Retrying...
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!
root@asus:/home/kevin#

 

 


 

 

To shut down the VM, run:

 

$ vagrant halt

 

 

To destroy the VM and remove all of its data, use vagrant destroy:

 

$ vagrant destroy

 

 

At any time, you can view a list of known Vagrant environments using the global-status subcommand:

 

$ vagrant global-status

 

root@asus:/home/kevin# vagrant global-status
id       name     provider  state    directory
----------------------------------------------------------------------
0466da3  default  libvirt   running  /home/kevin

The above shows information about all known Vagrant environments
on this machine. This data is cached and may not be completely
up-to-date (use "vagrant global-status --prune" to prune invalid
entries). To interact with any of the machines, you can go to that
directory and run Vagrant, or you can use the ID directly with
Vagrant commands from any directory. For example:
"vagrant destroy 1a2b3c4d"
root@asus:/home/kevin#

 

You can also power the VM off from inside the guest:

vagrant@ubuntu2204:~$ sudo poweroff
Connection to 192.168.121.218 closed by remote host.
root@asus:/home/kevin#

 

The box image itself remains available locally:

root@asus:/home/kevin# vagrant box list
generic/ubuntu2204 (libvirt, 4.2.16)
root@asus:/home/kevin#

 

 

CHEATSHEET FOR VAGRANT

Typing vagrant from the command line will display a list of all available commands.

 

Be sure that you are in the same directory as the Vagrantfile when running these commands!

 

Creating a VM

vagrant init — Initialize Vagrant with a Vagrantfile and ./.vagrant directory, using no specified base image. Before you can do vagrant up, you’ll need to specify a base image in the Vagrantfile.

vagrant init <boxpath> — Initialize Vagrant with a specific box. To find a box, go to the public Vagrant box catalog. When you find one you like, replace <boxpath> with its name. For example, vagrant init ubuntu/trusty64.

Starting a VM
vagrant up — starts vagrant environment (also provisions only on the FIRST vagrant up)
vagrant resume — resume a suspended machine (vagrant up works just fine for this as well)
vagrant provision — forces reprovisioning of the vagrant machine
vagrant reload — restarts vagrant machine, loads new Vagrantfile configuration
vagrant reload --provision — restart the virtual machine and force provisioning

Getting into a VM
vagrant ssh — connects to machine via SSH
vagrant ssh <boxname> — If you give your box a name in your Vagrantfile, you can ssh into a specific box with its name.

Stopping a VM
vagrant halt — stops the vagrant machine
vagrant suspend — suspends a virtual machine (remembers state)

 

Cleaning Up a VM
vagrant destroy — stops and deletes all traces of the vagrant machine
vagrant destroy -f — same as above, without confirmation

Boxes
vagrant box list — see a list of all installed boxes on your computer
vagrant box add <name> <url> — download a box image to your computer
vagrant box outdated — check whether the box is up to date
vagrant box update — download the latest version of the box
vagrant box remove <name> — deletes a box from the machine
vagrant package — packages a running virtualbox env in a reusable box

Saving Progress
vagrant snapshot save [options] [vm-name] <name> — vm-name is often default. Saves a snapshot so that we can roll back at a later time, as shown below.
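A quick example; vagrant snapshot restore is the matching rollback subcommand, and "before-upgrade" is just an illustrative snapshot name:

vagrant snapshot save default before-upgrade
vagrant snapshot restore default before-upgrade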

 

Tips
vagrant -v — get the vagrant version
vagrant status — outputs status of the vagrant machine
vagrant global-status — outputs status of all vagrant machines
vagrant global-status --prune — same as above, but prunes invalid entries
vagrant provision --debug — use the debug flag to increase the verbosity of the output
vagrant push — yes, vagrant can be configured to deploy code!
vagrant up --provision | tee provision.log — runs vagrant up, forces provisioning and logs all output to a file

Plugins
vagrant-hostsupdater : $ vagrant plugin install vagrant-hostsupdater to update your /etc/hosts file automatically each time you start or stop your vagrant box.

 

 

Multi-Machine
Vagrant can be used to run and control multiple guest machines via a Vagrantfile. This is called a “multi-machine” environment.

 

 

Defining Multiple Machines

 

Multiple machines are defined in the Vagrantfile using the config.vm.define statement.

 

This configuration directive effectively creates a Vagrant configuration within a configuration, for example:

 

 

Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: "echo Hello"

  config.vm.define "web" do |web|
    web.vm.box = "apache"
  end

  config.vm.define "db" do |db|
    db.vm.box = "mysql"
  end
end

 

Controlling Multiple Machines

 

Commands that target a specific virtual machine, such as vagrant ssh, require the name of the machine to be specified.

 

For example, here you would specify vagrant ssh web or vagrant ssh db.

 

Other commands, such as vagrant up, apply to all VMs by default.

 

Alternatively, you can target only specific machines, such as

 

vagrant up web

 

or

 

vagrant up db.

 

Autostart Machines

 

By default in a multi-machine environment, vagrant up will start all the defined VMs.

 

The autostart setting enables you to instruct Vagrant NOT to start specific machines.

 

For example:

 

config.vm.define "web"
config.vm.define "db"
config.vm.define "db_follower", autostart: false
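A machine excluded from autostart can still be brought up explicitly by name:

vagrant up db_follower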

 

If you then run vagrant up with the above settings, Vagrant will automatically start the "web" and "db" machines but will not launch the "db_follower" VM.

 

 

 

 


How To Install Cluster Fencing Using Libvirt on KVM Virtual Machines

These are my practical notes on installing libvirt fencing on CentOS cluster nodes running as virtual machines on the KVM hypervisor platform.

 

 

NOTE: If a local firewall is enabled, open the chosen TCP port (in this example, the default of 1229) to the host.

 

Alternatively, if you are using a testing or training environment, you can disable the firewall. Do not do this on production environments!

 

1. On the KVM host machine, install the fence-virtd, fence-virtd-libvirt, and fence-virtd-multicast packages. These packages provide the virtual machine fencing daemon, libvirt integration, and multicast listener, respectively.

yum -y install fence-virtd fence-virtd-libvirt fence-virtd-multicast

 

2. On the KVM host, create a shared secret key called /etc/cluster/fence_xvm.key. The target directory /etc/cluster needs to be created manually on the nodes and the KVM host.

 

mkdir -p /etc/cluster

 

dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=1k count=4

 

3. Distribute the shared secret key /etc/cluster/fence_xvm.key from the KVM host to all cluster nodes, keeping the name and the path the same as on the KVM host.

 

scp /etc/cluster/fence_xvm.key centos1vm:/etc/cluster/

 

and copy it to the other nodes as well, as in the example below.
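For example, assuming the remaining nodes are reachable as centos2vm and centos3vm (hypothetical hostnames following the same pattern as above):

# copy the shared key to each remaining cluster node
for node in centos2vm centos3vm; do
    scp /etc/cluster/fence_xvm.key ${node}:/etc/cluster/
done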

4. On the KVM host, configure the fence_virtd daemon. Defaults can be used for most options, but make sure to select the libvirt back end and the multicast listener. Also make sure you give the correct location of the shared key you just created (here /etc/cluster/fence_xvm.key):

 

fence_virtd -c

5. Enable and start the fence_virtd daemon on the hypervisor.

 

systemctl enable fence_virtd
systemctl start fence_virtd

6. Also install fence_virtd on the nodes, then enable and start it there:

 

root@yoga:/etc# systemctl enable fence_virtd
Synchronizing state of fence_virtd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable fence_virtd
root@yoga:/etc# systemctl start fence_virtd
root@yoga:/etc# systemctl status fence_virtd
● fence_virtd.service - Fence-Virt system host daemon
Loaded: loaded (/lib/systemd/system/fence_virtd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-02-23 14:13:20 CET; 6min ago
Docs: man:fence_virtd(8)
man:fence_virt.conf(5)
Main PID: 49779 (fence_virtd)
Tasks: 1 (limit: 18806)
Memory: 3.2M
CGroup: /system.slice/fence_virtd.service
└─49779 /usr/sbin/fence_virtd -w

 

Feb 23 14:13:20 yoga systemd[1]: Starting Fence-Virt system host daemon…
root@yoga:/etc#

 

7. Test the KVM host multicast connectivity with:

 

fence_xvm -o list

root@yoga:/etc# fence_xvm -o list
centos-base c023d3d6-b2b9-4dc2-b0c7-06a27ddf5e1d off
centos1 2daf2c38-b9bf-43ab-8a96-af124549d5c1 on
centos2 3c571551-8fa2-4499-95b5-c5a8e82eb6d5 on
centos3 2969e454-b569-4ff3-b88a-0f8ae26e22c1 on
centosstorage 501a3dbb-1088-48df-8090-adcf490393fe off
suse-base 0b360ee5-3600-456d-9eb3-d43c1ee4b701 off
suse1 646ce77a-da14-4782-858e-6bf03753e4b5 off
suse2 d9ae8fd2-eede-4bd6-8d4a-f2d7d8c90296 off
suse3 7ad89ea7-44ae-4965-82ba-d29c446a0607 off
root@yoga:/etc#

 

 

8. Create your fencing devices, one for each node:

 

pcs stonith create <name for our fencing device for this vm cluster host> fence_xvm port="<the KVM vm name>" pcmk_host_list="<FQDN of the cluster host>"

 

Run one for each node, with the values set accordingly for each host. It will look like this:

 

MAKE SURE YOU SET ALL THE NAMES CORRECTLY!

 

On ONE of the nodes, create all the following fence devices; usually one does this on the DC (current designated controller) node:

 

[root@centos1 etc]# pcs stonith create fence_centos1 fence_xvm port="centos1" pcmk_host_list="centos1.localdomain"
[root@centos1 etc]# pcs stonith create fence_centos2 fence_xvm port="centos2" pcmk_host_list="centos2.localdomain"
[root@centos1 etc]# pcs stonith create fence_centos3 fence_xvm port="centos3" pcmk_host_list="centos3.localdomain"
[root@centos1 etc]#

 

9. Next, enable fencing on the cluster nodes.

 

Make sure the stonith-enabled property is set to true. Check with:

 

pcs -f stonith_cfg property

 

If the cluster's stonith-enabled property is set to false, you can set it to true manually (cluster properties are cluster-wide, so this only needs to be done once):

 

pcs -f stonith_cfg property set stonith-enabled=true

 

[root@centos1 ~]# pcs -f stonith_cfg property
Cluster Properties:
stonith-enabled: true
[root@centos1 ~]#

 

You can also run pcs stonith cleanup on each fence device, for example fence_centos1 (and likewise for the centos2 and centos3 devices):

 

[root@centos1 ~]# pcs stonith cleanup fence_centos1
Cleaned up fence_centos1 on centos3.localdomain
Cleaned up fence_centos1 on centos2.localdomain
Cleaned up fence_centos1 on centos1.localdomain
Waiting for 3 replies from the controller
… got reply
… got reply
… got reply (done)
[root@centos1 ~]#

 

 

If a stonith ID or node is not specified, then all stonith resources and devices will be cleaned:

pcs stonith cleanup

 

Then check the fencing status:

 

pcs stonith status

 

[root@centos1 ~]# pcs stonith status
* fence_centos1 (stonith:fence_xvm): Started centos3.localdomain
* fence_centos2 (stonith:fence_xvm): Started centos3.localdomain
* fence_centos3 (stonith:fence_xvm): Started centos3.localdomain
[root@centos1 ~]#

 

 

Some other stonith fencing commands:

 

To list the available fence agents, execute the command below on any of the cluster nodes:

 

# pcs stonith list

 

(this can take several seconds; don't kill it!)

 

root@ubuntu1:~# pcs stonith list
apcmaster - APC MasterSwitch
apcmastersnmp - APC MasterSwitch (SNMP)
apcsmart - APCSmart
baytech - BayTech power switch
bladehpi - IBM BladeCenter (OpenHPI)
cyclades - Cyclades AlterPath PM
external/drac5 - DRAC5 STONITH device
.. .. .. list truncated ...

 

 

To get more details about a particular fence agent, use pcs stonith describe:

 

root@ubuntu1:~# pcs stonith describe fence_xvm
fence_xvm - Fence agent for virtual machines

fence_xvm is an I/O Fencing agent which can be used with virtual machines.

Stonith options:
  debug: Specify (stdin) or increment (command line) debug level
  ip_family: IP Family ([auto], ipv4, ipv6)
  multicast_address: Multicast address (default=225.0.0.12 / ff05::3:1)
  ipport: TCP, Multicast, VMChannel, or VM socket port (default=1229)
.. .. .. list truncated ...

 
