
How to Install and Use KVM libvirt on Ubuntu

Virtualization Compatibility Check

 

Run the command below to install the cpu-checker package:

 

apt install -y cpu-checker

 

Then check whether your CPU supports virtualization by running:

 

kvm-ok

 

If you get the following result, your CPU supports virtualization with KVM:

 

root@asus:~# kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
root@asus:~#

 

 

Installing KVM on Ubuntu

 

Provided your CPU supports virtualization, you can install KVM on your machine.

 

To install KVM on Ubuntu, run the apt command below.

 

apt install qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils virt-manager -y

 

These are the packages that will be installed:

 

qemu-kvm – The userspace component of QEMU responsible for hardware-accelerated KVM guest machines.

 

libvirt-daemon-system – This is the libvirt daemon and related management tools used for creating, managing, and destroying virtual machines on the host machine.

 

libvirt-clients – This provides the libvirt command-line interface for managing virtual machines.

 

bridge-utils – This package provides the userland tools used for configuring virtual network bridges. On a hosted virtualization system like KVM, the network bridge connects the virtual guest VM network with the real host machine network.

 

virt-manager – This provides a graphical user interface (GUI) for managing your virtual machines should you wish to use it.

 

If you want the libvirt daemon to start automatically at boot then enable libvirtd:

 

systemctl enable libvirtd

 

If libvirtd is not already running, do:

 

systemctl start libvirtd

 

Check if KVM is loaded in the kernel with:

 

lsmod | grep -i kvm
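On an Intel host you would expect to see the kvm and kvm_intel modules in the output (kvm_amd on AMD). As a sketch, a script can test for the modules; the sample variable below stands in for real lsmod output, which you would pipe in directly on an actual host:

```shell
# Sketch: on a real host, replace "$sample" with the output of lsmod
sample="kvm_intel  368640  0
kvm       1146880  1 kvm_intel"
if printf '%s\n' "$sample" | grep -q '^kvm'; then
    echo "KVM kernel modules are loaded"
else
    echo "KVM kernel modules are NOT loaded"
fi
```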

 

 

 

Overview of basic libvirt commands

 

 

To launch the libvirt KVM GUI Virtual Machine Manager run:

 

 

virt-manager

 

Alternatively, you can use the virt-install command to create machines from the CLI.

 

Example:

 

virt-install --name fedora1 --vcpus 1 --memory 2048 --cdrom /root/Fedora-Workstation-Live-x86_64-36-1.5.iso --disk size=12 --check disk_size=off

 

 

This creates a Fedora VM named fedora1 with 1 vCPU, 2GB RAM, and a 12GB virtual hard drive, booting from the ISO.

 

 

To list all VMs:

 

virsh list --all

 

To shut down the machine:

 

 

virsh shutdown fedora1

 

 

To start the machine:

 

 

virsh start fedora1

 

 

To display the storage allocated to the machine:

 

 

virsh domblklist fedora1

 

To forcibly power off (destroy) the machine, which stops the VM but does not delete it:

 

virsh destroy fedora1

 

To delete the machine and its disk image, use virsh undefine. This removes the VM definition, and the --storage option takes a comma-separated list of storage volumes you wish to remove along with it. Example:

 

virsh undefine fedora1 --storage /var/lib/libvirt/images/fedora1-1.qcow2

 

 


Introduction to cloud-init

To quote from the official cloud-init project website:

 

Cloud-init is the industry standard multi-distribution method for cross-platform cloud instance initialization.

 

It is supported across all major public cloud providers, provisioning systems for private cloud infrastructure, and bare-metal installations.

 

Cloud-init will identify the cloud it is running on during boot, read any provided metadata from the cloud and initialize the system accordingly.

 

This may range from setting up the network and storage devices to configuring SSH access keys and many other aspects of a system.

 

Later on, cloud-init will also parse and process any optional user or vendor data that was passed to the instance.

 

 

The official cloud-init project site is at https://cloudinit.readthedocs.io/en/latest/topics/availability.html

 

cloud-init automates the initialization of cloud instances during VM system startup.

 

You can configure cloud-init to perform various tasks, such as:

 

setting the hostname
installing software packages on a VM
running scripts
suppressing or modifying default VM behaviour
generating SSH host keys
adding SSH public keys to a user's .ssh/authorized_keys for SSH logins
setting up mount points

 

cloud-init reads its instructions from YAML-formatted files.

 

When an instance boots, the cloud-init service starts, searches for its instructions, and executes them.

 

You define the cloud-init tasks by configuring the /etc/cloud/cloud.cfg file and by adding *.cfg files to the /etc/cloud/cloud.cfg.d/ directory; each such file must begin with a #cloud-config line.
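As an illustration, a hypothetical drop-in file such as /etc/cloud/cloud.cfg.d/99-custom.cfg (the filename and all values are examples, not defaults) covering a few common tasks might look like this:

```yaml
#cloud-config
# Hypothetical example drop-in file; adjust all values for your environment.
hostname: demo-vm
packages:
  - nginx
ssh_authorized_keys:
  - ssh-ed25519 AAAA... user@example.com
runcmd:
  - systemctl enable --now nginx
```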

 

cloud-init runs through five stages during a system boot. Those stages determine whether cloud-init runs and where it will look for its datasources:

 

The cloud-init generator stage: this runs via systemd and determines whether to run cloud-init on bootup.

 

The local stage: in which cloud-init finds local datasources and activates the defined network configuration.

 

The network stage: in which cloud-init processes user data and runs the modules listed under cloud_init_modules in the cloud.cfg file.

 

This allows you to enable, disable, or add modules to the cloud_init_modules section.

 

The config stage: in which cloud-init runs the modules listed under cloud_config_modules in the cloud.cfg file. You can enable, disable, or add modules to the cloud_config_modules section.

 

The final stage: in which cloud-init runs what you have included under cloud_final_modules in the cloud.cfg file.

 

You can include package installations that you want to run after booting, as well as configuration management plug-ins and user scripts. You can also enable, disable, or add modules to the cloud_final_modules section.
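The module lists live in cloud.cfg itself. A trimmed sketch of those sections (only a few representative module names shown, not a complete file) looks like this:

```yaml
# Excerpt sketch of /etc/cloud/cloud.cfg - not a complete file
cloud_init_modules:
  - set_hostname
  - users-groups
cloud_config_modules:
  - runcmd
  - package-update-upgrade-install
cloud_final_modules:
  - scripts-user
  - final-message
```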

 

 

 

How To Install cloud-init

 

On Ubuntu/Debian systems:

 

check if installed:

 

dpkg --get-selections | grep cloud-init

 

root@gemini:~#
root@gemini:~# dpkg --get-selections | grep cloud-init
cloud-init install
cloud-initramfs-copymods install
cloud-initramfs-dyn-netconf install
root@gemini:~#

 

 

If you wish to remove the default version shipped with the OS and reinstall a fresh version:

 

apt remove --purge cloud-init

 

Then reinstall:

 

apt install cloud-init

 

root@gemini:/etc/cloud# ll
total 32
drwxr-xr-x 4 root root 4096 Mar 23 11:49 ./
drwxr-xr-x 123 root root 12288 Mar 23 06:42 ../
-rw-r--r-- 1 root root 3819 Feb 8 05:55 cloud.cfg
drwxr-xr-x 2 root root 4096 Feb 8 05:55 cloud.cfg.d/
-rw-r--r-- 1 root root 3819 Mar 23 11:48 cloud.cfg.orig
drwxr-xr-x 2 root root 4096 Mar 23 11:52 templates/
root@gemini:/etc/cloud#

 

The main config file is /etc/cloud/cloud.cfg; in it you can disable items and functionality that you do not want the VM to use.

 

 

 

Troubleshooting cloud-init

 

Check the /etc/systemd/network directory. If it contains a symlink to /dev/null, this must be removed, or cloud-init will not work.

 

After changing the config, you can activate it in the existing VM session by running these two commands:

 

 

cloud-init clean

cloud-init init

 

 


Installing Docker on Ubuntu

First of all, a quick overview of Docker.

 

 

The definition of Docker from Wikipedia:

 

 

Docker is a set of platform as a service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries and configuration files; they can communicate with each other through well-defined channels. Because all of the containers share the services of a single operating system kernel, they use fewer resources than virtual machines.

 

 

Docker is a container virtualization system used to separate processes and applications from other processes running on the same physical machine.

 

A container is simply another process on a machine that has been isolated from all other processes on the host machine. This isolation methodology utilizes kernel namespaces and cgroups, features that have been present in Linux for a long time.

 

 

What is a container image?

 

When running a container, it uses an isolated filesystem. This custom filesystem is provided by a container image. Since the image contains the container’s filesystem, it must contain everything needed to run an application – all dependencies, configuration, scripts, binaries, etc.

 

The image also contains other configuration for the container, such as environment variables, a default command to run, and other metadata.

 

 

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which builds, runs, and distributes your Docker containers.

 

The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon running on another server.

 

The Docker client and daemon can communicate using a REST API, via UNIX sockets, or via a network interface.

 

Docker Compose is a special type of Docker client that lets you work with applications consisting of a set of containers.

 

 

The Docker daemon

 

The Docker daemon (dockerd) listens for Docker API requests and manages the Docker objects such as images, containers, networks, and volumes.

 

The dockerd daemon can also communicate with other daemons to manage Docker services.

 

root@asus:~# ps -ef | grep dockerd
root 877 1 0 13:01 ? 00:00:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 3196 2308 0 13:05 pts/0 00:00:00 grep --color=auto dockerd
root@asus:~#

 

 

 

The Docker client

 

The Docker client (docker) is the main way Docker users interact with Docker.

 

Docker commands such as docker run are sent by the docker client to dockerd, which executes them. The docker command uses the Docker API. The Docker client can communicate with more than one daemon.

 

 

Docker Hub Registry

 

The Docker Hub registry is a public registry which stores Docker images that anyone can use.

 

Docker is configured to search for and pull images from Docker Hub by default.

 

However, you can also run your own private Docker registry.

 

When you use the docker pull or docker run commands, the required images are pulled from the registry you have specified in your configuration.

 

And when you use the docker push command, your image is pushed to your configured registry.

 

 

Docker objects

 

When you use Docker, you are creating and using images, containers, networks, volumes, plugins, and other objects.

 

Images

 

An image is a read-only template with instructions for creating a Docker container.

 

An image can also be based on another image, with additional customization. For example, you may build an image based on the ubuntu image, but which installs the Apache web server and also your own web application, as well as the configuration details needed to make your application run correctly.

 

You might create your own images or you might only use those created by others and published in a registry.

 

To build your own image, you create a Dockerfile which defines the steps needed to create the image and run it.

 

Each instruction in a Dockerfile creates a layer in the image.

 

When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt.

 

This layering makes Docker images lightweight, small, and fast compared to other virtualization technologies.
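For illustration, a minimal Dockerfile along these lines (the application path is a placeholder) could look like this:

```dockerfile
# Each instruction below produces one layer in the image.
FROM ubuntu:22.04
# Layer: install the Apache web server
RUN apt-get update && apt-get install -y apache2
# Layer: add your application files (placeholder path)
COPY ./webapp /var/www/html
EXPOSE 80
# Default command stored as image metadata
CMD ["apachectl", "-D", "FOREGROUND"]
```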

 

Containers

 

A container is a runnable instance of an image.

 

You can create, start, stop, move, or delete a container using the Docker API or CLI.

 

You can connect a container to one or more networks, attach it to storage, or even create a new image based on its current state.

 

By default, a container is isolated from other containers and from its host machine.

 

You can control the level of isolation of a container's network, storage, and other underlying subsystems from other containers or from the host machine.

 

A container is defined by its image as well as any configuration options you specify when you create or start it.

 

Note that when a container is removed, any changes to its state that are not stored in persistent storage will disappear.

 

 

Example docker run command

 

The following command runs a Linux Ubuntu container, then attaches interactively to your local command-line session, and then runs a /bin/bash shell.

 

$ docker run -i -t ubuntu /bin/bash

 

When you run this command, this is what happens:

 

If the ubuntu image is not already locally present, Docker pulls it from your configured registry, as though you had run docker pull ubuntu manually.

 

Docker then creates a new container, as though you had run a manual docker container create command.

 

Docker allocates a read-write filesystem to the container, as its final layer.

 

This allows a running container to create or modify files and directories in its local filesystem.

 

Docker then creates a network interface to connect the container to the default network, since you did not specify any networking options. This includes assigning an IP address to the container.

 

By default, containers can connect to external networks using the host machine’s network connection.

 

Next, Docker starts the container and executes /bin/bash.

 

Because the container is running interactively and attached to your terminal (due to the -i and -t flags), you can provide input using your keyboard while the output is logged to your terminal.

 

When you type exit to terminate the /bin/bash command, the container stops but is not removed. You can start it again or remove it.
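The steps above can also be performed explicitly with individual commands; this is a sketch (the container name demo is arbitrary):

```shell
# Sketch of the individual steps that docker run performs for you
docker pull ubuntu                  # fetch the image if not present locally
docker container create -i -t --name demo ubuntu /bin/bash
docker container start -a -i demo   # start the container and attach your terminal
```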

 

 

You can obtain Docker from the standard Ubuntu repositories, but the version may not always be the latest or the one you want.

 

 

The procedure for installing Docker

 

The steps are:

 

update the system packages index
enable the Docker repository
import the repository GPG key
then install the package.

 

sudo apt update
sudo apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common

 

 

root@asus:/# apt install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
Reading package lists… Done
Building dependency tree
Reading state information… Done
ca-certificates is already the newest version (20210119~20.10.1).
ca-certificates set to manually installed.
curl is already the newest version (7.68.0-1ubuntu4.3).
software-properties-common is already the newest version (0.99.3.1).
apt-transport-https is already the newest version (2.1.10ubuntu0.3).
The following NEW packages will be installed:
gnupg-agent
0 upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
Need to get 5.232 B of archives.
After this operation, 46,1 kB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://de.archive.ubuntu.com/ubuntu groovy-updates/universe amd64 gnupg-agent all 2.2.20-1ubuntu1.1 [5.232 B]
Fetched 5.232 B in 0s (32,6 kB/s)
Selecting previously unselected package gnupg-agent.
(Reading database … 214639 files and directories currently installed.)
Preparing to unpack …/gnupg-agent_2.2.20-1ubuntu1.1_all.deb …
Unpacking gnupg-agent (2.2.20-1ubuntu1.1) …
Setting up gnupg-agent (2.2.20-1ubuntu1.1) …
root@asus:/#

 

 

Import the repository’s GPG key using the curl command:

 

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

root@asus:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
Warning: apt-key is deprecated. Manage keyring files in trusted.gpg.d instead (see apt-key(8)).
OK
root@asus:~#

 

 

Add the Docker APT repository to the system:

 

add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

root@asus:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Repository: 'deb [arch=amd64] https://download.docker.com/linux/ubuntu groovy stable'
Description:
Archive for codename: groovy components: stable
More info: https://download.docker.com/linux/ubuntu
Adding repository.
Press [ENTER] to continue or Ctrl-c to cancel.
Adding deb entry to /etc/apt/sources.list.d/archive_uri-https_download_docker_com_linux_ubuntu-groovy.list
Adding disabled deb-src entry to /etc/apt/sources.list.d/archive_uri-https_download_docker_com_linux_ubuntu-groovy.list
Hit:1 http://de.archive.ubuntu.com/ubuntu groovy InRelease
Get:2 http://de.archive.ubuntu.com/ubuntu groovy-updates InRelease [115 kB]
Ign:3 htt
… … …

security.ubuntu.com/ubuntu groovy-security/main amd64 DEP-11 Metadata [18,9 kB]
Get:24 http://security.ubuntu.com/ubuntu groovy-security/universe amd64 DEP-11 Metadata [4.628 B]
Reading package lists… Done
E: The repository ‘http://ppa.launchpad.net/alexlarsson/flatpak/ubuntu groovy Release’ does not have a Release file.
N: Updating from such a repository can’t be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.
root@asus:~#

 

 

Now the Docker repository is enabled, you can install any Docker version available in the repositories.

 

To install the latest version of Docker, run the commands below. If you want to install a specific Docker version, skip this step and go to the next one.

 

sudo apt update
sudo apt install docker-ce docker-ce-cli containerd.io

 

 

root@asus:~# apt install docker-ce docker-ce-cli containerd.io
Reading package lists… Done
Building dependency tree
Reading state information… Done
The following additional packages will be installed:
docker-ce-rootless-extras docker-scan-plugin git git-man liberror-perl pigz slirp4netns
Suggested packages:
aufs-tools cgroupfs-mount | cgroup-lite git-daemon-run | git-daemon-sysvinit git-doc git-el git-email git-gui gitk gitweb git-cvs
git-mediawiki git-svn
The following NEW packages will be installed:
containerd.io docker-ce docker-ce-cli docker-ce-rootless-extras docker-scan-plugin git git-man liberror-perl pigz slirp4netns
0 upgraded, 10 newly installed, 0 to remove and 6 not upgraded.
Need to get 113 MB of archives.
After this operation, 508 MB of additional disk space will be used.
Do you want to continue? [Y/n] y

… … ..

Selecting previously unselected package git.
Preparing to unpack …/8-git_1%3a2.27.0-1ubuntu1.1_amd64.deb …
Unpacking git (1:2.27.0-1ubuntu1.1) …
Selecting previously unselected package slirp4netns.
Preparing to unpack …/9-slirp4netns_1.0.1-1_amd64.deb …
Unpacking slirp4netns (1.0.1-1) …
Setting up slirp4netns (1.0.1-1) …
Setting up docker-scan-plugin (0.8.0~ubuntu-groovy) …
Setting up liberror-perl (0.17029-1) …
Setting up containerd.io (1.4.6-1) …
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Setting up docker-ce-cli (5:20.10.7~3-0~ubuntu-groovy) …
Setting up pigz (2.4-1) …
Setting up git-man (1:2.27.0-1ubuntu1.1) …
Setting up docker-ce-rootless-extras (5:20.10.7~3-0~ubuntu-groovy) …
Setting up docker-ce (5:20.10.7~3-0~ubuntu-groovy) …
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
Setting up git (1:2.27.0-1ubuntu1.1) …
Processing triggers for man-db (2.9.3-2) …
Processing triggers for systemd (246.6-1ubuntu1.4) …
root@asus:~#

 

 

Once the installation is completed, the Docker service will start automatically.

 

Verify:

 

systemctl status docker

 

root@asus:~# systemctl status docker
● docker.service – Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-06-30 22:58:36 CEST; 9min ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 1718018 (dockerd)
Tasks: 16
Memory: 44.9M
CGroup: /system.slice/docker.service
└─1718018 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

 

Jun 30 22:58:36 asus dockerd[1718018]: time="2021-06-30T22:58:36.752582946+02:00" level=warning msg="Your kernel does not support CPU realtime sc>
Jun 30 22:58:36 asus dockerd[1718018]: time="2021-06-30T22:58:36.752632979+02:00" level=warning msg="Your kernel does not support cgroup blkio we>
Jun 30 22:58:36 asus dockerd[1718018]: time="2021-06-30T22:58:36.752640243+02:00" level=warning msg="Your kernel does not support cgroup blkio we>
Jun 30 22:58:36 asus dockerd[1718018]: time="2021-06-30T22:58:36.752809760+02:00" level=info msg="Loading containers: start."
Jun 30 22:58:36 asus dockerd[1718018]: time="2021-06-30T22:58:36.856955079+02:00" level=info msg="Default bridge (docker0) is assigned with an IP>
Jun 30 22:58:36 asus dockerd[1718018]: time="2021-06-30T22:58:36.918027046+02:00" level=info msg="Loading containers: done."
Jun 30 22:58:36 asus dockerd[1718018]: time="2021-06-30T22:58:36.940673725+02:00" level=info msg="Docker daemon" commit=b0f5bc3 graphdriver(s)=ov>
Jun 30 22:58:36 asus dockerd[1718018]: time="2021-06-30T22:58:36.940830529+02:00" level=info msg="Daemon has completed initialization"
Jun 30 22:58:36 asus systemd[1]: Started Docker Application Container Engine.
Jun 30 22:58:36 asus dockerd[1718018]: time="2021-06-30T22:58:36.973960760+02:00" level=info msg="API listen on /run/docker.sock"
root@asus:~#

 

To prevent the Docker package from being updated, mark it as held back:

 

sudo apt-mark hold docker-ce

 

 

To execute Docker Commands as a Non-Root User:

 

By default, only root and users with sudo privileges can execute Docker commands.

 

To execute Docker commands as a non-root user, add the user to the docker group that was created during the installation of the Docker CE package:

 

sudo usermod -aG docker $USER

 

root@asus:~#
root@asus:~# usermod -aG docker kevin
root@asus:~#
root@asus:~#
root@asus:~# usermod -aG docker root
root@asus:~#

 

 

Log out and log back in to refresh the group membership.
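Once you have logged back in, you can confirm the membership by checking the user's group list; the sketch below uses a simulated group string (on a real host, use id -nG <username>):

```shell
# Simulated output of: id -nG kevin
groups_output="kevin adm sudo docker"
if echo "$groups_output" | grep -qw docker; then
    echo "user is in the docker group"
fi
```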

 

 

To verify Docker is successfully installed and that you can execute docker commands, run a test container:

 

docker container run hello-world

This command downloads the test image if it is not found locally, runs it in a container which prints a "Hello from Docker" message, and then exits:

 

root@asus:/home/kevin# docker container run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
b8dfde127a29: Pulling fs layer
b8dfde127a29: Pull complete
Digest: sha256:9f6ad537c5132bcce57f7a0a20e317228d382c3cd61edae14650eec68b2b345c
Status: Downloaded newer image for hello-world:latest

 

Hello from Docker!

 

This message shows that your installation appears to be working correctly.

 

To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the “hello-world” image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.

 

To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash

 

Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/

 

For more examples and ideas, visit:
https://docs.docker.com/get-started/

 

root@asus:/home/kevin#

 

 

Note the container will stop after printing the message because it does not contain a long-running process.

 

By default, Docker always pulls images from the Docker Hub. This is a cloud-based registry which stores Docker images in public or private repositories.

 

root@asus:/home/kevin# docker run -it ubuntu bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
c549ccf8d472: Pull complete
Digest: sha256:aba80b77e27148d99c034a987e7da3a287ed455390352663418c0f2ed40417fe
Status: Downloaded newer image for ubuntu:latest
root@f152766a2e6d:/#
root@f152766a2e6d:/#
root@f152766a2e6d:/#
root@f152766a2e6d:/#
root@f152766a2e6d:/# whoami
root
root@f152766a2e6d:/# ls
bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var
root@f152766a2e6d:/#
root@f152766a2e6d:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 395G 152G 224G 41% /
tmpfs 64M 0 64M 0% /dev
tmpfs 8.8G 0 8.8G 0% /sys/fs/cgroup
shm 64M 0 64M 0% /dev/shm
/dev/nvme0n1p4 395G 152G 224G 41% /etc/hosts
tmpfs 8.8G 0 8.8G 0% /proc/asound
tmpfs 8.8G 0 8.8G 0% /proc/acpi
tmpfs 8.8G 0 8.8G 0% /proc/scsi
tmpfs 8.8G 0 8.8G 0% /sys/firmware
root@f152766a2e6d:/#

 

You are now inside a Docker container!

 

 

Uninstalling Docker

 

Before uninstalling Docker, you should first remove all containers, images, volumes, and Docker networks.

 

Run the following commands to stop all running containers and remove all docker objects:

 

docker container stop $(docker container ls -aq)

 

docker system prune -a --volumes

 

You can now uninstall Docker as any other package installed with apt:

 

sudo apt purge docker-ce
sudo apt autoremove

 

 

 

Using Docker Images to Deploy Containers

 

Docker images are templates containing instructions and specifications for creating a container.

 

To use Docker, you need to obtain an image or create your own by building it from a Dockerfile.

 

Listing Images

 

To list all the docker images on your system, use:

 

docker images

 

root@asus:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 9873176a8ff5 12 days ago 72.7MB
hello-world latest d1165f221234 3 months ago 13.3kB
root@asus:~#

 

 

Finding an Image

 

Images are stored on Docker registries, such as Docker Hub (Docker’s official registry).

 

You can browse the images on the registry or use the following command to search the Docker registry.

 

Replace [keyword] with the keyword you are searching for, such as nginx or apache.

 

docker search [keyword]

 

root@asus:~# docker search apache
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
httpd The Apache HTTP Server Project 3566 [OK]
tomcat Apache Tomcat is an open source implementati… 3060 [OK]
maven Apache Maven is a software project managemen… 1221 [OK]
apache/airflow Apache Airflow 260
apache/nifi Unofficial convenience binaries and Docker i… 218 [OK]
apache/zeppelin Apache Zeppelin 147 [OK]
eboraas/apache-php PHP on Apache (with SSL/TLS support), built … 144 [OK]
eboraas/apache Apache (with SSL/TLS support), built on Debi… 92 [OK]
apacheignite/ignite Apache Ignite – Distributed Database 78 [OK]
nimmis/apache-php5 This is docker images of Ubuntu 14.04 LTS wi… 69 [OK]
apache/skywalking-oap-server Apache SkyWalking OAP Server 68
bitnami/apache Bitnami Apache Docker Image 67 [OK]
apache/superset Apache Superset 50
apachepulsar/pulsar Apache Pulsar – Distributed pub/sub messagin… 42
linuxserver/apache An Apache container, brought to you by Linux… 28
antage/apache2-php5 Docker image for running Apache 2.x with PHP… 24 [OK]
apache/nutch Apache Nutch 23 [OK]
webdevops/apache Apache container 15 [OK]
apache/tika Apache Tika Server – the content analysis to… 8
newdeveloper/apache-php apache-php7.2 8
newdeveloper/apache-php-composer apache-php-composer 7
lephare/apache Apache container 6 [OK]
apache/fineract Apache Fineract 3
secoresearch/apache-varnish Apache+PHP+Varnish5.0 2 [OK]
apache/arrow-dev Apache Arrow convenience images for developm… 1
root@asus:~#

 

 

root@asus:~# docker search ubuntu
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
ubuntu Ubuntu is a Debian-based Linux operating sys… 12453 [OK]
dorowu/ubuntu-desktop-lxde-vnc Docker image to provide HTML5 VNC interface … 544 [OK]
websphere-liberty WebSphere Liberty multi-architecture images … 274 [OK]
rastasheep/ubuntu-sshd Dockerized SSH service, built on top of offi… 254 [OK]
consol/ubuntu-xfce-vnc Ubuntu container with “headless” VNC session… 241 [OK]
ubuntu-upstart Upstart is an event-based replacement for th… 111 [OK]
ansible/ubuntu14.04-ansible Ubuntu 14.04 LTS with ansible 98 [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mysql-5 ubuntu-16-nginx-php-phpmyadmin-mysql-5 50 [OK]
ubuntu-debootstrap debootstrap --variant=minbase --components=m… 44 [OK]
i386/ubuntu Ubuntu is a Debian-based Linux operating sys… 25
nuagebec/ubuntu Simple always updated Ubuntu docker images w… 24 [OK]
1and1internet/ubuntu-16-apache-php-5.6 ubuntu-16-apache-php-5.6 14 [OK]
1and1internet/ubuntu-16-apache-php-7.0 ubuntu-16-apache-php-7.0 13 [OK]
eclipse/ubuntu_jdk8 Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, … 13 [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mariadb-10 ubuntu-16-nginx-php-phpmyadmin-mariadb-10 11 [OK]
1and1internet/ubuntu-16-nginx-php-5.6-wordpress-4 ubuntu-16-nginx-php-5.6-wordpress-4 9 [OK]
1and1internet/ubuntu-16-apache-php-7.1 ubuntu-16-apache-php-7.1 7 [OK]
darksheer/ubuntu Base Ubuntu Image — Updated hourly 5 [OK]
1and1internet/ubuntu-16-nginx-php-7.0 ubuntu-16-nginx-php-7.0 4 [OK]
owncloud/ubuntu ownCloud Ubuntu base image 3
1and1internet/ubuntu-16-nginx-php-7.1-wordpress-4 ubuntu-16-nginx-php-7.1-wordpress-4 3 [OK]
smartentry/ubuntu ubuntu with smartentry 1 [OK]
1and1internet/ubuntu-16-php-7.1 ubuntu-16-php-7.1 1 [OK]
1and1internet/ubuntu-16-sshd ubuntu-16-sshd 1 [OK]
1and1internet/ubuntu-16-rspec ubuntu-16-rspec 0 [OK]
root@asus:~#

 

 

 

Starting a Container

 

To start a Docker container use:

 

docker start [ID]

replacing [ID] with the ID of the container you want to start.

 

Stopping a Container

 

To stop a Docker container, replace [ID] with the ID of the container you wish to stop:

 

docker stop [ID]

 

 

 

Removing a Container

 

To remove a Docker container, replace [ID] with the ID of the container you wish to remove:

 

docker rm [ID]

 

 

Obtaining an Image

 

Once you find an image, you can download it to your server using the “docker pull” command. Replace [image] with the name of the image you’d like to use:

 

docker pull [image]

 

For instance, to pull the official nginx image:

 

docker pull nginx

 

Running an Image

 

To create a container based on an image, use the “docker run” command. Replace [image] with the name of the image you’d like to use.

 

docker run [image]

 

If the image hasn’t yet been downloaded and is available in the Docker registry, then the image will automatically be pulled to your server.

 

Managing Docker Containers

 

Listing Containers

 

To list all active (and inactive) Docker containers on your system, run the following command:

 

docker ps -a

 

You should see something like the following:

 

root@asus:~# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b5d96e61c851 ubuntu “bash” 5 minutes ago Up 5 minutes interesting_rhodes
f152766a2e6d ubuntu “bash” 25 minutes ago Exited (0) 16 minutes ago vigorous_hypatia
64b484e8e343 hello-world “/hello” 28 minutes ago Exited (0) 28 minutes ago infallible_gagarin
root@asus:~#

 

 

kevin@asus:~$ sudo su
root@asus:/home/kevin# docker version
Client: Docker Engine - Community
Version: 20.10.7
API version: 1.41
Go version: go1.13.15
Git commit: f0df350
Built: Wed Jun 2 11:56:41 2021
OS/Arch: linux/amd64
Context: default
Experimental: true

 

Server: Docker Engine - Community
Engine:
Version: 20.10.7
API version: 1.41 (minimum version 1.12)
Go version: go1.13.15
Git commit: b0f5bc3
Built: Wed Jun 2 11:54:53 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.6
GitCommit: d71fcd7d8303cbf684402823e425e9dd2e99285d
runc:
Version: 1.0.0-rc95
GitCommit: b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
docker-init:
Version: 0.19.0
GitCommit: de40ad0
root@asus:/home/kevin#

 

 

 

For example, the node image's page on the official Docker Hub registry is at:

https://hub.docker.com/_/node

To download the node image:

 

docker pull node

 

 

root@asus:/home/kevin# docker pull node
Using default tag: latest
latest: Pulling from library/node
0bc3020d05f1: Pull complete
a110e5871660: Pull complete
83d3c0fa203a: Pull complete
a8fd09c11b02: Pull complete
14feb89c4a52: Pull complete
612a5de913f3: Pull complete
b86d81a99d41: Pull complete
5dd61d4ad9e8: Pull complete
7aae82345965: Pull complete
Digest: sha256:ca6daf1543242acb0ca59ff425509eab7defb9452f6ae07c156893db06c7a9a4
Status: Downloaded newer image for node:latest
docker.io/library/node:latest
root@asus:/home/kevin#

 

 

Confirm the download with “docker image ls”:

root@asus:~# docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
node latest 7105279fa2ab 8 days ago 908MB
ubuntu latest 9873176a8ff5 13 days ago 72.7MB
docker/getting-started latest 083d7564d904 2 weeks ago 28MB
hello-world latest d1165f221234 3 months ago 13.3kB
root@asus:~#

 

 

Then run a container from the image:

docker run node

 

 

For a fuller overview of the Docker installation, run “docker info”:

root@asus:/home/kevin# docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.5.1-docker)
scan: Docker Scan (Docker Inc., v0.8.0)

 

Server:
Containers: 9
Running: 1
Paused: 0
Stopped: 8
Images: 4
Server Version: 20.10.7
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
Logging Driver: json-file
Cgroup Driver: cgroupfs
Cgroup Version: 1
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d71fcd7d8303cbf684402823e425e9dd2e99285d
runc version: b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
init version: de40ad0
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 5.8.0-59-generic
Operating System: Ubuntu 20.10
OSType: linux
Architecture: x86_64
CPUs: 8
Total Memory: 17.59GiB
Name: asus
ID: WFX5:GZ5T:UHPK:VCR6:EHQU:DCCE:JTHA:U5SD:GWFE:XJJ5:QVJ6:RMXA
Docker Root Dir: /var/lib/docker
Debug Mode: false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false

root@asus:/home/kevin#

 

 

 

To install a Jenkins container:

 

 

root@asus:~#
root@asus:~# docker pull jenkins/jenkins
Using default tag: latest
latest: Pulling from jenkins/jenkins
0bc3020d05f1: Already exists
ee3587ec32c3: Pull complete
0bd0b3e8a1ee: Pull complete
7b5615a9059c: Pull complete
62980ab719b4: Pull complete
ee9399291836: Pull complete
1a40e67771e3: Pull complete
ee53ed120856: Pull complete
34f2dd2cbb3e: Pull complete
70625628bb19: Pull complete
52967d83ef48: Pull complete
488b4fe169de: Pull complete
bd260a926aeb: Pull complete
32d7feae958e: Pull complete
a8fd2466ac4c: Pull complete
c7bfa4fe9cd5: Pull complete
Digest: sha256:443a28765cdd2133c0e816ef8d9f25c4c1e32b79b5aa1b9d2002a2f815a122bd
Status: Downloaded newer image for jenkins/jenkins:latest
docker.io/jenkins/jenkins:latest
root@asus:~# docker run -p 8080:8080 -p 50000:50000 jenkins
Unable to find image ‘jenkins:latest’ locally
docker: Error response from daemon: manifest for jenkins:latest not found: manifest unknown: manifest unknown.
See ‘docker run --help’.
root@asus:~# docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins
Running from: /usr/share/jenkins/jenkins.war
webroot: EnvVars.masterEnvVars.get(“JENKINS_HOME”)
2021-07-01 17:06:24.542+0000 [id=1] INFO org.eclipse.jetty.util.log.Log#initialized: Logging initialized @308ms to org.eclipse.jetty.util.log.JavaUtilLog
2021-07-01 17:06:24.668+0000 [id=1] INFO winstone.Logger#logInternal: Beginning extraction from war file
2021-07-01 17:06:25.742+0000 [id=1] WARNING o.e.j.s.handler.ContextHandler#setContextPath: Empty contextPath
2021-07-01 17:06:25.802+0000 [id=1] INFO org.eclipse.jetty.server.Server#doStart: jetty-9.4.41.v20210516; built: 2021-05-16T23:56:28.993Z; git: 98607f93c7833e7dc59489b13f3cb0a114fb9f4c; jvm 1.8.0_292-b10
2021-07-01 17:06:26.053+0000 [id=1] INFO o.e.j.w.StandardDescriptorProcessor#visitServlet: NO JSP Support for /, did not find org.eclipse.jetty.jsp.JettyJspServlet
2021-07-01 17:06:26.118+0000 [id=1] INFO o.e.j.s.s.DefaultSessionIdManager#doStart: DefaultSessionIdManager workerName=node0
2021-07-01 17:06:26.118+0000 [id=1] INFO o.e.j.s.s.DefaultSessionIdManager#doStart: No SessionScavenger set, using defaults
2021-07-01 17:06:26.120+0000 [id=1] INFO o.e.j.server.session.HouseKeeper#startScavenging: node0 Scavenging every 600000ms
2021-07-01 17:06:26.496+0000 [id=1] INFO hudson.WebAppMain#contextInitialized: Jenkins home directory: /var/jenkins_home found at: EnvVars.masterEnvVars.get(“JENKINS_HOME”)
2021-07-01 17:06:26.605+0000 [id=1] INFO o.e.j.s.handler.ContextHandler#doStart: Started w.@677dbd89{Jenkins v2.300,/,file:///var/jenkins_home/war/,AVAILABLE}{/var/jenkins_home/war}
2021-07-01 17:06:26.621+0000 [id=1] INFO o.e.j.server.AbstractConnector#doStart: Started ServerConnector@624ea235{HTTP/1.1, (http/1.1)}{0.0.0.0:8080}
2021-07-01 17:06:26.622+0000 [id=1] INFO org.eclipse.jetty.server.Server#doStart: Started @2388ms
2021-07-01 17:06:26.623+0000 [id=23] INFO winstone.Logger#logInternal: Winstone Servlet Engine running: controlPort=disabled
2021-07-01 17:06:27.860+0000 [id=30] INFO jenkins.InitReactorRunner$1#onAttained: Started initialization
2021-07-01 17:06:27.884+0000 [id=29] INFO jenkins.InitReactorRunner$1#onAttained: Listed all plugins
2021-07-01 17:06:29.130+0000 [id=32] INFO jenkins.InitReactorRunner$1#onAttained: Prepared all plugins
2021-07-01 17:06:29.135+0000 [id=32] INFO jenkins.InitReactorRunner$1#onAttained: Started all plugins
2021-07-01 17:06:29.143+0000 [id=32] INFO jenkins.InitReactorRunner$1#onAttained: Augmented all extensions
2021-07-01 17:06:29.911+0000 [id=34] INFO jenkins.InitReactorRunner$1#onAttained: System config loaded
2021-07-01 17:06:29.912+0000 [id=30] INFO jenkins.InitReactorRunner$1#onAttained: System config adapted
2021-07-01 17:06:29.912+0000 [id=41] INFO jenkins.InitReactorRunner$1#onAttained: Loaded all jobs
2021-07-01 17:06:29.913+0000 [id=37] INFO jenkins.InitReactorRunner$1#onAttained: Configuration for all jobs updated
2021-07-01 17:06:30.027+0000 [id=56] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$0: Started Download metadata
2021-07-01 17:06:30.036+0000 [id=56] INFO hudson.util.Retrier#start: Attempt #1 to do the action check updates server
2021-07-01 17:06:30.384+0000 [id=43] INFO jenkins.install.SetupWizard#init:

*************************************************************
*************************************************************
*************************************************************

Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:

735f090fba884ee780f97e67ff2ad0bd

This may also be found at: /var/jenkins_home/secrets/initialAdminPassword

*************************************************************
*************************************************************
*************************************************************

2021-07-01 17:06:45.324+0000 [id=28] INFO jenkins.InitReactorRunner$1#onAttained: Completed initialization
2021-07-01 17:06:45.337+0000 [id=22] INFO hudson.WebAppMain$3#run: Jenkins is fully up and running
2021-07-01 17:06:45.634+0000 [id=56] INFO h.m.DownloadService$Downloadable#load: Obtained the updated data file for hudson.tasks.Maven.MavenInstaller
2021-07-01 17:06:45.634+0000 [id=56] INFO hudson.util.Retrier#start: Performed the action check updates server successfully at the attempt #1
2021-07-01 17:06:45.637+0000 [id=56] INFO hudson.model.AsyncPeriodicWork#lambda$doRun$0: Finished Download metadata. 15,609 ms

 

 

You can browse Docker Hub at https://hub.docker.com

and find the Jenkins image.

 

then on your host machine enter:

docker pull jenkins/jenkins

(this is the command given on the Jenkins image’s Docker Hub page)

To run Jenkins, you need to run the following command:

sudo docker run -p 8080:8080 -p 50000:50000 jenkins/jenkins

 

Note the following points about the above command:

We are using sudo to ensure the command runs with root access.

Here, jenkins/jenkins is the name of the image we want to download from Docker Hub and run on our Ubuntu machine.

-p maps a port inside the container to a port on the Ubuntu host, so that the containerized service can be reached from outside the container.
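As a small sketch, the run command with its two port mappings can be composed from variables (the command is printed here rather than executed; the port numbers are the Jenkins defaults used above):

```shell
web_port=8080      # Jenkins web UI
agent_port=50000   # Jenkins inbound agent port
run_cmd="docker run -p ${web_port}:${web_port} -p ${agent_port}:${agent_port} jenkins/jenkins"
echo "$run_cmd"
```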

 

You will then have Jenkins successfully running as a container on the Ubuntu machine.

 

 

Continue Reading

Setting up NAT Networking on Oracle Virtualbox on CentOS

First define a NAT network under Tools – Preferences – Network and give it a name; I called it NatNetwork.

 

Then right-click on Properties and define the IP of the subnet – a new one just for NatNetwork; I chose 10.0.5.0.

 

Next go to each VM and add a network adapter connected to NatNetwork

 

and select the network you created.

 

To enable IP packet forwarding please edit /etc/sysctl.conf with your editor of choice and set:
# Controls IP packet forwarding
net.ipv4.ip_forward = 1

You can then verify your settings with:
/sbin/sysctl -p

 

To apply the setting immediately on each machine, run:

 

sysctl -w net.ipv4.ip_forward=1

 

For the setting to persist across reboots it must also be present in /etc/sysctl.conf (or in a .conf file under /etc/sysctl.d/), otherwise it does not survive a reboot. Then reload all sysctl configuration with:

 

root@router:/etc/netplan# sysctl --system

 

I did it with:

 

 

[root@clusterserver sysctl.d]# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
[root@clusterserver sysctl.d]#

 

[root@clusterserver sysctl.d]# /sbin/sysctl -p
net.ipv4.ip_forward = 1
[root@clusterserver sysctl.d]#
root@router:/etc/netplan# sysctl --system
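A sketch of the persistence step, writing the drop-in file to a throwaway mktemp directory as a stand-in for /etc/sysctl.d so it can run unprivileged (the file name 99-ipforward.conf is an example, not a required name):

```shell
confdir=$(mktemp -d)               # stands in for /etc/sysctl.d
conf="$confdir/99-ipforward.conf"  # hypothetical drop-in file name
printf 'net.ipv4.ip_forward = 1\n' > "$conf"

# Confirm the setting landed in the file.
grep '^net.ipv4.ip_forward = 1$' "$conf"
```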

 

 

NOTE: with CentOS and nmcli you first have to add a new connection:

 

[root@clusterserver network-scripts]# nmcli dev status
DEVICE TYPE STATE CONNECTION
enp0s3 ethernet connected enp0s3
enp0s8 ethernet connected enp0s8
virbr0 bridge connected (externally) virbr0
enp0s10 ethernet disconnected —
lo loopback unmanaged —
virbr0-nic tun unmanaged —

 

[root@clusterserver network-scripts]#
[root@clusterserver network-scripts]#
[root@clusterserver network-scripts]# nmcli con add type ethernet con-name enp0s10 ifname enp0s10 ip4 10.0.5.10
Connection ‘enp0s10’ (392ee518-be1b-4498-885c-cacef2e295d9) successfully added.
[root@clusterserver network-scripts]#

 

Under CentOS a “connection” is not the same thing as a network interface. I have used the same name for the connection here, but it can be labeled differently.

 

then it looks like this:

 

[root@clusterserver network-scripts]#
[root@clusterserver network-scripts]# nmcli dev status
DEVICE TYPE STATE CONNECTION
enp0s3 ethernet connected enp0s3
enp0s10 ethernet connected enp0s10
enp0s8 ethernet connected enp0s8
virbr0 bridge connected (externally) virbr0
lo loopback unmanaged —
virbr0-nic tun unmanaged

 

Note that manual changes to the ifcfg file will not be noticed by NetworkManager until the interface is next brought up.

 

So, you have to do a

 

nmcli con down enp0s10 && nmcli con up enp0s10

 

[root@clusterserver network-scripts]# nmcli con down enp0s10 && nmcli con up enp0s10
Connection ‘enp0s10’ successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
[root@clusterserver network-scripts]#

 

To configure a static route for an existing Ethernet connection using the command line, enter a command as follows:
~]# nmcli connection modify eth0 +ipv4.routes "192.168.122.0/24 10.10.10.1"

 

This will direct traffic for the 192.168.122.0/24 subnet to the gateway at 10.10.10.1.

 

so, we need to do:

 

[root@clusterserver network-scripts]# nmcli connection modify enp0s10 +ipv4.routes "10.0.2.0/24 10.0.2.10"
[root@clusterserver network-scripts]#
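The modify command can be built up from variables, which helps avoid quoting mistakes; a sketch using the values from this example (the command is printed, not executed, so nmcli is not required to follow it):

```shell
conn=enp0s10
subnet=10.0.2.0/24
gw=10.0.2.10
# Compose the nmcli static-route command; note the routes value is one quoted
# "subnet gateway" pair.
route_cmd="nmcli connection modify $conn +ipv4.routes \"$subnet $gw\""
echo "$route_cmd"
```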

 

Next, and this is important, reload the specific connection:

 

[root@clusterserver network-scripts]# nmcli con reload enp0s10
[root@clusterserver network-scripts]#

 

otherwise the changes will not be active!

 

OR do interactively:

 

[root@clusterserver network-scripts]# nmcli con edit type ethernet con-name enp0s10

 

===| nmcli interactive connection editor |===

 

Adding a new ‘802-3-ethernet’ connection

 

Type ‘help’ or ‘?’ for available commands.
Type ‘print’ to show all the connection properties.
Type ‘describe [<setting>.<prop>]’ for detailed property description.

 

You may edit the following settings: connection, 802-3-ethernet (ethernet), 802-1x, dcb, sriov, ethtool, match, ipv4, ipv6, tc, proxy
nmcli> set ipv4.routes 10.0.5.0/24 10.0.5.10
nmcli>
nmcli>
nmcli> save persistent
Saving the connection with ‘autoconnect=yes’. That might result in an immediate activation of the connection.
Do you still want to save? (yes/no) [yes] yes
Connection ‘enp0s10’ (cbaf5c33-de4a-43a1-83af-7f51103706bd) successfully saved.
nmcli>

 


Continue Reading

How To Install Cluster Fencing Using Libvirt on KVM Virtual Machines

These are my practical notes on installing libvirt fencing on CentOS cluster nodes running as virtual machines on the KVM hypervisor platform.

 

 

NOTE: If a local firewall is enabled, open the chosen TCP port (in this example, the default of 1229) to the host.

 

Alternatively if you are using a testing or training environment you can disable the firewall. Do not do the latter on production environments!

 

1. On the KVM host machine, install the fence-virtd, fence-virtd-libvirt, and fence-virtd-multicast packages. These packages provide the virtual machine fencing daemon, libvirt integration, and multicast listener, respectively.

yum -y install fence-virtd fence-virtd-libvirt fence-virtd-multicast

 

2. On the KVM host, create a shared secret key called /etc/cluster/fence_xvm.key. The target directory /etc/cluster needs to be created manually on the nodes and the KVM host.

 

mkdir -p /etc/cluster

 

dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=1k count=4
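A quick sanity-check sketch of the key-generation step, writing to a temporary file instead of /etc/cluster/fence_xvm.key and confirming the resulting 4 KiB (bs=1k count=4) size:

```shell
keyfile=$(mktemp)   # stands in for /etc/cluster/fence_xvm.key
dd if=/dev/urandom of="$keyfile" bs=1k count=4 2>/dev/null
# 4 blocks of 1024 bytes = 4096 bytes
wc -c < "$keyfile"
```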

 

then distribute the key from the KVM host to all the nodes:

3. Distribute the shared secret key /etc/cluster/fence_xvm.key to all cluster nodes, keeping the name and the path the same as on the KVM host.

 

scp /etc/cluster/fence_xvm.key centos1vm:/etc/cluster/

 

and copy also to the other nodes
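The distribution step can be looped over all nodes; a sketch that prints the scp commands rather than running them (the node names centos2vm and centos3vm are examples — substitute your own cluster nodes):

```shell
nodes="centos1vm centos2vm centos3vm"   # example node names
scp_cmds=$(for node in $nodes; do
  echo "scp /etc/cluster/fence_xvm.key $node:/etc/cluster/"
done)
printf '%s\n' "$scp_cmds"
```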

4. On the KVM host, configure the fence_virtd daemon. Defaults can be used for most options, but make sure to select the libvirt back end and the multicast listener. Also make sure you give the correct location of the shared key you just created (here /etc/cluster/fence_xvm.key):

 

fence_virtd -c

5. Enable and start the fence_virtd daemon on the hypervisor.

 

systemctl enable fence_virtd
systemctl start fence_virtd

6. Also install fence_virtd on the nodes, then enable and start it there too:

 

root@yoga:/etc# systemctl enable fence_virtd
Synchronizing state of fence_virtd.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable fence_virtd
root@yoga:/etc# systemctl start fence_virtd
root@yoga:/etc# systemctl status fence_virtd
● fence_virtd.service – Fence-Virt system host daemon
Loaded: loaded (/lib/systemd/system/fence_virtd.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-02-23 14:13:20 CET; 6min ago
Docs: man:fence_virtd(8)
man:fence_virt.con(5)
Main PID: 49779 (fence_virtd)
Tasks: 1 (limit: 18806)
Memory: 3.2M
CGroup: /system.slice/fence_virtd.service
└─49779 /usr/sbin/fence_virtd -w

 

Feb 23 14:13:20 yoga systemd[1]: Starting Fence-Virt system host daemon…
root@yoga:/etc#

 

7. Test the KVM host multicast connectivity with:

 

fence_xvm -o list

root@yoga:/etc# fence_xvm -o list
centos-base c023d3d6-b2b9-4dc2-b0c7-06a27ddf5e1d off
centos1 2daf2c38-b9bf-43ab-8a96-af124549d5c1 on
centos2 3c571551-8fa2-4499-95b5-c5a8e82eb6d5 on
centos3 2969e454-b569-4ff3-b88a-0f8ae26e22c1 on
centosstorage 501a3dbb-1088-48df-8090-adcf490393fe off
suse-base 0b360ee5-3600-456d-9eb3-d43c1ee4b701 off
suse1 646ce77a-da14-4782-858e-6bf03753e4b5 off
suse2 d9ae8fd2-eede-4bd6-8d4a-f2d7d8c90296 off
suse3 7ad89ea7-44ae-4965-82ba-d29c446a0607 off
root@yoga:/etc#

 

 

8. create your fencing devices, one for each node:

 

pcs stonith create <name for the fencing device for this VM cluster host> fence_xvm port="<the KVM VM name>" pcmk_host_list="<FQDN of the cluster host>"

 

one for each node with the values set accordingly for each host. So it will look like this:

 

MAKE SURE YOU SET ALL THE NAMES CORRECTLY!

 

On ONE of the nodes, create all the following fence devices; usually one does this on the DC (current designated coordinator) node:

 

[root@centos1 etc]# pcs stonith create fence_centos1 fence_xvm port="centos1" pcmk_host_list="centos1.localdomain"
[root@centos1 etc]# pcs stonith create fence_centos2 fence_xvm port="centos2" pcmk_host_list="centos2.localdomain"
[root@centos1 etc]# pcs stonith create fence_centos3 fence_xvm port="centos3" pcmk_host_list="centos3.localdomain"
[root@centos1 etc]#
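Since the three commands differ only in the node number, they can be generated in a loop; a sketch that prints them rather than running pcs (names follow the centosN / centosN.localdomain pattern used above):

```shell
create_cmds=$(for n in 1 2 3; do
  echo "pcs stonith create fence_centos$n fence_xvm port=\"centos$n\" pcmk_host_list=\"centos$n.localdomain\""
done)
printf '%s\n' "$create_cmds"
```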

 

9. Next, enable fencing on the cluster nodes.

 

Make sure the property is set to TRUE

 

check with

 

pcs -f stonith_cfg property

 

If the cluster fencing stonith property is set to FALSE then you can manually set it to TRUE on all the Cluster nodes:

 

pcs -f stonith_cfg property set stonith-enabled=true

 

[root@centos1 ~]# pcs -f stonith_cfg property
Cluster Properties:
stonith-enabled: true
[root@centos1 ~]#

 

you can also do:

pcs stonith cleanup fence_centos1 (and likewise for fence_centos2 and fence_centos3)

 

[root@centos1 ~]# pcs stonith cleanup fence_centos1
Cleaned up fence_centos1 on centos3.localdomain
Cleaned up fence_centos1 on centos2.localdomain
Cleaned up fence_centos1 on centos1.localdomain
Waiting for 3 replies from the controller
… got reply
… got reply
… got reply (done)
[root@centos1 ~]#

 

 

If a stonith id or node is not specified then all stonith resources and devices will be cleaned.

pcs stonith cleanup

 

then do

 

pcs stonith status

 

[root@centos1 ~]# pcs stonith status
* fence_centos1 (stonith:fence_xvm): Started centos3.localdomain
* fence_centos2 (stonith:fence_xvm): Started centos3.localdomain
* fence_centos3 (stonith:fence_xvm): Started centos3.localdomain
[root@centos1 ~]#

 

 

Some other stonith fencing commands:

 

To list the available fence agents, execute the below command on any of the cluster nodes:

 

# pcs stonith list

 

(this can take several seconds, so don’t kill it!)

 

root@ubuntu1:~# pcs stonith list
apcmaster – APC MasterSwitch
apcmastersnmp – APC MasterSwitch (SNMP)
apcsmart – APCSmart
baytech – BayTech power switch
bladehpi – IBM BladeCenter (OpenHPI)
cyclades – Cyclades AlterPath PM
external/drac5 – DRAC5 STONITH device
.. .. .. list truncated…

 

 

To get more details about the respective fence agent you can use:

 

root@ubuntu1:~# pcs stonith describe fence_xvm
fence_xvm – Fence agent for virtual machines

 

fence_xvm is an I/O Fencing agent which can be used with virtual machines.

 

Stonith options:
debug: Specify (stdin) or increment (command line) debug level
ip_family: IP Family ([auto], ipv4, ipv6)
multicast_address: Multicast address (default=225.0.0.12 / ff05::3:1)
ipport: TCP, Multicast, VMChannel, or VM socket port (default=1229)
.. .. .. list truncated . ..

 

Continue Reading

How To Set Static or Permanent IP Addresses for Virtual Machines in KVM

By default, KVM issues temporary DHCP IP addresses to its virtual machines. You can suppress this behaviour for newly defined subnets by simply unticking the “Enable DHCP” option for the subnet in the Virtual Networks section of the KVM dashboard.
  
However, the NAT bridged network interface is set to automatically issue DHCP IPs. This can be inconvenient when you want to log in to the machine from a shell terminal on your PC or laptop rather than accessing it via the KVM console terminal.

  
To change these IPs from DHCP to Static, you need to carry out the following steps, using my current environment as an example:
  
Let’s say I want to change the IP of a machine called suse1 from DHCP to Static IP.
    
1. On the KVM host machine, display the list of current KVM networks:
  
virsh net-list
  
root@yoga:/etc# virsh net-list
Name State Autostart Persistent
—————————————————–
default active yes yes
network-10.0.7.0 active yes yes
network-10.0.8.0 active yes yes
  
The interface of the machine I want to set is located on network “default”.
  
2. Find the MAC address or addresses of the virtual machine whose IP address you want to set:
  
Note the machine name is the name used to define the machine in KVM. It need not be the same as the OS hostname of the machine.
  
virsh dumpxml <machine name> | grep -i '<mac'
  
root@yoga:/home/kevin# virsh dumpxml suse1 | grep -i ‘<mac’
<mac address=’52:54:00:b4:0c:8d’/>
<mac address=’52:54:00:e9:97:91’/>
  
So the machine has two network interfaces.
  
I know from ifconfig (or ip a) that the interface I want to set is the first one, eth0, with mac address: 52:54:00:b4:0c:8d.
  
This is the one that is using the network called “default”.
  
suse1:~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:b4:0c:8d brd ff:ff:ff:ff:ff:ff
inet 192.168.122.179/24 brd 192.168.122.255 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:feb4:c8d/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:e9:97:91 brd ff:ff:ff:ff:ff:ff
inet 10.0.7.11/24 brd 10.0.7.255 scope global eth1
valid_lft forever preferred_lft forever
inet 10.0.7.100/24 brd 10.0.7.255 scope global secondary eth1
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fee9:9791/64 scope link
valid_lft forever pre
  
3. Edit the network configuration file:
  
virsh net-edit <network name>
  
So in this example I do:
  
virsh net-edit default
  

Add the following entry between <dhcp> and </dhcp> as follows:
  
<host mac='xx:xx:xx:xx:xx:xx' name='virtual_machine' ip='xxx.xxx.xxx.xxx'/>
  
whereby
  
mac = mac address of the virtual machine
  
name = KVM virtual machine name
  
IP = IP address you want to set for this interface
  
So for this example I add:
  
<host mac='52:54:00:b4:0c:8d' name='suse1' ip='192.168.122.11'/>
  
then save and close the file.
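The host entry can be templated from the MAC address, machine name, and desired IP; a sketch using this example's values (it only prints the line to paste into virsh net-edit):

```shell
mac='52:54:00:b4:0c:8d'   # MAC of the VM interface (from virsh dumpxml)
name='suse1'              # KVM virtual machine name
ip='192.168.122.11'       # static IP to assign
host_entry="<host mac='$mac' name='$name' ip='$ip'/>"
echo "$host_entry"
```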
  
4. Then restart the KVM DHCP service:
  
virsh net-destroy <network name>
  
virsh net-destroy default
  
virsh net-start <network name>
  
virsh net-start default
  
5. Shutdown the virtual machine:
  
virsh shutdown <machine name>
  
virsh shutdown suse1
  
6. Stop the network service:
  
virsh net-destroy default
  
7. Restart the libvirtd daemon:
  
systemctl restart virtlogd.socket
systemctl restart libvirtd
  
8. Restart the network:
  
virsh net-start <network name>
  
virsh net-start default
  
9. Then restart the KVM desktop virt-manager
  
virt-manager
  
10. Then restart the virtual machine again, either on the KVM desktop or else using the command:
  
virsh start <virtual machine>
  
virsh start suse1
  
If the steps have all been performed correctly, the network interface on the machine should now have the static IP address you defined instead of a DHCP address from KVM.
  
Verify on the guest machine with ifconfig or ip a

 

Continue Reading

How To Resolve Oracle VirtualBox error : kernel driver not installed(rc=-1908)

When trying to start an Oracle VirtualBox virtual machine, the start fails with the following error:

Oracle VirtualBox error : kernel driver not installed(rc=-1908)

Kernel driver not installed (rc=-1908)

The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is a
permission problem with /dev/vboxdrv. Please reinstall the kernel module by
executing


'/etc/init.d/vboxdrv setup'

as root. If it is available in your distribution, you should install the DKMS package first. This package keeps track of Linux kernel changes and recompiles the vboxdrv
kernel module if necessary.

 

How to Resolve

Running

/etc/init.d/vboxdrv setup

as suggested by the error did not resolve the problem.

You need to do the following:

sudo apt-get remove virtualbox-dkms
sudo apt-get install virtualbox-dkms


The Oracle virtual box machine can then be started without error.

Continue Reading