
LPIC3 DIPLOMA Linux Clustering – LAB NOTES: GlusterFS Configuration on CentOS

How To Install GlusterFS on CentOS 7

 

Choose a package source: either the CentOS Storage SIG or Gluster.org

 

Using CentOS Storage SIG Packages

 

 

yum search centos-release-gluster

 

yum install centos-release-gluster9

(Pick whichever centos-release-gluster package the search returns; see the output below. Older guides used centos-release-gluster37, which has since been retired.)

yum install glusterfs glusterfs-cli glusterfs-libs glusterfs-server

 

 

 

[root@glusterfs1 ~]# yum search centos-release-gluster
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.xtom.de
* centos-ceph-nautilus: mirror1.hs-esslingen.de
* centos-nfs-ganesha28: ftp.agdsn.de
* epel: mirrors.xtom.de
* extras: mirror.netcologne.de
* updates: mirrors.xtom.de
================================================= N/S matched: centos-release-gluster =================================================
centos-release-gluster-legacy.noarch : Disable unmaintained Gluster repositories from the CentOS Storage SIG
centos-release-gluster40.x86_64 : Gluster 4.0 (Short Term Stable) packages from the CentOS Storage SIG repository
centos-release-gluster41.noarch : Gluster 4.1 (Long Term Stable) packages from the CentOS Storage SIG repository
centos-release-gluster5.noarch : Gluster 5 packages from the CentOS Storage SIG repository
centos-release-gluster6.noarch : Gluster 6 packages from the CentOS Storage SIG repository
centos-release-gluster7.noarch : Gluster 7 packages from the CentOS Storage SIG repository
centos-release-gluster8.noarch : Gluster 8 packages from the CentOS Storage SIG repository
centos-release-gluster9.noarch : Gluster 9 packages from the CentOS Storage SIG repository

Name and summary matches only, use "search all" for everything.
[root@glusterfs1 ~]#

 

 

Alternatively, using Gluster.org Packages

 

# yum update -y

 

 

Download the latest glusterfs-epel repository from gluster.org:

 

yum install wget -y

 

 

[root@glusterfs1 ~]# yum install wget -y
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.xtom.de
* centos-ceph-nautilus: mirror1.hs-esslingen.de
* centos-nfs-ganesha28: ftp.agdsn.de
* epel: mirrors.xtom.de
* extras: mirror.netcologne.de
* updates: mirrors.xtom.de
Package wget-1.14-18.el7_6.1.x86_64 already installed and latest version
Nothing to do
[root@glusterfs1 ~]#

 

 

 

wget -P /etc/yum.repos.d/ http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo

 

Also install the latest EPEL repository from fedoraproject.org to resolve all dependencies:

 

yum install http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm

 

 

[root@glusterfs1 ~]# yum repolist
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.xtom.de
* centos-ceph-nautilus: mirror1.hs-esslingen.de
* centos-nfs-ganesha28: ftp.agdsn.de
* epel: mirrors.xtom.de
* extras: mirror.netcologne.de
* updates: mirrors.xtom.de
repo id repo name status
base/7/x86_64 CentOS-7 – Base 10,072
centos-ceph-nautilus/7/x86_64 CentOS-7 – Ceph Nautilus 609
centos-nfs-ganesha28/7/x86_64 CentOS-7 – NFS Ganesha 2.8 153
ceph-noarch Ceph noarch packages 184
epel/x86_64 Extra Packages for Enterprise Linux 7 – x86_64 13,638
extras/7/x86_64 CentOS-7 – Extras 498
updates/7/x86_64 CentOS-7 – Updates 2,579
repolist: 27,733
[root@glusterfs1 ~]#

 

 

Then install the GlusterFS server on all GlusterFS storage cluster nodes:

[root@glusterfs1 ~]# yum install glusterfs gluster-cli glusterfs-libs glusterfs-server

 

Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.xtom.de
* centos-ceph-nautilus: mirror1.hs-esslingen.de
* centos-nfs-ganesha28: ftp.agdsn.de
* epel: mirrors.xtom.de
* extras: mirror.netcologne.de
* updates: mirrors.xtom.de
No package gluster-cli available.
No package glusterfs-server available.
Resolving Dependencies
--> Running transaction check
---> Package glusterfs.x86_64 0:6.0-49.1.el7 will be installed
---> Package glusterfs-libs.x86_64 0:6.0-49.1.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=======================================================================================================================================
Package Arch Version Repository Size
=======================================================================================================================================
Installing:
glusterfs x86_64 6.0-49.1.el7 updates 622 k
glusterfs-libs x86_64 6.0-49.1.el7 updates 398 k

Transaction Summary
=======================================================================================================================================
Install 2 Packages

Total download size: 1.0 M
Installed size: 4.3 M
Is this ok [y/d/N]: y
Downloading packages:
(1/2): glusterfs-libs-6.0-49.1.el7.x86_64.rpm | 398 kB 00:00:00
(2/2): glusterfs-6.0-49.1.el7.x86_64.rpm | 622 kB 00:00:00
---------------------------------------------------------------------------------------------------------------------
Total 2.8 MB/s | 1.0 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : glusterfs-libs-6.0-49.1.el7.x86_64 1/2
Installing : glusterfs-6.0-49.1.el7.x86_64 2/2
Verifying : glusterfs-6.0-49.1.el7.x86_64 1/2
Verifying : glusterfs-libs-6.0-49.1.el7.x86_64 2/2

Installed:
glusterfs.x86_64 0:6.0-49.1.el7 glusterfs-libs.x86_64 0:6.0-49.1.el7

Complete!
[root@glusterfs1 ~]#

Note the messages "No package gluster-cli available" and "No package glusterfs-server available" in the log above: the CLI package is actually named glusterfs-cli, and glusterfs-server only becomes installable once a centos-release-gluster* repository package from the CentOS Storage SIG (or the gluster.org repository) is in place; the stock base/updates repositories carry only the client-side glusterfs and glusterfs-libs packages.

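As noted above, the server component did not actually install here. A minimal hedged sketch of what follows once glusterfs-server is present (glusterd and the gluster CLI are the standard package contents; the peer hostname is an assumption for this lab):

systemctl enable --now glusterd # start and enable the GlusterFS management daemon on each node
gluster --version # confirm the CLI is installed
gluster peer probe glusterfs2 # from glusterfs1: add a second storage node (hypothetical hostname)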

Setting up NAT Networking on Oracle VirtualBox on CentOS

First, define a NAT network under Tools > Preferences > Network and give it a name; I called it NatNetwork.

 

Then right-click it, select Properties, and define the IP of the subnet (a new one just for NatNetwork); I chose 10.0.5.0.

 

Next, go to each VM and add a network adapter of type NAT Network,

 

and select the network you created.

 

To enable IP packet forwarding, edit /etc/sysctl.conf with your editor of choice and set:
# Controls IP packet forwarding
net.ipv4.ip_forward = 1

You can then verify your settings with:
/sbin/sysctl -p

 

on each machine

 

sysctl -w net.ipv4.ip_forward=1

 

You also have to put it in the /etc/sysctl.d/sysctl.conf file, otherwise it does not persist across reboots, and then run:

 

sysctl --system
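Putting the persistence step together, a minimal sketch (the drop-in filename 99-ipforward.conf is my own choice; any *.conf file under /etc/sysctl.d/ works):

echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ipforward.conf # persist across reboots
sysctl --system # reload all sysctl configuration files
sysctl net.ipv4.ip_forward # verify: should print net.ipv4.ip_forward = 1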

 

I did it with:

 

 

[root@clusterserver sysctl.d]# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
[root@clusterserver sysctl.d]#

 

[root@clusterserver sysctl.d]# /sbin/sysctl -p
net.ipv4.ip_forward = 1
[root@clusterserver sysctl.d]#

 

 

NOTE: with CentOS and nmcli you first have to add a new connection:

 

[root@clusterserver network-scripts]# nmcli dev status
DEVICE TYPE STATE CONNECTION
enp0s3 ethernet connected enp0s3
enp0s8 ethernet connected enp0s8
virbr0 bridge connected (externally) virbr0
enp0s10 ethernet disconnected --
lo loopback unmanaged --
virbr0-nic tun unmanaged --

 

[root@clusterserver network-scripts]#
[root@clusterserver network-scripts]#
[root@clusterserver network-scripts]# nmcli con add type ethernet con-name enp0s10 ifname enp0s10 ip4 10.0.5.10
Connection 'enp0s10' (392ee518-be1b-4498-885c-cacef2e295d9) successfully added.
[root@clusterserver network-scripts]#

 

Under CentOS, a "connection" is not the same as a network interface. I have used the same name for the connection here, but it can be labeled differently.

 

then it looks like this:

 

[root@clusterserver network-scripts]#
[root@clusterserver network-scripts]# nmcli dev status
DEVICE TYPE STATE CONNECTION
enp0s3 ethernet connected enp0s3
enp0s10 ethernet connected enp0s10
enp0s8 ethernet connected enp0s8
virbr0 bridge connected (externally) virbr0
lo loopback unmanaged --
virbr0-nic tun unmanaged --
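The connection profiles can also be listed independently of devices; a quick sketch using standard nmcli fields:

nmcli -f NAME,UUID,DEVICE con show # one line per profile, with the device it is bound to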

 

Note that manual changes to the ifcfg file will not be noticed by NetworkManager until the interface is next brought up.

 

So, you have to do a

 

nmcli con down enp0s10 && nmcli con up enp0s10

 

[root@clusterserver network-scripts]# nmcli con down enp0s10 && nmcli con up enp0s10
Connection 'enp0s10' successfully deactivated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/5)
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/6)
[root@clusterserver network-scripts]#

 

To configure a static route for an existing Ethernet connection using the command line, enter a command as follows:

nmcli connection modify eth0 +ipv4.routes "192.168.122.0/24 10.10.10.1"

 

This will direct traffic for the 192.168.122.0/24 subnet to the gateway at 10.10.10.1.

 

so, we need to do:

 

[root@clusterserver network-scripts]# nmcli connection modify enp0s10 +ipv4.routes "10.0.2.0/24 10.0.2.10"
[root@clusterserver network-scripts]#

 

Next, IMPORTANT: reload the specific connection:

 

[root@clusterserver network-scripts]# nmcli con reload enp0s10
[root@clusterserver network-scripts]#

 

otherwise the changes will not be active!
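To confirm the route is both stored and active, a hedged check (the connection may need re-activating with nmcli con up enp0s10 before the route appears in the kernel table):

nmcli -g ipv4.routes con show enp0s10 # route as stored in the connection profile
ip route # kernel routing table; the 10.0.2.0/24 entry should appear once active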

 

OR do it interactively:

 

[root@clusterserver network-scripts]# nmcli con edit type ethernet con-name enp0s10

 

===| nmcli interactive connection editor |===

 

Adding a new '802-3-ethernet' connection

 

Type 'help' or '?' for available commands.
Type 'print' to show all the connection properties.
Type 'describe [<setting>.<prop>]' for detailed property description.

 

You may edit the following settings: connection, 802-3-ethernet (ethernet), 802-1x, dcb, sriov, ethtool, match, ipv4, ipv6, tc, proxy
nmcli> set ipv4.routes 10.0.5.0/24 10.0.5.10
nmcli>
nmcli>
nmcli> save persistent
Saving the connection with 'autoconnect=yes'. That might result in an immediate activation of the connection.
Do you still want to save? (yes/no) [yes] yes
Connection 'enp0s10' (cbaf5c33-de4a-43a1-83af-7f51103706bd) successfully saved.
nmcli>

 


How To Install Pacemaker and Corosync on CentOS

This article sets out how to install the cluster management software Pacemaker and the cluster membership software Corosync on CentOS 8.

 

For this example, we are setting up a three node cluster using virtual machines on the Linux KVM hypervisor platform.

 

The virtual machines have the KVM names and hostnames centos1, centos2, and centos3.

 

Each node has two network interfaces: one for the KVM bridged NAT network (KVM network name: default, via eth0) and the other for the cluster subnet 10.0.8.0 (KVM network name: network-10.0.8.0, via eth1). DHCP is not used for either of these interfaces; Pacemaker and Corosync require static IP addresses.

 

The machine centos1 will be our current designated controller (DC) cluster node.

 

First, make sure you have created an SSH key for root on the first node:

 

[root@centos1 .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:********** root@centos1.localdomain

 

Then copy the SSH key to the other nodes:

 

ssh-copy-id centos2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

 

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
(if you think this is a mistake, you may want to use -f option)

 

[root@centos1 .ssh]#
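To cover both remaining nodes in one go, a small hedged loop (root SSH assumed, hostnames as used in this lab):

for h in centos2 centos3; do ssh-copy-id root@$h; done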
First you need to enable the HighAvailability repository:

 

[root@centos1 ~]# yum repolist all | grep -i HighAvailability
ha CentOS Stream 8 – HighAvailability disabled
[root@centos1 ~]# dnf config-manager --set-enabled ha
[root@centos1 ~]# yum repolist all | grep -i HighAvailability
ha CentOS Stream 8 – HighAvailability enabled
[root@centos1 ~]#

 

Next, install the following packages:

 

[root@centos1 ~]# yum install epel-release

 

[root@centos1 ~]# yum install pcs fence-agents-all

 

Next, STOP and DISABLE the firewall, for lab-testing convenience:

 

[root@centos1 ~]# systemctl stop firewalld
[root@centos1 ~]#
[root@centos1 ~]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@centos1 ~]#

 

then check with:

 

[root@centos1 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
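If you would rather keep firewalld running (anything beyond a throwaway lab), firewalld ships a high-availability service definition that opens the ports Pacemaker and Corosync need:

firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload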

 

Next we enable pcsd, the pcs configuration daemon:

 

[root@centos1 ~]# systemctl enable --now pcsd
Created symlink /etc/systemd/system/multi-user.target.wants/pcsd.service → /usr/lib/systemd/system/pcsd.service.
[root@centos1 ~]#

 

then change the default password for user hacluster:

 

echo "<new-password>" | passwd --stdin hacluster

 

Changing password for user hacluster.

passwd: all authentication tokens updated successfully.
[root@centos2 ~]#
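Since every node needs the same hacluster password, a quick hedged loop (the password placeholder is mine; substitute your own):

for h in centos1 centos2 centos3; do ssh $h 'echo "<new-password>" | passwd --stdin hacluster'; done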

 

Then, on only ONE of the nodes (I am doing it on centos1 on the KVM cluster, as this will be the default DC for the cluster):

 

pcs host auth centos1.localdomain centos2.localdomain centos3.localdomain

 

NOTE: the correct command is pcs host auth, not pcs cluster auth as given in some instructional material; the syntax has since changed.

 

[root@centos1 .ssh]# pcs host auth centos1.localdomain centos2.localdomain centos3.localdomain
Username: hacluster
Password:
centos1.localdomain: Authorized
centos2.localdomain: Authorized
centos3.localdomain: Authorized
[root@centos1 .ssh]#

 

Next, on centos1, as this will be our default DC (designated controller) node, we create a corosync secret key:

 

[root@centos1 corosync]# corosync-keygen
Corosync Cluster Engine Authentication key generator.
Gathering 2048 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
[root@centos1 corosync]#

 

Then copy the key to the other two nodes:

 

scp /etc/corosync/authkey centos2:/etc/corosync/
scp /etc/corosync/authkey centos3:/etc/corosync/

 

[root@centos1 corosync]# pcs cluster setup hacluster centos1.localdomain addr=10.0.8.11 centos2.localdomain addr=10.0.8.12 centos3.localdomain addr=10.0.8.13
Sending 'corosync authkey', 'pacemaker authkey' to 'centos1.localdomain', 'centos2.localdomain', 'centos3.localdomain'
centos1.localdomain: successful distribution of the file 'corosync authkey'
centos1.localdomain: successful distribution of the file 'pacemaker authkey'
centos2.localdomain: successful distribution of the file 'corosync authkey'
centos2.localdomain: successful distribution of the file 'pacemaker authkey'
centos3.localdomain: successful distribution of the file 'corosync authkey'
centos3.localdomain: successful distribution of the file 'pacemaker authkey'
Sending 'corosync.conf' to 'centos1.localdomain', 'centos2.localdomain', 'centos3.localdomain'
centos1.localdomain: successful distribution of the file 'corosync.conf'
centos2.localdomain: successful distribution of the file 'corosync.conf'
centos3.localdomain: successful distribution of the file 'corosync.conf'
Cluster has been successfully set up.
[root@centos1 corosync]#

 

Note I had to specify the IP addresses for the nodes because these nodes each have TWO network interfaces with separate IP addresses. If the nodes had only one network interface, you could leave out the addr= setting.
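For reference, with a single interface per node the setup command would reduce to (sketch, same cluster name and hostnames):

pcs cluster setup hacluster centos1.localdomain centos2.localdomain centos3.localdomain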

 

Next you can start the cluster:

 

[root@centos1 corosync]# pcs cluster start
Starting Cluster...
[root@centos1 corosync]#
[root@centos1 corosync]#
[root@centos1 corosync]# pcs cluster status
Cluster Status:
Cluster Summary:
* Stack: unknown
* Current DC: NONE
* Last updated: Mon Feb 22 12:57:37 2021
* Last change: Mon Feb 22 12:57:35 2021 by hacluster via crmd on centos1.localdomain
* 3 nodes configured
* 0 resource instances configured
Node List:
* Node centos1.localdomain: UNCLEAN (offline)
* Node centos2.localdomain: UNCLEAN (offline)
* Node centos3.localdomain: UNCLEAN (offline)

 

PCSD Status:
centos1.localdomain: Online
centos3.localdomain: Online
centos2.localdomain: Online
[root@centos1 corosync]#

 

 

The Node List says "UNCLEAN".

 

So I did:

 

pcs cluster start centos1.localdomain
pcs cluster start centos2.localdomain
pcs cluster start centos3.localdomain
pcs cluster status
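Equivalently, pcs can start every cluster node with a single command:

pcs cluster start --all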

 

then the cluster was started in clean running state:

 

[root@centos1 cluster]# pcs cluster status
Cluster Status:
Cluster Summary:
* Stack: corosync
* Current DC: centos1.localdomain (version 2.0.5-7.el8-ba59be7122) - partition with quorum
* Last updated: Mon Feb 22 13:22:29 2021
* Last change: Mon Feb 22 13:17:44 2021 by hacluster via crmd on centos1.localdomain
* 3 nodes configured
* 0 resource instances configured
Node List:
* Online: [ centos1.localdomain centos2.localdomain centos3.localdomain ]

 

PCSD Status:
centos1.localdomain: Online
centos2.localdomain: Online
centos3.localdomain: Online
[root@centos1 cluster]#
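Optionally, to have the cluster stack start automatically at boot on every node (some admins prefer manual starts after a node failure):

pcs cluster enable --all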
