
LPIC3 DIPLOMA Linux Clustering – LAB NOTES LESSON 4 & 5: Installing Pacemaker and Corosync on SUSE

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.
 

 
Installing on openSUSE Leap 15.2
 

 
For openSUSE Leap 15.2 run the following as root:
 

 
zypper addrepo https://download.opensuse.org/repositories/network:ha-clustering:Factory/openSUSE_Leap_15.2/network:ha-clustering:Factory.repo
 

 
zypper refresh

zypper update
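As an optional sanity check (not part of the original steps), you can confirm the HA repo is actually configured before installing anything:

zypper lr | grep -i ha-clustering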
 

 
IMPORTANT!
 
INSTALL ha-cluster-bootstrap on ALL nodes – but ONLY execute the ha-cluster-init script on the DC node!
 
ON ALL NODES:
 
zypper install ha-cluster-bootstrap
 
NOTE - important! The network bind address is your cluster LAN network address, NOT a network interface address; in my cluster it is 10.0.7.0
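If you are not sure what the cluster LAN network address is, the routing table on the node shows it. In my lab the relevant route is the 10.0.7.0/24 one (the /24 mask is an assumption about my setup); the network part, 10.0.7.0, is what ha-cluster-init wants, not the host IP:

ip -4 route show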
 
suse61:/etc/sysconfig/network # zypper se ha-cluster
Loading repository data…
Reading installed packages…
 
S  | Name                 | Summary                             | Type
---+----------------------+-------------------------------------+--------
i+ | ha-cluster-bootstrap | Pacemaker HA Cluster Bootstrap Tool | package
suse61:/etc/sysconfig/network # zypper install ha-cluster-bootstrap
Loading repository data…
Reading installed packages…
'ha-cluster-bootstrap' is already installed.
No update candidate for 'ha-cluster-bootstrap-0.5-lp152.6.3.noarch'. The highest available version is already installed.
Resolving package dependencies…
Nothing to do.
suse61:/etc/sysconfig/network #
 
ALSO install ha-cluster-bootstrap on the other 2 nodes.

Then run ha-cluster-init on the one DC node ONLY!
 
suse1:~ # ha-cluster-init
WARNING: No watchdog device found. If SBD is used, the cluster will be unable to start without a watchdog.
Do you want to continue anyway (y/n)? y
Generating SSH key
Configuring csync2
Generating csync2 shared key (this may take a while)…done
csync2 checking files…done
 
Configure Corosync:
This will configure the cluster messaging layer. You will need
to specify a network address over which to communicate (default
is eth0's network, but you can use the network address of any
active interface).
 
IP or network address to bind to [192.168.122.173]10.0.7.0
Multicast address [239.161.58.55]
Multicast port [5405]
 
Configure SBD:
If you have shared storage, for example a SAN or iSCSI target,
you can use it to avoid split-brain scenarios by configuring SBD.
This requires a 1 MB partition, accessible to all nodes in the
cluster. The device path must be persistent and consistent
across all nodes in the cluster, so /dev/disk/by-id/* devices
are a good choice. Note that all data on the partition you
specify here will be destroyed.
 
Do you wish to use SBD (y/n)? n
WARNING: Not configuring SBD – STONITH will be disabled.
Hawk cluster interface is now running. To see cluster status, open:
https://192.168.122.173:7630/
Log in with username 'hacluster', password 'linux'
WARNING: You should change the hacluster password to something more secure!
Waiting for cluster…………..done
Loading initial cluster configuration
 
Configure Administration IP Address:
Optionally configure an administration virtual IP
address. The purpose of this IP address is to
provide a single IP that can be used to interact
with the cluster, rather than using the IP address
of any specific cluster node.
 
Do you wish to configure a virtual IP address (y/n)? y
Virtual IP []10.0.7.100
Configuring virtual IP (10.0.7.100)….done
Done (log saved to /var/log/crmsh/ha-cluster-bootstrap.log)
suse1:~ #
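At this point it is worth a quick check on the DC node that Pacemaker and Corosync actually came up. These are just the standard status commands, not part of the bootstrap script; the bindnetaddr check assumes the default multicast corosync.conf that ha-cluster-init generates:

crm status
corosync-cfgtool -s
grep bindnetaddr /etc/corosync/corosync.conf

crm status should show the node online plus the virtual IP resource, and bindnetaddr should show the 10.0.7.0 network entered above.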
 
NOTE: you need to change the hacluster password to something else! The default is linux.
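To change it, set a new password for the hacluster user on each node (ordinary passwd, nothing cluster-specific):

passwd hacluster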
 
Make sure SSH root login between the nodes is working!
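A quick way to verify is a root SSH login from each of the other nodes to the DC node (hostnames are the ones from my lab, adjust for yours):

ssh root@suse61 hostname

If this prompts for the root password that is fine too; ha-cluster-join will ask for the root password of the existing node if passwordless SSH is not set up.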
 
To recap: run ha-cluster-init on the FIRST, i.e. the DC node ONLY!!!

In other words, you INSTALL ha-cluster-bootstrap on ALL the nodes, but you only run ha-cluster-init on the first (DC) node, never on the others.

Then on the OTHER nodes:
 
ha-cluster-join -c
 
suse2:/etc # ha-cluster-join -c 10.0.6.61 (make sure you give the IP of the existing DC node on the correct cluster network)
 
suse62:~ # ha-cluster-join
WARNING: No watchdog device found. If SBD is used, the cluster will be unable to start without a watchdog.
Do you want to continue anyway (y/n)? y
Join This Node to Cluster:
You will be asked for the IP address of an existing node, from which
configuration will be copied. If you have not already configured
passwordless ssh between nodes, you will be prompted for the root
password of the existing node.
 
IP address or hostname of existing node (e.g.: 192.168.1.1) []10.0.6.61
Configuring csync2…done
Merging known_hosts
Probing for new partitions…done
Hawk cluster interface is now running. To see cluster status, open:
https://192.168.122.62:7630/
Log in with username 'hacluster'
Waiting for cluster…..done
Reloading cluster configuration…done
Done (log saved to /var/log/crmsh/ha-cluster-bootstrap.log)
suse62:~ #
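After the join it is worth checking on the joined node that it really is talking to the cluster; again just a standard status command, not part of the join script:

corosync-cfgtool -s

This should show the local Corosync ring active on the cluster network.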
 
Do this on each of the other nodes, i.e. NOT on the DC node!
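Once all nodes have joined, a final check from the DC node should show the whole cluster plus the admin virtual IP. (The exact resource name for the virtual IP is whatever the crmsh bootstrap created, so treat that detail as an assumption.)

crm status
crm configure show

crm status should list all nodes as Online, and crm configure show should contain an IPaddr2 resource for 10.0.7.100.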
