LPIC3 DIPLOMA Linux Clustering – LAB NOTES LESSON 9 DRBD on SUSE


LAB for installing and configuring DRBD on SUSE

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

Overview

 

 

The cluster comprises three nodes installed with openSUSE Leap 15 and housed on a KVM virtual machine system on a Linux Ubuntu host. We are using suse61 as the DRBD primary and suse62 as the secondary.

 

 

Install DRBD Packages

 

 

suse61:/etc/modules-load.d # zypper se drbd
Loading repository data…
Reading installed packages…

S  | Name                     | Summary                                                    | Type
---+--------------------------+------------------------------------------------------------+-----------
   | drbd                     | Linux driver for the "Distributed Replicated Block Device" | package
   | drbd                     | Linux driver for the "Distributed Replicated Block Device" | srcpackage
   | drbd-formula             | DRBD deployment salt formula                               | package
   | drbd-formula             | DRBD deployment salt formula                               | srcpackage
   | drbd-kmp-default         | Kernel driver                                              | package
   | drbd-kmp-preempt         | Kernel driver                                              | package
   | drbd-utils               | Distributed Replicated Block Device                        | package
   | drbd-utils               | Distributed Replicated Block Device                        | srcpackage
   | drbdmanage               | DRBD distributed resource management utility               | package
   | monitoring-plugins-drbd9 | Plugin for monitoring DRBD 9 resources                     | package
   | yast2-drbd               | YaST2 - DRBD Configuration                                 | package
suse61:/etc/modules-load.d #

 

We install the packages on both nodes:
 
suse61:/etc/modules-load.d # zypper in drbd drbd-utils
Loading repository data…
Reading installed packages…
Resolving package dependencies…
 
The following 3 NEW packages are going to be installed:
drbd drbd-kmp-default drbd-utils

3 new packages to install.
Overall download size: 1020.2 KiB. Already cached: 0 B. After the operation, additional 3.0 MiB will be used.
Continue? [y/n/v/…? shows all options] (y): y

 

Create the DRBD Drives on Both Nodes

 

We need a backing device for DRBD – we are going to create a 20GB SCSI disk on both suse61 and suse62, but don't partition it.

On suse61 it is /dev/sdc and on suse62 it is /dev/sdb (the names differ only because the drives were created differently on each machine).
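On the KVM host, the extra disks can be created and attached with qemu-img and virsh. This is only a sketch – the image paths, domain names and target device letters here are assumptions for this lab setup:

```shell
# On the Ubuntu KVM host: create a raw 20G image for each node
# (paths and domain names are assumptions -- adjust to your setup)
qemu-img create -f raw /var/lib/libvirt/images/suse61-drbd.img 20G
qemu-img create -f raw /var/lib/libvirt/images/suse62-drbd.img 20G

# Attach to the running guests; the target name hints at the device
# name the guest will see (sdc on suse61, sdb on suse62)
virsh attach-disk suse61 /var/lib/libvirt/images/suse61-drbd.img sdc --persistent
virsh attach-disk suse62 /var/lib/libvirt/images/suse62-drbd.img sdb --persistent
```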

 

Create the drbd .res Configuration File

 

 

Next, create the /etc/drbd.d/drbd0.res file:
 
suse61:/etc/drbd.d #
suse61:/etc/drbd.d # cat drbd0.res

resource drbd0 {
  protocol C;
  disk {
    on-io-error pass_on;
  }

  on suse61 {
    disk /dev/sdc;
    device /dev/drbd0;
    address 10.0.6.61:7676;
    meta-disk internal;
  }

  on suse62 {
    disk /dev/sdb;
    device /dev/drbd0;
    address 10.0.6.62:7676;
    meta-disk internal;
  }
}
suse61:/etc/drbd.d #

 

Do a drbdadm dump to check the syntax.
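For example, checking only the resource defined above:

```shell
# Parse /etc/drbd.d/*.res and print the parsed configuration;
# a parse error here means the .res file has a syntax problem
drbdadm dump drbd0
```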

 
 
Then copy the file to the other node:
 
suse61:/etc/drbd.d # scp drbd0.res suse62:/etc/drbd.d/
drbd0.res 100% 263 291.3KB/s 00:00
suse61:/etc/drbd.d #
 
 

Create the DRBD Device on Both Nodes

 

 

next, create the device:
 
suse61:/etc/drbd.d # drbdadm -- --ignore-sanity-checks create-md drbd0
initializing activity log
initializing bitmap (640 KB) to all zero
Writing meta data…
New drbd meta data block successfully created.
success
suse61:/etc/drbd.d #
 
then also on the other machine:
 
suse62:/etc/modules-load.d # drbdadm -- --ignore-sanity-checks create-md drbd0
initializing activity log
initializing bitmap (640 KB) to all zero
Writing meta data…
New drbd meta data block successfully created.
suse62:/etc/modules-load.d #

 

Start DRBD

 

Then bring the resource up on the first node:

drbdadm up drbd0

then do the same on the other node.

Then promote ONE node only to primary – on suse61:

drbdadm primary --force drbd0
 
BUT PROBLEM:
 
suse62:/etc/drbd.d # drbdadm status
drbd0 role:Secondary
disk:Inconsistent
suse61 connection:Connecting

 

SOLUTION…
 
The firewall was causing the problem, so stop and disable it:
 
suse62:/etc/drbd.d # systemctl stop firewall
Failed to stop firewall.service: Unit firewall.service not loaded.
suse62:/etc/drbd.d # systemctl stop firewalld
suse62:/etc/drbd.d # systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
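Disabling the firewall is fine for a lab; a tighter alternative (assuming firewalld is the active firewall, as above) is to open just the replication port from the .res file on both nodes:

```shell
# Open only the DRBD replication port (7676/tcp, as set in drbd0.res)
# instead of disabling the firewall entirely
firewall-cmd --permanent --add-port=7676/tcp
firewall-cmd --reload
```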

 

It is now working OK:
 
suse62:/etc/drbd.d # drbdadm status
drbd0 role:Secondary
disk:Inconsistent
suse61 role:Primary
replication:SyncTarget peer-disk:UpToDate done:4.99
 
suse62:/etc/drbd.d #
 
suse61:/etc/drbd.d # drbdadm status
drbd0 role:Primary
disk:UpToDate
suse62 role:Secondary
replication:SyncSource peer-disk:Inconsistent done:50.22
 
suse61:/etc/drbd.d #
 
You have to wait for the syncing to finish (20GB); then you can create a filesystem.
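While waiting, the sync progress can be watched with something like:

```shell
# Refresh the resource status every 2 seconds; the 'done:' field
# shows the sync percentage
watch -n2 drbdadm status drbd0

# Note: with DRBD 9 (as shipped on SUSE Leap 15), /proc/drbd shows
# only version information, not per-resource progress
cat /proc/drbd
```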
 
The disk can now be seen in fdisk -l:
 
Disk /dev/drbd0: 20 GiB, 21474144256 bytes, 41941688 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
suse61:/etc/drbd.d #

 

A while later it looks like this:
 
suse62:/etc/drbd.d # drbdadm status
drbd0 role:Secondary
disk:UpToDate
suse61 role:Primary
peer-disk:UpToDate
 
suse62:/etc/drbd.d #
 
suse61:/etc/drbd.d # drbdadm status
drbd0 role:Primary
disk:UpToDate
suse62 role:Secondary
peer-disk:UpToDate
 
suse61:/etc/drbd.d #

 

 

Next you can build a filesystem on drbd0:
 
suse61:/etc/drbd.d # mkfs.ext4 -t ext4 /dev/drbd0
mke2fs 1.43.8 (1-Jan-2018)
Discarding device blocks: done
Creating filesystem with 5242711 4k blocks and 1310720 inodes
Filesystem UUID: 36fe742a-171d-42e6-bc96-bb3a9a8a8cd8
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000
 
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
 
suse61:/etc/drbd.d #

 

 

NOTE: at no point have we created a partition – DRBD uses the whole backing disk directly (with internal metadata), so no partitioning is needed.

 

Then, on the primary node, you can mount it:

suse61:/etc/drbd.d # mount /dev/drbd0 /mnt

df now shows:

/dev/drbd0 20510636 45080 19400632 1% /mnt

 

 

END OF LAB
