LPIC3 DIPLOMA Linux Clustering – LAB NOTES: Ceph on CentOS 8


Notes in preparation – not yet complete

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering.

They are in “rough format”, presented as they were written.

 

 

LAB on Ceph Clustering on CentOS 8

 

 

The cluster comprises four nodes installed with CentOS 8, running as KVM virtual machines on an Ubuntu Linux host.

 

centos4 is the admin node and deployment server

 

centos1 is the MON (monitor) server

 

centos2 is OSD0 (Object Store Daemon server)

 

centos3 is OSD1 (Object Store Daemon server)

 

 

Ceph Installation

 

Instructions below are for installing on CentOS 8.

 

NOTE: Ceph traditionally shipped with an installation utility called ceph-deploy, which was executed on the admin node to install Ceph onto the other nodes in the cluster. However, ceph-deploy is now outdated, no longer maintained, and not available for CentOS 8. You should therefore either install the packages directly from the repositories, as shown below, or alternatively use the cephadm tool to install Ceph on the cluster nodes.

 

However, in this lab we are installing Ceph directly onto each node without using cephadm.
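For the loop-based install and the later scp/ssh steps to work, the admin host must be able to resolve and reach each node over SSH as root. As a minimal sketch (the centos1, centos2 and centos3 addresses below are illustrative assumptions; only 10.0.8.14 for centos4 appears in these notes), /etc/hosts on each machine would contain entries like:

10.0.8.11 centos1
10.0.8.12 centos2
10.0.8.13 centos3
10.0.8.14 centos4

Passwordless root SSH can then be set up from the admin host with ssh-copy-id root@centos1, and so on for each node.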

 

 

Install the ceph packages and dependency package repos:

 

On centos4:

 

[root@centos4 yum.repos.d]# dnf -y install centos-release-ceph-octopus epel-release; dnf -y install ceph
Failed to set locale, defaulting to C.UTF-8
Last metadata expiration check: 1 day, 2:40:00 ago on Sun Apr 18 19:34:24 2021.
Dependencies resolved.

 

 

Having confirmed that the command installs successfully on centos4, I then ran it on the rest of the CentOS Ceph cluster from the asus host machine using:

 

root@asus:~# for NODE in centos1 centos2 centos3
> do
ssh $NODE "dnf -y install centos-release-ceph-octopus epel-release; dnf -y install ceph"
done
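As a quick sanity check that the packages landed on every node (a sketch, assuming the same SSH access as above):

for NODE in centos1 centos2 centos3 centos4
do
ssh $NODE "ceph --version"
done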

 

 

 

Configure Ceph-Monitor 

 

 

Next, configure the monitor daemon on the admin node centos4. First generate a UUID (the fsid) for the new cluster:

 

[root@centos4 ~]# uuidgen
9b45c9d5-3055-4089-9a97-f488fffda1b4
[root@centos4 ~]#

 

# create new config
# file name ⇒ (Cluster Name).conf
# we keep the default cluster name [ceph], so in this example the file is ceph.conf

 

Configure /etc/ceph/ceph.conf:

 

[root@centos4 ceph]# nano ceph.conf

 

[global]
# specify cluster network (used for OSD replication and heartbeat traffic)
cluster network = 10.0.8.0/24
# specify public network
public network = 10.0.8.0/24
# specify the UUID generated above
fsid = 9b45c9d5-3055-4089-9a97-f488fffda1b4
# specify IP address of Monitor Daemon
mon host = 10.0.8.14
# specify Hostname of Monitor Daemon
mon initial members = centos4
osd pool default crush rule = -1

 

 

# mon.(Node name)
[mon.centos4]
# specify Hostname of Monitor Daemon
host = centos4
# specify IP address of Monitor Daemon
mon addr = 10.0.8.14
# allow pools to be deleted
mon allow pool delete = true
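Once the file is in place, the values can be sanity-checked with the ceph-conf utility that ships with the packages, for example:

ceph-conf --lookup fsid
ceph-conf --lookup mon_initial_members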

 

 

Next, generate the keys:

 

 

# generate secret key for Cluster monitoring

 

 

[root@centos4 ceph]# ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /etc/ceph/ceph.mon.keyring
[root@centos4 ceph]#

 

# generate secret key for Cluster admin

 

[root@centos4 ceph]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring
[root@centos4 ceph]#

 

# generate key for bootstrap

 

[root@centos4 ceph]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
[root@centos4 ceph]#

 

# import generated key

 

[root@centos4 ceph]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring
[root@centos4 ceph]#

 

 

[root@centos4 ceph]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /etc/ceph/ceph.mon.keyring
[root@centos4 ceph]#
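The mon keyring should now contain all three keys (mon., client.admin and client.bootstrap-osd); this can be verified with:

ceph-authtool -l /etc/ceph/ceph.mon.keyring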

 

# generate monitor map

 

use the following commands:

 

FSID=$(grep "^fsid" /etc/ceph/ceph.conf | awk '{print $NF}')
NODENAME=$(grep "^mon initial" /etc/ceph/ceph.conf | awk '{print $NF}')
NODEIP=$(grep "^mon host" /etc/ceph/ceph.conf | awk '{print $NF}')

monmaptool --create --add $NODENAME $NODEIP --fsid $FSID /etc/ceph/monmap

 

[root@centos4 ceph]# FSID=$(grep "^fsid" /etc/ceph/ceph.conf | awk '{print $NF}')
[root@centos4 ceph]# NODENAME=$(grep "^mon initial" /etc/ceph/ceph.conf | awk '{print $NF}')
[root@centos4 ceph]# NODEIP=$(grep "^mon host" /etc/ceph/ceph.conf | awk '{print $NF}')
[root@centos4 ceph]# monmaptool --create --add $NODENAME $NODEIP --fsid $FSID /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
monmaptool: set fsid to 9b45c9d5-3055-4089-9a97-f488fffda1b4
monmaptool: writing epoch 0 to /etc/ceph/monmap (1 monitors)
[root@centos4 ceph]#
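The generated monitor map can be inspected before it is used:

monmaptool --print /etc/ceph/monmap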

 

Next,

 

# create a directory for Monitor Daemon
# directory name ⇒ (Cluster Name)-(Node Name)

 

[root@centos4 ceph]# mkdir /var/lib/ceph/mon/ceph-centos4

 

# associate key and monmap with Monitor Daemon
# --cluster (Cluster Name)

 

[root@centos4 ceph]# ceph-mon --cluster ceph --mkfs -i $NODENAME --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
[root@centos4 ceph]# chown ceph. /etc/ceph/ceph.*
[root@centos4 ceph]# chown -R ceph. /var/lib/ceph/mon/ceph-centos4 /var/lib/ceph/bootstrap-osd

 

 

Enable the ceph-mon service:

 

[root@centos4 ceph]# systemctl enable --now ceph-mon@$NODENAME
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@centos4.service → /usr/lib/systemd/system/ceph-mon@.service.
[root@centos4 ceph]#

 

# enable Messenger v2 Protocol

 

[root@centos4 ceph]# ceph mon enable-msgr2
[root@centos4 ceph]#
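After enabling msgr2, the monitor should advertise both the v2 (port 3300) and the legacy v1 (port 6789) addresses, which can be confirmed with:

ceph mon dump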

 

 

Configure Ceph-Manager

 

# enable Placement Groups auto scale module

 

[root@centos4 ceph]# ceph mgr module enable pg_autoscaler
module 'pg_autoscaler' is already enabled (always-on)
[root@centos4 ceph]#

 

# create a directory for Manager Daemon

 

# directory name ⇒ (Cluster Name)-(Node Name)

 

[root@centos4 ceph]# mkdir /var/lib/ceph/mgr/ceph-centos4
[root@centos4 ceph]#

 

# create auth key

 

[root@centos4 ceph]# ceph auth get-or-create mgr.$NODENAME mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.centos4]
key = AQBv7H1gSiJSNxAAWBpbuZE00TN35YZoZudNeA==
[root@centos4 ceph]#

 

[root@centos4 ceph]# ceph auth get-or-create mgr.$NODENAME > /etc/ceph/ceph.mgr.admin.keyring

[root@centos4 ceph]# cp /etc/ceph/ceph.mgr.admin.keyring /var/lib/ceph/mgr/ceph-centos4/keyring
[root@centos4 ceph]#
[root@centos4 ceph]# chown ceph. /etc/ceph/ceph.mgr.admin.keyring

 

[root@centos4 ceph]# chown -R ceph. /var/lib/ceph/mgr/ceph-centos4

 

 

Enable the ceph-mgr service:

 

[root@centos4 ceph]# systemctl enable --now ceph-mgr@$NODENAME
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@centos4.service → /usr/lib/systemd/system/ceph-mgr@.service.
[root@centos4 ceph]#
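Note that the manager registers with the monitor shortly after the service starts; until it does, ceph -s reports "no daemons active" under mgr (as in the status output further below). A quick check:

ceph -s | grep mgr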

 

 

Firewalling for Ceph

 

 

Either configure the firewall for Ceph, or disable it:

 

 

[root@centos4 ceph]# systemctl stop firewalld
[root@centos4 ceph]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@centos4 ceph]#

 

Otherwise, you need to open the Ceph monitor service ports:

 

firewall-cmd --add-service=ceph-mon --permanent
firewall-cmd --reload
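The ceph-mon firewalld service only opens the monitor ports (3300 and 6789). Nodes running OSDs or the manager daemon also need the predefined ceph firewalld service, which opens the 6800-7300 port range those daemons use:

firewall-cmd --add-service=ceph --permanent
firewall-cmd --reload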

 

 

Ceph Status Check

 

 Confirm cluster status:

 

OSDs (Object Storage Daemons) will be configured later.

 

[root@centos4 ceph]# ceph -s
cluster:
id: 9b45c9d5-3055-4089-9a97-f488fffda1b4
health: HEALTH_OK

services:
mon: 1 daemons, quorum centos4 (age 5m)
mgr: no daemons active
osd: 0 osds: 0 up, 0 in

 

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:

[root@centos4 ceph]#

 

Adding An Extra OSD Node:

 

I then added a third OSD, centos1:

 

 

for NODE in centos1
do

scp /etc/ceph/ceph.conf ${NODE}:/etc/ceph/ceph.conf
scp /etc/ceph/ceph.client.admin.keyring ${NODE}:/etc/ceph
scp /var/lib/ceph/bootstrap-osd/ceph.keyring ${NODE}:/var/lib/ceph/bootstrap-osd

ssh $NODE "chown ceph. /etc/ceph/ceph.* /var/lib/ceph/bootstrap-osd/*;
parted --script /dev/sdb 'mklabel gpt';
parted --script /dev/sdb 'mkpart primary 0% 100%';
ceph-volume lvm create --data /dev/sdb1"
done

 

 

[root@centos4 ~]# for NODE in centos1
> do
> scp /etc/ceph/ceph.conf ${NODE}:/etc/ceph/ceph.conf
> scp /etc/ceph/ceph.client.admin.keyring ${NODE}:/etc/ceph
> scp /var/lib/ceph/bootstrap-osd/ceph.keyring ${NODE}:/var/lib/ceph/bootstrap-osd
> ssh $NODE "chown ceph. /etc/ceph/ceph.* /var/lib/ceph/bootstrap-osd/*;
> parted --script /dev/sdb 'mklabel gpt';
> parted --script /dev/sdb 'mkpart primary 0% 100%';
> ceph-volume lvm create --data /dev/sdb1"
> done
ceph.conf 100% 569 459.1KB/s 00:00
ceph.client.admin.keyring 100% 151 130.4KB/s 00:00
ceph.keyring 100% 129 46.6KB/s 00:00
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 88c09649-e489-410e-be29-333ddd29282d
Running command: /usr/sbin/vgcreate --force --yes ceph-6ac6963e-474a-4450-ab87-89d6881af0d7 /dev/sdb1
stdout: Physical volume "/dev/sdb1" successfully created.
stdout: Volume group "ceph-6ac6963e-474a-4450-ab87-89d6881af0d7" successfully created
Running command: /usr/sbin/lvcreate --yes -l 255 -n osd-block-88c09649-e489-410e-be29-333ddd29282d ceph-6ac6963e-474a-4450-ab87-89d6881af0d7
stdout: Logical volume "osd-block-88c09649-e489-410e-be29-333ddd29282d" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Running command: /usr/sbin/restorecon /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-6ac6963e-474a-4450-ab87-89d6881af0d7/osd-block-88c09649-e489-410e-be29-333ddd29282d
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Running command: /usr/bin/ln -s /dev/ceph-6ac6963e-474a-4450-ab87-89d6881af0d7/osd-block-88c09649-e489-410e-be29-333ddd29282d /var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
stderr: got monmap epoch 2
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQAchH9gq4osHRAAFGD2AMQgQrD+UjjgciHJCw==
stdout: creating /var/lib/ceph/osd/ceph-2/keyring
added entity osd.2 auth(key=AQAchH9gq4osHRAAFGD2AMQgQrD+UjjgciHJCw==)
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 88c09649-e489-410e-be29-333ddd29282d --setuser ceph --setgroup ceph
stderr: 2021-04-21T03:47:09.890+0200 7f558dbd0f40 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
stderr: 2021-04-21T03:47:09.924+0200 7f558dbd0f40 -1 freelist read_size_meta_from_db missing size meta in DB
--> ceph-volume lvm prepare successful for: /dev/sdb1
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-6ac6963e-474a-4450-ab87-89d6881af0d7/osd-block-88c09649-e489-410e-be29-333ddd29282d --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-6ac6963e-474a-4450-ab87-89d6881af0d7/osd-block-88c09649-e489-410e-be29-333ddd29282d /var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/systemctl enable ceph-volume@lvm-2-88c09649-e489-410e-be29-333ddd29282d
stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-88c09649-e489-410e-be29-333ddd29282d.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@2.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@2
--> ceph-volume lvm activate successful for osd ID: 2
--> ceph-volume lvm create successful for: /dev/sdb1
[root@centos4 ~]#
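Note that /dev/sdb is the spare virtual disk attached to each KVM guest for OSD use (an assumption about this lab's VM layout). Since ceph-volume repartitions and wipes it, it is worth confirming the device on each node first, for example:

ssh $NODE "lsblk /dev/sdb"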

 

 

[root@centos4 ceph]# systemctl status ceph-mgr@$NODENAME
● ceph-mgr@centos4.service - Ceph cluster manager daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-04-20 17:08:39 CEST; 1min 26s ago
Main PID: 6028 (ceph-mgr)
Tasks: 70 (limit: 8165)
Memory: 336.1M
CGroup: /system.slice/system-ceph\x2dmgr.slice/ceph-mgr@centos4.service
└─6028 /usr/bin/ceph-mgr -f --cluster ceph --id centos4 --setuser ceph --setgroup ceph

 

 

Apr 20 17:08:39 centos4 systemd[1]: Started Ceph cluster manager daemon.
[root@centos4 ceph]# systemctl status ceph-mon@$NODENAME
● ceph-mon@centos4.service - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-04-19 22:45:12 CEST; 18h ago
Main PID: 3510 (ceph-mon)
Tasks: 27
Memory: 55.7M
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@centos4.service
└─3510 /usr/bin/ceph-mon -f --cluster ceph --id centos4 --setuser ceph --setgroup ceph

 

 

Apr 19 22:45:12 centos4 systemd[1]: Started Ceph cluster monitor daemon.
Apr 19 22:45:13 centos4 ceph-mon[3510]: 2021-04-19T22:45:13.064+0200 7fded82af700 -1 WARNING: 'mon addr' config option [v2:10.0.8.14:3>
Apr 19 22:45:13 centos4 ceph-mon[3510]: continuing with monmap configuration
Apr 19 22:46:14 centos4 ceph-mon[3510]: 2021-04-19T22:46:14.945+0200 7fdebf1b1700 -1 mon.centos4@0(leader) e2 stashing newest monmap >
Apr 19 22:46:14 centos4 ceph-mon[3510]: ignoring --setuser ceph since I am not root
Apr 19 22:46:14 centos4 ceph-mon[3510]: ignoring --setgroup ceph since I am not root
Apr 20 16:40:31 centos4 ceph-mon[3510]: 2021-04-20T16:40:31.572+0200 7f10e0e99700 -1 log_channel(cluster) log [ERR] : Health check fai>
Apr 20 17:08:53 centos4 sudo[6162]: ceph : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/sbin/smartctl -a --json=o /dev/
[root@centos4 ceph]#

 

 

[root@centos4 ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.00099 root default
-3 0.00099 host centos1
2 hdd 0.00099 osd.2 up 1.00000 1.00000
0 0 osd.0 down 0 1.00000
1 0 osd.1 down 0 1.00000

 

[root@centos4 ~]# ceph df
--- RAW STORAGE ---
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 1020 MiB 1014 MiB 1.6 MiB 6.2 MiB 0.61
TOTAL 1020 MiB 1014 MiB 1.6 MiB 6.2 MiB 0.61

--- POOLS ---
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 0 B 0 0 B 0 321 MiB

 

[root@centos4 ~]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
2 hdd 0.00099 1.00000 1020 MiB 6.2 MiB 1.5 MiB 0 B 4.6 MiB 1014 MiB 0.61 1.00 1 up
0 0 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0 down
1 0 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0 down
TOTAL 1020 MiB 6.2 MiB 1.5 MiB 0 B 4.6 MiB 1014 MiB 0.61
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
[root@centos4 ~]#

 

 

[root@centos4 ~]# ceph -s
cluster:
id: 9b45c9d5-3055-4089-9a97-f488fffda1b4
health: HEALTH_WARN
Reduced data availability: 1 pg inactive
Degraded data redundancy: 1 pg undersized

services:
mon: 1 daemons, quorum centos4 (age 47h)
mgr: centos4(active, since 29h)
osd: 3 osds: 1 up (since 18h), 1 in (since 18h)

data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 6.2 MiB used, 1014 MiB / 1020 MiB avail
pgs: 100.000% pgs not active
1 undersized+peered

[root@centos4 ~]#
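This HEALTH_WARN is expected at this stage: the device_health_metrics pool is created with the default replica size of 3, but only one OSD is up, so its single placement group cannot go active and remains undersized+peered. The pool's replica count can be confirmed with:

ceph osd pool get device_health_metrics size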

 

 

notes to be completed

 

 

 
