LPIC3 DIPLOMA Linux Clustering – LAB NOTES: Lesson Ceph Centos7 – CephFS


LAB on Ceph Clustering on Centos7

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

This lab uses the ceph-deploy tool to set up the ceph cluster.  However, note that ceph-deploy is now an outdated Ceph tool and is no longer being maintained by the Ceph project. It is also not available for Centos8. The notes below relate to Centos7.

 

For Centos versions higher than 7, the Ceph project advises using the cephadm tool to install Ceph on cluster nodes.

 

At the time of writing (2021) knowledge of ceph-deploy is a stipulated syllabus requirement of the LPIC3-306 Clustering Diploma Exam, hence this Centos7 Ceph lab refers to ceph-deploy.

 

 

As Ceph is a large and complex subject, these notes have been split into several different pages.

 

 

Overview of Cluster Environment 

 

 

The cluster comprises three nodes installed with Centos7, running as KVM virtual machines on an Ubuntu Linux host. We are installing with Centos7 rather than a more recent version because later versions are not compatible with the ceph-deploy tool.

 

Creating a Ceph MetaData Server MDS

 

A metadata server (MDS) node is a requirement if you want to run CephFS.

 

First add the MDS server node name to the /etc/hosts file of all machines in the cluster: mon, mgr and OSD nodes.

 

For this lab I am using the ceph-mon machine for the MDS server, i.e. not a separate additional node.
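

For illustration, the /etc/hosts entry on each node could look like the line below. This is only a sketch: it assumes the 10.0.9.40 address that the ceph-deploy output further down shows for ceph-mds, so adjust it to your own addressing.

10.0.9.40    ceph-mds     # hypothetical example entry; here ceph-mds resolves to the ceph-mon machine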

 

Note that SSH access to the MDS node has to work; this is a prerequisite.
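

A quick way to check this from the admin node is simply to attempt an SSH login using the new host name, for example (a sketch, assuming the same root access used in the ceph-deploy run below):

ssh root@ceph-mds hostname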

 

[root@ceph-mon ~]#
[root@ceph-mon ~]# ceph-deploy mds create ceph-mds
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create ceph-mds
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f29c54e55f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mds at 0x7f29c54b01b8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] mds : [('ceph-mds', 'ceph-mds')]
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ceph-mds:ceph-mds
The authenticity of host 'ceph-mds (10.0.9.40)' can't be established.
ECDSA key fingerprint is SHA256:OOvumn9VbVuPJbDQftpI3GnpQXchomGLwQ4J/1ADy6I.
ECDSA key fingerprint is MD5:1f:dd:66:01:b0:9c:6f:9b:5e:93:f4:80:7e:ad:eb:eb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'ceph-mds,10.0.9.40' (ECDSA) to the list of known hosts.
root@ceph-mds's password:
root@ceph-mds's password:
[ceph-mds][DEBUG ] connected to host: ceph-mds
[ceph-mds][DEBUG ] detect platform information from remote host
[ceph-mds][DEBUG ] detect machine type
[ceph_deploy.mds][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph-mds
[ceph-mds][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mds][WARNIN] mds keyring does not exist yet, creating one
[ceph-mds][DEBUG ] create a keyring file
[ceph-mds][DEBUG ] create path if it doesn't exist
[ceph-mds][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph-mds osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph-mds/keyring
[ceph-mds][INFO ] Running command: systemctl enable ceph-mds@ceph-mds
[ceph-mds][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph-mds.service to /usr/lib/systemd/system/ceph-mds@.service.
[ceph-mds][INFO ] Running command: systemctl start ceph-mds@ceph-mds
[ceph-mds][INFO ] Running command: systemctl enable ceph.target
[root@ceph-mon ~]#

 

 

Note the correct systemd service name that has to be used: it is ceph-mds@<name>, not plain ceph-mds!

 

[root@ceph-mon ~]# systemctl status ceph-mds
Unit ceph-mds.service could not be found.
[root@ceph-mon ~]# systemctl status ceph-mds@ceph-mds
● ceph-mds@ceph-mds.service - Ceph metadata server daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
Active: active (running) since Mo 2021-05-03 04:14:07 CEST; 4min 5s ago
Main PID: 22897 (ceph-mds)
CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph-mds.service
└─22897 /usr/bin/ceph-mds -f --cluster ceph --id ceph-mds --setuser ceph --setgroup ceph

Mai 03 04:14:07 ceph-mon systemd[1]: Started Ceph metadata server daemon.
Mai 03 04:14:07 ceph-mon ceph-mds[22897]: starting mds.ceph-mds at -
[root@ceph-mon ~]#

 

Next, I used ceph-deploy to copy the configuration file and admin key to the metadata server so I can use the ceph CLI without needing to specify monitor address and ceph.client.admin.keyring for each command execution:

 

[root@ceph-mon ~]# ceph-deploy admin ceph-mds
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph-mds
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa99fae82d8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['ceph-mds']
[ceph_deploy.cli][INFO ] func : <function admin at 0x7fa9a05fb488>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-mds
root@ceph-mds's password:
root@ceph-mds's password:
[ceph-mds][DEBUG ] connected to host: ceph-mds
[ceph-mds][DEBUG ] detect platform information from remote host
[ceph-mds][DEBUG ] detect machine type
[ceph-mds][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[root@ceph-mon ~]#

 

then set correct permissions for the ceph.client.admin.keyring:

 

[root@ceph-mon ~]# chmod +r /etc/ceph/ceph.client.admin.keyring
[root@ceph-mon ~]#
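

With the conf and admin keyring now pushed out, the ceph CLI should also work from the MDS node itself. A quick hedged check (assuming the same root SSH access used above) would be:

ssh root@ceph-mds ceph -s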

 

 

 

How To Create a CephFS

 

A Ceph filesystem requires at least two RADOS pools, one for data and one for metadata.

 

Bear in mind that:

 

Use a higher replication level for the metadata pool, as any data loss in this pool can render the whole filesystem inaccessible (see the example command below).

 

Use lower-latency storage such as SSDs for the metadata pool, as this will directly affect the observed latency of filesystem operations on clients.
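

As a hedged illustration of the first point: once the metadata pool has been created (as done below), its replication level could be raised with a command along the following lines, where the size value of 3 is only an example:

ceph osd pool set cephfs_metadata size 3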

 

 

Create the two pools, one for data and one for metadata:

 

[root@ceph-mon ~]# ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
[root@ceph-mon ~]#
[root@ceph-mon ~]#
[root@ceph-mon ~]# ceph osd pool create cephfs_metadata 128
pool 'cephfs_metadata' created
[root@ceph-mon ~]#

 

then enable the filesystem using the fs new command:

 

ceph fs new <fs_name> <metadata> <data>

 

 

so we do:

 

ceph fs new cephfs cephfs_metadata cephfs_data

 

 

then verify with:

 

ceph fs ls

 

and

 

ceph mds stat

 

 

 

[root@ceph-mon ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 5 and data pool 4
[root@ceph-mon ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@ceph-mon ~]#
[root@ceph-mon ~]# ceph mds stat
cephfs-1/1/1 up {0=ceph-mds=up:active}
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
mds: cephfs-1/1/1 up {0=ceph-mds=up:active}
osd: 4 osds: 3 up, 3 in

data:
pools: 2 pools, 256 pgs
objects: 183 objects, 46 MiB
usage: 3.4 GiB used, 2.6 GiB / 6.0 GiB avail
pgs: 256 active+clean

[root@ceph-mon ~]#

 

Once the filesystem is created and the MDS is active you can mount the filesystem:

 

 

How To Mount Cephfs

 

To mount the Ceph file system, use the mount command if you know the monitor host IP address; otherwise use the mount.ceph utility, which resolves the monitor host name to an IP address. E.g.:

 

mkdir /mnt/cephfs
mount -t ceph 192.168.122.21:6789:/ /mnt/cephfs

 

To mount the Ceph file system with cephx authentication enabled, you need to specify a user name and a secret.

 

mount -t ceph 192.168.122.21:6789:/ /mnt/cephfs -o name=admin,secret=DUWEDduoeuroFDWVMWDqfdffDWLSRT==

 

However, a safer method is to read the secret from a file, e.g.:

 

mount -t ceph 192.168.122.21:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
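

The secret file itself can be generated from the admin keyring. A minimal sketch, using ceph-authtool (the same utility used further below) and the keyring path from earlier:

ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret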

 

To unmount cephfs, simply use the umount command as usual, e.g.:

 

umount /mnt/cephfs

 

[root@ceph-mon ~]# mount -t ceph ceph-mds:6789:/ /mnt/cephfs -o name=admin,secret=`ceph-authtool -p ceph.client.admin.keyring`
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 736M 0 736M 0% /dev
tmpfs 748M 0 748M 0% /dev/shm
tmpfs 748M 8,7M 739M 2% /run
tmpfs 748M 0 748M 0% /sys/fs/cgroup
/dev/mapper/centos-root 8,0G 2,4G 5,7G 30% /
/dev/vda1 1014M 172M 843M 17% /boot
tmpfs 150M 0 150M 0% /run/user/0
10.0.9.40:6789:/ 1,4G 0 1,4G 0% /mnt/cephfs
[root@ceph-mon ~]#
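

As a quick sanity check that the mounted filesystem is usable (not part of the original run, just an illustrative test), you could write and list a test file:

echo "hello cephfs" > /mnt/cephfs/testfile
ls -l /mnt/cephfs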

 

 

To mount from the asus laptop I first had to copy the keyring over:

 

scp ceph.client.admin.keyring asus:/root/

 

then I could do:

 

mount -t ceph ceph-mds:6789:/ /mnt/cephfs -o name=admin,secret=`ceph-authtool -p ceph.client.admin.keyring`

root@asus:~#
root@asus:~# mount -t ceph ceph-mds:6789:/ /mnt/cephfs -o name=admin,secret=`ceph-authtool -p ceph.client.admin.keyring`
root@asus:~#
root@asus:~#
root@asus:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 1844344 2052 1842292 1% /run
/dev/nvme0n1p4 413839584 227723904 165024096 58% /
tmpfs 9221712 271220 8950492 3% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 4096 0 4096 0% /sys/fs/cgroup
/dev/nvme0n1p1 98304 33547 64757 35% /boot/efi
tmpfs 1844340 88 1844252 1% /run/user/1000
10.0.9.40:6789:/ 1372160 0 1372160 0% /mnt/cephfs
root@asus:~#

 

 

 
