
LPIC3 DIPLOMA Linux Clustering – LAB NOTES: Lesson Ceph Centos7 – Basic Ceph Installation and Config


LAB on Ceph Clustering on Centos7

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

This lab uses the ceph-deploy tool to set up the Ceph cluster. Note, however, that ceph-deploy is now an outdated tool that is no longer maintained by the Ceph project, and it is not available for CentOS 8. The notes below relate to CentOS 7.

 

For CentOS versions higher than 7, the Ceph project advises using the cephadm tool to install Ceph on cluster nodes.

 

At the time of writing (2021) knowledge of ceph-deploy is a stipulated syllabus requirement of the LPIC3-306 Clustering Diploma Exam, hence this Centos7 Ceph lab refers to ceph-deploy.

 

 

As Ceph is a large and complex subject, these notes have been split into several different pages.

 

 

Overview of Cluster Environment 

 

 

The cluster comprises four nodes (one admin/monitor node and three OSD nodes) installed with CentOS 7 and housed on a KVM virtual machine system on an Ubuntu Linux host. We install CentOS 7 rather than a more recent release because the later versions are not compatible with the ceph-deploy tool.

 

I first created a base installation virtual machine called ceph-base. From this I then clone the machines needed to build the cluster. ceph-base does NOT form part of the cluster.

 

 

ceph-mon (10.0.9.40 / 192.168.122.40) is the admin node, which runs ceph-deploy and the MON (monitor) daemon. The other machines are cloned from the ceph-base VM.

 

 

# ceph cluster 10.0.9.0 centos version 7

 

10.0.9.9 ceph-base
192.168.122.8 ceph-basevm # centos7

 

 

10.0.9.0 is the ceph cluster private network. We run 4 machines as follows:

10.0.9.40 ceph-mon
10.0.9.10 ceph-osd0
10.0.9.11 ceph-osd1
10.0.9.12 ceph-osd2

 

192.168.122.0 is the KVM network. Each machine also has an interface to this network.

192.168.122.40 ceph-monvm
192.168.122.50 ceph-osd0vm
192.168.122.51 ceph-osd1vm
192.168.122.52 ceph-osd2vm
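
These addresses go into the hosts file that is later copied out to all the machines (see the preparation steps below); its entries look roughly like this:

10.0.9.40 ceph-mon
10.0.9.10 ceph-osd0
10.0.9.11 ceph-osd1
10.0.9.12 ceph-osd2
192.168.122.40 ceph-monvm
192.168.122.50 ceph-osd0vm
192.168.122.51 ceph-osd1vm
192.168.122.52 ceph-osd2vm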

 

Preparation of Ceph Cluster Machines

 

ceph-base serves as a template virtual machine for cloning the actual ceph cluster nodes. It does not form part of the cluster.

 

on ceph-base:

 

installed centos7
configured 2 ethernet interfaces for the nat networks: 10.0.9.0 and 192.168.122.0
added default route
added nameserver

added ssh keys for passwordless login for root from laptop asus

updated software packages: yum update

copied hosts file from asus to the virtual machines via scp

[root@ceph-base ~]# useradd -d /home/cephuser -m cephuser

 

created a sudoers file for the user and edited the /etc/sudoers file with sed.

[root@ceph-base ~]# echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
cephuser ALL = (root) NOPASSWD:ALL
[root@ceph-base ~]# chmod 0440 /etc/sudoers.d/cephuser
[root@ceph-base ~]# sed -i 's/Defaults requiretty/#Defaults requiretty/g' /etc/sudoers
[root@ceph-base ~]#
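
A quick way to verify that passwordless sudo works for cephuser:

su - cephuser -c 'sudo whoami'    # should print "root" without prompting for a password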

 

 

[root@ceph-base ~]# yum install -y ntp ntpdate ntp-doc
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: ftp.hosteurope.de
* extras: ftp.hosteurope.de
* updates: mirror.23media.com
Package ntpdate-4.2.6p5-29.el7.centos.2.x86_64 already installed and latest version
Resolving Dependencies
--> Running transaction check
---> Package ntp.x86_64 0:4.2.6p5-29.el7.centos.2 will be installed
---> Package ntp-doc.noarch 0:4.2.6p5-29.el7.centos.2 will be installed
--> Finished Dependency Resolution

 

Dependencies Resolved

==============================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================
Installing:
ntp x86_64 4.2.6p5-29.el7.centos.2 base 549 k
ntp-doc noarch 4.2.6p5-29.el7.centos.2 base 1.0 M

Transaction Summary
==============================================================================================================================================================
Install 2 Packages

Total download size: 1.6 M
Installed size: 3.0 M
Downloading packages:
(1/2): ntp-doc-4.2.6p5-29.el7.centos.2.noarch.rpm | 1.0 MB 00:00:00
(2/2): ntp-4.2.6p5-29.el7.centos.2.x86_64.rpm | 549 kB 00:00:00
————————————————————————————————————————————————————–
Total 2.4 MB/s | 1.6 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : ntp-4.2.6p5-29.el7.centos.2.x86_64 1/2
Installing : ntp-doc-4.2.6p5-29.el7.centos.2.noarch 2/2
Verifying : ntp-doc-4.2.6p5-29.el7.centos.2.noarch 1/2
Verifying : ntp-4.2.6p5-29.el7.centos.2.x86_64 2/2

 

Installed:
ntp.x86_64 0:4.2.6p5-29.el7.centos.2 ntp-doc.noarch 0:4.2.6p5-29.el7.centos.2

Complete!

 

Next, do:

[root@ceph-base ~]# ntpdate 0.us.pool.ntp.org
26 Apr 15:30:17 ntpdate[23660]: step time server 108.61.73.243 offset 0.554294 sec

[root@ceph-base ~]# hwclock --systohc

[root@ceph-base ~]# systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.

[root@ceph-base ~]# systemctl start ntpd.service
[root@ceph-base ~]#

 

Disable SELinux Security

 

 

Disabled SELinux on all nodes by editing the SELinux configuration file with the sed stream editor. This was carried out on the ceph-base virtual machine from which we will be cloning the ceph cluster nodes, so this only needs to be done once.

 

[root@ceph-base ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@ceph-base ~]#
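
The change in /etc/selinux/config only takes effect at the next boot; to also switch off enforcement on the running system you can run:

setenforce 0     # put SELinux into permissive mode immediately
getenforce       # should now report Permissive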

 

 

Generate the ssh keys for ‘cephuser’.

 

[root@ceph-base ~]# su - cephuser

 

[cephuser@ceph-base ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephuser/.ssh/id_rsa):
Created directory ‘/home/cephuser/.ssh’.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cephuser/.ssh/id_rsa.
Your public key has been saved in /home/cephuser/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:PunfPQf+aF2rr3lzI0WzXJZXO5AIjX0W+aC4h+ss0E8 cephuser@ceph-base.localdomain
The key’s randomart image is:
+—[RSA 2048]—-+
| .= ..+ |
| . + B .|
| . + + +|
| . . B+|
| . S o o.*|
| . o E . .+.|
| . * o ..oo|
| o.+ . o=*+|
| ++. .=O==|
+—-[SHA256]—–+
[cephuser@ceph-base ~]$
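
In this lab ceph-deploy is later run as root from ceph-mon, but if you wanted to drive it as cephuser instead, this key would be distributed to the cluster nodes in the same way once they exist, for example:

for node in ceph-mon ceph-osd0 ceph-osd1 ceph-osd2; do ssh-copy-id cephuser@$node; done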

 

 

Configure or Disable Firewalling

 

On a production cluster the firewall would remain active and the ceph ports would be opened. 

 

Monitors listen on tcp:6789 by default, so for ceph-mon you would need:

 

firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --reload

 

OSDs listen on a range of ports, tcp:6800-7300 by default, so you would need to run on ceph-osd{0,1,2}:

 

firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload
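
You can then confirm on each node that the ports are open with:

firewall-cmd --zone=public --list-ports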

 

However as this is a test lab we can stop and disable the firewall. 

 

[root@ceph-base ~]# systemctl stop firewalld

 

[root@ceph-base ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@ceph-base ~]#

 

 

 

Ceph Package Installation

 

 

install the centos-release-ceph rpm from centos-extras:

 

yum -y install --enablerepo=extras centos-release-ceph

 

[root@ceph-base ~]# yum -y install --enablerepo=extras centos-release-ceph
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: ftp.rz.uni-frankfurt.de
* extras: mirror.cuegee.com
* updates: mirror.23media.com
Resolving Dependencies
--> Running transaction check
---> Package centos-release-ceph-nautilus.noarch 0:1.2-2.el7.centos will be installed
--> Processing Dependency: centos-release-storage-common for package: centos-release-ceph-nautilus-1.2-2.el7.centos.noarch
--> Processing Dependency: centos-release-nfs-ganesha28 for package: centos-release-ceph-nautilus-1.2-2.el7.centos.noarch
--> Running transaction check
---> Package centos-release-nfs-ganesha28.noarch 0:1.0-3.el7.centos will be installed
---> Package centos-release-storage-common.noarch 0:2-2.el7.centos will be installed
--> Finished Dependency Resolution

 

Dependencies Resolved

==============================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================
Installing:
centos-release-ceph-nautilus noarch 1.2-2.el7.centos extras 5.1 k
Installing for dependencies:
centos-release-nfs-ganesha28 noarch 1.0-3.el7.centos extras 4.3 k
centos-release-storage-common noarch 2-2.el7.centos extras 5.1 k

Transaction Summary
==============================================================================================================================================================
Install 1 Package (+2 Dependent packages)

Total download size: 15 k
Installed size: 3.0 k
Downloading packages:
(1/3): centos-release-storage-common-2-2.el7.centos.noarch.rpm | 5.1 kB 00:00:00
(2/3): centos-release-ceph-nautilus-1.2-2.el7.centos.noarch.rpm | 5.1 kB 00:00:00
(3/3): centos-release-nfs-ganesha28-1.0-3.el7.centos.noarch.rpm | 4.3 kB 00:00:00
————————————————————————————————————————————————————–
Total 52 kB/s | 15 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : centos-release-storage-common-2-2.el7.centos.noarch 1/3
Installing : centos-release-nfs-ganesha28-1.0-3.el7.centos.noarch 2/3
Installing : centos-release-ceph-nautilus-1.2-2.el7.centos.noarch 3/3
Verifying : centos-release-ceph-nautilus-1.2-2.el7.centos.noarch 1/3
Verifying : centos-release-nfs-ganesha28-1.0-3.el7.centos.noarch 2/3
Verifying : centos-release-storage-common-2-2.el7.centos.noarch 3/3

Installed:
centos-release-ceph-nautilus.noarch 0:1.2-2.el7.centos

Dependency Installed:
centos-release-nfs-ganesha28.noarch 0:1.0-3.el7.centos centos-release-storage-common.noarch 0:2-2.el7.centos

 

Complete!
[root@ceph-base ~]#

 

 

To install ceph-deploy on centos7 I had to add the following to the repo list at /etc/yum.repos.d/CentOS-Ceph-Nautilus.repo

 

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7/noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

 

 

then do a yum update:

 

[root@ceph-base yum.repos.d]# yum update
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: ftp.rz.uni-frankfurt.de
* centos-ceph-nautilus: mirror2.hs-esslingen.de
* centos-nfs-ganesha28: ftp.rz.uni-frankfurt.de
* extras: ftp.halifax.rwth-aachen.de
* updates: mirror1.hs-esslingen.de
centos-ceph-nautilus | 3.0 kB 00:00:00
ceph-noarch | 1.5 kB 00:00:00
ceph-noarch/primary | 16 kB 00:00:00
ceph-noarch 170/170
Resolving Dependencies
--> Running transaction check
---> Package python-cffi.x86_64 0:1.6.0-5.el7 will be obsoleted
---> Package python-idna.noarch 0:2.4-1.el7 will be obsoleted
---> Package python-ipaddress.noarch 0:1.0.16-2.el7 will be obsoleted
---> Package python-six.noarch 0:1.9.0-2.el7 will be obsoleted
---> Package python2-cffi.x86_64 0:1.11.2-1.el7 will be obsoleting
---> Package python2-cryptography.x86_64 0:1.7.2-2.el7 will be updated
---> Package python2-cryptography.x86_64 0:2.5-1.el7 will be an update
--> Processing Dependency: python2-asn1crypto >= 0.21 for package: python2-cryptography-2.5-1.el7.x86_64
---> Package python2-idna.noarch 0:2.5-1.el7 will be obsoleting
---> Package python2-ipaddress.noarch 0:1.0.18-5.el7 will be obsoleting
---> Package python2-six.noarch 0:1.12.0-1.el7 will be obsoleting
---> Package smartmontools.x86_64 1:7.0-2.el7 will be updated
---> Package smartmontools.x86_64 1:7.0-3.el7 will be an update
--> Running transaction check
---> Package python2-asn1crypto.noarch 0:0.23.0-2.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================
Installing:
python2-cffi x86_64 1.11.2-1.el7 centos-ceph-nautilus 229 k
replacing python-cffi.x86_64 1.6.0-5.el7
python2-idna noarch 2.5-1.el7 centos-ceph-nautilus 94 k
replacing python-idna.noarch 2.4-1.el7
python2-ipaddress noarch 1.0.18-5.el7 centos-ceph-nautilus 35 k
replacing python-ipaddress.noarch 1.0.16-2.el7
python2-six noarch 1.12.0-1.el7 centos-ceph-nautilus 33 k
replacing python-six.noarch 1.9.0-2.el7
Updating:
python2-cryptography x86_64 2.5-1.el7 centos-ceph-nautilus 544 k
smartmontools x86_64 1:7.0-3.el7 centos-ceph-nautilus 547 k
Installing for dependencies:
python2-asn1crypto noarch 0.23.0-2.el7 centos-ceph-nautilus 172 k

Transaction Summary
==============================================================================================================================================================
Install 4 Packages (+1 Dependent package)
Upgrade 2 Packages

Total download size: 1.6 M
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
warning: /var/cache/yum/x86_64/7/centos-ceph-nautilus/packages/python2-asn1crypto-0.23.0-2.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID e451e5b5: NOKEY
Public key for python2-asn1crypto-0.23.0-2.el7.noarch.rpm is not installed
(1/7): python2-asn1crypto-0.23.0-2.el7.noarch.rpm | 172 kB 00:00:00
(2/7): python2-cffi-1.11.2-1.el7.x86_64.rpm | 229 kB 00:00:00
(3/7): python2-cryptography-2.5-1.el7.x86_64.rpm | 544 kB 00:00:00
(4/7): python2-ipaddress-1.0.18-5.el7.noarch.rpm | 35 kB 00:00:00
(5/7): python2-six-1.12.0-1.el7.noarch.rpm | 33 kB 00:00:00
(6/7): smartmontools-7.0-3.el7.x86_64.rpm | 547 kB 00:00:00
(7/7): python2-idna-2.5-1.el7.noarch.rpm | 94 kB 00:00:00
————————————————————————————————————————————————————–
Total 1.9 MB/s | 1.6 MB 00:00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
Importing GPG key 0xE451E5B5:
Userid : “CentOS Storage SIG (http://wiki.centos.org/SpecialInterestGroup/Storage) <security@centos.org>”
Fingerprint: 7412 9c0b 173b 071a 3775 951a d4a2 e50b e451 e5b5
Package : centos-release-storage-common-2-2.el7.centos.noarch (@extras)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : python2-cffi-1.11.2-1.el7.x86_64 1/13
Installing : python2-idna-2.5-1.el7.noarch 2/13
Installing : python2-six-1.12.0-1.el7.noarch 3/13
Installing : python2-asn1crypto-0.23.0-2.el7.noarch 4/13
Installing : python2-ipaddress-1.0.18-5.el7.noarch 5/13
Updating : python2-cryptography-2.5-1.el7.x86_64 6/13
Updating : 1:smartmontools-7.0-3.el7.x86_64 7/13
Cleanup : python2-cryptography-1.7.2-2.el7.x86_64 8/13
Erasing : python-idna-2.4-1.el7.noarch 9/13
Erasing : python-ipaddress-1.0.16-2.el7.noarch 10/13
Erasing : python-six-1.9.0-2.el7.noarch 11/13
Erasing : python-cffi-1.6.0-5.el7.x86_64 12/13
Cleanup : 1:smartmontools-7.0-2.el7.x86_64 13/13
Verifying : python2-ipaddress-1.0.18-5.el7.noarch 1/13
Verifying : python2-asn1crypto-0.23.0-2.el7.noarch 2/13
Verifying : python2-six-1.12.0-1.el7.noarch 3/13
Verifying : python2-cryptography-2.5-1.el7.x86_64 4/13
Verifying : python2-idna-2.5-1.el7.noarch 5/13
Verifying : 1:smartmontools-7.0-3.el7.x86_64 6/13
Verifying : python2-cffi-1.11.2-1.el7.x86_64 7/13
Verifying : python-idna-2.4-1.el7.noarch 8/13
Verifying : python-ipaddress-1.0.16-2.el7.noarch 9/13
Verifying : 1:smartmontools-7.0-2.el7.x86_64 10/13
Verifying : python-cffi-1.6.0-5.el7.x86_64 11/13
Verifying : python-six-1.9.0-2.el7.noarch 12/13
Verifying : python2-cryptography-1.7.2-2.el7.x86_64 13/13

Installed:
python2-cffi.x86_64 0:1.11.2-1.el7 python2-idna.noarch 0:2.5-1.el7 python2-ipaddress.noarch 0:1.0.18-5.el7 python2-six.noarch 0:1.12.0-1.el7

Dependency Installed:
python2-asn1crypto.noarch 0:0.23.0-2.el7

Updated:
python2-cryptography.x86_64 0:2.5-1.el7 smartmontools.x86_64 1:7.0-3.el7

Replaced:
python-cffi.x86_64 0:1.6.0-5.el7 python-idna.noarch 0:2.4-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-six.noarch 0:1.9.0-2.el7

Complete!

 

[root@ceph-base yum.repos.d]# ceph-deploy
-bash: ceph-deploy: command not found

 

so then do:

 

[root@ceph-base yum.repos.d]# yum -y install ceph-deploy
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: ftp.rz.uni-frankfurt.de
* centos-ceph-nautilus: de.mirrors.clouvider.net
* centos-nfs-ganesha28: ftp.rz.uni-frankfurt.de
* extras: ftp.fau.de
* updates: mirror1.hs-esslingen.de
Resolving Dependencies
--> Running transaction check
---> Package ceph-deploy.noarch 0:2.0.1-0 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==============================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================
Installing:
ceph-deploy noarch 2.0.1-0 ceph-noarch 286 k

Transaction Summary
==============================================================================================================================================================
Install 1 Package

Total download size: 286 k
Installed size: 1.2 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/ceph-noarch/packages/ceph-deploy-2.0.1-0.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID 460f3994: NOKEY kB –:–:– ETA
Public key for ceph-deploy-2.0.1-0.noarch.rpm is not installed
ceph-deploy-2.0.1-0.noarch.rpm | 286 kB 00:00:01
Retrieving key from https://download.ceph.com/keys/release.asc
Importing GPG key 0x460F3994:
Userid : “Ceph.com (release key) <security@ceph.com>”
Fingerprint: 08b7 3419 ac32 b4e9 66c1 a330 e84a c2c0 460f 3994
From : https://download.ceph.com/keys/release.asc
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : ceph-deploy-2.0.1-0.noarch 1/1
Verifying : ceph-deploy-2.0.1-0.noarch 1/1

Installed:
ceph-deploy.noarch 0:2.0.1-0

Complete!
[root@ceph-base yum.repos.d]#

 

 

With that, ceph-deploy is now installed:

 

[root@ceph-base ~]# ceph-deploy
usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
[--overwrite-conf] [--ceph-conf CEPH_CONF]
COMMAND ...

 

The next step is to clone ceph-base to create the VMs which will serve as the ceph cluster nodes; after that we can create the cluster using ceph-deploy. The machines are created using KVM (a rough sketch of the cloning commands follows the list of machines below).

 

We create the following machines:

ceph-mon

ceph-osd0

ceph-osd1

ceph-osd2
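
The cloning itself is done on the KVM host. A minimal sketch, assuming the KVM domain names match the hostnames above:

# on the KVM host (domain names are assumptions)
for vm in ceph-mon ceph-osd0 ceph-osd1 ceph-osd2; do
virt-clone --original ceph-base --name $vm --auto-clone
done
# then boot each clone and adjust its hostname and the two static IP addresses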

 

 

After this, create an ssh key on ceph-mon and then copy it to the OSD nodes as follows:

 

[root@ceph-mon ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:9VOirKNfbuRHHA88mIOl9Q7fWf0wxvGd8eYqQwp4u0k root@ceph-mon
The key’s randomart image is:
+—[RSA 2048]—-+
| |
| o .. |
| =.=…o*|
| oo=o*o=.B|
| .S o*oB B.|
| . o.. *.+ o|
| .E=.+ . |
| o.=+ + . |
| ..+o.. o |
+—-[SHA256]—–+
[root@ceph-mon ~]#
[root@ceph-mon ~]# ssh-copy-id root@ceph-osd1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: “/root/.ssh/id_rsa.pub”
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed — if you are prompted now it is to install the new keys
root@ceph-osd1’s password:

Number of key(s) added: 1

 

Now try logging into the machine, with: “ssh ‘root@ceph-osd1′”
and check to make sure that only the key(s) you wanted were added.

[root@ceph-mon ~]#
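
The same is done for the remaining OSD nodes; a short loop covers all of them:

for node in ceph-osd0 ceph-osd1 ceph-osd2; do ssh-copy-id root@$node; done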

 

Install Ceph Monitor

 

We install the monitor on the machine designated for this purpose, i.e. ceph-mon:

 

Normally, in a production Ceph cluster you would run at least three monitor nodes (an odd number) to allow for failover and quorum.
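
If additional monitor hosts were available (say ceph-mon2 and ceph-mon3, which are hypothetical here), they could later be added to the running cluster along these lines:

ceph-deploy mon add ceph-mon2
ceph-deploy mon add ceph-mon3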

 

 

[root@ceph-mon ~]# ceph-deploy install --mon ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy install --mon ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None

 

… long list of package installations….

[ceph-mon][DEBUG ] python2-webob.noarch 0:1.8.5-1.el7
[ceph-mon][DEBUG ] rdma-core.x86_64 0:22.4-5.el7
[ceph-mon][DEBUG ] userspace-rcu.x86_64 0:0.10.0-3.el7
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Complete!
[ceph-mon][INFO ] Running command: ceph –version
[ceph-mon][DEBUG ] ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
[root@ceph-mon ~]#

 

 

 

Install Ceph Manager

 

This will be installed on node ceph-mon:

 

[root@ceph-mon ~]# ceph-deploy mgr create ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] mgr : [(‘ceph-mon’, ‘ceph-mon’)]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f07237fda28>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mgr at 0x7f0724066398>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-mon:ceph-mon
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-mon
[ceph-mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon][WARNIN] mgr keyring does not exist yet, creating one
[ceph-mon][DEBUG ] create a keyring file
[ceph-mon][DEBUG ] create path recursively if it doesn’t exist
[ceph-mon][INFO ] Running command: ceph –cluster ceph –name client.bootstrap-mgr –keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-mon mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-mon/keyring
[ceph-mon][INFO ] Running command: systemctl enable ceph-mgr@ceph-mon
[ceph-mon][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-mon.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-mon][INFO ] Running command: systemctl start ceph-mgr@ceph-mon
[ceph-mon][INFO ] Running command: systemctl enable ceph.target
[root@ceph-mon ~]#

 

 

on ceph-mon, create the cluster configuration file:

 

ceph-deploy new ceph-mon

 

[root@ceph-mon ~]# ceph-deploy new ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy new ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7f5d34d4a0c8>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5d344cb830>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : [‘ceph-mon’]
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph-mon][DEBUG ] find the location of an executable
[ceph-mon][INFO ] Running command: /usr/sbin/ip link show
[ceph-mon][INFO ] Running command: /usr/sbin/ip addr show
[ceph-mon][DEBUG ] IP addresses found: [u’192.168.122.40′, u’10.0.9.40′]
[ceph_deploy.new][DEBUG ] Resolving host ceph-mon
[ceph_deploy.new][DEBUG ] Monitor ceph-mon at 10.0.9.40
[ceph_deploy.new][DEBUG ] Monitor initial members are [‘ceph-mon’]
[ceph_deploy.new][DEBUG ] Monitor addrs are [‘10.0.9.40’]
[ceph_deploy.new][DEBUG ] Creating a random mon key…
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring…
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf…
[root@ceph-mon ~]#

 

 

Append some configuration directives to ceph.conf: a roughly 1 GB journal, a pool default of 2 replicas per object (with 2 also the minimum required to serve I/O), 128 placement groups per pool by default, etc.

 

$ cat << EOF >> ceph.conf
osd_journal_size = 1000
osd_pool_default_size = 2
osd_pool_default_min_size = 2
osd_crush_chooseleaf_type = 1
osd_crush_update_on_start = true
max_open_files = 131072
osd pool default pg num = 128
osd pool default pgp num = 128
mon_pg_warn_max_per_osd = 0
EOF

 

 

[root@ceph-mon ~]# cat ceph.conf
[global]
fsid = 2e490f0d-41dc-4be2-b31f-c77627348d60
mon_initial_members = ceph-mon
mon_host = 10.0.9.40
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

osd_journal_size = 1000
osd_pool_default_size = 2
osd_pool_default_min_size = 2
osd_crush_chooseleaf_type = 1
osd_crush_update_on_start = true
max_open_files = 131072
osd pool default pg num = 128
osd pool default pgp num = 128
mon_pg_warn_max_per_osd = 0
[root@ceph-mon ~]#

 

 

next, create the ceph monitor on machine ceph-mon:

 

 

ceph-deploy mon create-initial

 

this does quite a lot, see below:

 

[root@ceph-mon ~]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fd4742b6fc8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7fd474290668>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon …
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph-mon][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.9.2009 Core
[ceph-mon][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon][DEBUG ] get remote short hostname
[ceph-mon][DEBUG ] deploying mon to ceph-mon
[ceph-mon][DEBUG ] get remote short hostname
[ceph-mon][DEBUG ] remote hostname: ceph-mon
[ceph-mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon][DEBUG ] create the mon path if it does not exist
[ceph-mon][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon/done
[ceph-mon][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-mon/done
[ceph-mon][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-mon.mon.keyring
[ceph-mon][DEBUG ] create the monitor keyring file
[ceph-mon][INFO ] Running command: ceph-mon –cluster ceph –mkfs -i ceph-mon –keyring /var/lib/ceph/tmp/ceph-ceph-mon.mon.keyring –setuser 167 –setgroup 167
[ceph-mon][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-mon.mon.keyring
[ceph-mon][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-mon][DEBUG ] create the init path if it does not exist
[ceph-mon][INFO ] Running command: systemctl enable ceph.target
[ceph-mon][INFO ] Running command: systemctl enable ceph-mon@ceph-mon
[ceph-mon][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-mon.service to /usr/lib/systemd/system/ceph-mon@.service.
[ceph-mon][INFO ] Running command: systemctl start ceph-mon@ceph-mon
[ceph-mon][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.ceph-mon.asok mon_status
[ceph-mon][DEBUG ] ********************************************************************************
[ceph-mon][DEBUG ] status for monitor: mon.ceph-mon
… … … …

(edited out long list of DEBUG lines)

 

[ceph-mon][DEBUG ] ********************************************************************************
[ceph-mon][INFO ] monitor: mon.ceph-mon is running
[ceph-mon][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.ceph-mon.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.ceph-mon
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph-mon][DEBUG ] find the location of an executable
[ceph-mon][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.ceph-mon.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph-mon monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys…
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmp6aKZHd
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph-mon][DEBUG ] get remote short hostname
[ceph-mon][DEBUG ] fetch remote file
[ceph-mon][INFO ] Running command: /usr/bin/ceph –connect-timeout=25 –cluster=ceph –admin-daemon=/var/run/ceph/ceph-mon.ceph-mon.asok mon_status
[ceph-mon][INFO ] Running command: /usr/bin/ceph –connect-timeout=25 –cluster=ceph –name mon. –keyring=/var/lib/ceph/mon/ceph-ceph-mon/keyring auth get client.admin
[ceph-mon][INFO ] Running command: /usr/bin/ceph –connect-timeout=25 –cluster=ceph –name mon. –keyring=/var/lib/ceph/mon/ceph-ceph-mon/keyring auth get client.bootstrap-mds
[ceph-mon][INFO ] Running command: /usr/bin/ceph –connect-timeout=25 –cluster=ceph –name mon. –keyring=/var/lib/ceph/mon/ceph-ceph-mon/keyring auth get client.bootstrap-mgr
[ceph-mon][INFO ] Running command: /usr/bin/ceph –connect-timeout=25 –cluster=ceph –name mon. –keyring=/var/lib/ceph/mon/ceph-ceph-mon/keyring auth get client.bootstrap-osd
[ceph-mon][INFO ] Running command: /usr/bin/ceph –connect-timeout=25 –cluster=ceph –name mon. –keyring=/var/lib/ceph/mon/ceph-ceph-mon/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring ‘ceph.mon.keyring’ already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmp6aKZHd
[root@ceph-mon ~]#

 

 

Next, also on ceph-mon, install and configure the Ceph command-line interface (CLI):

 

ceph-deploy install --cli ceph-mon

 

again, this does a lot…

 

[root@ceph-mon ~]# ceph-deploy install --cli ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy install --cli ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f10e0ab0320>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x7f10e157a848>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : [‘ceph-mon’]
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : True
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version mimic on cluster ceph hosts ceph-mon
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-mon …
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph-mon][INFO ] installing Ceph on ceph-mon
[ceph-mon][INFO ] Running command: yum clean all
[ceph-mon][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[ceph-mon][DEBUG ] Cleaning repos: Ceph Ceph-noarch base centos-ceph-nautilus centos-nfs-ganesha28
[ceph-mon][DEBUG ] : ceph-noarch ceph-source epel extras updates
[ceph-mon][DEBUG ] Cleaning up list of fastest mirrors
[ceph-mon][INFO ] Running command: yum -y install epel-release
[ceph-mon][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[ceph-mon][DEBUG ] Determining fastest mirrors
[ceph-mon][DEBUG ] * base: ftp.antilo.de
[ceph-mon][DEBUG ] * centos-ceph-nautilus: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * centos-nfs-ganesha28: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * epel: epel.mirror.nucleus.be
[ceph-mon][DEBUG ] * extras: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * updates: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] 517 packages excluded due to repository priority protections
[ceph-mon][DEBUG ] Resolving Dependencies
[ceph-mon][DEBUG ] --> Running transaction check
[ceph-mon][DEBUG ] ---> Package epel-release.noarch 0:7-11 will be updated
[ceph-mon][DEBUG ] ---> Package epel-release.noarch 0:7-13 will be an update
[ceph-mon][DEBUG ] --> Finished Dependency Resolution
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Dependencies Resolved
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Package Arch Version Repository Size
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Updating:
[ceph-mon][DEBUG ] epel-release noarch 7-13 epel 15 k
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Transaction Summary
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Upgrade 1 Package
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Total download size: 15 k
[ceph-mon][DEBUG ] Downloading packages:
[ceph-mon][DEBUG ] Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
[ceph-mon][DEBUG ] Running transaction check
[ceph-mon][DEBUG ] Running transaction test
[ceph-mon][DEBUG ] Transaction test succeeded
[ceph-mon][DEBUG ] Running transaction
[ceph-mon][DEBUG ] Updating : epel-release-7-13.noarch 1/2
[ceph-mon][DEBUG ] Cleanup : epel-release-7-11.noarch 2/2
[ceph-mon][DEBUG ] Verifying : epel-release-7-13.noarch 1/2
[ceph-mon][DEBUG ] Verifying : epel-release-7-11.noarch 2/2
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Updated:
[ceph-mon][DEBUG ] epel-release.noarch 0:7-13
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Complete!
[ceph-mon][INFO ] Running command: yum -y install yum-plugin-priorities
[ceph-mon][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[ceph-mon][DEBUG ] Loading mirror speeds from cached hostfile
[ceph-mon][DEBUG ] * base: ftp.antilo.de
[ceph-mon][DEBUG ] * centos-ceph-nautilus: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * centos-nfs-ganesha28: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * epel: epel.mirror.nucleus.be
[ceph-mon][DEBUG ] * extras: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * updates: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] 517 packages excluded due to repository priority protections
[ceph-mon][DEBUG ] Package yum-plugin-priorities-1.1.31-54.el7_8.noarch already installed and latest version
[ceph-mon][DEBUG ] Nothing to do
[ceph-mon][DEBUG ] Configure Yum priorities to include obsoletes
[ceph-mon][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[ceph-mon][INFO ] Running command: rpm –import https://download.ceph.com/keys/release.asc
[ceph-mon][INFO ] Running command: yum remove -y ceph-release
[ceph-mon][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[ceph-mon][DEBUG ] Resolving Dependencies
[ceph-mon][DEBUG ] --> Running transaction check
[ceph-mon][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be erased
[ceph-mon][DEBUG ] --> Finished Dependency Resolution
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Dependencies Resolved
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Package Arch Version Repository Size
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Removing:
[ceph-mon][DEBUG ] ceph-release noarch 1-1.el7 @/ceph-release-1-0.el7.noarch 535
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Transaction Summary
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Remove 1 Package
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Installed size: 535
[ceph-mon][DEBUG ] Downloading packages:
[ceph-mon][DEBUG ] Running transaction check
[ceph-mon][DEBUG ] Running transaction test
[ceph-mon][DEBUG ] Transaction test succeeded
[ceph-mon][DEBUG ] Running transaction
[ceph-mon][DEBUG ] Erasing : ceph-release-1-1.el7.noarch 1/1
[ceph-mon][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[ceph-mon][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Removed:
[ceph-mon][DEBUG ] ceph-release.noarch 0:1-1.el7
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Complete!
[ceph-mon][INFO ] Running command: yum install -y https://download.ceph.com/rpm-mimic/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[ceph-mon][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[ceph-mon][DEBUG ] Examining /var/tmp/yum-root-mTn5ik/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[ceph-mon][DEBUG ] Marking /var/tmp/yum-root-mTn5ik/ceph-release-1-0.el7.noarch.rpm to be installed
[ceph-mon][DEBUG ] Resolving Dependencies
[ceph-mon][DEBUG ] --> Running transaction check
[ceph-mon][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[ceph-mon][DEBUG ] --> Finished Dependency Resolution
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Dependencies Resolved
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Package Arch Version Repository Size
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Installing:
[ceph-mon][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 535
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Transaction Summary
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Install 1 Package
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Total size: 535
[ceph-mon][DEBUG ] Installed size: 535
[ceph-mon][DEBUG ] Downloading packages:
[ceph-mon][DEBUG ] Running transaction check
[ceph-mon][DEBUG ] Running transaction test
[ceph-mon][DEBUG ] Transaction test succeeded
[ceph-mon][DEBUG ] Running transaction
[ceph-mon][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[ceph-mon][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Installed:
[ceph-mon][DEBUG ] ceph-release.noarch 0:1-1.el7
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Complete!
[ceph-mon][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph-mon][WARNIN] altered ceph.repo priorities to contain: priority=1
[ceph-mon][INFO ] Running command: yum -y install ceph-common
[ceph-mon][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[ceph-mon][DEBUG ] Loading mirror speeds from cached hostfile
[ceph-mon][DEBUG ] * base: ftp.antilo.de
[ceph-mon][DEBUG ] * centos-ceph-nautilus: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * centos-nfs-ganesha28: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * epel: epel.mirror.nucleus.be
[ceph-mon][DEBUG ] * extras: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * updates: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] 517 packages excluded due to repository priority protections
[ceph-mon][DEBUG ] Package 2:ceph-common-13.2.10-0.el7.x86_64 already installed and latest version
[ceph-mon][DEBUG ] Nothing to do
[ceph-mon][INFO ] Running command: ceph –version
[ceph-mon][DEBUG ] ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
[root@ceph-mon ~]#

 

 

then do:

 

ceph-deploy admin ceph-mon

 

[root@ceph-mon ~]# ceph-deploy admin ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fcbddacd2d8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : [‘ceph-mon’]
[ceph_deploy.cli][INFO ] func : <function admin at 0x7fcbde5e0488>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-mon
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph-mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph-deploy mon create ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ffafa7fffc8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] mon : [‘ceph-mon’]
[ceph_deploy.cli][INFO ] func : <function mon at 0x7ffafa7d9668>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon …
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph-mon][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.9.2009 Core
[ceph-mon][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon][DEBUG ] get remote short hostname
[ceph-mon][DEBUG ] deploying mon to ceph-mon
[ceph-mon][DEBUG ] get remote short hostname
[ceph-mon][DEBUG ] remote hostname: ceph-mon
[ceph-mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon][DEBUG ] create the mon path if it does not exist
[ceph-mon][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon/done
[ceph-mon][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-mon][DEBUG ] create the init path if it does not exist
[ceph-mon][INFO ] Running command: systemctl enable ceph.target
[ceph-mon][INFO ] Running command: systemctl enable ceph-mon@ceph-mon
[ceph-mon][INFO ] Running command: systemctl start ceph-mon@ceph-mon
[ceph-mon][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.ceph-mon.asok mon_status
[ceph-mon][DEBUG ] ********************************************************************************
[ceph-mon][DEBUG ] status for monitor: mon.ceph-mon 
[ceph-mon][DEBUG ] }

…. … (edited out long list of DEBUG line output)

[ceph-mon][DEBUG ] ********************************************************************************
[ceph-mon][INFO ] monitor: mon.ceph-mon is running
[ceph-mon][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.ceph-mon.asok mon_status
[root@ceph-mon ~]#

 

 

Since we are not doing an upgrade, switch CRUSH tunables to optimal:

 

ceph osd crush tunables optimal

 

 

[root@ceph-mon ~]# ceph osd crush tunables optimal
adjusted tunables profile to optimal
[root@ceph-mon ~]#
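
The active tunables profile can be checked afterwards with:

ceph osd crush show-tunables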

 

Create the OSDs

 

Any new OSDs (e.g., when the cluster is expanded) can be deployed using BlueStore.

 

This is the default behavior so no specific change is needed.

 

first do:

 

ceph-deploy install --osd ceph-osd0 ceph-osd1 ceph-osd2

 

To create an OSD on a remote node, run:

 

ceph-deploy osd create --data /path/to/device HOST

 

NOTE that partitions aren’t created by this tool, they must be created beforehand. 

 

So we need to first create 2 x 2GB SCSI disks on each OSD machine.

 

These have the designations sda and sdb since our root OS system disk has the drive designation vda.
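
The extra disks were added on the KVM host. One way to do this (the image paths, domain names and scsi target bus here are assumptions), shown for ceph-osd0:

qemu-img create -f qcow2 /var/lib/libvirt/images/ceph-osd0-sda.qcow2 2G
qemu-img create -f qcow2 /var/lib/libvirt/images/ceph-osd0-sdb.qcow2 2G
virsh attach-disk ceph-osd0 /var/lib/libvirt/images/ceph-osd0-sda.qcow2 sda --driver qemu --subdriver qcow2 --targetbus scsi --persistent
virsh attach-disk ceph-osd0 /var/lib/libvirt/images/ceph-osd0-sdb.qcow2 sdb --driver qemu --subdriver qcow2 --targetbus scsi --persistent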

If necessary, to erase each partition, you would use the ceph-deploy zap command, eg:

 

ceph-deploy disk zap ceph-osd0 /dev/sda

 

but here we have created completely new disks so not required.

 

 

you can list the available disks on the OSDs as follows:

 

[root@ceph-mon ~]# ceph-deploy disk list ceph-osd0
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list ceph-osd0
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f890c8506c8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : [‘ceph-osd0’]
[ceph_deploy.cli][INFO ] func : <function disk at 0x7f890c892b90>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph-osd0][DEBUG ] connected to host: ceph-osd0
[ceph-osd0][DEBUG ] detect platform information from remote host
[ceph-osd0][DEBUG ] detect machine type
[ceph-osd0][DEBUG ] find the location of an executable
[ceph-osd0][INFO ] Running command: fdisk -l
[ceph-osd0][INFO ] Disk /dev/vda: 10.7 GB, 10737418240 bytes, 20971520 sectors
[ceph-osd0][INFO ] Disk /dev/sda: 2147 MB, 2147483648 bytes, 4194304 sectors
[ceph-osd0][INFO ] Disk /dev/sdb: 2147 MB, 2147483648 bytes, 4194304 sectors
[ceph-osd0][INFO ] Disk /dev/mapper/centos-root: 8585 MB, 8585740288 bytes, 16769024 sectors
[ceph-osd0][INFO ] Disk /dev/mapper/centos-swap: 1073 MB, 1073741824 bytes, 2097152 sectors
[root@ceph-mon ~]#

 

Where partitions are needed, create a single partition spanning 100% of each disk on each OSD, i.e. sda becomes sda1 and sdb becomes sdb1.

NOTE: we do not create a partition on the data disk sda (ceph-deploy is given the whole device below), but a partition would be required for the journal, i.e. sdb1.

From ceph-mon, install and configure the OSDs, using sda as the datastore (normally a RAID0 of large rotational disks) and sdb1 as its journal (normally a partition on an SSD):

 

 

ceph-deploy osd create --data /dev/sda ceph-osd0

 

[root@ceph-mon ~]# ceph-deploy osd create --data /dev/sda ceph-osd0
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create --data /dev/sda ceph-osd0
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc2d30c47e8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph-osd0
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7fc2d30ffb18>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/sda
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sda
[ceph-osd0][DEBUG ] connected to host: ceph-osd0
[ceph-osd0][DEBUG ] detect platform information from remote host
[ceph-osd0][DEBUG ] detect machine type
[ceph-osd0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-osd0
[ceph-osd0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-osd0][WARNIN] osd keyring does not exist yet, creating one
[ceph-osd0][DEBUG ] create a keyring file
[ceph-osd0][DEBUG ] find the location of an executable
[ceph-osd0][INFO ] Running command: /usr/sbin/ceph-volume –cluster ceph lvm create –bluestore –data /dev/sda
[ceph-osd0][WARNIN] Running command: /bin/ceph-authtool –gen-print-key
[ceph-osd0][WARNIN] Running command: /bin/ceph –cluster ceph –name client.bootstrap-osd –keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i – osd new 045a03af-bc98-46e7-868e-35b474fb0e09
[ceph-osd0][WARNIN] Running command: /usr/sbin/vgcreate –force –yes ceph-316d6de8-7741-4776-b000-0239cc0b0429 /dev/sda
[ceph-osd0][WARNIN] stdout: Physical volume “/dev/sda” successfully created.
[ceph-osd0][WARNIN] stdout: Volume group “ceph-316d6de8-7741-4776-b000-0239cc0b0429” successfully created
[ceph-osd0][WARNIN] Running command: /usr/sbin/lvcreate –yes -l 100%FREE -n osd-block-045a03af-bc98-46e7-868e-35b474fb0e09 ceph-316d6de8-7741-4776-b000-0239cc0b0429
[ceph-osd0][WARNIN] stdout: Logical volume “osd-block-045a03af-bc98-46e7-868e-35b474fb0e09” created.
[ceph-osd0][WARNIN] Running command: /bin/ceph-authtool –gen-print-key
[ceph-osd0][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-osd0][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-316d6de8-7741-4776-b000-0239cc0b0429/osd-block-045a03af-bc98-46e7-868e-35b474fb0e09
[ceph-osd0][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph-osd0][WARNIN] Running command: /bin/ln -s /dev/ceph-316d6de8-7741-4776-b000-0239cc0b0429/osd-block-045a03af-bc98-46e7-868e-35b474fb0e09 /var/lib/ceph/osd/ceph-0/block
[ceph-osd0][WARNIN] Running command: /bin/ceph –cluster ceph –name client.bootstrap-osd –keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-osd0][WARNIN] stderr: got monmap epoch 1
[ceph-osd0][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring –create-keyring –name osd.0 –add-key AQBHCodguXDvGRAAvnenjHrWDTAdWBz0QJujzQ==
[ceph-osd0][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph-osd0][WARNIN] added entity osd.0 auth auth(auid = 18446744073709551615 key=AQBHCodguXDvGRAAvnenjHrWDTAdWBz0QJujzQ== with 0 caps)
[ceph-osd0][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-osd0][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-osd0][WARNIN] Running command: /bin/ceph-osd –cluster ceph –osd-objectstore bluestore –mkfs -i 0 –monmap /var/lib/ceph/osd/ceph-0/activate.monmap –keyfile – –osd-data /var/lib/ceph/osd/ceph-0/ –osd-uuid 045a03af-bc98-46e7-868e-35b474fb0e09 –setuser ceph –setgroup ceph
[ceph-osd0][WARNIN] –> ceph-volume lvm prepare successful for: /dev/sda
[ceph-osd0][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-osd0][WARNIN] Running command: /bin/ceph-bluestore-tool –cluster=ceph prime-osd-dir –dev /dev/ceph-316d6de8-7741-4776-b000-0239cc0b0429/osd-block-045a03af-bc98-46e7-868e-35b474fb0e09 –path /var/lib/ceph/osd/ceph-0 –no-mon-config
[ceph-osd0][WARNIN] Running command: /bin/ln -snf /dev/ceph-316d6de8-7741-4776-b000-0239cc0b0429/osd-block-045a03af-bc98-46e7-868e-35b474fb0e09 /var/lib/ceph/osd/ceph-0/block
[ceph-osd0][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-osd0][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph-osd0][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-osd0][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-0-045a03af-bc98-46e7-868e-35b474fb0e09
[ceph-osd0][WARNIN] stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-045a03af-bc98-46e7-868e-35b474fb0e09.service to /usr/lib/systemd/system/ceph-volume@.service.
[ceph-osd0][WARNIN] Running command: /bin/systemctl enable –runtime ceph-osd@0
[ceph-osd0][WARNIN] stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph-osd0][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[ceph-osd0][WARNIN] –> ceph-volume lvm activate successful for osd ID: 0
[ceph-osd0][WARNIN] –> ceph-volume lvm create successful for: /dev/sda
[ceph-osd0][INFO ] checking OSD status…
[ceph-osd0][DEBUG ] find the location of an executable
[ceph-osd0][INFO ] Running command: /bin/ceph –cluster=ceph osd stat –format=json
[ceph_deploy.osd][DEBUG ] Host ceph-osd0 is now ready for osd use.
[root@ceph-mon ~]#

 

do the same for the other nodes osd1 and osd2:

 

example for osd0:

parted --script /dev/sda 'mklabel gpt'
parted --script /dev/sda "mkpart primary 0% 100%"

 

then do:

 

ceph-volume lvm create --data /dev/sda1

 

 

so we can do:

 

 

[root@ceph-osd0 ~]# ceph-volume lvm create --data /dev/sda1
Running command: /usr/bin/ceph-authtool –gen-print-key
Running command: /usr/bin/ceph –cluster ceph –name client.bootstrap-osd –keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i – osd new be29e0ff-73e4-47cb-8b2c-f4caa10e08a4
Running command: /usr/sbin/vgcreate –force –yes ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f /dev/sda1
stdout: Physical volume “/dev/sda1” successfully created.
stdout: Volume group “ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f” successfully created
Running command: /usr/sbin/lvcreate –yes -l 100%FREE -n osd-block-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4 ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f
stdout: Logical volume “osd-block-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4” created.
Running command: /usr/bin/ceph-authtool –gen-print-key
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f/osd-block-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Running command: /usr/bin/ln -s /dev/ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f/osd-block-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4 /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/ceph –cluster ceph –name client.bootstrap-osd –keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
stderr: got monmap epoch 1
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-3/keyring –create-keyring –name osd.3 –add-key AQCFDYdgcHaFJxAA2BAlk+JwDg22eVrhA5WGcg==
stdout: creating /var/lib/ceph/osd/ceph-3/keyring
added entity osd.3 auth auth(auid = 18446744073709551615 key=AQCFDYdgcHaFJxAA2BAlk+JwDg22eVrhA5WGcg== with 0 caps)
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
Running command: /usr/bin/ceph-osd –cluster ceph –osd-objectstore bluestore –mkfs -i 3 –monmap /var/lib/ceph/osd/ceph-3/activate.monmap –keyfile – –osd-data /var/lib/ceph/osd/ceph-3/ –osd-uuid be29e0ff-73e4-47cb-8b2c-f4caa10e08a4 –setuser ceph –setgroup ceph
–> ceph-volume lvm prepare successful for: /dev/sda1
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/ceph-bluestore-tool –cluster=ceph prime-osd-dir –dev /dev/ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f/osd-block-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4 –path /var/lib/ceph/osd/ceph-3 –no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f/osd-block-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4 /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/systemctl enable ceph-volume@lvm-3-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4
stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-3-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4.service to /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable –runtime ceph-osd@3
stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service to /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@3
–> ceph-volume lvm activate successful for osd ID: 3
–> ceph-volume lvm create successful for: /dev/sda1
[root@ceph-osd0 ~]#

 

current status is now:

 

[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_WARN
1 osds down
no active mgr

services:
mon: 1 daemons, quorum ceph-mon
mgr: no daemons active
osd: 4 osds: 3 up, 4 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:

[root@ceph-mon ~]# ceph health
HEALTH_WARN 1 osds down; no active mgr
[root@ceph-mon ~]#

 

 

now have to repeat for the other 2 OSDs:

 

for node in ceph-osd1 ceph-osd2 ;
do
ssh $node "parted --script /dev/sda 'mklabel gpt' ;
parted --script /dev/sda 'mkpart primary 0% 100%' ;
ceph-volume lvm create --data /dev/sda1"
done

 

 

The ceph cluster now looks like this:

 

(pools and the CRUSH configuration still have to be created and configured)

 

Note that the OSDs have to be "in" the cluster, i.e. cluster members that participate in data placement, and "up", i.e. the OSD daemon is active and running Ceph.
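As a hedged illustration (not a step that was performed in this lab), an OSD can be taken out of and brought back into the data placement, and the result checked, like this:

ceph osd out 0     # mark osd.0 "out" - data is rebalanced away from it
ceph osd in 0      # mark osd.0 "in" again
ceph osd tree      # shows the up/down and in/out status of every OSD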

 

How To Check System Status

 

[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_OK

 

services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
osd: 4 osds: 3 up, 3 in

 

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 3.0 GiB / 6.0 GiB avail
pgs:

 

[root@ceph-mon ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.00757 root default
-3 0.00378 host ceph-osd0
0 hdd 0.00189 osd.0 down 0 1.00000
3 hdd 0.00189 osd.3 up 1.00000 1.00000
-5 0.00189 host ceph-osd1
1 hdd 0.00189 osd.1 up 1.00000 1.00000
-7 0.00189 host ceph-osd2
2 hdd 0.00189 osd.2 up 1.00000 1.00000
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE USE DATA OMAP META AVAIL %USE VAR PGS TYPE NAME
-1 0.00757 – 6.0 GiB 3.0 GiB 12 MiB 0 B 3 GiB 3.0 GiB 50.30 1.00 – root default
-3 0.00378 – 2.0 GiB 1.0 GiB 4.1 MiB 0 B 1 GiB 1016 MiB 50.30 1.00 – host ceph-osd0
0 hdd 0.00189 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0 osd.0
3 hdd 0.00189 1.00000 2.0 GiB 1.0 GiB 4.1 MiB 0 B 1 GiB 1016 MiB 50.30 1.00 0 osd.3
-5 0.00189 – 2.0 GiB 1.0 GiB 4.1 MiB 0 B 1 GiB 1016 MiB 50.30 1.00 – host ceph-osd1
1 hdd 0.00189 1.00000 2.0 GiB 1.0 GiB 4.1 MiB 0 B 1 GiB 1016 MiB 50.30 1.00 1 osd.1
-7 0.00189 – 2.0 GiB 1.0 GiB 4.1 MiB 0 B 1 GiB 1016 MiB 50.30 1.00 – host ceph-osd2
2 hdd 0.00189 1.00000 2.0 GiB 1.0 GiB 4.1 MiB 0 B 1 GiB 1016 MiB 50.30 1.00 1 osd.2
TOTAL 6.0 GiB 3.0 GiB 12 MiB 0 B 3 GiB 3.0 GiB 50.30
MIN/MAX VAR: 1.00/1.00 STDDEV: 0.00
[root@ceph-mon ~]#

 

 

 

[root@ceph-mon ~]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
application not enabled on pool ‘datapool’
use ‘ceph osd pool application enable <pool-name> <app-name>’, where <app-name> is ‘cephfs’, ‘rbd’, ‘rgw’, or freeform for custom applications.
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE USE DATA OMAP META AVAIL %USE VAR PGS
0 hdd 0.00189 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0
3 hdd 0.00189 1.00000 2.0 GiB 1.0 GiB 3.7 MiB 0 B 1 GiB 1016 MiB 50.28 1.00 0
1 hdd 0.00189 1.00000 2.0 GiB 1.0 GiB 3.7 MiB 0 B 1 GiB 1016 MiB 50.28 1.00 0
2 hdd 0.00189 1.00000 2.0 GiB 1.0 GiB 3.7 MiB 0 B 1 GiB 1016 MiB 50.28 1.00 0
TOTAL 6.0 GiB 3.0 GiB 11 MiB 0 B 3 GiB 3.0 GiB 50.28
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
[root@ceph-mon ~]#

 

 

For more Ceph admin commands, see https://sabaini.at/pages/ceph-cheatsheet.html#monit

 

Create a Storage Pool

 

 

To create a pool:

 

ceph osd pool create datapool 1

 

[root@ceph-mon ~]# ceph osd pool create datapool 1
pool ‘datapool’ created
[root@ceph-mon ~]# ceph osd lspools
1 datapool
[root@ceph-mon ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
6.0 GiB 3.0 GiB 3.0 GiB 50.30
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
datapool 1 0 B 0 1.8 GiB 0
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
application not enabled on pool ‘datapool’
use ‘ceph osd pool application enable <pool-name> <app-name>’, where <app-name> is ‘cephfs’, ‘rbd’, ‘rgw’, or freeform for custom applications.
[root@ceph-mon ~]#

 

so we need to enable an application on the pool:

 

[root@ceph-mon ~]# ceph osd pool application enable datapool rbd
enabled application ‘rbd’ on pool ‘datapool’
[root@ceph-mon ~]#

[root@ceph-mon ~]# ceph health detail
HEALTH_OK
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
osd: 4 osds: 3 up, 3 in

data:
pools: 1 pools, 1 pgs
objects: 1 objects, 10 B
usage: 3.0 GiB used, 3.0 GiB / 6.0 GiB avail
pgs: 1 active+clean

[root@ceph-mon ~]#

 

 

 

How To Check All Ceph Services Are Running

 

Use 

 

ceph -s 

 

 

 

 

 

or alternatively:

 

 

[root@ceph-mon ~]# systemctl status ceph\*.service
● ceph-mon@ceph-mon.service – Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: active (running) since Di 2021-04-27 11:47:36 CEST; 6h ago
Main PID: 989 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph-mon.service
└─989 /usr/bin/ceph-mon -f –cluster ceph –id ceph-mon –setuser ceph –setgroup ceph

 

Apr 27 11:47:36 ceph-mon systemd[1]: Started Ceph cluster monitor daemon.

 

● ceph-mgr@ceph-mon.service – Ceph cluster manager daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: disabled)
Active: active (running) since Di 2021-04-27 11:47:36 CEST; 6h ago
Main PID: 992 (ceph-mgr)
CGroup: /system.slice/system-ceph\x2dmgr.slice/ceph-mgr@ceph-mon.service
└─992 /usr/bin/ceph-mgr -f –cluster ceph –id ceph-mon –setuser ceph –setgroup ceph

 

Apr 27 11:47:36 ceph-mon systemd[1]: Started Ceph cluster manager daemon.
Apr 27 11:47:41 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:41 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root
Apr 27 11:47:46 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:46 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root
Apr 27 11:47:51 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:51 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root
Apr 27 11:47:56 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:56 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root

 

● ceph-crash.service – Ceph crash dump collector
Loaded: loaded (/usr/lib/systemd/system/ceph-crash.service; enabled; vendor preset: enabled)
Active: active (running) since Di 2021-04-27 11:47:34 CEST; 6h ago
Main PID: 695 (ceph-crash)
CGroup: /system.slice/ceph-crash.service
└─695 /usr/bin/python2.7 /usr/bin/ceph-crash

 

Apr 27 11:47:34 ceph-mon systemd[1]: Started Ceph crash dump collector.
Apr 27 11:47:34 ceph-mon ceph-crash[695]: INFO:__main__:monitoring path /var/lib/ceph/crash, delay 600s
[root@ceph-mon ~]#

 

 

Object Manipulation

 

 

To create an object and upload a file into that object:

 

Example:

 

echo “test data” > testfile
rados put -p datapool testfile testfile
rados -p datapool ls
testfile

 

To set a key/value pair to that object:

 

rados -p datapool setomapval testfile mykey myvalue
rados -p datapool getomapval testfile mykey
(length 7) : 0000 : 6d 79 76 61 6c 75 65 : myvalue

 

To download the file:

 

rados get -p datapool testfile testfile2
md5sum testfile testfile2
39a870a194a787550b6b5d1f49629236 testfile
39a870a194a787550b6b5d1f49629236 testfile2

 

 

 

[root@ceph-mon ~]# echo “test data” > testfile
[root@ceph-mon ~]# rados put -p datapool testfile testfile
[root@ceph-mon ~]# rados -p datapool ls
testfile
[root@ceph-mon ~]# rados -p datapool setomapval testfile mykey myvalue
[root@ceph-mon ~]# rados -p datapool getomapval testfile mykey
value (7 bytes) :
00000000 6d 79 76 61 6c 75 65 |myvalue|
00000007

 

[root@ceph-mon ~]# rados get -p datapool testfile testfile2
[root@ceph-mon ~]# md5sum testfile testfile2
39a870a194a787550b6b5d1f49629236 testfile
39a870a194a787550b6b5d1f49629236 testfile2
[root@ceph-mon ~]#

 

 

How To Check If Your Datastore is BlueStore or FileStore

 

[root@ceph-mon ~]# ceph osd metadata 0 | grep -e id -e hostname -e osd_objectstore
“id”: 0,
“hostname”: “ceph-osd0”,
“osd_objectstore”: “bluestore”,
[root@ceph-mon ~]# ceph osd metadata 1 | grep -e id -e hostname -e osd_objectstore
“id”: 1,
“hostname”: “ceph-osd1”,
“osd_objectstore”: “bluestore”,
[root@ceph-mon ~]# ceph osd metadata 2 | grep -e id -e hostname -e osd_objectstore
“id”: 2,
“hostname”: “ceph-osd2”,
“osd_objectstore”: “bluestore”,
[root@ceph-mon ~]#

 

 

You can also display a large amount of information with this command:

 

[root@ceph-mon ~]# ceph osd metadata 2
{
“id”: 2,
“arch”: “x86_64”,
“back_addr”: “10.0.9.12:6801/1138”,
“back_iface”: “eth1”,
“bluefs”: “1”,
“bluefs_single_shared_device”: “1”,
“bluestore_bdev_access_mode”: “blk”,
“bluestore_bdev_block_size”: “4096”,
“bluestore_bdev_dev”: “253:2”,
“bluestore_bdev_dev_node”: “dm-2”,
“bluestore_bdev_driver”: “KernelDevice”,
“bluestore_bdev_model”: “”,
“bluestore_bdev_partition_path”: “/dev/dm-2”,
“bluestore_bdev_rotational”: “1”,
“bluestore_bdev_size”: “2143289344”,
“bluestore_bdev_type”: “hdd”,
“ceph_release”: “mimic”,
“ceph_version”: “ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)”,
“ceph_version_short”: “13.2.10”,
“cpu”: “AMD EPYC-Rome Processor”,
“default_device_class”: “hdd”,
“devices”: “dm-2,sda”,
“distro”: “centos”,
“distro_description”: “CentOS Linux 7 (Core)”,
“distro_version”: “7”,
“front_addr”: “10.0.9.12:6800/1138”,
“front_iface”: “eth1”,
“hb_back_addr”: “10.0.9.12:6802/1138”,
“hb_front_addr”: “10.0.9.12:6803/1138”,
“hostname”: “ceph-osd2”,
“journal_rotational”: “1”,
“kernel_description”: “#1 SMP Thu Apr 8 19:51:47 UTC 2021”,
“kernel_version”: “3.10.0-1160.24.1.el7.x86_64”,
“mem_swap_kb”: “1048572”,
“mem_total_kb”: “1530760”,
“os”: “Linux”,
“osd_data”: “/var/lib/ceph/osd/ceph-2”,
“osd_objectstore”: “bluestore”,
“rotational”: “1”
}
[root@ceph-mon ~]#

 

or you can use:

 

[root@ceph-mon ~]# ceph osd metadata osd.0 | grep osd_objectstore
“osd_objectstore”: “bluestore”,
[root@ceph-mon ~]#

 

 

Which Version of Ceph Is Your Cluster Running?

 

[root@ceph-mon ~]# ceph -v
ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
[root@ceph-mon ~]#

 

 

How To List Your Cluster Pools

 

To list your cluster pools, execute:

 

ceph osd lspools

 

[root@ceph-mon ~]# ceph osd lspools
1 datapool
[root@ceph-mon ~]#

 

 

Placement Groups PG Information

 

To display the number of placement groups in a pool:

 

ceph osd pool get {pool-name} pg_num
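For example, for the datapool created earlier in this lab (the commands simply report the pg_num and pgp_num values the pool was created with):

ceph osd pool get datapool pg_num
ceph osd pool get datapool pgp_num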

 

 

To display statistics for the placement groups in the cluster:

 

ceph pg dump [--format {format}]

 

To display pool statistics:

 

[root@ceph-mon ~]# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
datapool 10 B 1 0 2 0 0 0 2 2 KiB 2 2 KiB

 

total_objects 1
total_used 3.0 GiB
total_avail 3.0 GiB
total_space 6.0 GiB
[root@ceph-mon ~]#

 

 

How To Repair a Placement Group PG

 

First ascertain which PG has a problem, e.g. with ceph -s or ceph health detail.

 

To identify stuck placement groups:

 

ceph pg dump_stuck [unclean|inactive|stale|undersized|degraded]

 

Then do:

 

ceph pg repair <PG ID>
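A brief hedged example (the PG ID 1.0 below is purely illustrative; substitute an ID actually reported by ceph health detail or ceph pg dump_stuck):

ceph pg dump_stuck unclean
ceph pg repair 1.0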

For more info on troubleshooting PGs see https://documentation.suse.com/ses/7/html/ses-all/bp-troubleshooting-pgs.html

 

 

How To Activate Ceph Dashboard

 

The Ceph Dashboard does not require Apache or another separate web server to be running; the functionality is provided by the Ceph manager (ceph-mgr) itself.

 

All HTTP connections to the Ceph dashboard use SSL/TLS by default.

 

For testing lab purposes you can simply generate and install a self-signed certificate as follows:

 

ceph dashboard create-self-signed-cert

 

However, in production environments this is unsuitable, since web browsers will object to self-signed certificates and require explicit confirmation from the user before opening a connection to the Ceph dashboard.

 

You can use your own certificate authority to ensure the certificate warning does not appear.

 

For example by doing:

 

$ openssl req -new -nodes -x509 -subj "/O=IT/CN=ceph-mgr-dashboard" -days 3650 -keyout dashboard.key -out dashboard.crt -extensions v3_ca

 

The generated dashboard.crt file then needs to be signed by a CA. Once signed, it can then be enabled for all Ceph manager instances as follows:

 

ceph config-key set mgr mgr/dashboard/crt -i dashboard.crt

 

After changing the SSL certificate and key you must restart the Ceph manager processes manually. Either by:

 

ceph mgr fail mgr

 

or by disabling and re-enabling the dashboard module:

 

ceph mgr module disable dashboard
ceph mgr module enable dashboard

 

By default, the ceph-mgr daemon that runs the dashboard (i.e., the currently active manager) binds to TCP port 8443 (or 8080 if SSL is disabled).

 

You can change these ports by doing:

ceph config set mgr mgr/dashboard/server_addr $IP
ceph config set mgr mgr/dashboard/server_port $PORT
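For example (illustrative only; these values were not actually changed in this lab), to bind the dashboard explicitly to the mon node's cluster IP and the default SSL port:

ceph config set mgr mgr/dashboard/server_addr 10.0.9.40
ceph config set mgr mgr/dashboard/server_port 8443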

 

For the purposes of this lab I did:

 

[root@ceph-mon ~]# ceph mgr module enable dashboard
[root@ceph-mon ~]# ceph dashboard create-self-signed-cert
Self-signed certificate created
[root@ceph-mon ~]#

 

Dashboard enabling can be automated by adding the following to ceph.conf:

 

[mon]
mgr initial modules = dashboard

 

 

 

[root@ceph-mon ~]# ceph mgr module ls | grep -A 5 enabled_modules
“enabled_modules”: [
“balancer”,
“crash”,
“dashboard”,
“iostat”,
“restful”,
[root@ceph-mon ~]#

 

Check that the SSL certificate and key are installed correctly. You should see them displayed in the output of these commands:

 

 

ceph config-key get mgr/dashboard/key
ceph config-key get mgr/dashboard/crt

 

The following command does not work on CentOS 7 with this Ceph Mimic version, as the full functionality was not implemented by the Ceph project for this release:

 

 

ceph dashboard ac-user-create admin password administrator

 

 

Use this command instead:

 

 

[root@ceph-mon etc]# ceph dashboard set-login-credentials cephuser <password not shown here>
Username and password updated
[root@ceph-mon etc]#

 

Also make sure the respective firewall ports are open for the dashboard, i.e. 8443 for SSL/TLS (https), or 8080 for plain http. The latter is not advisable, however, since the unencrypted connection carries a password-interception risk.

 

 

Logging in to the Ceph Dashboard

 

To log in, open the dashboard URL in a web browser.

 

 

To display the current URL and port for the Ceph dashboard, do:

 

[root@ceph-mon ~]# ceph mgr services
{
“dashboard”: “https://ceph-mon:8443/”
}
[root@ceph-mon ~]#

 

and enter the user name and password you set as above.

 

 

Pools and Placement Groups In More Detail

 

Remember that pools are not PGs. PGs go inside pools.

 

To create a pool:

 

 

ceph osd pool create <pool name> <PG_NUM> <PGP_NUM>

 

PG_NUM
This holds the number of placement groups for the pool.

 

PGP_NUM
This is the effective number of placement groups to be used to calculate data placement. It must be equal to or less than PG_NUM.

 

Pools by default are replicated.

 

There are two kinds:

 

replicated

 

erasure coding EC

 

For replicated pools you set the number of data copies, or replicas, that each data object will have. The number of copies that can be lost will be one less than the number of replicas.

 

For EC pools it is more complicated.

 

You have:

 

k : number of data chunks
m : number of coding chunks
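As a hedged sketch (the profile name, pool name and k/m values below are arbitrary examples, not something created in this lab), an erasure-coded pool could be set up like this:

# define an EC profile with 2 data chunks and 1 coding chunk per object
ceph osd erasure-code-profile set myecprofile k=2 m=1 crush-failure-domain=host
# create a pool that uses this profile
ceph osd pool create ecpool 32 32 erasure myecprofile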

 

 

Pools have to be associated with an application. Pools to be used with CephFS, or pools automatically created by Object Gateway are automatically associated with cephfs or rgw respectively.

 

For CephFS the associated application name is cephfs,
for RADOS Block Device it is rbd,
and for Object Gateway it is rgw.

 

Otherwise, the format to associate a free-form application name with a pool is:

 

ceph osd pool application enable POOL_NAME APPLICATION_NAME

To see which applications a pool is associated with use:

 

ceph osd pool application get pool_name

 

 

To set pool quotas for the maximum number of bytes and/or the maximum number of objects permitted per pool:

 

ceph osd pool set-quota POOL_NAME max_objects OBJ_COUNT
ceph osd pool set-quota POOL_NAME max_bytes BYTES

 

eg

 

ceph osd pool set-quota data max_objects 20000
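Similarly (an illustrative example; the byte value is arbitrary), a byte quota can be set, and either quota is removed again by setting it back to 0:

ceph osd pool set-quota data max_bytes 10737418240
ceph osd pool set-quota data max_objects 0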

 

To set the number of object replicas on a replicated pool use:

 

ceph osd pool set poolname size num-replicas

 

Important:
The num-replicas value includes the object itself. So if you want the object plus two replica copies, for a total of three instances of the object, you need to specify 3. For production use you should not set this value to anything less than 3. Also bear in mind that each additional replica increases redundancy at the cost of another full copy of the data in raw capacity.
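A minimal sketch, assuming the datapool from this lab (it was not actually changed this way here): keep three copies of each object and allow I/O as long as at least two copies are available:

ceph osd pool set datapool size 3
ceph osd pool set datapool min_size 2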

 

To display the number of object replicas, use:

 

ceph osd dump | grep 'replicated size'

 

 

Note: to remove a pool quota set earlier, set its value to 0.

 

To set pool values, use:

 

ceph osd pool set POOL_NAME KEY VALUE

 

To display a pool’s stats use:

 

rados df

 

To list all values related to a specific pool use:

 

ceph osd pool get POOL_NAME all

 

You can also display specific pool values as follows:

 

ceph osd pool get POOL_NAME KEY

 


In particular:

 

PG_NUM
This holds the number of placement groups for the pool.

 

PGP_NUM
This is the effective number of placement groups to be used to calculate data placement. It must be equal to or less than PG_NUM.

 

Pool Created:

 

[root@ceph-mon ~]# ceph osd pool create datapool 128 128 replicated
pool ‘datapool’ created
[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
osd: 4 osds: 3 up, 3 in

data:
pools: 1 pools, 128 pgs
objects: 0 objects, 0 B
usage: 3.2 GiB used, 2.8 GiB / 6.0 GiB avail
pgs: 34.375% pgs unknown
84 active+clean
44 unknown

[root@ceph-mon ~]#

 

To Remove a Pool

 

Two different commands can be used:

 

[root@ceph-mon ~]# rados rmpool datapool --yes-i-really-really-mean-it
WARNING:
This will PERMANENTLY DESTROY an entire pool of objects with no way back.
To confirm, pass the pool to remove twice, followed by
--yes-i-really-really-mean-it

 

[root@ceph-mon ~]# ceph osd pool delete datapool --yes-i-really-really-mean-it
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool datapool. If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.

[root@ceph-mon ~]# ceph osd pool delete datapool datapool --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
[root@ceph-mon ~]#

 

 

You first have to set the mon_allow_pool_delete option to true.

 

first get the value of

 

ceph osd pool get pool_name nodelete

 

[root@ceph-mon ~]# ceph osd pool get datapool nodelete
nodelete: false
[root@ceph-mon ~]#

 

Because inadvertent pool deletion is a real danger, Ceph implements two mechanisms that prevent pools from being deleted. Both mechanisms must be disabled before a pool can be deleted.

 

The first mechanism is the NODELETE flag. Each pool has this flag, and its default value is ‘false’. To find out the value of this flag on a pool, run the following command:

 

ceph osd pool get pool_name nodelete

If it outputs nodelete: true, it is not possible to delete the pool until you change the flag using the following command:

 

ceph osd pool set pool_name nodelete false

 

 

The second mechanism is the cluster-wide configuration parameter mon allow pool delete, which defaults to ‘false’. This means that, by default, it is not possible to delete a pool. The error message displayed is:

 

Error EPERM: pool deletion is disabled; you must first set the
mon_allow_pool_delete config option to true before you can destroy a pool

 

To delete the pool despite this safety setting, you can temporarily set the value of mon allow pool delete to 'true', delete the pool, and then reset the value back to 'false':

 

ceph tell mon.* injectargs --mon-allow-pool-delete=true
ceph osd pool delete pool_name pool_name --yes-i-really-really-mean-it
ceph tell mon.* injectargs --mon-allow-pool-delete=false

 

 

[root@ceph-mon ~]# ceph tell mon.* injectargs --mon-allow-pool-delete=true
injectargs:
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# ceph osd pool delete datapool --yes-i-really-really-mean-it
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool datapool. If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
[root@ceph-mon ~]# ceph osd pool delete datapool datapool --yes-i-really-really-mean-it
pool 'datapool' removed
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph tell mon.* injectargs --mon-allow-pool-delete=false
injectargs:mon_allow_pool_delete = 'false'
[root@ceph-mon ~]#

 

NOTE: The injectargs command displays the following to confirm the change was applied; this is NOT an error:

 

injectargs:mon_allow_pool_delete = ‘true’ (not observed, change may require restart)

 

 

 

Creating a Ceph Metadata Server (MDS)

 

A metadata server (MDS) node is a requirement if you want to run CephFS.

 

First add the MDS server node name to the /etc/hosts file on all machines in the cluster: the mon, mgr and OSD nodes.

 

For this lab I am using the ceph-mon machine for the MDS server, i.e. not a separate additional node (the hostname ceph-mds resolves to the ceph-mon machine's IP).

 

Note that SSH access from the admin node to the MDS node has to work; this is a prerequisite for ceph-deploy.

 

[root@ceph-mon ~]#
[root@ceph-mon ~]# ceph-deploy mds create ceph-mds
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create ceph-mds
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f29c54e55f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mds at 0x7f29c54b01b8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] mds : [(‘ceph-mds’, ‘ceph-mds’)]
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ceph-mds:ceph-mds
The authenticity of host ‘ceph-mds (10.0.9.40)’ can’t be established.
ECDSA key fingerprint is SHA256:OOvumn9VbVuPJbDQftpI3GnpQXchomGLwQ4J/1ADy6I.
ECDSA key fingerprint is MD5:1f:dd:66:01:b0:9c:6f:9b:5e:93:f4:80:7e:ad:eb:eb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘ceph-mds,10.0.9.40’ (ECDSA) to the list of known hosts.
root@ceph-mds’s password:
root@ceph-mds’s password:
[ceph-mds][DEBUG ] connected to host: ceph-mds
[ceph-mds][DEBUG ] detect platform information from remote host
[ceph-mds][DEBUG ] detect machine type
[ceph_deploy.mds][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph-mds
[ceph-mds][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mds][WARNIN] mds keyring does not exist yet, creating one
[ceph-mds][DEBUG ] create a keyring file
[ceph-mds][DEBUG ] create path if it doesn’t exist
[ceph-mds][INFO ] Running command: ceph –cluster ceph –name client.bootstrap-mds –keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph-mds osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph-mds/keyring
[ceph-mds][INFO ] Running command: systemctl enable ceph-mds@ceph-mds
[ceph-mds][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph-mds.service to /usr/lib/systemd/system/ceph-mds@.service.
[ceph-mds][INFO ] Running command: systemctl start ceph-mds@ceph-mds
[ceph-mds][INFO ] Running command: systemctl enable ceph.target
[root@ceph-mon ~]#

 

 

Note the correct systemd service name that has to be used, i.e. ceph-mds@<id> rather than just ceph-mds:

 

[root@ceph-mon ~]# systemctl status ceph-mds
Unit ceph-mds.service could not be found.
[root@ceph-mon ~]# systemctl status ceph-mds@ceph-mds
● ceph-mds@ceph-mds.service – Ceph metadata server daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
Active: active (running) since Mo 2021-05-03 04:14:07 CEST; 4min 5s ago
Main PID: 22897 (ceph-mds)
CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph-mds.service
└─22897 /usr/bin/ceph-mds -f –cluster ceph –id ceph-mds –setuser ceph –setgroup ceph

Mai 03 04:14:07 ceph-mon systemd[1]: Started Ceph metadata server daemon.
Mai 03 04:14:07 ceph-mon ceph-mds[22897]: starting mds.ceph-mds at –
[root@ceph-mon ~]#

 

Next, I used ceph-deploy to copy the configuration file and admin key to the metadata server so I can use the ceph CLI without needing to specify monitor address and ceph.client.admin.keyring for each command execution:

 

[root@ceph-mon ~]# ceph-deploy admin ceph-mds
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph-mds
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa99fae82d8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : [‘ceph-mds’]
[ceph_deploy.cli][INFO ] func : <function admin at 0x7fa9a05fb488>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-mds
root@ceph-mds’s password:
root@ceph-mds’s password:
[ceph-mds][DEBUG ] connected to host: ceph-mds
[ceph-mds][DEBUG ] detect platform information from remote host
[ceph-mds][DEBUG ] detect machine type
[ceph-mds][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[root@ceph-mon ~]#

 

then set correct permissions for the ceph.client.admin.keyring:

 

[root@ceph-mon ~]# chmod +r /etc/ceph/ceph.client.admin.keyring
[root@ceph-mon ~]#

 

 

 

How To Create a CephFS

 

A Ceph filesystem requires at least two RADOS pools, one for data and one for metadata.

 

Bear in mind the following:

 

Use a higher replication level for the metadata pool, as any data loss in this pool can render the whole filesystem inaccessible.

 

Use lower-latency storage such as SSDs for the metadata pool, as this directly affects the observed latency of filesystem operations on clients.
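As a hedged example (not a step performed in this lab), once the cephfs_metadata pool has been created as shown below, its replication level could be raised with:

ceph osd pool set cephfs_metadata size 3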

 

 

Create two pools, one for data and one for metadata:

 

[root@ceph-mon ~]# ceph osd pool create cephfs_data 128
pool ‘cephfs_data’ created
[root@ceph-mon ~]#
[root@ceph-mon ~]#
[root@ceph-mon ~]# ceph osd pool create cephfs_metadata 128
pool ‘cephfs_metadata’ created
[root@ceph-mon ~]#

 

then enable the filesystem using the fs new command:

 

ceph fs new <fs_name> <metadata> <data>

 

 

so we do:

 

ceph fs new cephfs cephfs_metadata cephfs_data

 

 

then verify with:

 

ceph fs ls

 

and

 

ceph mds stat

 

 

 

[root@ceph-mon ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 5 and data pool 4
[root@ceph-mon ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@ceph-mon ~]#
[root@ceph-mon ~]# ceph mds stat
cephfs-1/1/1 up {0=ceph-mds=up:active}
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
mds: cephfs-1/1/1 up {0=ceph-mds=up:active}
osd: 4 osds: 3 up, 3 in

data:
pools: 2 pools, 256 pgs
objects: 183 objects, 46 MiB
usage: 3.4 GiB used, 2.6 GiB / 6.0 GiB avail
pgs: 256 active+clean

[root@ceph-mon ~]#

 

Once the filesystem is created and the MDS is active you can mount the filesystem:

 

 

How To Mount Cephfs

 

To mount the Ceph file system, use the mount command with the monitor host IP address, or use the mount.ceph utility to resolve the monitor host name to an IP address, e.g.:

 

mkdir /mnt/cephfs
mount -t ceph 192.168.122.21:6789:/ /mnt/cephfs

 

To mount the Ceph file system with cephx authentication enabled, you need to specify a user name and a secret.

 

mount -t ceph 192.168.122.21:6789:/ /mnt/cephfs -o name=admin,secret=DUWEDduoeuroFDWVMWDqfdffDWLSRT==

 

However, a safer method reads the secret from a file, eg:

 

mount -t ceph 192.168.122.21:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
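A minimal sketch of how such a secret file could be produced on the client, assuming the client.admin key is being used as elsewhere in this lab:

ceph auth get-key client.admin > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret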

 

To unmount cephfs simply use the umount command as per usual:

 

eg

 

umount /mnt/cephfs

 

[root@ceph-mon ~]# mount -t ceph ceph-mds:6789:/ /mnt/cephfs -o name=admin,secret=`ceph-authtool -p ceph.client.admin.keyring`
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 736M 0 736M 0% /dev
tmpfs 748M 0 748M 0% /dev/shm
tmpfs 748M 8,7M 739M 2% /run
tmpfs 748M 0 748M 0% /sys/fs/cgroup
/dev/mapper/centos-root 8,0G 2,4G 5,7G 30% /
/dev/vda1 1014M 172M 843M 17% /boot
tmpfs 150M 0 150M 0% /run/user/0
10.0.9.40:6789:/ 1,4G 0 1,4G 0% /mnt/cephfs
[root@ceph-mon ~]#

 

 

To mount from the asus laptop I first had to copy the admin keyring over:

 

scp ceph.client.admin.keyring asus:/root/

 

then I could do

 

mount -t ceph ceph-mds:6789:/ /mnt/cephfs -o name=admin,secret=`ceph-authtool -p ceph.client.admin.keyring`

root@asus:~#
root@asus:~# mount -t ceph ceph-mds:6789:/ /mnt/cephfs -o name=admin,secret=`ceph-authtool -p ceph.client.admin.keyring`
root@asus:~#
root@asus:~#
root@asus:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 1844344 2052 1842292 1% /run
/dev/nvme0n1p4 413839584 227723904 165024096 58% /
tmpfs 9221712 271220 8950492 3% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 4096 0 4096 0% /sys/fs/cgroup
/dev/nvme0n1p1 98304 33547 64757 35% /boot/efi
tmpfs 1844340 88 1844252 1% /run/user/1000
10.0.9.40:6789:/ 1372160 0 1372160 0% /mnt/cephfs
root@asus:~#

 

 

RBD Block Devices

 

 

You must create a pool first before you can specify it as a source.

 

[root@ceph-mon ~]# ceph osd pool create rbdpool 128 128
Error ERANGE: pg_num 128 size 2 would mean 768 total pgs, which exceeds max 750 (mon_max_pg_per_osd 250 * num_in_osds 3)
[root@ceph-mon ~]# ceph osd pool create rbdpool 64 64
pool ‘rbdpool’ created
[root@ceph-mon ~]# ceph osd lspools
4 cephfs_data
5 cephfs_metadata
6 rbdpool
[root@ceph-mon ~]# rbd -p rbdpool create rbimage --size 5120
[root@ceph-mon ~]# rbd ls rbdpool
rbimage
[root@ceph-mon ~]# rbd feature disable rbdpool/rbdimage object-map fast-diff deep-flatten
rbd: error opening image rbdimage: (2) No such file or directory
[root@ceph-mon ~]#

(The error above occurred because the image was created under the name rbimage, not rbdimage, so the command has to use the existing image name:)

[root@ceph-mon ~]# rbd feature disable rbdpool/rbimage object-map fast-diff deep-flatten
[root@ceph-mon ~]# rbd map rbdpool/rbimage --id admin
/dev/rbd0
[root@ceph-mon ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 9G 0 part
├─centos-root 253:0 0 8G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
rbd0 251:0 0 5G 0 disk
[root@ceph-mon ~]#

[root@ceph-mon ~]# rbd showmapped
id pool image snap device
0 rbdpool rbimage – /dev/rbd0
[root@ceph-mon ~]# rbd –image rbimage -p rbdpool info
rbd image ‘rbimage’:
size 5 GiB in 1280 objects
order 22 (4 MiB objects)
id: d3956b8b4567
block_name_prefix: rbd_data.d3956b8b4567
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Wed May 5 15:32:48 2021
[root@ceph-mon ~]#

 

 

 

to remove an image:

 

rbd rm {pool-name}/{image-name}

[root@ceph-mon ~]# rbd rm rbdpool/rbimage
Removing image: 100% complete…done.
[root@ceph-mon ~]# rbd rm rbdpool/image
Removing image: 100% complete…done.
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd ls rbdpool
[root@ceph-mon ~]#

 

 

To create an image

 

rbd create --size {megabytes} {pool-name}/{image-name}

 

[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd create –size 2048 rbdpool/rbdimage
[root@ceph-mon ~]# rbd ls rbdpool
rbdimage
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd ls rbdpool
rbdimage
[root@ceph-mon ~]#

[root@ceph-mon ~]# rbd feature disable rbdpool/rbdimage object-map fast-diff deep-flatten
[root@ceph-mon ~]# rbd map rbdpool/rbdimage –id admin
/dev/rbd0
[root@ceph-mon ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 9G 0 part
├─centos-root 253:0 0 8G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
rbd0 251:0 0 2G 0 disk
[root@ceph-mon ~]# rbd showmapped
id pool image snap device
0 rbdpool rbdimage – /dev/rbd0
[root@ceph-mon ~]#

[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd –image rbdimage -p rbdpool info
rbd image ‘rbdimage’:
size 2 GiB in 512 objects
order 22 (4 MiB objects)
id: fab06b8b4567
block_name_prefix: rbd_data.fab06b8b4567
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Wed May 5 16:24:08 2021
[root@ceph-mon ~]#
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd –image rbdimage -p rbdpool info
rbd image ‘rbdimage’:
size 2 GiB in 512 objects
order 22 (4 MiB objects)
id: fab06b8b4567
block_name_prefix: rbd_data.fab06b8b4567
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Wed May 5 16:24:08 2021
[root@ceph-mon ~]# rbd showmapped
id pool image snap device
0 rbdpool rbdimage – /dev/rbd0
[root@ceph-mon ~]# mkfs.xfs /dev/rbd0
Discarding blocks…Done.
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=524288, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@ceph-mon ~]#

 

[root@ceph-mon mnt]# mkdir /mnt/rbd
[root@ceph-mon mnt]# mount /dev/rbd0 /mnt/rbd
[root@ceph-mon mnt]# df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 753596 0 753596 0% /dev
tmpfs 765380 0 765380 0% /dev/shm
tmpfs 765380 8844 756536 2% /run
tmpfs 765380 0 765380 0% /sys/fs/cgroup
/dev/mapper/centos-root 8374272 2441472 5932800 30% /
/dev/vda1 1038336 175296 863040 17% /boot
tmpfs 153076 0 153076 0% /run/user/0
/dev/rbd0 2086912 33184 2053728 2% /mnt/rbd
[root@ceph-mon mnt]#

 

 

 

How to resize an rbd image

eg to 10GB.

rbd resize --size 10000 mypool/myimage

Resizing image: 100% complete…done.

Grow the file system to fill up the new size of the device.

xfs_growfs /mnt
[…]
data blocks changed from 2097152 to 2560000

 

Creating rbd snapshots

An RBD snapshot is a snapshot of a RADOS Block Device image. An rbd snapshot creates a history of the image’s state.

It is important to stop input and output operations and flush all pending writes before creating a snapshot of an rbd image.

If the image contains a file system, the file system must be in a consistent state before creating the snapshot.

rbd --pool pool-name snap create --snap snap-name image-name

rbd snap create pool-name/image-name@snap-name

eg

rbd --pool rbd snap create --snap snapshot1 image1
rbd snap create rbd/image1@snapshot1

 

To list snapshots of an image, specify the pool name and the image name.

rbd --pool pool-name snap ls image-name
rbd snap ls pool-name/image-name

eg

rbd --pool rbd snap ls image1
rbd snap ls rbd/image1

 

How to rollback to a snapshot

To rollback to a snapshot with rbd, specify the snap rollback option, the pool name, the image name, and the snapshot name.

rbd --pool pool-name snap rollback --snap snap-name image-name
rbd snap rollback pool-name/image-name@snap-name

eg

rbd --pool pool1 snap rollback --snap snapshot1 image1
rbd snap rollback pool1/image1@snapshot1

IMPORTANT NOTE:

Note that it is faster to clone from a snapshot than to roll back an image to a snapshot; cloning is therefore the preferred method of returning to a pre-existing state.

 

To delete a snapshot

To delete a snapshot with rbd, specify the snap rm option, the pool name, the image name, and the snapshot name.

rbd --pool pool-name snap rm --snap snap-name image-name
rbd snap rm pool-name/image-name@snap-name

eg

rbd --pool pool1 snap rm --snap snapshot1 image1
rbd snap rm pool1/image1@snapshot1

Note also that Ceph OSDs delete data asynchronously, so deleting a snapshot will not free the disk space straight away.

To delete or purge all snapshots

To delete all snapshots for an image with rbd, specify the snap purge option and the image name.

rbd --pool pool-name snap purge image-name
rbd snap purge pool-name/image-name

eg

rbd --pool pool1 snap purge image1
rbd snap purge pool1/image1

 

Important when cloning!

Note that clones access the parent snapshots. This means all clones will break if a user deletes the parent snapshot. To prevent this happening, you must protect the snapshot before you can clone it.

 

do this by:

 

rbd --pool pool-name snap protect --image image-name --snap snapshot-name
rbd snap protect pool-name/image-name@snapshot-name

 

eg

 

rbd --pool pool1 snap protect --image image1 --snap snapshot1
rbd snap protect pool1/image1@snapshot1

 

Note that you cannot delete a protected snapshot.

How to clone a snapshot

To clone a snapshot, you must specify the parent pool, image, snapshot, the child pool, and the image name.

 

You must also protect the snapshot before you can clone it.

 

rbd clone --pool pool-name --image parent-image --snap snap-name --dest-pool pool-name --dest child-image

rbd clone pool-name/parent-image@snap-name pool-name/child-image-name

eg

 

rbd clone pool1/image1@snapshot1 pool1/image2

 

 

To delete a snapshot, you must unprotect it first.

 

However, you cannot unprotect (and therefore cannot delete) a snapshot that still has clone references unless you first "flatten" each clone of that snapshot.

 

rbd --pool pool-name snap unprotect --image image-name --snap snapshot-name
rbd snap unprotect pool-name/image-name@snapshot-name

 

eg

rbd --pool pool1 snap unprotect --image image1 --snap snapshot1
rbd snap unprotect pool1/image1@snapshot1

 

 

To list the children of a snapshot

 

rbd --pool pool-name children --image image-name --snap snap-name

 

eg

 

rbd --pool pool1 children --image image1 --snap snapshot1
rbd children pool1/image1@snapshot1

 

 

RGW Rados Object Gateway

 

 

First, install the Ceph RGW package:

 

[root@ceph-mon ~]# ceph-deploy install –rgw ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy install –rgw ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f33f0221320>

 

… long list of package install output

….

[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Dependency Installed:
[ceph-mon][DEBUG ] mailcap.noarch 0:2.1.41-2.el7
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Complete!
[ceph-mon][INFO ] Running command: ceph –version
[ceph-mon][DEBUG ] ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
[root@ceph-mon ~]#

 

 

check which package is installed with

 

[root@ceph-mon ~]# rpm -q ceph-radosgw
ceph-radosgw-13.2.10-0.el7.x86_64
[root@ceph-mon ~]#

 

next do:

 

[root@ceph-mon ~]# ceph-deploy rgw create ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy rgw create ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] rgw : [(‘ceph-mon’, ‘rgw.ceph-mon’)]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f3bc2dd9e18>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function rgw at 0x7f3bc38a62a8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ceph-mon:rgw.ceph-mon
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph-mon
[ceph-mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon][DEBUG ] create path recursively if it doesn’t exist
[ceph-mon][INFO ] Running command: ceph –cluster ceph –name client.bootstrap-rgw –keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph-mon osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph-mon/keyring
[ceph-mon][INFO ] Running command: systemctl enable ceph-radosgw@rgw.ceph-mon
[ceph-mon][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.ceph-mon.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[ceph-mon][INFO ] Running command: systemctl start ceph-radosgw@rgw.ceph-mon
[ceph-mon][INFO ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host ceph-mon and default port 7480
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# systemctl status ceph-radosgw@rgw.ceph-mon
● ceph-radosgw@rgw.ceph-mon.service – Ceph rados gateway
Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw@.service; enabled; vendor preset: disabled)
Active: active (running) since Mi 2021-05-05 21:54:57 CEST; 531ms ago
Main PID: 7041 (radosgw)
CGroup: /system.slice/system-ceph\x2dradosgw.slice/ceph-radosgw@rgw.ceph-mon.service
└─7041 /usr/bin/radosgw -f –cluster ceph –name client.rgw.ceph-mon –setuser ceph –setgroup ceph

Mai 05 21:54:57 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service holdoff time over, scheduling restart.
Mai 05 21:54:57 ceph-mon systemd[1]: Stopped Ceph rados gateway.
Mai 05 21:54:57 ceph-mon systemd[1]: Started Ceph rados gateway.
[root@ceph-mon ~]#

 

But the service then stops again:

 

[root@ceph-mon ~]# systemctl status ceph-radosgw@rgw.ceph-mon
● ceph-radosgw@rgw.ceph-mon.service – Ceph rados gateway
Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mi 2021-05-05 21:55:01 CEST; 16s ago
Process: 7143 ExecStart=/usr/bin/radosgw -f –cluster ${CLUSTER} –name client.%i –setuser ceph –setgroup ceph (code=exited, status=5)
Main PID: 7143 (code=exited, status=5)

 

Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service: main process exited, code=exited, status=5/NOTINSTALLED
Mai 05 21:55:01 ceph-mon systemd[1]: Unit ceph-radosgw@rgw.ceph-mon.service entered failed state.
Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service failed.
Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service holdoff time over, scheduling restart.
Mai 05 21:55:01 ceph-mon systemd[1]: Stopped Ceph rados gateway.
Mai 05 21:55:01 ceph-mon systemd[1]: start request repeated too quickly for ceph-radosgw@rgw.ceph-mon.service
Mai 05 21:55:01 ceph-mon systemd[1]: Failed to start Ceph rados gateway.
Mai 05 21:55:01 ceph-mon systemd[1]: Unit ceph-radosgw@rgw.ceph-mon.service entered failed state.
Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service failed.
[root@ceph-mon ~]#

 

 

Why? Running the radosgw daemon manually in the foreground shows the error:

 

[root@ceph-mon ~]# /usr/bin/radosgw -f –cluster ceph –name client.rgw.ceph-mon –setuser ceph –setgroup ceph
2021-05-05 22:45:41.994 7fc9e6388440 -1 Couldn’t init storage provider (RADOS)
[root@ceph-mon ~]#

 

[root@ceph-mon ceph]# radosgw-admin user create –uid=cephuser –key-type=s3 –access-key cephuser –secret-key cephuser –display-name=”cephuser”
2021-05-05 22:13:54.255 7ff4152ec240 0 rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
2021-05-05 22:13:54.255 7ff4152ec240 0 failed reading realm info: ret -34 (34) Numerical result out of range
couldn't init storage provider
[root@ceph-mon ceph]#

The gateway cannot initialise its storage because creating the default RGW pools would exceed the mon_max_pg_per_osd limit (250 PGs per OSD x 3 OSDs = 750), as the error message indicates; the placement groups of the cephfs and rbd pools created earlier already consume most of that budget.
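One possible way forward (an assumption on my part, not a step recorded in these notes) would be to reduce the number of placement groups used by the existing pools or temporarily raise the per-OSD PG limit so that the gateway can create its default pools, for example:

ceph tell mon.* injectargs --mon_max_pg_per_osd=400
systemctl restart ceph-radosgw@rgw.ceph-mon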

 

 

 
