
LPIC3 DIPLOMA Linux Clustering – LAB NOTES: Lesson Ceph Centos7 – Ceph CRUSH Map

LAB on Ceph Clustering on Centos7

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

This lab uses the ceph-deploy tool to set up the ceph cluster.  However, note that ceph-deploy is now an outdated Ceph tool and is no longer being maintained by the Ceph project. It is also not available for Centos8. The notes below relate to Centos7.

 

For OS versions of Centos higher than 7 the Ceph project advise you to use the cephadm tool for installing ceph on cluster nodes. 

 

At the time of writing (2021) knowledge of ceph-deploy is a stipulated syllabus requirement of the LPIC3-306 Clustering Diploma Exam, hence this Centos7 Ceph lab refers to ceph-deploy.

 

As Ceph is a large and complex subject, these notes have been split into several different pages.

 

Overview of Cluster Environment 

 

The cluster comprises three nodes installed with Centos7 and housed on a KVM virtual machine system on a Linux Ubuntu host. We are installing with Centos7 rather than a more recent version because later versions are not compatible with the ceph-deploy tool.

 

CRUSH is a crucial part of Ceph’s storage system as it’s the algorithm Ceph uses to determine how data is stored across the nodes in a Ceph cluster.

 

Ceph stores client data as objects within storage pools.  Using the CRUSH algorithm, Ceph calculates in which placement group the object should best be stored and then also calculates which Ceph OSD node should store the placement group.

The CRUSH algorithm also enables the Ceph Storage Cluster to scale, rebalance, and recover dynamically from faults.

 

The CRUSH map is a hierarchical cluster storage resource map representing the available storage resources.  CRUSH empowers Ceph clients to communicate with OSDs directly rather than through a centralized server. As CRUSH uses an algorithmically determined method of storing and retrieving data, the CRUSH map allows Ceph to scale without performance bottlenecks, scalability problems or single points of failure.

 

Ceph uses three storage concepts for data management:

 

Pools
Placement Groups, and
CRUSH Map

 

Pools

 

Ceph stores data within logical storage groups called pools. Pools manage the number of placement groups, the number of replicas, and the ruleset deployed for the pool.

 

Placement Groups

 

Placement groups (PGs) are the shards or fragments of a logical object pool that store objects as a group on OSDs. Placement groups reduce the amount of metadata to be processed whenever Ceph reads or writes data to OSDs.

 

NOTE: Deploying a larger number of placement groups (e.g. 100 PGs per OSD) will result in better load balancing.
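As a rough worked example of that guideline (the sizing heuristic comes from the upstream Ceph recommendations, not from the lab steps themselves): with 3 OSDs, a pool replica size of 2, and a target of about 100 PGs per OSD, the suggested total is

(3 OSDs x 100) / 2 replicas = 150, rounded to the nearest power of two = 128 PGs

which is why the pool creation examples later in these notes use values such as 64 or 128.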

 

The CRUSH map contains a list of OSDs (physical disks), a list of buckets for aggregating the devices into physical locations, and a list of rules that define how CRUSH will replicate data in the Ceph cluster.

 

Buckets can contain any number of OSDs. Buckets can themselves also contain other buckets, enabling them to form interior nodes in a storage hierarchy.

 

OSDs and buckets have numerical identifiers and weight values associated with them.

 

This structure can be used to reflect the actual physical organization of the cluster installation, taking into account such characteristics as physical proximity, common power sources, and shared networks.

 

When you deploy OSDs they are automatically added to the CRUSH map under a host bucket named for the node on which they run. This ensures that replicas or erasure code shards are distributed across hosts and that a single host or other failure will not affect service availability.

 

The main practical advantages of CRUSH are:

 

Avoiding consequences of device failure. This is a big advantage over RAID.

 

Fast — CRUSH placement calculations are performed in microseconds.

 

Stability and Reliability — very little data movement occurs when the topology changes.

 

Flexibility — replication, erasure codes, complex placement schemes are all possible.

 

 

The CRUSH Map Structure

 

The CRUSH map consists of a hierarchy that describes the physical topology of the cluster and a set of rules defining data placement policy.

 

The hierarchy has devices (OSDs) at the leaves, and internal nodes corresponding to other physical features or groupings:

 

hosts, racks, rows, datacenters, etc.

 

The rules describe how replicas are placed in terms of that hierarchy (e.g., ‘three replicas in different racks’).

 

Devices

 

Devices are individual OSDs that store data, usually one for each storage drive. Devices are identified by an id (a non-negative integer) and a name, normally osd.N where N is the device id.

 

Types and Buckets

 

A bucket is the CRUSH term for internal nodes in the hierarchy: hosts, racks, rows, etc.

 

The CRUSH map defines a series of types used to describe these nodes.

 

The default types include:

 

osd (or device)

 

host

 

chassis

 

rack

 

row

 

pdu

 

pod

 

room

 

datacenter

 

zone

 

region

 

root

 

Most clusters use only a handful of these types, and others can be defined as needed.

 

 

CRUSH Rules

 

CRUSH Rules define policy about how data is distributed across the devices in the hierarchy. They define placement and replication strategies or distribution policies that allow you to specify exactly how CRUSH places data replicas.

 

To display what rules are defined in the cluster:

 

ceph osd crush rule ls

 

You can view the contents of the rules with:

 

ceph osd crush rule dump

 

The weights associated with each node in the hierarchy can be displayed with:

 

ceph osd tree

 

 

To modify the CRUSH MAP

 

To add or move an OSD in the CRUSH map of a running cluster:

 

ceph osd crush set {name} {weight} root={root} [{bucket-type}={bucket-name} …]

 

 

eg

 

The following example adds osd.0 to the hierarchy, or moves the OSD from a previous location.

 

ceph osd crush set osd.0 1.0 root=default datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1

 

To Remove an OSD from the CRUSH Map

 

To remove an OSD from the CRUSH map of a running cluster, execute the following:

 

ceph osd crush remove {name}

 

To Add, Move or Remove a Bucket to the CRUSH Map

 

To add a bucket in the CRUSH map of a running cluster, execute the ceph osd crush add-bucket command:

 

ceph osd crush add-bucket {bucket-name} {bucket-type}

 

To move a bucket to a different location or position in the CRUSH map hierarchy:

 

ceph osd crush move {bucket-name} {bucket-type}={bucket-name}, […]

 

 

To remove a bucket from the CRUSH hierarchy, use:

 

ceph osd crush remove {bucket-name}

 

Note: A bucket must be empty before removing it from the CRUSH hierarchy.

 

 

 

How To Tune CRUSH 

 

 

CRUSH uses matched sets of profile values, known as tunables, to tune the behavior of the CRUSH map.

 

As of the Octopus release these are:

 

legacy: the legacy behavior from argonaut and earlier.

 

argonaut: the legacy values supported by the original argonaut release

 

bobtail: the values supported by the bobtail release

 

firefly: the values supported by the firefly release

 

hammer: the values supported by the hammer release

 

jewel: the values supported by the jewel release

 

optimal: the best (ie optimal) values of the current version of Ceph

 

default: the default values of a new cluster installed from scratch. These values, which depend on the current version of Ceph, are hardcoded and are generally a mix of optimal and legacy values. They generally match the optimal profile of the previous LTS release, or of the most recent release for which most users are likely to have up-to-date clients.

 

You can apply a profile to a running cluster with the command:

 

ceph osd crush tunables {PROFILE}
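For example, to switch a running cluster to the optimal profile (be aware that changing tunables on an existing cluster can trigger significant data movement):

ceph osd crush tunables optimal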

 

 

How To Determine a CRUSH Location

 

The location of an OSD within the CRUSH map’s hierarchy is known as the CRUSH location.

 

This location specifier takes the form of a list of key and value pairs.

 

For example, if an OSD is in a specific row, rack, chassis and host, and is part of the 'default' CRUSH root (as is usual for most clusters), its CRUSH location will be:

 

root=default row=a rack=a2 chassis=a2a host=a2a1

 

The CRUSH location for an OSD can be defined by adding the crush location option in ceph.conf.
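A minimal ceph.conf sketch of this (the bucket names here are purely illustrative and are not taken from this lab):

[osd.0]
crush location = root=default rack=rack01 host=ceph-osd0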

 

Each time the OSD starts, it checks that it is in the correct location in the CRUSH map. If it is not then it moves itself.

 

To disable this automatic CRUSH map management, edit ceph.conf and add the following in the [osd] section:

 

osd crush update on start = false

 

 

 

However, note that in most cases it is not necessary to manually configure this.

 

 

How To Edit and Modify the CRUSH Map

 

It is more convenient to modify the CRUSH map at runtime with the Ceph CLI than editing the CRUSH map manually.

 

However, you may sometimes wish to edit the CRUSH map manually, for example to change the default bucket types or to use a bucket algorithm other than straw.

 

 

The steps in overview:

 

Get the CRUSH map.

 

Decompile the CRUSH map.

 

Edit at least one: Devices, Buckets or Rules.

 

Recompile the CRUSH map.

 

Set the CRUSH map.

 

 

Get a CRUSH Map

 

ceph osd getcrushmap -o {compiled-crushmap-filename}

 

This writes (-o) a compiled CRUSH map to the filename you specify.

 

However, as the CRUSH map is in compiled form, you first need to decompile it.

 

Decompile a CRUSH Map

 

Use the crushtool utility:

 

crushtool -d {compiled-crushmap-filename} -o {decompiled-crushmap-filename}

 

 

 

The CRUSH Map has six sections:

 

tunables: The preamble at the top of the map describes any tunables for CRUSH behavior that vary from the historical/legacy CRUSH behavior. These correct for old bugs, optimizations, or other changes in behavior made to CRUSH over the years.

 

devices: Devices are individual ceph-osd daemons that store data.

 

types: Bucket types define the types of buckets used in the CRUSH hierarchy. Buckets consist of a hierarchical aggregation of storage locations (e.g., rows, racks, chassis, hosts, etc.) together with their assigned weights.

 

buckets: Once you define bucket types, you must define each node in the hierarchy, its type, and which devices or other nodes it contains.

 

rules: Rules define policy about how data is distributed across devices in the hierarchy.

 

choose_args: Choose_args are alternative weights associated with the hierarchy that have been adjusted to optimize data placement.

 

A single choose_args map can be used for the entire cluster, or alternatively one can be created for each individual pool.

 

 

Display the current crush hierarchy with:

 

ceph osd tree

 

[root@ceph-mon ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.00757 root default
-3 0.00378 host ceph-osd0
0 hdd 0.00189 osd.0 down 0 1.00000
3 hdd 0.00189 osd.3 up 1.00000 1.00000
-5 0.00189 host ceph-osd1
1 hdd 0.00189 osd.1 up 1.00000 1.00000
-7 0.00189 host ceph-osd2
2 hdd 0.00189 osd.2 up 1.00000 1.00000
[root@ceph-mon ~]#

 

 

 

To edit the CRUSH map:

 

ceph osd getcrushmap -o crushmap.txt

 

crushtool -d crushmap.txt -o crushmap-decompile

 

nano crushmap-decompile

 

 

 

Edit at least one of Devices, Buckets and Rules:

 

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

 

# devices
device 0 osd.0 class hdd
device 1 osd.1 class hdd
device 2 osd.2 class hdd
device 3 osd.3 class hdd

 

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ceph-osd0 {
id -3 # do not change unnecessarily
id -4 class hdd # do not change unnecessarily
# weight 0.004
alg straw2
hash 0 # rjenkins1
item osd.0 weight 0.002
item osd.3 weight 0.002
}
host ceph-osd1 {
id -5 # do not change unnecessarily
id -6 class hdd # do not change unnecessarily
# weight 0.002
alg straw2
hash 0 # rjenkins1
item osd.1 weight 0.002
}
host ceph-osd2 {
id -7 # do not change unnecessarily
id -8 class hdd # do not change unnecessarily
# weight 0.002
alg straw2
hash 0 # rjenkins1
item osd.2 weight 0.002
}
root default {
id -1 # do not change unnecessarily
id -2 class hdd # do not change unnecessarily
# weight 0.008
alg straw2
hash 0 # rjenkins1
item ceph-osd0 weight 0.004
item ceph-osd1 weight 0.002
item ceph-osd2 weight 0.002
}

 

# rules
rule replicated_rule {
id 0
type replicated
min_size 1
max_size 10
step take default
step chooseleaf firstn 0 type host
step emit
}

 

# end crush map
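Once the decompiled map has been edited, the remaining steps of the workflow listed above are to recompile it and inject it back into the cluster. A sketch of those two commands, reusing the filenames from this lab plus an assumed output filename crushmap-compiled:

crushtool -c crushmap-decompile -o crushmap-compiled
ceph osd setcrushmap -i crushmap-compiled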

 

 

To add racks to the cluster CRUSH layout:

 

ceph osd crush add-bucket rack01 rack
ceph osd crush add-bucket rack02 rack

 

[root@ceph-mon ~]# ceph osd crush add-bucket rack01 rack
added bucket rack01 type rack to crush map
[root@ceph-mon ~]# ceph osd crush add-bucket rack02 rack
added bucket rack02 type rack to crush map
[root@ceph-mon ~]#
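The new rack buckets are created empty and sit outside the default root. A plausible continuation, which was not carried out in these notes, would be to move the racks under the default root, move the host buckets into them and then create a replication rule that uses rack as the failure domain (note that on a running cluster this will cause data to be reshuffled):

ceph osd crush move rack01 root=default
ceph osd crush move rack02 root=default
ceph osd crush move ceph-osd0 rack=rack01
ceph osd crush move ceph-osd1 rack=rack02
ceph osd crush rule create-replicated replicated_racks default rack

The rule name replicated_racks is an arbitrary example; check the resulting layout afterwards with ceph osd tree.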

 

 

 


LPIC3 DIPLOMA Linux Clustering – LAB NOTES: Lesson Ceph Centos7 – Ceph RGW Gateway

LAB on Ceph Clustering on Centos7

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

This lab uses the ceph-deploy tool to set up the ceph cluster.  However, note that ceph-deploy is now an outdated Ceph tool and is no longer being maintained by the Ceph project. It is also not available for Centos8. The notes below relate to Centos7.

 

For OS versions of Centos higher than 7 the Ceph project advise you to use the cephadm tool for installing ceph on cluster nodes. 

 

At the time of writing (2021) knowledge of ceph-deploy is a stipulated syllabus requirement of the LPIC3-306 Clustering Diploma Exam, hence this Centos7 Ceph lab refers to ceph-deploy.

 

 

As Ceph is a large and complex subject, these notes have been split into several different pages.

 

 

Overview of Cluster Environment 

 

 

The cluster comprises three nodes installed with Centos7 and housed on a KVM virtual machine system on a Linux Ubuntu host. We are installing with Centos7 rather than a more recent version because later versions are not compatible with the ceph-deploy tool.

 

 

 

RGW Rados Object Gateway

 

 

First, install the Ceph RGW package:

 

[root@ceph-mon ~]# ceph-deploy install --rgw ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy install --rgw ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f33f0221320>

 

… long list of package install output

….

[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Dependency Installed:
[ceph-mon][DEBUG ] mailcap.noarch 0:2.1.41-2.el7
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Complete!
[ceph-mon][INFO ] Running command: ceph --version
[ceph-mon][DEBUG ] ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
[root@ceph-mon ~]#

 

 

check which package is installed with

 

[root@ceph-mon ~]# rpm -q ceph-radosgw
ceph-radosgw-13.2.10-0.el7.x86_64
[root@ceph-mon ~]#

 

next do:

 

[root@ceph-mon ~]# ceph-deploy rgw create ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy rgw create ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] rgw : [(‘ceph-mon’, ‘rgw.ceph-mon’)]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f3bc2dd9e18>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function rgw at 0x7f3bc38a62a8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ceph-mon:rgw.ceph-mon
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph-mon
[ceph-mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon][DEBUG ] create path recursively if it doesn’t exist
[ceph-mon][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph-mon osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph-mon/keyring
[ceph-mon][INFO ] Running command: systemctl enable ceph-radosgw@rgw.ceph-mon
[ceph-mon][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.ceph-mon.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[ceph-mon][INFO ] Running command: systemctl start ceph-radosgw@rgw.ceph-mon
[ceph-mon][INFO ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host ceph-mon and default port 7480
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# systemctl status ceph-radosgw@rgw.ceph-mon
● ceph-radosgw@rgw.ceph-mon.service – Ceph rados gateway
Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw@.service; enabled; vendor preset: disabled)
Active: active (running) since Mi 2021-05-05 21:54:57 CEST; 531ms ago
Main PID: 7041 (radosgw)
CGroup: /system.slice/system-ceph\x2dradosgw.slice/ceph-radosgw@rgw.ceph-mon.service
└─7041 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-mon --setuser ceph --setgroup ceph

Mai 05 21:54:57 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service holdoff time over, scheduling restart.
Mai 05 21:54:57 ceph-mon systemd[1]: Stopped Ceph rados gateway.
Mai 05 21:54:57 ceph-mon systemd[1]: Started Ceph rados gateway.
[root@ceph-mon ~]#

 

but then stops:

 

[root@ceph-mon ~]# systemctl status ceph-radosgw@rgw.ceph-mon
● ceph-radosgw@rgw.ceph-mon.service – Ceph rados gateway
Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mi 2021-05-05 21:55:01 CEST; 16s ago
Process: 7143 ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.%i --setuser ceph --setgroup ceph (code=exited, status=5)
Main PID: 7143 (code=exited, status=5)

 

Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service: main process exited, code=exited, status=5/NOTINSTALLED
Mai 05 21:55:01 ceph-mon systemd[1]: Unit ceph-radosgw@rgw.ceph-mon.service entered failed state.
Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service failed.
Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service holdoff time over, scheduling restart.
Mai 05 21:55:01 ceph-mon systemd[1]: Stopped Ceph rados gateway.
Mai 05 21:55:01 ceph-mon systemd[1]: start request repeated too quickly for ceph-radosgw@rgw.ceph-mon.service
Mai 05 21:55:01 ceph-mon systemd[1]: Failed to start Ceph rados gateway.
Mai 05 21:55:01 ceph-mon systemd[1]: Unit ceph-radosgw@rgw.ceph-mon.service entered failed state.
Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service failed.
[root@ceph-mon ~]#

 

 

Why? Running the radosgw daemon manually in the foreground shows the underlying error:

 

[root@ceph-mon ~]# /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-mon --setuser ceph --setgroup ceph
2021-05-05 22:45:41.994 7fc9e6388440 -1 Couldn't init storage provider (RADOS)
[root@ceph-mon ~]#

 

[root@ceph-mon ceph]# radosgw-admin user create --uid=cephuser --key-type=s3 --access-key cephuser --secret-key cephuser --display-name="cephuser"
2021-05-05 22:13:54.255 7ff4152ec240 0 rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
2021-05-05 22:13:54.255 7ff4152ec240 0 failed reading realm info: ret -34 (34) Numerical result out of range
couldn't init storage provider
[root@ceph-mon ceph]#
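The error message points at the per-OSD placement group limit (mon_max_pg_per_osd) being exceeded when RGW tries to create its own pools. One possible workaround for a small lab like this, which is my assumption rather than a step recorded in these notes, would be to raise that limit temporarily and then restart the gateway:

ceph tell mon.* injectargs --mon_max_pg_per_osd=300
systemctl restart ceph-radosgw@rgw.ceph-mon

Alternatively, the default pg_num for newly created pools could be lowered in ceph.conf before deploying RGW.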

 

 


LPIC3 DIPLOMA Linux Clustering – LAB NOTES: Lesson Ceph Centos7 – Ceph RBD Block Devices

LAB on Ceph Clustering on Centos7

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

This lab uses the ceph-deploy tool to set up the ceph cluster.  However, note that ceph-deploy is now an outdated Ceph tool and is no longer being maintained by the Ceph project. It is also not available for Centos8. The notes below relate to Centos7.

 

For OS versions of Centos higher than 7 the Ceph project advise you to use the cephadm tool for installing ceph on cluster nodes. 

 

At the time of writing (2021) knowledge of ceph-deploy is a stipulated syllabus requirement of the LPIC3-306 Clustering Diploma Exam, hence this Centos7 Ceph lab refers to ceph-deploy.

 

 

As Ceph is a large and complex subject, these notes have been split into several different pages.

 

 

Overview of Cluster Environment 

 

 

The cluster comprises three nodes installed with Centos7 and housed on a KVM virtual machine system on a Linux Ubuntu host. We are installing with Centos7 rather than a more recent version because later versions are not compatible with the ceph-deploy tool.

 

 

Ceph RBD Block Devices

 

 

You must create a pool first before you can specify it as a source.

 

[root@ceph-mon ~]# ceph osd pool create rbdpool 128 128
Error ERANGE: pg_num 128 size 2 would mean 768 total pgs, which exceeds max 750 (mon_max_pg_per_osd 250 * num_in_osds 3)
[root@ceph-mon ~]# ceph osd pool create rbdpool 64 64
pool ‘rbdpool’ created
[root@ceph-mon ~]# ceph osd lspools
4 cephfs_data
5 cephfs_metadata
6 rbdpool
[root@ceph-mon ~]# rbd -p rbdpool create rbimage --size 5120
[root@ceph-mon ~]# rbd ls rbdpool
rbimage
[root@ceph-mon ~]# rbd feature disable rbdpool/rbdimage object-map fast-diff deep-flatten
rbd: error opening image rbdimage: (2) No such file or directory
[root@ceph-mon ~]#

[root@ceph-mon ~]#
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd feature disable rbdpool/rbimage object-map fast-diff deep-flatten
[root@ceph-mon ~]# rbd map rbdpool/rbimage --id admin
/dev/rbd0
[root@ceph-mon ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 9G 0 part
├─centos-root 253:0 0 8G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
rbd0 251:0 0 5G 0 disk
[root@ceph-mon ~]#

[root@ceph-mon ~]# rbd showmapped
id pool image snap device
0 rbdpool rbimage – /dev/rbd0
[root@ceph-mon ~]# rbd --image rbimage -p rbdpool info
rbd image ‘rbimage’:
size 5 GiB in 1280 objects
order 22 (4 MiB objects)
id: d3956b8b4567
block_name_prefix: rbd_data.d3956b8b4567
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Wed May 5 15:32:48 2021
[root@ceph-mon ~]#

 

 

 

to remove an image:

 

rbd rm {pool-name}/{image-name}

[root@ceph-mon ~]# rbd rm rbdpool/rbimage
Removing image: 100% complete…done.
[root@ceph-mon ~]# rbd rm rbdpool/image
Removing image: 100% complete…done.
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd ls rbdpool
[root@ceph-mon ~]#

 

 

To create an image

 

rbd create --size {megabytes} {pool-name}/{image-name}

 

[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd create --size 2048 rbdpool/rbdimage
[root@ceph-mon ~]# rbd ls rbdpool
rbdimage
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd ls rbdpool
rbdimage
[root@ceph-mon ~]#

[root@ceph-mon ~]# rbd feature disable rbdpool/rbdimage object-map fast-diff deep-flatten
[root@ceph-mon ~]# rbd map rbdpool/rbdimage --id admin
/dev/rbd0
[root@ceph-mon ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 9G 0 part
├─centos-root 253:0 0 8G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
rbd0 251:0 0 2G 0 disk
[root@ceph-mon ~]# rbd showmapped
id pool image snap device
0 rbdpool rbdimage – /dev/rbd0
[root@ceph-mon ~]#

[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd --image rbdimage -p rbdpool info
rbd image ‘rbdimage’:
size 2 GiB in 512 objects
order 22 (4 MiB objects)
id: fab06b8b4567
block_name_prefix: rbd_data.fab06b8b4567
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Wed May 5 16:24:08 2021
[root@ceph-mon ~]#
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd --image rbdimage -p rbdpool info
rbd image ‘rbdimage’:
size 2 GiB in 512 objects
order 22 (4 MiB objects)
id: fab06b8b4567
block_name_prefix: rbd_data.fab06b8b4567
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Wed May 5 16:24:08 2021
[root@ceph-mon ~]# rbd showmapped
id pool image snap device
0 rbdpool rbdimage – /dev/rbd0
[root@ceph-mon ~]# mkfs.xfs /dev/rbd0
Discarding blocks…Done.
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=524288, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@ceph-mon ~]#

 

[root@ceph-mon mnt]# mkdir /mnt/rbd
[root@ceph-mon mnt]# mount /dev/rbd0 /mnt/rbd
[root@ceph-mon mnt]# df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 753596 0 753596 0% /dev
tmpfs 765380 0 765380 0% /dev/shm
tmpfs 765380 8844 756536 2% /run
tmpfs 765380 0 765380 0% /sys/fs/cgroup
/dev/mapper/centos-root 8374272 2441472 5932800 30% /
/dev/vda1 1038336 175296 863040 17% /boot
tmpfs 153076 0 153076 0% /run/user/0
/dev/rbd0 2086912 33184 2053728 2% /mnt/rbd
[root@ceph-mon mnt]#

 

 

 

How to resize an rbd image

e.g. to resize to 10 GB:

rbd resize --size 10000 mypool/myimage

Resizing image: 100% complete…done.

Grow the file system to fill up the new size of the device.

xfs_growfs /mnt
[…]
data blocks changed from 2097152 to 2560000

 

Creating rbd snapshots

An RBD snapshot is a snapshot of a RADOS Block Device image. An rbd snapshot creates a history of the image’s state.

It is important to stop input and output operations and flush all pending writes before creating a snapshot of an rbd image.

If the image contains a file system, the file system must be in a consistent state before creating the snapshot.
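For an XFS filesystem mounted from an rbd device, one way to achieve this consistency (a sketch using the pool and image names from earlier in these notes; snap1 is an arbitrary snapshot name) is to freeze the filesystem around the snapshot:

fsfreeze --freeze /mnt/rbd
rbd snap create rbdpool/rbdimage@snap1
fsfreeze --unfreeze /mnt/rbd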

rbd --pool pool-name snap create --snap snap-name image-name

rbd snap create pool-name/image-name@snap-name

eg

rbd --pool rbd snap create --snap snapshot1 image1
rbd snap create rbd/image1@snapshot1

 

To list snapshots of an image, specify the pool name and the image name.

rbd --pool pool-name snap ls image-name
rbd snap ls pool-name/image-name

eg

rbd --pool rbd snap ls image1
rbd snap ls rbd/image1

 

How to rollback to a snapshot

To rollback to a snapshot with rbd, specify the snap rollback option, the pool name, the image name, and the snapshot name.

rbd --pool pool-name snap rollback --snap snap-name image-name
rbd snap rollback pool-name/image-name@snap-name

eg

rbd --pool pool1 snap rollback --snap snapshot1 image1
rbd snap rollback pool1/image1@snapshot1

IMPORTANT NOTE:

Note that it is faster to clone from a snapshot than to roll back an image to a snapshot; cloning is therefore the preferred way of returning to a pre-existing state.

 

To delete a snapshot

To delete a snapshot with rbd, specify the snap rm option, the pool name, the image name, and the snapshot name.

rbd --pool pool-name snap rm --snap snap-name image-name
rbd snap rm pool-name/image-name@snap-name

eg

rbd --pool pool1 snap rm --snap snapshot1 image1
rbd snap rm pool1/image1@snapshot1

Note also that Ceph OSDs delete data asynchronously, so deleting a snapshot will not free the disk space straight away.

To delete or purge all snapshots

To delete all snapshots for an image with rbd, specify the snap purge option and the image name.

rbd --pool pool-name snap purge image-name
rbd snap purge pool-name/image-name

eg

rbd --pool pool1 snap purge image1
rbd snap purge pool1/image1

 

Important when cloning!

Note that clones access the parent snapshots. This means all clones will break if a user deletes the parent snapshot. To prevent this happening, you must protect the snapshot before you can clone it.

 

do this by:

 

rbd --pool pool-name snap protect --image image-name --snap snapshot-name
rbd snap protect pool-name/image-name@snapshot-name

 

eg

 

rbd --pool pool1 snap protect --image image1 --snap snapshot1
rbd snap protect pool1/image1@snapshot1

 

Note that you cannot delete a protected snapshot.

How to clone a snapshot

To clone a snapshot, you must specify the parent pool, image, snapshot, the child pool, and the image name.

 

You must also protect the snapshot before you can clone it.

 

rbd clone --pool pool-name --image parent-image --snap snap-name --dest-pool pool-name --dest child-image

rbd clone pool-name/parent-image@snap-name pool-name/child-image-name

eg

 

rbd clone pool1/image1@snapshot1 pool1/image2

 

 

To delete a snapshot, you must unprotect it first.

 

However, you cannot delete snapshots that have references from clones unless you first “flatten” each clone of a snapshot.
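Flattening copies all data from the parent snapshot into the clone, so the clone no longer references the parent. A sketch using the example clone created above:

rbd flatten pool1/image2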

 

rbd --pool pool-name snap unprotect --image image-name --snap snapshot-name
rbd snap unprotect pool-name/image-name@snapshot-name

 

eg

rbd --pool pool1 snap unprotect --image image1 --snap snapshot1
rbd snap unprotect pool1/image1@snapshot1

 

 

To list the children of a snapshot

 

rbd --pool pool-name children --image image-name --snap snap-name

 

eg

 

rbd --pool pool1 children --image image1 --snap snapshot1
rbd children pool1/image1@snapshot1

 

 


LPIC3 DIPLOMA Linux Clustering – LAB NOTES: Lesson Ceph Centos7 – Pools & Placement Groups

LAB on Ceph Clustering on Centos7

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

This lab uses the ceph-deploy tool to set up the ceph cluster.  However, note that ceph-deploy is now an outdated Ceph tool and is no longer being maintained by the Ceph project. It is also not available for Centos8. The notes below relate to Centos7.

 

For OS versions of Centos higher than 7 the Ceph project advise you to use the cephadm tool for installing ceph on cluster nodes. 

 

At the time of writing (2021) knowledge of ceph-deploy is a stipulated syllabus requirement of the LPIC3-306 Clustering Diploma Exam, hence this Centos7 Ceph lab refers to ceph-deploy.

 

 

As Ceph is a large and complex subject, these notes have been split into several different pages.

 

 

Overview of Cluster Environment 

 

 

The cluster comprises three nodes installed with Centos7 and housed on a KVM virtual machine system on a Linux Ubuntu host. We are installing with Centos7 rather than a more recent version because later versions are not compatible with the ceph-deploy tool.

 

Create a Storage Pool

 

 

To create a pool:

 

ceph osd pool create datapool 1

 

[root@ceph-mon ~]# ceph osd pool create datapool 1
pool ‘datapool’ created
[root@ceph-mon ~]# ceph osd lspools
1 datapool
[root@ceph-mon ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
6.0 GiB 3.0 GiB 3.0 GiB 50.30
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
datapool 1 0 B 0 1.8 GiB 0
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
application not enabled on pool ‘datapool’
use ‘ceph osd pool application enable <pool-name> <app-name>’, where <app-name> is ‘cephfs’, ‘rbd’, ‘rgw’, or freeform for custom applications.
[root@ceph-mon ~]#

 

so we need to enable the pool:

 

[root@ceph-mon ~]# ceph osd pool application enable datapool rbd
enabled application ‘rbd’ on pool ‘datapool’
[root@ceph-mon ~]#

[root@ceph-mon ~]# ceph health detail
HEALTH_OK
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
osd: 4 osds: 3 up, 3 in

data:
pools: 1 pools, 1 pgs
objects: 1 objects, 10 B
usage: 3.0 GiB used, 3.0 GiB / 6.0 GiB avail
pgs: 1 active+clean

[root@ceph-mon ~]#

 

 

 

How To Check All Ceph Services Are Running

 

Use 

 

ceph -s 

 

 

 

 

 

or alternatively:

 

 

[root@ceph-mon ~]# systemctl status ceph\*.service
● ceph-mon@ceph-mon.service – Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: active (running) since Di 2021-04-27 11:47:36 CEST; 6h ago
Main PID: 989 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph-mon.service
└─989 /usr/bin/ceph-mon -f --cluster ceph --id ceph-mon --setuser ceph --setgroup ceph

 

Apr 27 11:47:36 ceph-mon systemd[1]: Started Ceph cluster monitor daemon.

 

● ceph-mgr@ceph-mon.service – Ceph cluster manager daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: disabled)
Active: active (running) since Di 2021-04-27 11:47:36 CEST; 6h ago
Main PID: 992 (ceph-mgr)
CGroup: /system.slice/system-ceph\x2dmgr.slice/ceph-mgr@ceph-mon.service
└─992 /usr/bin/ceph-mgr -f --cluster ceph --id ceph-mon --setuser ceph --setgroup ceph

 

Apr 27 11:47:36 ceph-mon systemd[1]: Started Ceph cluster manager daemon.
Apr 27 11:47:41 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:41 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root
Apr 27 11:47:46 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:46 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root
Apr 27 11:47:51 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:51 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root
Apr 27 11:47:56 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:56 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root

 

● ceph-crash.service – Ceph crash dump collector
Loaded: loaded (/usr/lib/systemd/system/ceph-crash.service; enabled; vendor preset: enabled)
Active: active (running) since Di 2021-04-27 11:47:34 CEST; 6h ago
Main PID: 695 (ceph-crash)
CGroup: /system.slice/ceph-crash.service
└─695 /usr/bin/python2.7 /usr/bin/ceph-crash

 

Apr 27 11:47:34 ceph-mon systemd[1]: Started Ceph crash dump collector.
Apr 27 11:47:34 ceph-mon ceph-crash[695]: INFO:__main__:monitoring path /var/lib/ceph/crash, delay 600s
[root@ceph-mon ~]#

 

 

Object Manipulation

 

 

To create an object and upload a file into that object:

 

Example:

 

echo "test data" > testfile
rados put -p datapool testfile testfile
rados -p datapool ls
testfile

 

To set a key/value pair to that object:

 

rados -p datapool setomapval testfile mykey myvalue
rados -p datapool getomapval testfile mykey
(length 7) : 0000 : 6d 79 76 61 6c 75 65 : myvalue

 

To download the file:

 

rados get -p datapool testfile testfile2
md5sum testfile testfile2
39a870a194a787550b6b5d1f49629236 testfile
39a870a194a787550b6b5d1f49629236 testfile2

 

 

 

[root@ceph-mon ~]# echo "test data" > testfile
[root@ceph-mon ~]# rados put -p datapool testfile testfile
[root@ceph-mon ~]# rados -p datapool ls
testfile
[root@ceph-mon ~]# rados -p datapool setomapval testfile mykey myvalue
[root@ceph-mon ~]# rados -p datapool getomapval testfile mykey
value (7 bytes) :
00000000 6d 79 76 61 6c 75 65 |myvalue|
00000007

 

[root@ceph-mon ~]# rados get -p datapool testfile testfile2
[root@ceph-mon ~]# md5sum testfile testfile2
39a870a194a787550b6b5d1f49629236 testfile
39a870a194a787550b6b5d1f49629236 testfile2
[root@ceph-mon ~]#

 

 

How To Check If Your Datastore is BlueStore or FileStore

 

[root@ceph-mon ~]# ceph osd metadata 0 | grep -e id -e hostname -e osd_objectstore
“id”: 0,
“hostname”: “ceph-osd0”,
“osd_objectstore”: “bluestore”,
[root@ceph-mon ~]# ceph osd metadata 1 | grep -e id -e hostname -e osd_objectstore
“id”: 1,
“hostname”: “ceph-osd1”,
“osd_objectstore”: “bluestore”,
[root@ceph-mon ~]# ceph osd metadata 2 | grep -e id -e hostname -e osd_objectstore
“id”: 2,
“hostname”: “ceph-osd2”,
“osd_objectstore”: “bluestore”,
[root@ceph-mon ~]#

 

 

You can also display a large amount of information with this command:

 

[root@ceph-mon ~]# ceph osd metadata 2
{
“id”: 2,
“arch”: “x86_64”,
“back_addr”: “10.0.9.12:6801/1138”,
“back_iface”: “eth1”,
“bluefs”: “1”,
“bluefs_single_shared_device”: “1”,
“bluestore_bdev_access_mode”: “blk”,
“bluestore_bdev_block_size”: “4096”,
“bluestore_bdev_dev”: “253:2”,
“bluestore_bdev_dev_node”: “dm-2”,
“bluestore_bdev_driver”: “KernelDevice”,
“bluestore_bdev_model”: “”,
“bluestore_bdev_partition_path”: “/dev/dm-2”,
“bluestore_bdev_rotational”: “1”,
“bluestore_bdev_size”: “2143289344”,
“bluestore_bdev_type”: “hdd”,
“ceph_release”: “mimic”,
“ceph_version”: “ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)”,
“ceph_version_short”: “13.2.10”,
“cpu”: “AMD EPYC-Rome Processor”,
“default_device_class”: “hdd”,
“devices”: “dm-2,sda”,
“distro”: “centos”,
“distro_description”: “CentOS Linux 7 (Core)”,
“distro_version”: “7”,
“front_addr”: “10.0.9.12:6800/1138”,
“front_iface”: “eth1”,
“hb_back_addr”: “10.0.9.12:6802/1138”,
“hb_front_addr”: “10.0.9.12:6803/1138”,
“hostname”: “ceph-osd2”,
“journal_rotational”: “1”,
“kernel_description”: “#1 SMP Thu Apr 8 19:51:47 UTC 2021”,
“kernel_version”: “3.10.0-1160.24.1.el7.x86_64”,
“mem_swap_kb”: “1048572”,
“mem_total_kb”: “1530760”,
“os”: “Linux”,
“osd_data”: “/var/lib/ceph/osd/ceph-2”,
“osd_objectstore”: “bluestore”,
“rotational”: “1”
}
[root@ceph-mon ~]#

 

or you can use:

 

[root@ceph-mon ~]# ceph osd metadata osd.0 | grep osd_objectstore
“osd_objectstore”: “bluestore”,
[root@ceph-mon ~]#

 

 

Which Version of Ceph Is Your Cluster Running?

 

[root@ceph-mon ~]# ceph -v
ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
[root@ceph-mon ~]#

 

 

How To List Your Cluster Pools

 

To list your cluster pools, execute:

 

ceph osd lspools

 

[root@ceph-mon ~]# ceph osd lspools
1 datapool
[root@ceph-mon ~]#

 

 

Placement Groups PG Information

 

To display the number of placement groups in a pool:

 

ceph osd pool get {pool-name} pg_num

 

 

To display statistics for the placement groups in the cluster:

 

ceph pg dump [--format {format}]

 

To display pool statistics:

 

[root@ceph-mon ~]# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
datapool 10 B 1 0 2 0 0 0 2 2 KiB 2 2 KiB

 

total_objects 1
total_used 3.0 GiB
total_avail 3.0 GiB
total_space 6.0 GiB
[root@ceph-mon ~]#

 

 

How To Repair a Placement Group PG

 

Ascertain with ceph -s which PG has a problem

 

To identify stuck placement groups:

 

ceph pg dump_stuck [unclean|inactive|stale|undersized|degraded]

 

Then do:

 

ceph pg repair <PG ID>

For more info on troubleshooting PGs see https://documentation.suse.com/ses/7/html/ses-all/bp-troubleshooting-pgs.html
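To see which objects inside a problematic placement group are actually inconsistent before repairing it (an extra step, not shown in the original notes), the rados tool can be queried directly:

rados list-inconsistent-obj <PG ID> --format=json-pretty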

 

 

How To Activate Ceph Dashboard

 

The Ceph Dashboard runs without an Apache or other webserver being active; the functionality is provided by the Ceph manager (ceph-mgr) daemon itself.

 

All HTTP connections to the Ceph dashboard use SSL/TLS by default.

 

For testing lab purposes you can simply generate and install a self-signed certificate as follows:

 

ceph dashboard create-self-signed-cert

 

However, in production environments this is unsuitable, since web browsers will object to self-signed certificates and require explicit confirmation before opening a connection to the Ceph dashboard, unless the certificate is signed by a trusted certificate authority (CA).

 

You can use your own certificate authority to ensure the certificate warning does not appear.

 

For example by doing:

 

$ openssl req -new -nodes -x509 -subj "/O=IT/CN=ceph-mgr-dashboard" -days 3650 -keyout dashboard.key -out dashboard.crt -extensions v3_ca

 

The generated dashboard.crt file then needs to be signed by a CA. Once signed, it can then be enabled for all Ceph manager instances as follows:

 

ceph config-key set mgr mgr/dashboard/crt -i dashboard.crt

 

After changing the SSL certificate and key you must restart the Ceph manager processes manually. Either by:

 

ceph mgr fail mgr

 

or by disabling and re-enabling the dashboard module:

 

ceph mgr module disable dashboard
ceph mgr module enable dashboard

 

By default, the ceph-mgr daemon that runs the dashboard (i.e., the currently active manager) binds to TCP port 8443 (or 8080 if SSL is disabled).

 

You can change these ports by doing:

ceph config set mgr mgr/dashboard/server_addr $IP
ceph config set mgr mgr/dashboard/server_port $PORT

 

For the purposes of this lab I did:

 

[root@ceph-mon ~]# ceph mgr module enable dashboard
[root@ceph-mon ~]# ceph dashboard create-self-signed-cert
Self-signed certificate created
[root@ceph-mon ~]#

 

Dashboard enabling can be automated by adding the following to ceph.conf:

 

[mon]
mgr initial modules = dashboard

 

 

 

[root@ceph-mon ~]# ceph mgr module ls | grep -A 5 enabled_modules
“enabled_modules”: [
“balancer”,
“crash”,
“dashboard”,
“iostat”,
“restful”,
[root@ceph-mon ~]#

 

Check that SSL is installed correctly. You should see the keys displayed in the output from these commands:

 

 

ceph config-key get mgr/dashboard/key
ceph config-key get mgr/dashboard/crt

 

The following command does not work on Centos7 with the Ceph Mimic version, as the full functionality was not implemented by the Ceph project for this version:

 

 

ceph dashboard ac-user-create admin password administrator

 

 

Use this command instead:

 

 

[root@ceph-mon etc]# ceph dashboard set-login-credentials cephuser <password not shown here>
Username and password updated
[root@ceph-mon etc]#

 

Also make sure you have the respective firewall ports open for the dashboard, i.e. 8443 for SSL/TLS https (or 8080 for http; the latter however is not advisable due to the insecure unencrypted connection and the risk of password interception).

 

 

Logging in to the Ceph Dashboard

 

To log in, open the dashboard URL (shown by the command below) in a browser.

 

 

To display the current URL and port for the Ceph dashboard, do:

 

[root@ceph-mon ~]# ceph mgr services
{
“dashboard”: “https://ceph-mon:8443/”
}
[root@ceph-mon ~]#

 

and enter the user name and password you set as above.

 

 

Pools and Placement Groups In More Detail

 

Remember that pools are not PGs. PGs go inside pools.

 

To create a pool:

 

 

ceph osd pool create <pool name> <PG_NUM> <PGP_NUM>

 

PG_NUM
This holds the number of placement groups for the pool.

 

PGP_NUM
This is the effective number of placement groups to be used to calculate data placement. It must be equal to or less than PG_NUM.

 

Pools by default are replicated.

 

There are two kinds:

 

replicated

 

erasure coding EC

 

For replicated pools you set the number of data copies or replicas that each data object will have. The number of copies that can be lost will be one less than the number of replicas.

 

For EC it is more complicated.

 

you have

 

k : number of data chunks
m : number of coding chunks
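As an illustration, and not a step performed in this lab: an erasure coded pool is created from an erasure code profile that defines k and m. With k=2 and m=1 the pool can tolerate the loss of one OSD (the profile and pool names below are arbitrary examples):

ceph osd erasure-code-profile set ec-21-profile k=2 m=1
ceph osd pool create ecpool 32 32 erasure ec-21-profile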

 

 

Pools have to be associated with an application. Pools to be used with CephFS, or pools automatically created by Object Gateway are automatically associated with cephfs or rgw respectively.

 

For CephFS the associated application name is cephfs,
for RADOS Block Device it is rbd,
and for Object Gateway it is rgw.

 

Otherwise, the format to associate a free-form application name with a pool is:

 

ceph osd pool application enable POOL_NAME APPLICATION_NAME

To see which applications a pool is associated with use:

 

ceph osd pool application get pool_name

 

 

To set pool quotas for the maximum number of bytes and/or the maximum number of objects permitted per pool:

 

ceph osd pool set-quota POOL_NAME MAX_OBJECTS OBJ_COUNT MAX_BYTES BYTES

 

eg

 

ceph osd pool set-quota data max_objects 20000

 

To set the number of object replicas on a replicated pool use:

 

ceph osd pool set poolname size num-replicas

 

Important:
The num-replicas value includes the object itself. So if you want the object and two replica copies of the object, for a total of three instances of the object, you need to specify 3. You should not set this value to anything less than 3! Also bear in mind that setting 4 replicas for a pool increases reliability further, at the cost of storing an additional copy of every object.
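For example, to give the datapool pool created earlier three copies of every object while still accepting writes when only two copies are currently available (min_size is an extra setting mentioned here as an aside, it is not covered in the original notes):

ceph osd pool set datapool size 3
ceph osd pool set datapool min_size 2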

 

To display the number of object replicas, use:

 

ceph osd dump | grep 'replicated size'

 

 

If you want to remove a quota, set this value to 0.

 

To set pool values, use:

 

ceph osd pool set POOL_NAME KEY VALUE

 

To display a pool’s stats use:

 

rados df

 

To list all values related to a specific pool use:

 

ceph osd pool get POOL_NAME all

 

You can also display specific pool values as follows:

 

ceph osd pool get POOL_NAME KEY

 

In particular:

 

PG_NUM
This holds the number of placement groups for the pool.

 

PGP_NUM
This is the effective number of placement groups to be used to calculate data placement. It must be equal to or less than PG_NUM.

 

Pool Created:

 

[root@ceph-mon ~]# ceph osd pool create datapool 128 128 replicated
pool ‘datapool’ created
[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
osd: 4 osds: 3 up, 3 in

data:
pools: 1 pools, 128 pgs
objects: 0 objects, 0 B
usage: 3.2 GiB used, 2.8 GiB / 6.0 GiB avail
pgs: 34.375% pgs unknown
84 active+clean
44 unknown

[root@ceph-mon ~]#

 

To remove a Placement Pool

 

two ways, ie two different commands can be used:

 

[root@ceph-mon ~]# rados rmpool datapool --yes-i-really-really-mean-it
WARNING:
This will PERMANENTLY DESTROY an entire pool of objects with no way back.
To confirm, pass the pool to remove twice, followed by
--yes-i-really-really-mean-it

 

[root@ceph-mon ~]# ceph osd pool delete datapool --yes-i-really-really-mean-it
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool datapool. If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.

[root@ceph-mon ~]# ceph osd pool delete datapool datapool --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
[root@ceph-mon ~]#

 

 

You have to set the mon_allow_pool_delete option first to true

 

first get the value of

 

ceph osd pool get pool_name nodelete

 

[root@ceph-mon ~]# ceph osd pool get datapool nodelete
nodelete: false
[root@ceph-mon ~]#

 

Because inadvertent pool deletion is a real danger, Ceph implements two mechanisms that prevent pools from being deleted. Both mechanisms must be disabled before a pool can be deleted.

 

The first mechanism is the NODELETE flag. Each pool has this flag, and its default value is ‘false’. To find out the value of this flag on a pool, run the following command:

 

ceph osd pool get pool_name nodelete

If it outputs nodelete: true, it is not possible to delete the pool until you change the flag using the following command:

 

ceph osd pool set pool_name nodelete false

 

 

The second mechanism is the cluster-wide configuration parameter mon allow pool delete, which defaults to ‘false’. This means that, by default, it is not possible to delete a pool. The error message displayed is:

 

Error EPERM: pool deletion is disabled; you must first set the
mon_allow_pool_delete config option to true before you can destroy a pool

 

To delete the pool despite this safety setting, you can temporarily set value of mon allow pool delete to ‘true’, then delete the pool. Then afterwards reset the value back to ‘false’:

 

ceph tell mon.* injectargs --mon-allow-pool-delete=true
ceph osd pool delete pool_name pool_name --yes-i-really-really-mean-it
ceph tell mon.* injectargs --mon-allow-pool-delete=false

 

 

[root@ceph-mon ~]# ceph tell mon.* injectargs --mon-allow-pool-delete=true
injectargs:
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# ceph osd pool delete datapool --yes-i-really-really-mean-it
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool datapool. If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
[root@ceph-mon ~]# ceph osd pool delete datapool datapool --yes-i-really-really-mean-it
pool 'datapool' removed
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph tell mon.* injectargs --mon-allow-pool-delete=false
injectargs:mon_allow_pool_delete = 'false'
[root@ceph-mon ~]#

 

NOTE: The injectargs command displays the following to confirm the command was carried out OK; this is NOT an error:

 

injectargs:mon_allow_pool_delete = 'true' (not observed, change may require restart)

 

 

 


LPIC3 DIPLOMA Linux Clustering – LAB NOTES: Lesson Ceph Centos7 – Basic Ceph Installation and Config

LAB on Ceph Clustering on Centos7

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

This lab uses the ceph-deploy tool to set up the ceph cluster.  However, note that ceph-deploy is now an outdated Ceph tool and is no longer being maintained by the Ceph project. It is also not available for Centos8. The notes below relate to Centos7.

 

For OS versions of Centos higher than 7 the Ceph project advise you to use the cephadm tool for installing ceph on cluster nodes. 

 

At the time of writing (2021) knowledge of ceph-deploy is a stipulated syllabus requirement of the LPIC3-306 Clustering Diploma Exam, hence this Centos7 Ceph lab refers to ceph-deploy.

 

 

As Ceph is a large and complex subject, these notes have been split into several different pages.

 

 

Overview of Cluster Environment 

 

 

The cluster comprises three nodes installed with Centos7 and housed on a KVM virtual machine system on a Linux Ubuntu host. We are installing with Centos7 rather than a more recent version because later versions are not compatible with the ceph-deploy tool.

 

I first created a base installation virtual machine called ceph-base. From this I then cloned the machines needed to build the cluster. ceph-base does NOT form part of the cluster.

 

 

ceph-mon (10.0.9.40 / 192.168.122.40) is the admin node, the ceph-deploy node and the MON monitor node. We use the ceph-base vm to clone the other machines.

 

 

# ceph cluster 10.0.9.0 centos version 7

 

10.0.9.9 ceph-base
192.168.122.8 ceph-basevm # centos7

 

 

10.0.9.0 is the ceph cluster private network. We run 4 machines as follows:

10.0.9.40 ceph-mon
10.0.9.10 ceph-osd0
10.0.9.11 ceph-osd1
10.0.9.12 ceph-osd2

 

192.168.122.0 is the KVM network. Each machine also has an interface to this network.

192.168.122.40 ceph-monvm
192.168.122.50 ceph-osd0vm
192.168.122.51 ceph-osd1vm
192.168.122.52 ceph-osd2vm

 

Preparation of Ceph Cluster Machines

 

ceph-base serves as a template virtual machine for cloning the actual ceph cluster nodes. It does not form part of the cluster.

 

on ceph-base:

 

installed centos7
configured 2 ethernet interfaces for the nat networks: 10.0.9.0 and 192.168.122.0
added default route
added nameserver

added ssh keys for passwordless login for root from laptop asus

updated software packages: yum update

copied hosts file from asus to the virtual machines via scp

[root@ceph-base ~]# useradd -d /home/cephuser -m cephuser

 

created a sudoers file for the user and edited the /etc/sudoers file with sed.

[root@ceph-base ~]# chmod 0440 /etc/sudoers.d/cephuser
[root@ceph-base ~]# sed -i s'/Defaults requiretty/#Defaults requiretty'/g /etc/sudoers
[root@ceph-base ~]# echo "cephuser ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephuser
cephuser ALL = (root) NOPASSWD:ALL
[root@ceph-base ~]#

 

 

[root@ceph-base ~]# yum install -y ntp ntpdate ntp-doc
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: ftp.hosteurope.de
* extras: ftp.hosteurope.de
* updates: mirror.23media.com
Package ntpdate-4.2.6p5-29.el7.centos.2.x86_64 already installed and latest version
Resolving Dependencies
–> Running transaction check
—> Package ntp.x86_64 0:4.2.6p5-29.el7.centos.2 will be installed
—> Package ntp-doc.noarch 0:4.2.6p5-29.el7.centos.2 will be installed
–> Finished Dependency Resolution

 

Dependencies Resolved

==============================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================
Installing:
ntp x86_64 4.2.6p5-29.el7.centos.2 base 549 k
ntp-doc noarch 4.2.6p5-29.el7.centos.2 base 1.0 M

Transaction Summary
==============================================================================================================================================================
Install 2 Packages

Total download size: 1.6 M
Installed size: 3.0 M
Downloading packages:
(1/2): ntp-doc-4.2.6p5-29.el7.centos.2.noarch.rpm | 1.0 MB 00:00:00
(2/2): ntp-4.2.6p5-29.el7.centos.2.x86_64.rpm | 549 kB 00:00:00
————————————————————————————————————————————————————–
Total 2.4 MB/s | 1.6 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : ntp-4.2.6p5-29.el7.centos.2.x86_64 1/2
Installing : ntp-doc-4.2.6p5-29.el7.centos.2.noarch 2/2
Verifying : ntp-doc-4.2.6p5-29.el7.centos.2.noarch 1/2
Verifying : ntp-4.2.6p5-29.el7.centos.2.x86_64 2/2

 

Installed:
ntp.x86_64 0:4.2.6p5-29.el7.centos.2 ntp-doc.noarch 0:4.2.6p5-29.el7.centos.2

Complete!

 

Next, do:

[root@ceph-base ~]# ntpdate 0.us.pool.ntp.org
26 Apr 15:30:17 ntpdate[23660]: step time server 108.61.73.243 offset 0.554294 sec

[root@ceph-base ~]# hwclock --systohc

[root@ceph-base ~]# systemctl enable ntpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.

[root@ceph-base ~]# systemctl start ntpd.service
[root@ceph-base ~]#

 

Disable SELinux Security

 

 

Disabled SELinux on all nodes by editing the SELinux configuration file with the sed stream editor. This was carried out on the ceph-base virtual machine from which we will be cloning the ceph cluster nodes, so this only needs to be done once.

 

[root@ceph-base ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@ceph-base ~]#

 

 

Generate the ssh keys for 'cephuser'.

 

[root@ceph-base ~]# su - cephuser

 

[cephuser@ceph-base ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephuser/.ssh/id_rsa):
Created directory ‘/home/cephuser/.ssh’.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cephuser/.ssh/id_rsa.
Your public key has been saved in /home/cephuser/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:PunfPQf+aF2rr3lzI0WzXJZXO5AIjX0W+aC4h+ss0E8 cephuser@ceph-base.localdomain
The key’s randomart image is:
+—[RSA 2048]—-+
| .= ..+ |
| . + B .|
| . + + +|
| . . B+|
| . S o o.*|
| . o E . .+.|
| . * o ..oo|
| o.+ . o=*+|
| ++. .=O==|
+—-[SHA256]—–+
[cephuser@ceph-base ~]$

 

 

Configure or Disable Firewalling

 

On a production cluster the firewall would remain active and the ceph ports would be opened. 

 

Monitors listen on tcp:6789 by default, so for ceph-mon you would need:

 

firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --reload

 

OSDs listen on a range of ports, tcp:6800-7300 by default, so you would need to run on ceph-osd{0,1,2}:

 

firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload

 

However as this is a test lab we can stop and disable the firewall. 

 

[root@ceph-base ~]# systemctl stop firewalld

 

[root@ceph-base ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@ceph-base ~]#

 

 

 

Ceph Package Installation

 

 

Install the centos-release-ceph rpm from the CentOS extras repository:

 

yum -y install --enablerepo=extras centos-release-ceph

 

[root@ceph-base ~]# yum -y install --enablerepo=extras centos-release-ceph
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: ftp.rz.uni-frankfurt.de
* extras: mirror.cuegee.com
* updates: mirror.23media.com
Resolving Dependencies
–> Running transaction check
—> Package centos-release-ceph-nautilus.noarch 0:1.2-2.el7.centos will be installed
–> Processing Dependency: centos-release-storage-common for package: centos-release-ceph-nautilus-1.2-2.el7.centos.noarch
–> Processing Dependency: centos-release-nfs-ganesha28 for package: centos-release-ceph-nautilus-1.2-2.el7.centos.noarch
–> Running transaction check
—> Package centos-release-nfs-ganesha28.noarch 0:1.0-3.el7.centos will be installed
—> Package centos-release-storage-common.noarch 0:2-2.el7.centos will be installed
–> Finished Dependency Resolution

 

Dependencies Resolved

==============================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================
Installing:
centos-release-ceph-nautilus noarch 1.2-2.el7.centos extras 5.1 k
Installing for dependencies:
centos-release-nfs-ganesha28 noarch 1.0-3.el7.centos extras 4.3 k
centos-release-storage-common noarch 2-2.el7.centos extras 5.1 k

Transaction Summary
==============================================================================================================================================================
Install 1 Package (+2 Dependent packages)

Total download size: 15 k
Installed size: 3.0 k
Downloading packages:
(1/3): centos-release-storage-common-2-2.el7.centos.noarch.rpm | 5.1 kB 00:00:00
(2/3): centos-release-ceph-nautilus-1.2-2.el7.centos.noarch.rpm | 5.1 kB 00:00:00
(3/3): centos-release-nfs-ganesha28-1.0-3.el7.centos.noarch.rpm | 4.3 kB 00:00:00
————————————————————————————————————————————————————–
Total 52 kB/s | 15 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : centos-release-storage-common-2-2.el7.centos.noarch 1/3
Installing : centos-release-nfs-ganesha28-1.0-3.el7.centos.noarch 2/3
Installing : centos-release-ceph-nautilus-1.2-2.el7.centos.noarch 3/3
Verifying : centos-release-ceph-nautilus-1.2-2.el7.centos.noarch 1/3
Verifying : centos-release-nfs-ganesha28-1.0-3.el7.centos.noarch 2/3
Verifying : centos-release-storage-common-2-2.el7.centos.noarch 3/3

Installed:
centos-release-ceph-nautilus.noarch 0:1.2-2.el7.centos

Dependency Installed:
centos-release-nfs-ganesha28.noarch 0:1.0-3.el7.centos centos-release-storage-common.noarch 0:2-2.el7.centos

 

Complete!
[root@ceph-base ~]#

 

 

To install ceph-deploy on CentOS 7 I had to add the following repo definition to /etc/yum.repos.d/CentOS-Ceph-Nautilus.repo:

 

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-nautilus/el7//noarch
enabled=1
priority=2
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

 

 

then do a yum update:

 

[root@ceph-base yum.repos.d]# yum update
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: ftp.rz.uni-frankfurt.de
* centos-ceph-nautilus: mirror2.hs-esslingen.de
* centos-nfs-ganesha28: ftp.rz.uni-frankfurt.de
* extras: ftp.halifax.rwth-aachen.de
* updates: mirror1.hs-esslingen.de
centos-ceph-nautilus | 3.0 kB 00:00:00
ceph-noarch | 1.5 kB 00:00:00
ceph-noarch/primary | 16 kB 00:00:00
ceph-noarch 170/170
Resolving Dependencies
–> Running transaction check
—> Package python-cffi.x86_64 0:1.6.0-5.el7 will be obsoleted
—> Package python-idna.noarch 0:2.4-1.el7 will be obsoleted
—> Package python-ipaddress.noarch 0:1.0.16-2.el7 will be obsoleted
—> Package python-six.noarch 0:1.9.0-2.el7 will be obsoleted
—> Package python2-cffi.x86_64 0:1.11.2-1.el7 will be obsoleting
—> Package python2-cryptography.x86_64 0:1.7.2-2.el7 will be updated
—> Package python2-cryptography.x86_64 0:2.5-1.el7 will be an update
–> Processing Dependency: python2-asn1crypto >= 0.21 for package: python2-cryptography-2.5-1.el7.x86_64
—> Package python2-idna.noarch 0:2.5-1.el7 will be obsoleting
—> Package python2-ipaddress.noarch 0:1.0.18-5.el7 will be obsoleting
—> Package python2-six.noarch 0:1.12.0-1.el7 will be obsoleting
—> Package smartmontools.x86_64 1:7.0-2.el7 will be updated
—> Package smartmontools.x86_64 1:7.0-3.el7 will be an update
–> Running transaction check
—> Package python2-asn1crypto.noarch 0:0.23.0-2.el7 will be installed
–> Finished Dependency Resolution

Dependencies Resolved

==============================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================
Installing:
python2-cffi x86_64 1.11.2-1.el7 centos-ceph-nautilus 229 k
replacing python-cffi.x86_64 1.6.0-5.el7
python2-idna noarch 2.5-1.el7 centos-ceph-nautilus 94 k
replacing python-idna.noarch 2.4-1.el7
python2-ipaddress noarch 1.0.18-5.el7 centos-ceph-nautilus 35 k
replacing python-ipaddress.noarch 1.0.16-2.el7
python2-six noarch 1.12.0-1.el7 centos-ceph-nautilus 33 k
replacing python-six.noarch 1.9.0-2.el7
Updating:
python2-cryptography x86_64 2.5-1.el7 centos-ceph-nautilus 544 k
smartmontools x86_64 1:7.0-3.el7 centos-ceph-nautilus 547 k
Installing for dependencies:
python2-asn1crypto noarch 0.23.0-2.el7 centos-ceph-nautilus 172 k

Transaction Summary
==============================================================================================================================================================
Install 4 Packages (+1 Dependent package)
Upgrade 2 Packages

Total download size: 1.6 M
Is this ok [y/d/N]: y
Downloading packages:
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
warning: /var/cache/yum/x86_64/7/centos-ceph-nautilus/packages/python2-asn1crypto-0.23.0-2.el7.noarch.rpm: Header V4 RSA/SHA1 Signature, key ID e451e5b5: NOKEY
Public key for python2-asn1crypto-0.23.0-2.el7.noarch.rpm is not installed
(1/7): python2-asn1crypto-0.23.0-2.el7.noarch.rpm | 172 kB 00:00:00
(2/7): python2-cffi-1.11.2-1.el7.x86_64.rpm | 229 kB 00:00:00
(3/7): python2-cryptography-2.5-1.el7.x86_64.rpm | 544 kB 00:00:00
(4/7): python2-ipaddress-1.0.18-5.el7.noarch.rpm | 35 kB 00:00:00
(5/7): python2-six-1.12.0-1.el7.noarch.rpm | 33 kB 00:00:00
(6/7): smartmontools-7.0-3.el7.x86_64.rpm | 547 kB 00:00:00
(7/7): python2-idna-2.5-1.el7.noarch.rpm | 94 kB 00:00:00
————————————————————————————————————————————————————–
Total 1.9 MB/s | 1.6 MB 00:00:00
Retrieving key from file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
Importing GPG key 0xE451E5B5:
Userid : “CentOS Storage SIG (http://wiki.centos.org/SpecialInterestGroup/Storage) <security@centos.org>”
Fingerprint: 7412 9c0b 173b 071a 3775 951a d4a2 e50b e451 e5b5
Package : centos-release-storage-common-2-2.el7.centos.noarch (@extras)
From : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-SIG-Storage
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : python2-cffi-1.11.2-1.el7.x86_64 1/13
Installing : python2-idna-2.5-1.el7.noarch 2/13
Installing : python2-six-1.12.0-1.el7.noarch 3/13
Installing : python2-asn1crypto-0.23.0-2.el7.noarch 4/13
Installing : python2-ipaddress-1.0.18-5.el7.noarch 5/13
Updating : python2-cryptography-2.5-1.el7.x86_64 6/13
Updating : 1:smartmontools-7.0-3.el7.x86_64 7/13
Cleanup : python2-cryptography-1.7.2-2.el7.x86_64 8/13
Erasing : python-idna-2.4-1.el7.noarch 9/13
Erasing : python-ipaddress-1.0.16-2.el7.noarch 10/13
Erasing : python-six-1.9.0-2.el7.noarch 11/13
Erasing : python-cffi-1.6.0-5.el7.x86_64 12/13
Cleanup : 1:smartmontools-7.0-2.el7.x86_64 13/13
Verifying : python2-ipaddress-1.0.18-5.el7.noarch 1/13
Verifying : python2-asn1crypto-0.23.0-2.el7.noarch 2/13
Verifying : python2-six-1.12.0-1.el7.noarch 3/13
Verifying : python2-cryptography-2.5-1.el7.x86_64 4/13
Verifying : python2-idna-2.5-1.el7.noarch 5/13
Verifying : 1:smartmontools-7.0-3.el7.x86_64 6/13
Verifying : python2-cffi-1.11.2-1.el7.x86_64 7/13
Verifying : python-idna-2.4-1.el7.noarch 8/13
Verifying : python-ipaddress-1.0.16-2.el7.noarch 9/13
Verifying : 1:smartmontools-7.0-2.el7.x86_64 10/13
Verifying : python-cffi-1.6.0-5.el7.x86_64 11/13
Verifying : python-six-1.9.0-2.el7.noarch 12/13
Verifying : python2-cryptography-1.7.2-2.el7.x86_64 13/13

Installed:
python2-cffi.x86_64 0:1.11.2-1.el7 python2-idna.noarch 0:2.5-1.el7 python2-ipaddress.noarch 0:1.0.18-5.el7 python2-six.noarch 0:1.12.0-1.el7

Dependency Installed:
python2-asn1crypto.noarch 0:0.23.0-2.el7

Updated:
python2-cryptography.x86_64 0:2.5-1.el7 smartmontools.x86_64 1:7.0-3.el7

Replaced:
python-cffi.x86_64 0:1.6.0-5.el7 python-idna.noarch 0:2.4-1.el7 python-ipaddress.noarch 0:1.0.16-2.el7 python-six.noarch 0:1.9.0-2.el7

Complete!

 

[root@ceph-base yum.repos.d]# ceph-deploy
-bash: ceph-deploy: command not found

 

so then do:

 

[root@ceph-base yum.repos.d]# yum -y install ceph-deploy
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: ftp.rz.uni-frankfurt.de
* centos-ceph-nautilus: de.mirrors.clouvider.net
* centos-nfs-ganesha28: ftp.rz.uni-frankfurt.de
* extras: ftp.fau.de
* updates: mirror1.hs-esslingen.de
Resolving Dependencies
–> Running transaction check
—> Package ceph-deploy.noarch 0:2.0.1-0 will be installed
–> Finished Dependency Resolution

Dependencies Resolved

==============================================================================================================================================================
Package Arch Version Repository Size
==============================================================================================================================================================
Installing:
ceph-deploy noarch 2.0.1-0 ceph-noarch 286 k

Transaction Summary
==============================================================================================================================================================
Install 1 Package

Total download size: 286 k
Installed size: 1.2 M
Downloading packages:
warning: /var/cache/yum/x86_64/7/ceph-noarch/packages/ceph-deploy-2.0.1-0.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID 460f3994: NOKEY kB –:–:– ETA
Public key for ceph-deploy-2.0.1-0.noarch.rpm is not installed
ceph-deploy-2.0.1-0.noarch.rpm | 286 kB 00:00:01
Retrieving key from https://download.ceph.com/keys/release.asc
Importing GPG key 0x460F3994:
Userid : “Ceph.com (release key) <security@ceph.com>”
Fingerprint: 08b7 3419 ac32 b4e9 66c1 a330 e84a c2c0 460f 3994
From : https://download.ceph.com/keys/release.asc
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : ceph-deploy-2.0.1-0.noarch 1/1
Verifying : ceph-deploy-2.0.1-0.noarch 1/1

Installed:
ceph-deploy.noarch 0:2.0.1-0

Complete!
[root@ceph-base yum.repos.d]#

 

 

With that, ceph-deploy is now installed:

 

[root@ceph-base ~]# ceph-deploy
usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
[--overwrite-conf] [--ceph-conf CEPH_CONF]
COMMAND ...

 

The next step is to clone ceph-base into the virtual machines which will be used as the ceph cluster nodes. After that we can create the cluster using ceph-deploy. The machines are created using KVM (a cloning sketch follows the list of machines below).

 

We create the following machines:

ceph-mon

ceph-osd0

ceph-osd1

ceph-osd2
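
As a rough sketch of the cloning step (assuming the base guest is defined in libvirt as ceph-base, is shut down, and virt-clone is available on the KVM host):

for guest in ceph-mon ceph-osd0 ceph-osd1 ceph-osd2 ; do
virt-clone --original ceph-base --name $guest --auto-clone
done

Each clone then needs its own hostname and IP address set before it joins the cluster.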

 

 

After this, create an ssh key on ceph-mon and then copy it to the OSD nodes as follows:

 

[root@ceph-mon ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:9VOirKNfbuRHHA88mIOl9Q7fWf0wxvGd8eYqQwp4u0k root@ceph-mon
The key’s randomart image is:
+—[RSA 2048]—-+
| |
| o .. |
| =.=…o*|
| oo=o*o=.B|
| .S o*oB B.|
| . o.. *.+ o|
| .E=.+ . |
| o.=+ + . |
| ..+o.. o |
+—-[SHA256]—–+
[root@ceph-mon ~]#
[root@ceph-mon ~]# ssh-copy-id root@ceph-osd1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: “/root/.ssh/id_rsa.pub”
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed — if you are prompted now it is to install the new keys
root@ceph-osd1’s password:

Number of key(s) added: 1

 

Now try logging into the machine, with: “ssh ‘root@ceph-osd1′”
and check to make sure that only the key(s) you wanted were added.

[root@ceph-mon ~]#
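
The same ssh-copy-id step is repeated for each OSD node; a minimal sketch:

for node in ceph-osd0 ceph-osd1 ceph-osd2 ; do
ssh-copy-id root@$node
done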

 

Install Ceph Monitor

 

We install the monitor on the machine we have designated for this purpose, i.e. ceph-mon:

 

In a production Ceph cluster you would normally run at least three monitor nodes (an odd number) so that the monitors can maintain quorum and the cluster can tolerate a monitor failure.
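
If further monitor hosts existed (say hypothetical nodes ceph-mon2 and ceph-mon3, prepared in the same way as ceph-mon), they could be added to the quorum once the initial monitor is up; a sketch:

ceph-deploy mon add ceph-mon2
ceph-deploy mon add ceph-mon3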

 

 

[root@ceph-mon ~]# ceph-deploy install --mon ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy install –mon ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None

 

… long list of package installations….

[ceph-mon][DEBUG ] python2-webob.noarch 0:1.8.5-1.el7
[ceph-mon][DEBUG ] rdma-core.x86_64 0:22.4-5.el7
[ceph-mon][DEBUG ] userspace-rcu.x86_64 0:0.10.0-3.el7
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Complete!
[ceph-mon][INFO ] Running command: ceph –version
[ceph-mon][DEBUG ] ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
[root@ceph-mon ~]#

 

 

 

Install Ceph Manager

 

This will be installed on node ceph-mon:

 

[root@ceph-mon ~]# ceph-deploy mgr create ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mgr create ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] mgr : [(‘ceph-mon’, ‘ceph-mon’)]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f07237fda28>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mgr at 0x7f0724066398>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mgr][DEBUG ] Deploying mgr, cluster ceph hosts ceph-mon:ceph-mon
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph_deploy.mgr][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mgr][DEBUG ] remote host will use systemd
[ceph_deploy.mgr][DEBUG ] deploying mgr bootstrap to ceph-mon
[ceph-mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon][WARNIN] mgr keyring does not exist yet, creating one
[ceph-mon][DEBUG ] create a keyring file
[ceph-mon][DEBUG ] create path recursively if it doesn’t exist
[ceph-mon][INFO ] Running command: ceph –cluster ceph –name client.bootstrap-mgr –keyring /var/lib/ceph/bootstrap-mgr/ceph.keyring auth get-or-create mgr.ceph-mon mon allow profile mgr osd allow * mds allow * -o /var/lib/ceph/mgr/ceph-ceph-mon/keyring
[ceph-mon][INFO ] Running command: systemctl enable ceph-mgr@ceph-mon
[ceph-mon][WARNIN] Created symlink from /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@ceph-mon.service to /usr/lib/systemd/system/ceph-mgr@.service.
[ceph-mon][INFO ] Running command: systemctl start ceph-mgr@ceph-mon
[ceph-mon][INFO ] Running command: systemctl enable ceph.target
[root@ceph-mon ~]#

 

 

on ceph-mon, create the cluster configuration file:

 

ceph-deploy new ceph-mon

 

[root@ceph-mon ~]# ceph-deploy new ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy new ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7f5d34d4a0c8>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5d344cb830>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : [‘ceph-mon’]
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph-mon][DEBUG ] find the location of an executable
[ceph-mon][INFO ] Running command: /usr/sbin/ip link show
[ceph-mon][INFO ] Running command: /usr/sbin/ip addr show
[ceph-mon][DEBUG ] IP addresses found: [u’192.168.122.40′, u’10.0.9.40′]
[ceph_deploy.new][DEBUG ] Resolving host ceph-mon
[ceph_deploy.new][DEBUG ] Monitor ceph-mon at 10.0.9.40
[ceph_deploy.new][DEBUG ] Monitor initial members are [‘ceph-mon’]
[ceph_deploy.new][DEBUG ] Monitor addrs are [‘10.0.9.40’]
[ceph_deploy.new][DEBUG ] Creating a random mon key…
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring…
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf…
[root@ceph-mon ~]#

 

 

Append some configuration directives to ceph.conf: a 1 GB (1000 MB) journal, 2 replicas per object (both the default and the minimum number), and so on.

 

$ cat << EOF >> ceph.conf
osd_journal_size = 1000
osd_pool_default_size = 2
osd_pool_default_min_size = 2
osd_crush_chooseleaf_type = 1
osd_crush_update_on_start = true
max_open_files = 131072
osd pool default pg num = 128
osd pool default pgp num = 128
mon_pg_warn_max_per_osd = 0
EOF

 

 

[root@ceph-mon ~]# cat ceph.conf
[global]
fsid = 2e490f0d-41dc-4be2-b31f-c77627348d60
mon_initial_members = ceph-mon
mon_host = 10.0.9.40
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

osd_journal_size = 1000
osd_pool_default_size = 2
osd_pool_default_min_size = 2
osd_crush_chooseleaf_type = 1
osd_crush_update_on_start = true
max_open_files = 131072
osd pool default pg num = 128
osd pool default pgp num = 128
mon_pg_warn_max_per_osd = 0
[root@ceph-mon ~]#
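
If ceph.conf is modified again later, the updated file has to be redistributed to the cluster nodes; a sketch using the ceph-deploy config push subcommand with the node names from this lab:

ceph-deploy --overwrite-conf config push ceph-mon ceph-osd0 ceph-osd1 ceph-osd2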

 

 

next, create the ceph monitor on machine ceph-mon:

 

 

ceph-deploy mon create-initial

 

this does quite a lot, see below:

 

[root@ceph-mon ~]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fd4742b6fc8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7fd474290668>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon …
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph-mon][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.9.2009 Core
[ceph-mon][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon][DEBUG ] get remote short hostname
[ceph-mon][DEBUG ] deploying mon to ceph-mon
[ceph-mon][DEBUG ] get remote short hostname
[ceph-mon][DEBUG ] remote hostname: ceph-mon
[ceph-mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon][DEBUG ] create the mon path if it does not exist
[ceph-mon][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon/done
[ceph-mon][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-ceph-mon/done
[ceph-mon][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-ceph-mon.mon.keyring
[ceph-mon][DEBUG ] create the monitor keyring file
[ceph-mon][INFO ] Running command: ceph-mon –cluster ceph –mkfs -i ceph-mon –keyring /var/lib/ceph/tmp/ceph-ceph-mon.mon.keyring –setuser 167 –setgroup 167
[ceph-mon][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-ceph-mon.mon.keyring
[ceph-mon][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-mon][DEBUG ] create the init path if it does not exist
[ceph-mon][INFO ] Running command: systemctl enable ceph.target
[ceph-mon][INFO ] Running command: systemctl enable ceph-mon@ceph-mon
[ceph-mon][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@ceph-mon.service to /usr/lib/systemd/system/ceph-mon@.service.
[ceph-mon][INFO ] Running command: systemctl start ceph-mon@ceph-mon
[ceph-mon][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.ceph-mon.asok mon_status
[ceph-mon][DEBUG ] ********************************************************************************
[ceph-mon][DEBUG ] status for monitor: mon.ceph-mon
… … … …

(edited out long list of DEBUG lines)

 

[ceph-mon][DEBUG ] ********************************************************************************
[ceph-mon][INFO ] monitor: mon.ceph-mon is running
[ceph-mon][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.ceph-mon.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.ceph-mon
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph-mon][DEBUG ] find the location of an executable
[ceph-mon][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.ceph-mon.asok mon_status
[ceph_deploy.mon][INFO ] mon.ceph-mon monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys…
[ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmp6aKZHd
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph-mon][DEBUG ] get remote short hostname
[ceph-mon][DEBUG ] fetch remote file
[ceph-mon][INFO ] Running command: /usr/bin/ceph –connect-timeout=25 –cluster=ceph –admin-daemon=/var/run/ceph/ceph-mon.ceph-mon.asok mon_status
[ceph-mon][INFO ] Running command: /usr/bin/ceph –connect-timeout=25 –cluster=ceph –name mon. –keyring=/var/lib/ceph/mon/ceph-ceph-mon/keyring auth get client.admin
[ceph-mon][INFO ] Running command: /usr/bin/ceph –connect-timeout=25 –cluster=ceph –name mon. –keyring=/var/lib/ceph/mon/ceph-ceph-mon/keyring auth get client.bootstrap-mds
[ceph-mon][INFO ] Running command: /usr/bin/ceph –connect-timeout=25 –cluster=ceph –name mon. –keyring=/var/lib/ceph/mon/ceph-ceph-mon/keyring auth get client.bootstrap-mgr
[ceph-mon][INFO ] Running command: /usr/bin/ceph –connect-timeout=25 –cluster=ceph –name mon. –keyring=/var/lib/ceph/mon/ceph-ceph-mon/keyring auth get client.bootstrap-osd
[ceph-mon][INFO ] Running command: /usr/bin/ceph –connect-timeout=25 –cluster=ceph –name mon. –keyring=/var/lib/ceph/mon/ceph-ceph-mon/keyring auth get client.bootstrap-rgw
[ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring
[ceph_deploy.gatherkeys][INFO ] keyring ‘ceph.mon.keyring’ already exists
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring
[ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmp6aKZHd
[root@ceph-mon ~]#

 

 

Next, also on ceph-mon, install and configure the Ceph command-line interface (CLI):

 

ceph-deploy install --cli ceph-mon

 

again, this does a lot…

 

[root@ceph-mon ~]# ceph-deploy install --cli ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy install –cli ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f10e0ab0320>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x7f10e157a848>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : [‘ceph-mon’]
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : True
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version mimic on cluster ceph hosts ceph-mon
[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-mon …
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph-mon][INFO ] installing Ceph on ceph-mon
[ceph-mon][INFO ] Running command: yum clean all
[ceph-mon][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[ceph-mon][DEBUG ] Cleaning repos: Ceph Ceph-noarch base centos-ceph-nautilus centos-nfs-ganesha28
[ceph-mon][DEBUG ] : ceph-noarch ceph-source epel extras updates
[ceph-mon][DEBUG ] Cleaning up list of fastest mirrors
[ceph-mon][INFO ] Running command: yum -y install epel-release
[ceph-mon][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[ceph-mon][DEBUG ] Determining fastest mirrors
[ceph-mon][DEBUG ] * base: ftp.antilo.de
[ceph-mon][DEBUG ] * centos-ceph-nautilus: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * centos-nfs-ganesha28: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * epel: epel.mirror.nucleus.be
[ceph-mon][DEBUG ] * extras: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * updates: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] 517 packages excluded due to repository priority protections
[ceph-mon][DEBUG ] Resolving Dependencies
[ceph-mon][DEBUG ] –> Running transaction check
[ceph-mon][DEBUG ] —> Package epel-release.noarch 0:7-11 will be updated
[ceph-mon][DEBUG ] —> Package epel-release.noarch 0:7-13 will be an update
[ceph-mon][DEBUG ] –> Finished Dependency Resolution
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Dependencies Resolved
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Package Arch Version Repository Size
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Updating:
[ceph-mon][DEBUG ] epel-release noarch 7-13 epel 15 k
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Transaction Summary
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Upgrade 1 Package
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Total download size: 15 k
[ceph-mon][DEBUG ] Downloading packages:
[ceph-mon][DEBUG ] Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
[ceph-mon][DEBUG ] Running transaction check
[ceph-mon][DEBUG ] Running transaction test
[ceph-mon][DEBUG ] Transaction test succeeded
[ceph-mon][DEBUG ] Running transaction
[ceph-mon][DEBUG ] Updating : epel-release-7-13.noarch 1/2
[ceph-mon][DEBUG ] Cleanup : epel-release-7-11.noarch 2/2
[ceph-mon][DEBUG ] Verifying : epel-release-7-13.noarch 1/2
[ceph-mon][DEBUG ] Verifying : epel-release-7-11.noarch 2/2
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Updated:
[ceph-mon][DEBUG ] epel-release.noarch 0:7-13
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Complete!
[ceph-mon][INFO ] Running command: yum -y install yum-plugin-priorities
[ceph-mon][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[ceph-mon][DEBUG ] Loading mirror speeds from cached hostfile
[ceph-mon][DEBUG ] * base: ftp.antilo.de
[ceph-mon][DEBUG ] * centos-ceph-nautilus: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * centos-nfs-ganesha28: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * epel: epel.mirror.nucleus.be
[ceph-mon][DEBUG ] * extras: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * updates: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] 517 packages excluded due to repository priority protections
[ceph-mon][DEBUG ] Package yum-plugin-priorities-1.1.31-54.el7_8.noarch already installed and latest version
[ceph-mon][DEBUG ] Nothing to do
[ceph-mon][DEBUG ] Configure Yum priorities to include obsoletes
[ceph-mon][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[ceph-mon][INFO ] Running command: rpm –import https://download.ceph.com/keys/release.asc
[ceph-mon][INFO ] Running command: yum remove -y ceph-release
[ceph-mon][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[ceph-mon][DEBUG ] Resolving Dependencies
[ceph-mon][DEBUG ] –> Running transaction check
[ceph-mon][DEBUG ] —> Package ceph-release.noarch 0:1-1.el7 will be erased
[ceph-mon][DEBUG ] –> Finished Dependency Resolution
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Dependencies Resolved
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Package Arch Version Repository Size
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Removing:
[ceph-mon][DEBUG ] ceph-release noarch 1-1.el7 @/ceph-release-1-0.el7.noarch 535
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Transaction Summary
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Remove 1 Package
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Installed size: 535
[ceph-mon][DEBUG ] Downloading packages:
[ceph-mon][DEBUG ] Running transaction check
[ceph-mon][DEBUG ] Running transaction test
[ceph-mon][DEBUG ] Transaction test succeeded
[ceph-mon][DEBUG ] Running transaction
[ceph-mon][DEBUG ] Erasing : ceph-release-1-1.el7.noarch 1/1
[ceph-mon][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[ceph-mon][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Removed:
[ceph-mon][DEBUG ] ceph-release.noarch 0:1-1.el7
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Complete!
[ceph-mon][INFO ] Running command: yum install -y https://download.ceph.com/rpm-mimic/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[ceph-mon][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[ceph-mon][DEBUG ] Examining /var/tmp/yum-root-mTn5ik/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[ceph-mon][DEBUG ] Marking /var/tmp/yum-root-mTn5ik/ceph-release-1-0.el7.noarch.rpm to be installed
[ceph-mon][DEBUG ] Resolving Dependencies
[ceph-mon][DEBUG ] –> Running transaction check
[ceph-mon][DEBUG ] —> Package ceph-release.noarch 0:1-1.el7 will be installed
[ceph-mon][DEBUG ] –> Finished Dependency Resolution
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Dependencies Resolved
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Package Arch Version Repository Size
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Installing:
[ceph-mon][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 535
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Transaction Summary
[ceph-mon][DEBUG ] ================================================================================
[ceph-mon][DEBUG ] Install 1 Package
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Total size: 535
[ceph-mon][DEBUG ] Installed size: 535
[ceph-mon][DEBUG ] Downloading packages:
[ceph-mon][DEBUG ] Running transaction check
[ceph-mon][DEBUG ] Running transaction test
[ceph-mon][DEBUG ] Transaction test succeeded
[ceph-mon][DEBUG ] Running transaction
[ceph-mon][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[ceph-mon][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Installed:
[ceph-mon][DEBUG ] ceph-release.noarch 0:1-1.el7
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Complete!
[ceph-mon][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph-mon][WARNIN] altered ceph.repo priorities to contain: priority=1
[ceph-mon][INFO ] Running command: yum -y install ceph-common
[ceph-mon][DEBUG ] Loaded plugins: fastestmirror, langpacks, priorities
[ceph-mon][DEBUG ] Loading mirror speeds from cached hostfile
[ceph-mon][DEBUG ] * base: ftp.antilo.de
[ceph-mon][DEBUG ] * centos-ceph-nautilus: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * centos-nfs-ganesha28: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * epel: epel.mirror.nucleus.be
[ceph-mon][DEBUG ] * extras: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] * updates: ftp.rz.uni-frankfurt.de
[ceph-mon][DEBUG ] 517 packages excluded due to repository priority protections
[ceph-mon][DEBUG ] Package 2:ceph-common-13.2.10-0.el7.x86_64 already installed and latest version
[ceph-mon][DEBUG ] Nothing to do
[ceph-mon][INFO ] Running command: ceph –version
[ceph-mon][DEBUG ] ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
[root@ceph-mon ~]#

 

 

then do:

 

ceph-deploy admin ceph-mon

 

[root@ceph-mon ~]# ceph-deploy admin ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fcbddacd2d8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : [‘ceph-mon’]
[ceph_deploy.cli][INFO ] func : <function admin at 0x7fcbde5e0488>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-mon
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph-mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph-deploy mon create ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mon create ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7ffafa7fffc8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] mon : [‘ceph-mon’]
[ceph_deploy.cli][INFO ] func : <function mon at 0x7ffafa7d9668>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts ceph-mon
[ceph_deploy.mon][DEBUG ] detecting platform for host ceph-mon …
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph-mon][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.9.2009 Core
[ceph-mon][DEBUG ] determining if provided host has same hostname in remote
[ceph-mon][DEBUG ] get remote short hostname
[ceph-mon][DEBUG ] deploying mon to ceph-mon
[ceph-mon][DEBUG ] get remote short hostname
[ceph-mon][DEBUG ] remote hostname: ceph-mon
[ceph-mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon][DEBUG ] create the mon path if it does not exist
[ceph-mon][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-ceph-mon/done
[ceph-mon][DEBUG ] create a done file to avoid re-doing the mon deployment
[ceph-mon][DEBUG ] create the init path if it does not exist
[ceph-mon][INFO ] Running command: systemctl enable ceph.target
[ceph-mon][INFO ] Running command: systemctl enable ceph-mon@ceph-mon
[ceph-mon][INFO ] Running command: systemctl start ceph-mon@ceph-mon
[ceph-mon][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.ceph-mon.asok mon_status
[ceph-mon][DEBUG ] ********************************************************************************
[ceph-mon][DEBUG ] status for monitor: mon.ceph-mon 
[ceph-mon][DEBUG ] }

…. … (edited out long list of DEBUG line output)

[ceph-mon][DEBUG ] ********************************************************************************
[ceph-mon][INFO ] monitor: mon.ceph-mon is running
[ceph-mon][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.ceph-mon.asok mon_status
[root@ceph-mon ~]#

 

 

Since we are not doing an upgrade, switch CRUSH tunables to optimal:

 

ceph osd crush tunables optimal

 

 

[root@ceph-mon ~]# ceph osd crush tunables optimal
adjusted tunables profile to optimal
[root@ceph-mon ~]#
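
To inspect the resulting CRUSH map, including the tunables profile, you can dump it as JSON or decompile the binary map to text; a brief sketch using the standard tools:

ceph osd crush dump > crushmap.json              # JSON view of the CRUSH map
ceph osd getcrushmap -o crushmap.bin             # export the compiled binary map
crushtool -d crushmap.bin -o crushmap.txt        # decompile it into readable text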

 

Create the OSDs

 

Any new OSDs (e.g., when the cluster is expanded) can be deployed using BlueStore.

 

This is the default behavior so no specific change is needed.

 

first do:

 

ceph-deploy install --osd ceph-osd0 ceph-osd1 ceph-osd2

 

To create an OSD on a remote node, run:

 

ceph-deploy osd create --data /path/to/device HOST

 

NOTE that partitions aren't created by this tool; they must be created beforehand.

 

So we need to first create 2 x 2GB SCSI disks on each OSD machine.

 

These have the designations sda and sdb since our root OS system disk has the drive designation vda.
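
On the KVM host the extra disks can be created and attached to each guest; a rough sketch for ceph-osd0 (the image paths and qcow2 format are assumptions, not taken from the lab):

qemu-img create -f qcow2 /var/lib/libvirt/images/ceph-osd0-osd1.qcow2 2G
qemu-img create -f qcow2 /var/lib/libvirt/images/ceph-osd0-osd2.qcow2 2G
virsh attach-disk ceph-osd0 /var/lib/libvirt/images/ceph-osd0-osd1.qcow2 sda --targetbus scsi --subdriver qcow2 --persistent
virsh attach-disk ceph-osd0 /var/lib/libvirt/images/ceph-osd0-osd2.qcow2 sdb --targetbus scsi --subdriver qcow2 --persistent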

If necessary, to erase the partition table and contents of a disk, you would use the ceph-deploy disk zap command, e.g.:

 

ceph-deploy disk zap ceph-osd0 /dev/sda

 

but here we have created completely new disks, so this is not required.

 

 

you can list the available disks on the OSDs as follows:

 

[root@ceph-mon ~]# ceph-deploy disk list ceph-osd0
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy disk list ceph-osd0
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : list
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f890c8506c8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] host : [‘ceph-osd0’]
[ceph_deploy.cli][INFO ] func : <function disk at 0x7f890c892b90>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph-osd0][DEBUG ] connected to host: ceph-osd0
[ceph-osd0][DEBUG ] detect platform information from remote host
[ceph-osd0][DEBUG ] detect machine type
[ceph-osd0][DEBUG ] find the location of an executable
[ceph-osd0][INFO ] Running command: fdisk -l
[ceph-osd0][INFO ] Disk /dev/vda: 10.7 GB, 10737418240 bytes, 20971520 sectors
[ceph-osd0][INFO ] Disk /dev/sda: 2147 MB, 2147483648 bytes, 4194304 sectors
[ceph-osd0][INFO ] Disk /dev/sdb: 2147 MB, 2147483648 bytes, 4194304 sectors
[ceph-osd0][INFO ] Disk /dev/mapper/centos-root: 8585 MB, 8585740288 bytes, 16769024 sectors
[ceph-osd0][INFO ] Disk /dev/mapper/centos-swap: 1073 MB, 1073741824 bytes, 2097152 sectors
[root@ceph-mon ~]#

 

Create a partition spanning 100% of each journal disk on each OSD node: the data disk sda is used whole, while the journal disk sdb gets a single partition, sdb1.

NOTE: we do not create a partition on the data disk sda, but we do require one on the journal disk, i.e. sdb1.

From ceph-mon, install and configure the OSDs, using sda as the datastore (in production this is normally a RAID0 of big rotational disks) and sdb1 as its journal (normally a partition on an SSD):

 

 

ceph-deploy osd create --data /dev/sda ceph-osd0

 

[root@ceph-mon ~]# ceph-deploy osd create --data /dev/sda ceph-osd0
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy osd create –data /dev/sda ceph-osd0
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc2d30c47e8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] block_wal : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] journal : None
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] host : ceph-osd0
[ceph_deploy.cli][INFO ] filestore : None
[ceph_deploy.cli][INFO ] func : <function osd at 0x7fc2d30ffb18>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.cli][INFO ] data : /dev/sda
[ceph_deploy.cli][INFO ] block_db : None
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] debug : False
[ceph_deploy.osd][DEBUG ] Creating OSD on cluster ceph with data device /dev/sda
[ceph-osd0][DEBUG ] connected to host: ceph-osd0
[ceph-osd0][DEBUG ] detect platform information from remote host
[ceph-osd0][DEBUG ] detect machine type
[ceph-osd0][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph-osd0
[ceph-osd0][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-osd0][WARNIN] osd keyring does not exist yet, creating one
[ceph-osd0][DEBUG ] create a keyring file
[ceph-osd0][DEBUG ] find the location of an executable
[ceph-osd0][INFO ] Running command: /usr/sbin/ceph-volume –cluster ceph lvm create –bluestore –data /dev/sda
[ceph-osd0][WARNIN] Running command: /bin/ceph-authtool –gen-print-key
[ceph-osd0][WARNIN] Running command: /bin/ceph –cluster ceph –name client.bootstrap-osd –keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i – osd new 045a03af-bc98-46e7-868e-35b474fb0e09
[ceph-osd0][WARNIN] Running command: /usr/sbin/vgcreate –force –yes ceph-316d6de8-7741-4776-b000-0239cc0b0429 /dev/sda
[ceph-osd0][WARNIN] stdout: Physical volume “/dev/sda” successfully created.
[ceph-osd0][WARNIN] stdout: Volume group “ceph-316d6de8-7741-4776-b000-0239cc0b0429” successfully created
[ceph-osd0][WARNIN] Running command: /usr/sbin/lvcreate –yes -l 100%FREE -n osd-block-045a03af-bc98-46e7-868e-35b474fb0e09 ceph-316d6de8-7741-4776-b000-0239cc0b0429
[ceph-osd0][WARNIN] stdout: Logical volume “osd-block-045a03af-bc98-46e7-868e-35b474fb0e09” created.
[ceph-osd0][WARNIN] Running command: /bin/ceph-authtool –gen-print-key
[ceph-osd0][WARNIN] Running command: /bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
[ceph-osd0][WARNIN] Running command: /bin/chown -h ceph:ceph /dev/ceph-316d6de8-7741-4776-b000-0239cc0b0429/osd-block-045a03af-bc98-46e7-868e-35b474fb0e09
[ceph-osd0][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph-osd0][WARNIN] Running command: /bin/ln -s /dev/ceph-316d6de8-7741-4776-b000-0239cc0b0429/osd-block-045a03af-bc98-46e7-868e-35b474fb0e09 /var/lib/ceph/osd/ceph-0/block
[ceph-osd0][WARNIN] Running command: /bin/ceph –cluster ceph –name client.bootstrap-osd –keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
[ceph-osd0][WARNIN] stderr: got monmap epoch 1
[ceph-osd0][WARNIN] Running command: /bin/ceph-authtool /var/lib/ceph/osd/ceph-0/keyring –create-keyring –name osd.0 –add-key AQBHCodguXDvGRAAvnenjHrWDTAdWBz0QJujzQ==
[ceph-osd0][WARNIN] stdout: creating /var/lib/ceph/osd/ceph-0/keyring
[ceph-osd0][WARNIN] added entity osd.0 auth auth(auid = 18446744073709551615 key=AQBHCodguXDvGRAAvnenjHrWDTAdWBz0QJujzQ== with 0 caps)
[ceph-osd0][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
[ceph-osd0][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
[ceph-osd0][WARNIN] Running command: /bin/ceph-osd –cluster ceph –osd-objectstore bluestore –mkfs -i 0 –monmap /var/lib/ceph/osd/ceph-0/activate.monmap –keyfile – –osd-data /var/lib/ceph/osd/ceph-0/ –osd-uuid 045a03af-bc98-46e7-868e-35b474fb0e09 –setuser ceph –setgroup ceph
[ceph-osd0][WARNIN] –> ceph-volume lvm prepare successful for: /dev/sda
[ceph-osd0][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-osd0][WARNIN] Running command: /bin/ceph-bluestore-tool –cluster=ceph prime-osd-dir –dev /dev/ceph-316d6de8-7741-4776-b000-0239cc0b0429/osd-block-045a03af-bc98-46e7-868e-35b474fb0e09 –path /var/lib/ceph/osd/ceph-0 –no-mon-config
[ceph-osd0][WARNIN] Running command: /bin/ln -snf /dev/ceph-316d6de8-7741-4776-b000-0239cc0b0429/osd-block-045a03af-bc98-46e7-868e-35b474fb0e09 /var/lib/ceph/osd/ceph-0/block
[ceph-osd0][WARNIN] Running command: /bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
[ceph-osd0][WARNIN] Running command: /bin/chown -R ceph:ceph /dev/dm-2
[ceph-osd0][WARNIN] Running command: /bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
[ceph-osd0][WARNIN] Running command: /bin/systemctl enable ceph-volume@lvm-0-045a03af-bc98-46e7-868e-35b474fb0e09
[ceph-osd0][WARNIN] stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-0-045a03af-bc98-46e7-868e-35b474fb0e09.service to /usr/lib/systemd/system/ceph-volume@.service.
[ceph-osd0][WARNIN] Running command: /bin/systemctl enable –runtime ceph-osd@0
[ceph-osd0][WARNIN] stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@0.service to /usr/lib/systemd/system/ceph-osd@.service.
[ceph-osd0][WARNIN] Running command: /bin/systemctl start ceph-osd@0
[ceph-osd0][WARNIN] –> ceph-volume lvm activate successful for osd ID: 0
[ceph-osd0][WARNIN] –> ceph-volume lvm create successful for: /dev/sda
[ceph-osd0][INFO ] checking OSD status…
[ceph-osd0][DEBUG ] find the location of an executable
[ceph-osd0][INFO ] Running command: /bin/ceph –cluster=ceph osd stat –format=json
[ceph_deploy.osd][DEBUG ] Host ceph-osd0 is now ready for osd use.
[root@ceph-mon ~]#

 

Do the same for the other nodes, osd1 and osd2. Here we create a partition with parted and then run ceph-volume directly on each node.

 

example for osd0:

parted --script /dev/sda 'mklabel gpt'
parted --script /dev/sda 'mkpart primary 0% 100%'

 

then do:

 

ceph-volume lvm create --data /dev/sda1

 

 

so we can do:

 

 

[root@ceph-osd0 ~]# ceph-volume lvm create --data /dev/sda1
Running command: /usr/bin/ceph-authtool –gen-print-key
Running command: /usr/bin/ceph –cluster ceph –name client.bootstrap-osd –keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i – osd new be29e0ff-73e4-47cb-8b2c-f4caa10e08a4
Running command: /usr/sbin/vgcreate –force –yes ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f /dev/sda1
stdout: Physical volume “/dev/sda1” successfully created.
stdout: Volume group “ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f” successfully created
Running command: /usr/sbin/lvcreate –yes -l 100%FREE -n osd-block-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4 ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f
stdout: Logical volume “osd-block-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4” created.
Running command: /usr/bin/ceph-authtool –gen-print-key
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f/osd-block-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Running command: /usr/bin/ln -s /dev/ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f/osd-block-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4 /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/ceph –cluster ceph –name client.bootstrap-osd –keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-3/activate.monmap
stderr: got monmap epoch 1
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-3/keyring –create-keyring –name osd.3 –add-key AQCFDYdgcHaFJxAA2BAlk+JwDg22eVrhA5WGcg==
stdout: creating /var/lib/ceph/osd/ceph-3/keyring
added entity osd.3 auth auth(auid = 18446744073709551615 key=AQCFDYdgcHaFJxAA2BAlk+JwDg22eVrhA5WGcg== with 0 caps)
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/keyring
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3/
Running command: /usr/bin/ceph-osd –cluster ceph –osd-objectstore bluestore –mkfs -i 3 –monmap /var/lib/ceph/osd/ceph-3/activate.monmap –keyfile – –osd-data /var/lib/ceph/osd/ceph-3/ –osd-uuid be29e0ff-73e4-47cb-8b2c-f4caa10e08a4 –setuser ceph –setgroup ceph
–> ceph-volume lvm prepare successful for: /dev/sda1
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/ceph-bluestore-tool –cluster=ceph prime-osd-dir –dev /dev/ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f/osd-block-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4 –path /var/lib/ceph/osd/ceph-3 –no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-797fe6cc-3cf0-4b62-aae1-3222a8fb802f/osd-block-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4 /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-3/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-3
Running command: /usr/bin/systemctl enable ceph-volume@lvm-3-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4
stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-3-be29e0ff-73e4-47cb-8b2c-f4caa10e08a4.service to /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable –runtime ceph-osd@3
stderr: Created symlink from /run/systemd/system/ceph-osd.target.wants/ceph-osd@3.service to /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@3
–> ceph-volume lvm activate successful for osd ID: 3
–> ceph-volume lvm create successful for: /dev/sda1
[root@ceph-osd0 ~]#

 

current status is now:

 

[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_WARN
1 osds down
no active mgr

services:
mon: 1 daemons, quorum ceph-mon
mgr: no daemons active
osd: 4 osds: 3 up, 4 in

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:

[root@ceph-mon ~]# ceph health
HEALTH_WARN 1 osds down; no active mgr
[root@ceph-mon ~]#
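
The down OSD here is osd.0, the earlier whole-disk OSD on ceph-osd0 which has effectively been replaced by osd.3 on /dev/sda1 (see the osd tree further below). If you wanted to remove the stale entry from the cluster rather than leave it down, a sketch (run on ceph-mon):

ceph osd out osd.0
ceph osd purge osd.0 --yes-i-really-mean-it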

 

 

Now we have to repeat this for the other two OSD nodes:

 

for node in ceph-osd1 ceph-osd2 ; do
ssh $node "parted --script /dev/sda 'mklabel gpt' ; parted --script /dev/sda 'mkpart primary 0% 100%' ; ceph-volume lvm create --data /dev/sda1"
done

 

 

The ceph cluster now looks like this:

 

(we still have the pools and the CRUSH configuration to create and configure)

 

Note that an OSD has to be "in" the cluster, i.e. assigned a place in the CRUSH map so it can hold data, and "up", i.e. its daemon is running and reachable.

 

How To Check System Status

 

[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_OK

 

services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
osd: 4 osds: 3 up, 3 in

 

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 3.0 GiB used, 3.0 GiB / 6.0 GiB avail
pgs:

 

[root@ceph-mon ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.00757 root default
-3 0.00378 host ceph-osd0
0 hdd 0.00189 osd.0 down 0 1.00000
3 hdd 0.00189 osd.3 up 1.00000 1.00000
-5 0.00189 host ceph-osd1
1 hdd 0.00189 osd.1 up 1.00000 1.00000
-7 0.00189 host ceph-osd2
2 hdd 0.00189 osd.2 up 1.00000 1.00000
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# ceph osd df tree
ID CLASS WEIGHT REWEIGHT SIZE USE DATA OMAP META AVAIL %USE VAR PGS TYPE NAME
-1 0.00757 – 6.0 GiB 3.0 GiB 12 MiB 0 B 3 GiB 3.0 GiB 50.30 1.00 – root default
-3 0.00378 – 2.0 GiB 1.0 GiB 4.1 MiB 0 B 1 GiB 1016 MiB 50.30 1.00 – host ceph-osd0
0 hdd 0.00189 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0 osd.0
3 hdd 0.00189 1.00000 2.0 GiB 1.0 GiB 4.1 MiB 0 B 1 GiB 1016 MiB 50.30 1.00 0 osd.3
-5 0.00189 – 2.0 GiB 1.0 GiB 4.1 MiB 0 B 1 GiB 1016 MiB 50.30 1.00 – host ceph-osd1
1 hdd 0.00189 1.00000 2.0 GiB 1.0 GiB 4.1 MiB 0 B 1 GiB 1016 MiB 50.30 1.00 1 osd.1
-7 0.00189 – 2.0 GiB 1.0 GiB 4.1 MiB 0 B 1 GiB 1016 MiB 50.30 1.00 – host ceph-osd2
2 hdd 0.00189 1.00000 2.0 GiB 1.0 GiB 4.1 MiB 0 B 1 GiB 1016 MiB 50.30 1.00 1 osd.2
TOTAL 6.0 GiB 3.0 GiB 12 MiB 0 B 3 GiB 3.0 GiB 50.30
MIN/MAX VAR: 1.00/1.00 STDDEV: 0.00
[root@ceph-mon ~]#

 

 

 

 

 

[root@ceph-mon ~]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE USE DATA OMAP META AVAIL %USE VAR PGS
0 hdd 0.00189 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0
3 hdd 0.00189 1.00000 2.0 GiB 1.0 GiB 3.7 MiB 0 B 1 GiB 1016 MiB 50.28 1.00 0
1 hdd 0.00189 1.00000 2.0 GiB 1.0 GiB 3.7 MiB 0 B 1 GiB 1016 MiB 50.28 1.00 0
2 hdd 0.00189 1.00000 2.0 GiB 1.0 GiB 3.7 MiB 0 B 1 GiB 1016 MiB 50.28 1.00 0
TOTAL 6.0 GiB 3.0 GiB 11 MiB 0 B 3 GiB 3.0 GiB 50.28
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
[root@ceph-mon ~]#

 

 

For more Ceph admin commands, see https://sabaini.at/pages/ceph-cheatsheet.html#monit

 

Create a Storage Pool

 

 

To create a pool:

 

ceph osd pool create datapool 1

 

[root@ceph-mon ~]# ceph osd pool create datapool 1
pool ‘datapool’ created
[root@ceph-mon ~]#
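
The trailing 1 is the pg_num for the pool, which is fine for this tiny lab. On a larger cluster you would size pg_num (and pgp_num) according to the number of OSDs, as noted in the placement group section earlier; a sketch:

ceph osd pool create datapool 128 128 replicated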

 

[root@ceph-mon ~]# ceph osd lspools
1 datapool
[root@ceph-mon ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
6.0 GiB 3.0 GiB 3.0 GiB 50.30
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
datapool 1 0 B 0 1.8 GiB 0
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
application not enabled on pool ‘datapool’
use ‘ceph osd pool application enable <pool-name> <app-name>’, where <app-name> is ‘cephfs’, ‘rbd’, ‘rgw’, or freeform for custom applications.
[root@ceph-mon ~]#

 

So we need to enable an application on the pool:

 

[root@ceph-mon ~]# ceph osd pool application enable datapool rbd
enabled application ‘rbd’ on pool ‘datapool’
[root@ceph-mon ~]#

[root@ceph-mon ~]# ceph health detail
HEALTH_OK
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
osd: 4 osds: 3 up, 3 in

data:
pools: 1 pools, 1 pgs
objects: 1 objects, 10 B
usage: 3.0 GiB used, 3.0 GiB / 6.0 GiB avail
pgs: 1 active+clean

[root@ceph-mon ~]#

 

 

 

How To Check All Ceph Services Are Running

 

Use 

 

ceph -s 

 

 

 

 

 

or alternatively:

 

 

[root@ceph-mon ~]# systemctl status ceph\*.service
● ceph-mon@ceph-mon.service – Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: active (running) since Di 2021-04-27 11:47:36 CEST; 6h ago
Main PID: 989 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph-mon.service
└─989 /usr/bin/ceph-mon -f –cluster ceph –id ceph-mon –setuser ceph –setgroup ceph

 

Apr 27 11:47:36 ceph-mon systemd[1]: Started Ceph cluster monitor daemon.

 

● ceph-mgr@ceph-mon.service – Ceph cluster manager daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: disabled)
Active: active (running) since Di 2021-04-27 11:47:36 CEST; 6h ago
Main PID: 992 (ceph-mgr)
CGroup: /system.slice/system-ceph\x2dmgr.slice/ceph-mgr@ceph-mon.service
└─992 /usr/bin/ceph-mgr -f –cluster ceph –id ceph-mon –setuser ceph –setgroup ceph

 

Apr 27 11:47:36 ceph-mon systemd[1]: Started Ceph cluster manager daemon.
Apr 27 11:47:41 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:41 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root
Apr 27 11:47:46 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:46 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root
Apr 27 11:47:51 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:51 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root
Apr 27 11:47:56 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:56 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root

 

● ceph-crash.service – Ceph crash dump collector
Loaded: loaded (/usr/lib/systemd/system/ceph-crash.service; enabled; vendor preset: enabled)
Active: active (running) since Di 2021-04-27 11:47:34 CEST; 6h ago
Main PID: 695 (ceph-crash)
CGroup: /system.slice/ceph-crash.service
└─695 /usr/bin/python2.7 /usr/bin/ceph-crash

 

Apr 27 11:47:34 ceph-mon systemd[1]: Started Ceph crash dump collector.
Apr 27 11:47:34 ceph-mon ceph-crash[695]: INFO:__main__:monitoring path /var/lib/ceph/crash, delay 600s
[root@ceph-mon ~]#

 

 

Object Manipulation

 

 

To create an object and upload a file into that object:

 

Example:

 

echo "test data" > testfile
rados put -p datapool testfile testfile
rados -p datapool ls
testfile

 

To set a key/value pair to that object:

 

rados -p datapool setomapval testfile mykey myvalue
rados -p datapool getomapval testfile mykey
(length 7) : 0000 : 6d 79 76 61 6c 75 65 : myvalue

 

To download the file:

 

rados get -p datapool testfile testfile2
md5sum testfile testfile2
39a870a194a787550b6b5d1f49629236 testfile
39a870a194a787550b6b5d1f49629236 testfile2

 

 

 

[root@ceph-mon ~]# echo "test data" > testfile
[root@ceph-mon ~]# rados put -p datapool testfile testfile
[root@ceph-mon ~]# rados -p datapool ls
testfile
[root@ceph-mon ~]# rados -p datapool setomapval testfile mykey myvalue
[root@ceph-mon ~]# rados -p datapool getomapval testfile mykey
value (7 bytes) :
00000000 6d 79 76 61 6c 75 65 |myvalue|
00000007

 

[root@ceph-mon ~]# rados get -p datapool testfile testfile2
[root@ceph-mon ~]# md5sum testfile testfile2
39a870a194a787550b6b5d1f49629236 testfile
39a870a194a787550b6b5d1f49629236 testfile2
[root@ceph-mon ~]#
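As a hedged illustration (the exact IDs and OSD sets will differ on your cluster), you can also ask Ceph which placement group and OSDs an object maps to:

ceph osd map datapool testfile

This prints the pool, the placement group that CRUSH calculates for the object name, and the up/acting OSD set for that placement group.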

 

 

How To Check If Your Datastore is BlueStore or FileStore

 

[root@ceph-mon ~]# ceph osd metadata 0 | grep -e id -e hostname -e osd_objectstore
“id”: 0,
“hostname”: “ceph-osd0”,
“osd_objectstore”: “bluestore”,
[root@ceph-mon ~]# ceph osd metadata 1 | grep -e id -e hostname -e osd_objectstore
“id”: 1,
“hostname”: “ceph-osd1”,
“osd_objectstore”: “bluestore”,
[root@ceph-mon ~]# ceph osd metadata 2 | grep -e id -e hostname -e osd_objectstore
“id”: 2,
“hostname”: “ceph-osd2”,
“osd_objectstore”: “bluestore”,
[root@ceph-mon ~]#

 

 

You can also display a large amount of information with this command:

 

[root@ceph-mon ~]# ceph osd metadata 2
{
“id”: 2,
“arch”: “x86_64”,
“back_addr”: “10.0.9.12:6801/1138”,
“back_iface”: “eth1”,
“bluefs”: “1”,
“bluefs_single_shared_device”: “1”,
“bluestore_bdev_access_mode”: “blk”,
“bluestore_bdev_block_size”: “4096”,
“bluestore_bdev_dev”: “253:2”,
“bluestore_bdev_dev_node”: “dm-2”,
“bluestore_bdev_driver”: “KernelDevice”,
“bluestore_bdev_model”: “”,
“bluestore_bdev_partition_path”: “/dev/dm-2”,
“bluestore_bdev_rotational”: “1”,
“bluestore_bdev_size”: “2143289344”,
“bluestore_bdev_type”: “hdd”,
“ceph_release”: “mimic”,
“ceph_version”: “ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)”,
“ceph_version_short”: “13.2.10”,
“cpu”: “AMD EPYC-Rome Processor”,
“default_device_class”: “hdd”,
“devices”: “dm-2,sda”,
“distro”: “centos”,
“distro_description”: “CentOS Linux 7 (Core)”,
“distro_version”: “7”,
“front_addr”: “10.0.9.12:6800/1138”,
“front_iface”: “eth1”,
“hb_back_addr”: “10.0.9.12:6802/1138”,
“hb_front_addr”: “10.0.9.12:6803/1138”,
“hostname”: “ceph-osd2”,
“journal_rotational”: “1”,
“kernel_description”: “#1 SMP Thu Apr 8 19:51:47 UTC 2021”,
“kernel_version”: “3.10.0-1160.24.1.el7.x86_64”,
“mem_swap_kb”: “1048572”,
“mem_total_kb”: “1530760”,
“os”: “Linux”,
“osd_data”: “/var/lib/ceph/osd/ceph-2”,
“osd_objectstore”: “bluestore”,
“rotational”: “1”
}
[root@ceph-mon ~]#

 

or you can use:

 

[root@ceph-mon ~]# ceph osd metadata osd.0 | grep osd_objectstore
“osd_objectstore”: “bluestore”,
[root@ceph-mon ~]#

 

 

Which Version of Ceph Is Your Cluster Running?

 

[root@ceph-mon ~]# ceph -v
ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
[root@ceph-mon ~]#

 

 

How To List Your Cluster Pools

 

To list your cluster pools, execute:

 

ceph osd lspools

 

[root@ceph-mon ~]# ceph osd lspools
1 datapool
[root@ceph-mon ~]#

 

 

Placement Groups PG Information

 

To display the number of placement groups in a pool:

 

ceph osd pool get {pool-name} pg_num
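For example, for the datapool pool created earlier (the output shown is indicative only and reflects the pg_num the pool was created with):

ceph osd pool get datapool pg_num
pg_num: 1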

 

 

To display statistics for the placement groups in the cluster:

 

ceph pg dump [--format {format}]

 

To display pool statistics:

 

[root@ceph-mon ~]# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
datapool 10 B 1 0 2 0 0 0 2 2 KiB 2 2 KiB

 

total_objects 1
total_used 3.0 GiB
total_avail 3.0 GiB
total_space 6.0 GiB
[root@ceph-mon ~]#

 

 

How To Repair a Placement Group PG

 

Ascertain with ceph -s which PG has a problem

 

To identify stuck placement groups:

 

ceph pg dump_stuck [unclean|inactive|stale|undersized|degraded]

 

Then do:

 

ceph pg repair <PG ID>
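As a hedged example, assuming ceph health detail or ceph pg dump_stuck reported a problem with the (hypothetical) placement group ID 1.0:

ceph pg dump_stuck unclean
ceph pg repair 1.0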

For more info on troubleshooting PGs see https://documentation.suse.com/ses/7/html/ses-all/bp-troubleshooting-pgs.html

 

 

How To Activate Ceph Dashboard

 

The Ceph Dashboard runs without Apache or any other webserver being active; the functionality is provided by Ceph itself.

 

All HTTP connections to the Ceph dashboard use SSL/TLS by default.

 

For testing lab purposes you can simply generate and install a self-signed certificate as follows:

 

ceph dashboard create-self-signed-cert

 

However, in production environments this is unsuitable, since web browsers will object to self-signed certificates and require explicit confirmation from the user before opening a connection to the Ceph dashboard, unless the certificate has been signed by a trusted certificate authority (CA).

 

You can use your own certificate authority to ensure the certificate warning does not appear.

 

For example by doing:

 

$ openssl req -new -nodes -x509 -subj "/O=IT/CN=ceph-mgr-dashboard" -days 3650 -keyout dashboard.key -out dashboard.crt -extensions v3_ca

 

The generated dashboard.crt file then needs to be signed by a CA. Once signed, it can then be enabled for all Ceph manager instances as follows:

 

ceph config-key set mgr mgr/dashboard/crt -i dashboard.crt

 

After changing the SSL certificate and key you must restart the Ceph manager processes manually. Either by:

 

ceph mgr fail mgr

 

or by disabling and re-enabling the dashboard module:

 

ceph mgr module disable dashboard
ceph mgr module enable dashboard

 

By default, the ceph-mgr daemon that runs the dashboard (i.e., the currently active manager) binds to TCP port 8443 (or 8080 if SSL is disabled).

 

You can change these ports by doing:

ceph config set mgr mgr/dashboard/server_addr $IP
ceph config set mgr mgr/dashboard/server_port $PORT
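For example, to bind the dashboard of the active manager to a specific address and the default SSL port (the IP address below is hypothetical; substitute your own):

ceph config set mgr mgr/dashboard/server_addr 10.0.9.10
ceph config set mgr mgr/dashboard/server_port 8443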

 

For the purposes of this lab I did:

 

[root@ceph-mon ~]# ceph mgr module enable dashboard
[root@ceph-mon ~]# ceph dashboard create-self-signed-cert
Self-signed certificate created
[root@ceph-mon ~]#

 

Dashboard enabling can be automated by adding the following to ceph.conf:

 

[mon]
mgr initial modules = dashboard

 

 

 

[root@ceph-mon ~]# ceph mgr module ls | grep -A 5 enabled_modules
“enabled_modules”: [
“balancer”,
“crash”,
“dashboard”,
“iostat”,
“restful”,
[root@ceph-mon ~]#

 

Check that SSL is installed correctly. You should see the keys displayed in the output from these commands:

 

 

ceph config-key get mgr/dashboard/key
ceph config-key get mgr/dashboard/crt

 

The following command does not work on CentOS 7 with the Ceph Mimic version, as the full functionality was not implemented by the Ceph project for this version:

 

 

ceph dashboard ac-user-create admin password administrator

 

 

Use this command instead:

 

 

[root@ceph-mon etc]# ceph dashboard set-login-credentials cephuser <password not shown here>
Username and password updated
[root@ceph-mon etc]#

 

Also make sure you have the respective firewall ports open for the dashboard, i.e. 8443 for SSL/TLS (HTTPS), or 8080 for HTTP. The latter is not advisable, since the unencrypted connection carries a password-interception risk.

 

 

Logging in to the Ceph Dashboard

 

To log in, open the dashboard URL in a browser.

 

 

To display the current URL and port for the Ceph dashboard, do:

 

[root@ceph-mon ~]# ceph mgr services
{
“dashboard”: “https://ceph-mon:8443/”
}
[root@ceph-mon ~]#

 

and enter the user name and password you set as above.

 

 

Pools and Placement Groups In More Detail

 

Remember that pools are not PGs. PGs go inside pools.

 

To create a pool:

 

 

ceph osd pool create <pool name> <PG_NUM> <PGP_NUM>

 

PG_NUM
This holds the number of placement groups for the pool.

 

PGP_NUM
This is the effective number of placement groups to be used to calculate data placement. It must be equal to or less than PG_NUM.

 

Pools by default are replicated.

 

There are two kinds:

 

replicated

 

erasure coding EC

 

For replicated pools you set the number of data copies, or replicas, that each data object will have. The number of copies that can be lost will be one less than the number of replicas.

 

For EC it is more complicated.

 

You have two parameters (a short example of using them follows below):

 

k : number of data chunks
m : number of coding chunks
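As a hedged sketch of how k and m are used, you could define an erasure code profile with 2 data chunks and 1 coding chunk (tolerating the loss of one OSD) and create a pool with it. The profile name ec21profile and pool name ecpool below are hypothetical:

ceph osd erasure-code-profile set ec21profile k=2 m=1 crush-failure-domain=host
ceph osd pool create ecpool 32 32 erasure ec21profile
ceph osd erasure-code-profile get ec21profile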

 

 

Pools have to be associated with an application. Pools to be used with CephFS, or pools automatically created by Object Gateway are automatically associated with cephfs or rgw respectively.

 

For CephFS the associated application name is cephfs,
for RADOS Block Device it is rbd,
and for Object Gateway it is rgw.

 

Otherwise, the format to associate a free-form application name with a pool is:

 

ceph osd pool application enable POOL_NAME APPLICATION_NAME

To see which applications a pool is associated with use:

 

ceph osd pool application get pool_name

 

 

To set pool quotas for the maximum number of bytes and/or the maximum number of objects permitted per pool:

 

ceph osd pool set-quota POOL_NAME MAX_OBJECTS OBJ_COUNT MAX_BYTES BYTES

 

eg

 

ceph osd pool set-quota data max_objects 20000

 

To set the number of object replicas on a replicated pool use:

 

ceph osd pool set poolname size num-replicas

 

Important:
The num-replicas value includes the object itself. So if you want the object and two replica copies of the object for a total of three instances of the object, you need to specify 3. You should not set this value to anything less than 3! Also bear in mind that setting 4 replicas for a pool will increase the reliability by 25%.
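For example, to set a replicated pool to three copies, with a minimum of two copies required for I/O (min_size is an additional, optional safeguard shown here purely as an illustration):

ceph osd pool set datapool size 3
ceph osd pool set datapool min_size 2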

 

To display the number of object replicas, use:

 

ceph osd dump | grep 'replicated size'

 

 

If you want to remove a quota, set this value to 0.

 

To set pool values, use:

 

ceph osd pool set POOL_NAME KEY VALUE

 

To display a pool’s stats use:

 

rados df

 

To list all values related to a specific pool use:

 

ceph osd pool get POOL_NAME all

 

You can also display specific pool values as follows:

 

ceph osd pool get POOL_NAME KEY

 

For example, to display the number of placement groups for the pool:

ceph osd pool get POOL_NAME pg_num

In particular:

 

PG_NUM
This holds the number of placement groups for the pool.

 

PGP_NUM
This is the effective number of placement groups to be used to calculate data placement. It must be equal to or less than PG_NUM.

 

Pool Created:

 

[root@ceph-mon ~]# ceph osd pool create datapool 128 128 replicated
pool ‘datapool’ created
[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
osd: 4 osds: 3 up, 3 in

data:
pools: 1 pools, 128 pgs
objects: 0 objects, 0 B
usage: 3.2 GiB used, 2.8 GiB / 6.0 GiB avail
pgs: 34.375% pgs unknown
84 active+clean
44 unknown

[root@ceph-mon ~]#

 

To Remove a Pool

 

There are two ways, i.e. two different commands, that can be used:

 

[root@ceph-mon ~]# rados rmpool datapool --yes-i-really-really-mean-it
WARNING:
This will PERMANENTLY DESTROY an entire pool of objects with no way back.
To confirm, pass the pool to remove twice, followed by
--yes-i-really-really-mean-it

 

[root@ceph-mon ~]# ceph osd pool delete datapool --yes-i-really-really-mean-it
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool datapool. If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.

[root@ceph-mon ~]# ceph osd pool delete datapool datapool --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
[root@ceph-mon ~]#

 

 

You first have to set the mon_allow_pool_delete option to true.

 

First, check the value of the nodelete flag:

 

ceph osd pool get pool_name nodelete

 

[root@ceph-mon ~]# ceph osd pool get datapool nodelete
nodelete: false
[root@ceph-mon ~]#

 

Because inadvertent pool deletion is a real danger, Ceph implements two mechanisms that prevent pools from being deleted. Both mechanisms must be disabled before a pool can be deleted.

 

The first mechanism is the NODELETE flag. Each pool has this flag, and its default value is ‘false’. To find out the value of this flag on a pool, run the following command:

 

ceph osd pool get pool_name nodelete

If it outputs nodelete: true, it is not possible to delete the pool until you change the flag using the following command:

 

ceph osd pool set pool_name nodelete false

 

 

The second mechanism is the cluster-wide configuration parameter mon allow pool delete, which defaults to ‘false’. This means that, by default, it is not possible to delete a pool. The error message displayed is:

 

Error EPERM: pool deletion is disabled; you must first set the
mon_allow_pool_delete config option to true before you can destroy a pool

 

To delete the pool despite this safety setting, you can temporarily set the value of mon_allow_pool_delete to 'true', delete the pool, and then reset the value back to 'false':

 

ceph tell mon.* injectargs --mon-allow-pool-delete=true
ceph osd pool delete pool_name pool_name --yes-i-really-really-mean-it
ceph tell mon.* injectargs --mon-allow-pool-delete=false

 

 

[root@ceph-mon ~]# ceph tell mon.* injectargs --mon-allow-pool-delete=true
injectargs:
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# ceph osd pool delete datapool --yes-i-really-really-mean-it
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool datapool. If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
[root@ceph-mon ~]# ceph osd pool delete datapool datapool --yes-i-really-really-mean-it
pool 'datapool' removed
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph tell mon.* injectargs --mon-allow-pool-delete=false
injectargs:mon_allow_pool_delete = 'false'
[root@ceph-mon ~]#

 

NOTE: The injectargs command displays the following to confirm the command was carried out OK; this is NOT an error:

 

injectargs:mon_allow_pool_delete = 'true' (not observed, change may require restart)

 

 

 

Creating a Ceph MetaData Server MDS

 

A metadata or mds server node is a requirement if you want to run cephfs.

 

First add the MDS server node name to the /etc/hosts file of all machines in the cluster: mon, mgr and OSDs.

 

For this lab I am using the ceph-mon machine for the mds server ie not a separate additional node.

 

Note that SSH access to the MDS node has to be working; this is a prerequisite.

 

[root@ceph-mon ~]#
[root@ceph-mon ~]# ceph-deploy mds create ceph-mds
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create ceph-mds
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f29c54e55f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mds at 0x7f29c54b01b8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] mds : [(‘ceph-mds’, ‘ceph-mds’)]
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts ceph-mds:ceph-mds
The authenticity of host ‘ceph-mds (10.0.9.40)’ can’t be established.
ECDSA key fingerprint is SHA256:OOvumn9VbVuPJbDQftpI3GnpQXchomGLwQ4J/1ADy6I.
ECDSA key fingerprint is MD5:1f:dd:66:01:b0:9c:6f:9b:5e:93:f4:80:7e:ad:eb:eb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ‘ceph-mds,10.0.9.40’ (ECDSA) to the list of known hosts.
root@ceph-mds’s password:
root@ceph-mds’s password:
[ceph-mds][DEBUG ] connected to host: ceph-mds
[ceph-mds][DEBUG ] detect platform information from remote host
[ceph-mds][DEBUG ] detect machine type
[ceph_deploy.mds][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to ceph-mds
[ceph-mds][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mds][WARNIN] mds keyring does not exist yet, creating one
[ceph-mds][DEBUG ] create a keyring file
[ceph-mds][DEBUG ] create path if it doesn’t exist
[ceph-mds][INFO ] Running command: ceph –cluster ceph –name client.bootstrap-mds –keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.ceph-mds osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-ceph-mds/keyring
[ceph-mds][INFO ] Running command: systemctl enable ceph-mds@ceph-mds
[ceph-mds][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph-mds.service to /usr/lib/systemd/system/ceph-mds@.service.
[ceph-mds][INFO ] Running command: systemctl start ceph-mds@ceph-mds
[ceph-mds][INFO ] Running command: systemctl enable ceph.target
[root@ceph-mon ~]#

 

 

Note the correct systemd service name used!

 

[root@ceph-mon ~]# systemctl status ceph-mds
Unit ceph-mds.service could not be found.
[root@ceph-mon ~]# systemctl status ceph-mds@ceph-mds
● ceph-mds@ceph-mds.service – Ceph metadata server daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; enabled; vendor preset: disabled)
Active: active (running) since Mo 2021-05-03 04:14:07 CEST; 4min 5s ago
Main PID: 22897 (ceph-mds)
CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph-mds.service
└─22897 /usr/bin/ceph-mds -f –cluster ceph –id ceph-mds –setuser ceph –setgroup ceph

Mai 03 04:14:07 ceph-mon systemd[1]: Started Ceph metadata server daemon.
Mai 03 04:14:07 ceph-mon ceph-mds[22897]: starting mds.ceph-mds at –
[root@ceph-mon ~]#

 

Next, I used ceph-deploy to copy the configuration file and admin key to the metadata server so I can use the ceph CLI without needing to specify monitor address and ceph.client.admin.keyring for each command execution:

 

[root@ceph-mon ~]# ceph-deploy admin ceph-mds
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy admin ceph-mds
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa99fae82d8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : [‘ceph-mds’]
[ceph_deploy.cli][INFO ] func : <function admin at 0x7fa9a05fb488>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-mds
root@ceph-mds’s password:
root@ceph-mds’s password:
[ceph-mds][DEBUG ] connected to host: ceph-mds
[ceph-mds][DEBUG ] detect platform information from remote host
[ceph-mds][DEBUG ] detect machine type
[ceph-mds][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[root@ceph-mon ~]#

 

then set correct permissions for the ceph.client.admin.keyring:

 

[root@ceph-mon ~]# chmod +r /etc/ceph/ceph.client.admin.keyring
[root@ceph-mon ~]#

 

 

 

How To Create a CephFS

 

A Ceph filesystem requires at least two RADOS pools, one for data and one for metadata.

 

Bear in mind that:

 

Use a higher replication level for the metadata pool, as any data loss in this pool can render the whole filesystem inaccessible!

 

Use lower-latency storage such as SSDs for the metadata pool, as this will directly affect the observed latency of filesystem operations on clients.

 

 

Create two pools, one for data and one for metadata:

 

[root@ceph-mon ~]# ceph osd pool create cephfs_data 128
pool ‘cephfs_data’ created
[root@ceph-mon ~]#
[root@ceph-mon ~]#
[root@ceph-mon ~]# ceph osd pool create cephfs_metadata 128
pool ‘cephfs_metadata’ created
[root@ceph-mon ~]#

 

then enable the filesystem using the fs new command:

 

ceph fs new <fs_name> <metadata> <data>

 

 

so we do:

 

ceph fs new cephfs cephfs_metadata cephfs_data

 

 

then verify with:

 

ceph fs ls

 

and

 

ceph mds stat

 

 

 

[root@ceph-mon ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 5 and data pool 4
[root@ceph-mon ~]# ceph fs ls
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@ceph-mon ~]#
[root@ceph-mon ~]# ceph mds stat
cephfs-1/1/1 up {0=ceph-mds=up:active}
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
mds: cephfs-1/1/1 up {0=ceph-mds=up:active}
osd: 4 osds: 3 up, 3 in

data:
pools: 2 pools, 256 pgs
objects: 183 objects, 46 MiB
usage: 3.4 GiB used, 2.6 GiB / 6.0 GiB avail
pgs: 256 active+clean

[root@ceph-mon ~]#

 

Once the filesystem is created and the MDS is active you can mount the filesystem:

 

 

How To Mount Cephfs

 

To mount the Ceph file system, use the mount command if you know the monitor host IP address; otherwise, use the mount.ceph utility to resolve the monitor host name to an IP address, e.g.:

 

mkdir /mnt/cephfs
mount -t ceph 192.168.122.21:6789:/ /mnt/cephfs

 

To mount the Ceph file system with cephx authentication enabled, you need to specify a user name and a secret.

 

mount -t ceph 192.168.122.21:6789:/ /mnt/cephfs -o name=admin,secret=DUWEDduoeuroFDWVMWDqfdffDWLSRT==

 

However, a safer method reads the secret from a file, eg:

 

mount -t ceph 192.168.122.21:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

 

To unmount cephfs simply use the umount command as per usual:

 

eg

 

umount /mnt/cephfs

 

[root@ceph-mon ~]# mount -t ceph ceph-mds:6789:/ /mnt/cephfs -o name=admin,secret=`ceph-authtool -p ceph.client.admin.keyring`
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 736M 0 736M 0% /dev
tmpfs 748M 0 748M 0% /dev/shm
tmpfs 748M 8,7M 739M 2% /run
tmpfs 748M 0 748M 0% /sys/fs/cgroup
/dev/mapper/centos-root 8,0G 2,4G 5,7G 30% /
/dev/vda1 1014M 172M 843M 17% /boot
tmpfs 150M 0 150M 0% /run/user/0
10.0.9.40:6789:/ 1,4G 0 1,4G 0% /mnt/cephfs
[root@ceph-mon ~]#
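To make the CephFS mount persistent across reboots, an /etc/fstab entry along the following lines could be used (a sketch only, assuming the monitor address 10.0.9.40 used in this lab and a secret file stored at /etc/ceph/admin.secret):

10.0.9.40:6789:/   /mnt/cephfs   ceph   name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev   0 0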

 

 

To mount from the asus laptop I first had to copy over the admin keyring:

 

scp ceph.client.admin.keyring asus:/root/

 

then I could do

 

mount -t ceph ceph-mds:6789:/ /mnt/cephfs -o name=admin,secret=`ceph-authtool -p ceph.client.admin.keyring`

root@asus:~#
root@asus:~# mount -t ceph ceph-mds:6789:/ /mnt/cephfs -o name=admin,secret=`ceph-authtool -p ceph.client.admin.keyring`
root@asus:~#
root@asus:~#
root@asus:~# df
Filesystem 1K-blocks Used Available Use% Mounted on
tmpfs 1844344 2052 1842292 1% /run
/dev/nvme0n1p4 413839584 227723904 165024096 58% /
tmpfs 9221712 271220 8950492 3% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 4096 0 4096 0% /sys/fs/cgroup
/dev/nvme0n1p1 98304 33547 64757 35% /boot/efi
tmpfs 1844340 88 1844252 1% /run/user/1000
10.0.9.40:6789:/ 1372160 0 1372160 0% /mnt/cephfs
root@asus:~#

 

 

RBD Block Devices

 

 

You must create a pool first before you can specify it as a source.

 

[root@ceph-mon ~]# ceph osd pool create rbdpool 128 128
Error ERANGE: pg_num 128 size 2 would mean 768 total pgs, which exceeds max 750 (mon_max_pg_per_osd 250 * num_in_osds 3)
[root@ceph-mon ~]# ceph osd pool create rbdpool 64 64
pool ‘rbdpool’ created
[root@ceph-mon ~]# ceph osd lspools
4 cephfs_data
5 cephfs_metadata
6 rbdpool
[root@ceph-mon ~]# rbd -p rbdpool create rbimage --size 5120
[root@ceph-mon ~]# rbd ls rbdpool
rbimage
[root@ceph-mon ~]# rbd feature disable rbdpool/rbdimage object-map fast-diff deep-flatten
rbd: error opening image rbdimage: (2) No such file or directory
[root@ceph-mon ~]#

[root@ceph-mon ~]#
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd feature disable rbdpool/rbimage object-map fast-diff deep-flatten
[root@ceph-mon ~]# rbd map rbdpool/rbimage --id admin
/dev/rbd0
[root@ceph-mon ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 9G 0 part
├─centos-root 253:0 0 8G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
rbd0 251:0 0 5G 0 disk
[root@ceph-mon ~]#

[root@ceph-mon ~]# rbd showmapped
id pool image snap device
0 rbdpool rbimage – /dev/rbd0
[root@ceph-mon ~]# rbd --image rbimage -p rbdpool info
rbd image ‘rbimage’:
size 5 GiB in 1280 objects
order 22 (4 MiB objects)
id: d3956b8b4567
block_name_prefix: rbd_data.d3956b8b4567
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Wed May 5 15:32:48 2021
[root@ceph-mon ~]#

 

 

 

To remove an image:

 

rbd rm {pool-name}/{image-name}

[root@ceph-mon ~]# rbd rm rbdpool/rbimage
Removing image: 100% complete…done.
[root@ceph-mon ~]# rbd rm rbdpool/image
Removing image: 100% complete…done.
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd ls rbdpool
[root@ceph-mon ~]#

 

 

To create an image

 

rbd create --size {megabytes} {pool-name}/{image-name}

 

[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd create --size 2048 rbdpool/rbdimage
[root@ceph-mon ~]# rbd ls rbdpool
rbdimage
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd ls rbdpool
rbdimage
[root@ceph-mon ~]#

[root@ceph-mon ~]# rbd feature disable rbdpool/rbdimage object-map fast-diff deep-flatten
[root@ceph-mon ~]# rbd map rbdpool/rbdimage --id admin
/dev/rbd0
[root@ceph-mon ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 9G 0 part
├─centos-root 253:0 0 8G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
rbd0 251:0 0 2G 0 disk
[root@ceph-mon ~]# rbd showmapped
id pool image snap device
0 rbdpool rbdimage – /dev/rbd0
[root@ceph-mon ~]#

[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd --image rbdimage -p rbdpool info
rbd image ‘rbdimage’:
size 2 GiB in 512 objects
order 22 (4 MiB objects)
id: fab06b8b4567
block_name_prefix: rbd_data.fab06b8b4567
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Wed May 5 16:24:08 2021
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd showmapped
id pool image snap device
0 rbdpool rbdimage – /dev/rbd0
[root@ceph-mon ~]# mkfs.xfs /dev/rbd0
Discarding blocks…Done.
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=524288, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@ceph-mon ~]#

 

[root@ceph-mon mnt]# mkdir /mnt/rbd
[root@ceph-mon mnt]# mount /dev/rbd0 /mnt/rbd
[root@ceph-mon mnt]# df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 753596 0 753596 0% /dev
tmpfs 765380 0 765380 0% /dev/shm
tmpfs 765380 8844 756536 2% /run
tmpfs 765380 0 765380 0% /sys/fs/cgroup
/dev/mapper/centos-root 8374272 2441472 5932800 30% /
/dev/vda1 1038336 175296 863040 17% /boot
tmpfs 153076 0 153076 0% /run/user/0
/dev/rbd0 2086912 33184 2053728 2% /mnt/rbd
[root@ceph-mon mnt]#
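When finished with the block device, it can be unmounted and unmapped again (a minimal sketch using the device mapped above):

umount /mnt/rbd
rbd unmap /dev/rbd0
rbd showmapped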

 

 

 

How to resize an rbd image

e.g. to resize to 10 GB:

rbd resize --size 10000 mypool/myimage

Resizing image: 100% complete…done.

Grow the file system to fill up the new size of the device.

xfs_growfs /mnt
[…]
data blocks changed from 2097152 to 2560000

 

Creating rbd snapshots

An RBD snapshot is a snapshot of a RADOS Block Device image. An rbd snapshot creates a history of the image’s state.

It is important to stop input and output operations and flush all pending writes before creating a snapshot of an rbd image.

If the image contains a file system, the file system must be in a consistent state before creating the snapshot.

rbd --pool pool-name snap create --snap snap-name image-name

rbd snap create pool-name/image-name@snap-name

eg

rbd --pool rbd snap create --snap snapshot1 image1
rbd snap create rbd/image1@snapshot1

 

To list snapshots of an image, specify the pool name and the image name.

rbd --pool pool-name snap ls image-name
rbd snap ls pool-name/image-name

eg

rbd --pool rbd snap ls image1
rbd snap ls rbd/image1

 

How to rollback to a snapshot

To rollback to a snapshot with rbd, specify the snap rollback option, the pool name, the image name, and the snapshot name.

rbd --pool pool-name snap rollback --snap snap-name image-name
rbd snap rollback pool-name/image-name@snap-name

eg

rbd --pool pool1 snap rollback --snap snapshot1 image1
rbd snap rollback pool1/image1@snapshot1

IMPORTANT NOTE:

Note that it is faster to clone from a snapshot than to roll back an image to a snapshot. Cloning is therefore the preferred method of returning to a pre-existing state.

 

To delete a snapshot

To delete a snapshot with rbd, specify the snap rm option, the pool name, the image name, and the snapshot name.

rbd --pool pool-name snap rm --snap snap-name image-name
rbd snap rm pool-name/image-name@snap-name

eg

rbd --pool pool1 snap rm --snap snapshot1 image1
rbd snap rm pool1/image1@snapshot1

Note also that Ceph OSDs delete data asynchronously, so deleting a snapshot will not free the disk space straight away.

To delete or purge all snapshots

To delete all snapshots for an image with rbd, specify the snap purge option and the image name.

rbd --pool pool-name snap purge image-name
rbd snap purge pool-name/image-name

eg

rbd --pool pool1 snap purge image1
rbd snap purge pool1/image1

 

Important when cloning!

Note that clones access the parent snapshots. This means all clones will break if a user deletes the parent snapshot. To prevent this happening, you must protect the snapshot before you can clone it.

 

do this by:

 

rbd --pool pool-name snap protect --image image-name --snap snapshot-name
rbd snap protect pool-name/image-name@snapshot-name

 

eg

 

rbd --pool pool1 snap protect --image image1 --snap snapshot1
rbd snap protect pool1/image1@snapshot1

 

Note that you cannot delete a protected snapshot.

How to clone a snapshot

To clone a snapshot, you must specify the parent pool, image, snapshot, the child pool, and the image name.

 

You must also protect the snapshot before you can clone it.

 

rbd clone --pool pool-name --image parent-image --snap snap-name --dest-pool pool-name --dest child-image

rbd clone pool-name/parent-image@snap-name pool-name/child-image-name

eg

 

rbd clone pool1/image1@snapshot1 pool1/image2

 

 

To delete a snapshot, you must unprotect it first.

 

However, you cannot delete snapshots that have references from clones unless you first “flatten” each clone of a snapshot.

 

rbd --pool pool-name snap unprotect --image image-name --snap snapshot-name
rbd snap unprotect pool-name/image-name@snapshot-name

 

eg

rbd --pool pool1 snap unprotect --image image1 --snap snapshot1
rbd snap unprotect pool1/image1@snapshot1

 

 

To list the children of a snapshot

 

rbd --pool pool-name children --image image-name --snap snap-name

 

eg

 

rbd --pool pool1 children --image image1 --snap snapshot1
rbd children pool1/image1@snapshot1

 

 

RGW Rados Object Gateway

 

 

first, install the ceph rgw package:

 

[root@ceph-mon ~]# ceph-deploy install --rgw ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy install –rgw ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f33f0221320>

 

… long list of package install output

….

[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Dependency Installed:
[ceph-mon][DEBUG ] mailcap.noarch 0:2.1.41-2.el7
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Complete!
[ceph-mon][INFO ] Running command: ceph –version
[ceph-mon][DEBUG ] ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
[root@ceph-mon ~]#

 

 

check which package is installed with

 

[root@ceph-mon ~]# rpm -q ceph-radosgw
ceph-radosgw-13.2.10-0.el7.x86_64
[root@ceph-mon ~]#

 

next do:

 

[root@ceph-mon ~]# ceph-deploy rgw create ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy rgw create ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] rgw : [(‘ceph-mon’, ‘rgw.ceph-mon’)]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f3bc2dd9e18>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function rgw at 0x7f3bc38a62a8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ceph-mon:rgw.ceph-mon
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph-mon
[ceph-mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon][DEBUG ] create path recursively if it doesn’t exist
[ceph-mon][INFO ] Running command: ceph –cluster ceph –name client.bootstrap-rgw –keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph-mon osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph-mon/keyring
[ceph-mon][INFO ] Running command: systemctl enable ceph-radosgw@rgw.ceph-mon
[ceph-mon][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.ceph-mon.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[ceph-mon][INFO ] Running command: systemctl start ceph-radosgw@rgw.ceph-mon
[ceph-mon][INFO ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host ceph-mon and default port 7480
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# systemctl status ceph-radosgw@rgw.ceph-mon
● ceph-radosgw@rgw.ceph-mon.service – Ceph rados gateway
Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw@.service; enabled; vendor preset: disabled)
Active: active (running) since Mi 2021-05-05 21:54:57 CEST; 531ms ago
Main PID: 7041 (radosgw)
CGroup: /system.slice/system-ceph\x2dradosgw.slice/ceph-radosgw@rgw.ceph-mon.service
└─7041 /usr/bin/radosgw -f –cluster ceph –name client.rgw.ceph-mon –setuser ceph –setgroup ceph

Mai 05 21:54:57 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service holdoff time over, scheduling restart.
Mai 05 21:54:57 ceph-mon systemd[1]: Stopped Ceph rados gateway.
Mai 05 21:54:57 ceph-mon systemd[1]: Started Ceph rados gateway.
[root@ceph-mon ~]#

 

but then stops:

 

[root@ceph-mon ~]# systemctl status ceph-radosgw@rgw.ceph-mon
● ceph-radosgw@rgw.ceph-mon.service – Ceph rados gateway
Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mi 2021-05-05 21:55:01 CEST; 16s ago
Process: 7143 ExecStart=/usr/bin/radosgw -f –cluster ${CLUSTER} –name client.%i –setuser ceph –setgroup ceph (code=exited, status=5)
Main PID: 7143 (code=exited, status=5)

 

Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service: main process exited, code=exited, status=5/NOTINSTALLED
Mai 05 21:55:01 ceph-mon systemd[1]: Unit ceph-radosgw@rgw.ceph-mon.service entered failed state.
Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service failed.
Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service holdoff time over, scheduling restart.
Mai 05 21:55:01 ceph-mon systemd[1]: Stopped Ceph rados gateway.
Mai 05 21:55:01 ceph-mon systemd[1]: start request repeated too quickly for ceph-radosgw@rgw.ceph-mon.service
Mai 05 21:55:01 ceph-mon systemd[1]: Failed to start Ceph rados gateway.
Mai 05 21:55:01 ceph-mon systemd[1]: Unit ceph-radosgw@rgw.ceph-mon.service entered failed state.
Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service failed.
[root@ceph-mon ~]#

 

 

Why? Running the radosgw daemon manually shows the error:

 

[root@ceph-mon ~]# /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-mon --setuser ceph --setgroup ceph
2021-05-05 22:45:41.994 7fc9e6388440 -1 Couldn’t init storage provider (RADOS)
[root@ceph-mon ~]#

 

[root@ceph-mon ceph]# radosgw-admin user create --uid=cephuser --key-type=s3 --access-key cephuser --secret-key cephuser --display-name="cephuser"
2021-05-05 22:13:54.255 7ff4152ec240 0 rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
2021-05-05 22:13:54.255 7ff4152ec240 0 failed reading realm info: ret -34 (34) Numerical result out of range
couldn’t init storage provider
[root@ceph-mon ceph]#

 

 

 


LPIC3-306 COURSE NOTES: CEPH – An Overview

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering.

They are in “rough format”, presented as they were written.

 

 

LPIC3-306 Clustering – 363.2 Ceph Syllabus Requirements

 

 

Exam Weighting: 8

 

Description: Candidates should be able to manage and maintain a Ceph Cluster. This
includes the configuration of RGW, RDB devices and CephFS.

 

Key Knowledge Areas:
• Understand the architecture and components of Ceph
• Manage OSD, MGR, MON and MDS
• Understand and manage placement groups and pools
• Understand storage backends (FileStore and BlueStore)
• Initialize a Ceph cluster
• Create and manage Rados Block Devices
• Create and manage CephFS volumes, including snapshots
• Mount and use an existing CephFS
• Understand and adjust CRUSH maps

 

• Configure high availability aspects of Ceph
• Scale up a Ceph cluster
• Restore and verify the integrity of a Ceph cluster after an outage
• Understand key concepts of Ceph updates, including update order, tunables and
features

 

Partial list of the used files, terms and utilities:
• ceph-deploy (including relevant subcommands)
• ceph.conf
• ceph (including relevant subcommands)
• rados (including relevant subcommands)
• rbd (including relevant subcommands)
• cephfs (including relevant subcommands)
• ceph-volume (including relevant subcommands)
• ceph-authtool
• ceph-bluestore-tool
• crushtool

 

 

 

What is Ceph

 

Ceph is an open-source, massively scalable, software-defined storage system or “SDS”

 

It provides object, block and file system storage via a single clustered high-availability platform.

 

The intention of Ceph is to be a fully distributed system with no single point of failure which is self-healing and self-managing. Although production environment Ceph systems are best run on a high-grade hardware specification,  Ceph runs on standard commodity computer hardware.

 

An Overview of Ceph  

 

 

When Ceph services start, the initialization process activates a series of daemons that run in the background.

 

A Ceph Cluster runs with a minimum of three types of daemons:

 

Ceph Monitor (ceph-mon)

 

Ceph Manager (ceph-mgr)

 

Ceph OSD Daemon (ceph-osd)

 

Ceph Storage Clusters that support the Ceph File System also run at least one Ceph Metadata Server (ceph-mds).

 

Clusters that support Ceph Object Storage run Ceph RADOS Gateway daemons (radosgw) as well.

 

 

OSD or Object Storage Daemon:  An OSD stores data, handles data replication, recovery, backfilling, and rebalancing. An OSD also provides monitoring data for Ceph Monitors by checking other Ceph OSD Daemons for an active heartbeat.  A Ceph Storage Cluster requires at least two Ceph OSD Daemons in order to maintain an active + clean state.

 

Monitor or Mon: maintains maps of the cluster state, including the monitor map, the OSD map, the Placement Group (PG) map, and the CRUSH map.

 

Ceph also maintains a history or “epoch” of each state change in the Monitors, Ceph OSD Daemons, and the PGs.

 

Metadata Server or MDS: The MDS holds metadata relating to the Ceph Filesystem and enables POSIX file system users to execute standard POSIX commands such as ls, find, etc. without creating overhead on the Ceph Storage Cluster. MDS is only required if you are intending to run CephFS. It is not necessary if only block and object storage is to be used.  

 

A Ceph Storage Cluster requires at least one Ceph Monitor, one Ceph Manager, and at least one, preferably two or more, Ceph OSD (Object Storage Daemon) servers. A Ceph Metadata Server (MDS) is additionally required if you intend to run CephFS.

 

Ceph stores data in the form of objects within logical storage pools. The CRUSH algorithm is used by Ceph to decide which placement group should contain the object and which Ceph OSD Daemon should store the placement group.

 

The CRUSH algorithm is also used by Ceph to scale, rebalance, and recover from failures.

 

Note that at the time of writing the newer versions of Ceph are not packaged for Debian. Ceph is in general much better supported on CentOS, since Red Hat maintains both CentOS and Ceph.

 

Ceph-deploy now replaced by cephadm

 

NOTE that ceph-deploy is now an outdated tool and is no longer maintained. It is also not available for Centos8. You should either use an installation method such as the above, or alternatively, use the cephadm tool for installing ceph on cluster nodes. However, a working knowledge of ceph-deploy is at time of writing still required for the LPIC3 exam.

 

For more on cephadm see https://ceph.io/ceph-management/introducing-cephadm/

 

 

 

The Client nodes know about monitors, OSDs and MDS’s but have no knowledge of object locations. Ceph clients communicate directly with the OSDs rather than going through a dedicated server.

 

The OSDs (Object Storage Daemons) store the data. They can be up and in the map or can be down and out if they have failed. An OSD can be down but still in the map which means that the PG has not yet been remapped. When OSDs come on line they inform the monitor.

 

The Monitor nodes store a master copy of the cluster map.

 

 

RADOS (Reliable Autonomic Distributed Object Store)

 

RADOS  makes up the heart of the scalable object storage service. 

 

In addition to accessing RADOS via the defined interfaces, it is also possible to access RADOS directly via a set of library calls.

 

 

CRUSH (Controlled Replication Under Scalable Hashing)

 

The CRUSH map contains the topology of the system and is location aware. Objects are mapped to Placement Groups, and Placement Groups are in turn mapped to OSDs. This allows for dynamic rebalancing and controls which Placement Group holds the objects. It also defines which of the OSDs should hold the Placement Group.

 

The CRUSH map holds a list of OSDs, buckets and rules that hold replication directives.

 

CRUSH will try not to move data during rebalancing whereas a true hash function would be likely to cause greater data movement.

 

 

The CRUSH map allows for different resiliency models such as:

 

#0 for a 1-node cluster.

 

#1 for a multi node cluster in a single rack

 

#2 for a multi node, multi chassis cluster with multiple hosts in a chassis

 

#3 for a multi node cluster with hosts across racks, etc.

 

osd crush chooseleaf type = {n}
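For example, a minimal ceph.conf snippet for a multi-node cluster in a single rack (assuming the host failure domain, which is the usual default) might contain:

[global]
osd crush chooseleaf type = 1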

 

Buckets

 

Buckets are a hierarchical structure of storage locations; a bucket in the CRUSH map context is a location.

 

Placement Groups (PGs)

 

Ceph subdivides a storage pool into placement groups, assigning each individual object to a placement group, and then assigns the placement group to a primary OSD.

 

If an OSD node fails or the cluster re-balances, Ceph is able to replicate or move a placement group and all the objects stored within it without the need to move or replicate each object individually. This allows for an efficient re-balancing or recovery of the Ceph cluster.

 

Objects are mapped to Placement Groups by hashing the object’s name along with the replication factor and a bitmask.

 

 

When you create a pool, a number of placement groups are automatically created by Ceph for the pool. If you don’t directly specify a number of placement groups, Ceph uses the default value of 8 which is extremely low.

 

A more useful default value is 128. For example:

 

osd pool default pg num = 128
osd pool default pgp num = 128

 

You need to set both pg_num (the total number of placement groups) and pgp_num (the number of placement groups used for data placement) to the same value. As a general guide, use the following values:

 

Less than 5 OSDs: set pg_num and pgp_num to 128.
Between 5 and 10 OSDs: set pg_num and pgp_num to 512
Between 10 and 50 OSDs: set pg_num and pgp_num to 4096

 

 

To specifically define the number of PGs:

 

set pool x pg_num to {pg_num}

 

ceph osd pool set {pool-name} pg_num {pg_num}

 

 

set pool x pgp_num to {pgp_num}

 

ceph osd pool set {pool-name} pgp_num {pgp_num}
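As a hedged example for a pool named datapool (note that on this Ceph version pg_num can only be increased, not decreased, and pg_num should be raised before pgp_num):

ceph osd pool set datapool pg_num 128
ceph osd pool set datapool pgp_num 128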

 

How To Create OSD Nodes on Ceph Using ceph-deploy

 

 

BlueStore OSD is the now the default storage system used for Ceph OSDs.

 

Before you add a BlueStore OSD node to Ceph, first delete all data on the device/s that will serve as OSDs.

 

You can do this with the zap command:

 

$CEPH_CONFIG_DIR/ceph-deploy disk zap node device

 

Replace node with the node name or host name where the disk is located.

 

Replace device with the path to the device on the host where the disk is located.

 

Eg to delete the data on a device named /dev/sdc on a node named ceph-node3 in the Ceph Storage Cluster, use:

 

$CEPH_CONFIG_DIR/ceph-deploy disk zap ceph-node3 /dev/sdc

 

 

Next, to create an OSD (BlueStore by default), enter:

 

$CEPH_CONFIG_DIR/ceph-deploy osd create --data device node

 

This creates a volume group and logical volume on the specified disk. Both data and journal are stored on the same logical volume.

 

Eg

 

$CEPH_CONFIG_DIR/ceph-deploy osd create --data /dev/sdc ceph-node3

 

 

 

How To Create A FileStore OSD Manually

 

Quoted from the Ceph website:

 

FileStore is the legacy approach to storing objects in Ceph. It relies on a standard file system (normally XFS) in combination with a key/value database (traditionally LevelDB, now RocksDB) for some metadata.

 

FileStore is well-tested and widely used in production but suffers from many performance deficiencies due to its overall design and reliance on a traditional file system for storing object data.

 

Although FileStore is generally capable of functioning on most POSIX-compatible file systems (including btrfs and ext4), we only recommend that XFS be used. Both btrfs and ext4 have known bugs and deficiencies and their use may lead to data loss. By default all Ceph provisioning tools will use XFS.

 

The official Ceph default storage system is now BlueStore. Prior to Ceph version Luminous, the default (and only option available) was Filestore.

 

 

Note the instructions below create a FileStore and not a BlueStore system!

 

To create a FileStore OSD manually ie without using ceph-deploy or cephadm:

 

first create the required partitions on the OSD node concerned: one for data, one for journal.

 

This example creates a 40 GB data partition on /dev/sdc1 and a journal partition of 12GB on /dev/sdc2:

 

 

parted /dev/sdc --script -- mklabel gpt
parted --script /dev/sdc mkpart primary 0MB 40000MB
parted --script /dev/sdc mkpart primary 42000MB 55000MB

 

dd if=/dev/zero of=/dev/sdc1 bs=1M count=1000

 

sgdisk --zap-all --clear --mbrtogpt -g -- /dev/sdc2

 

ceph-volume lvm zap /dev/sdc2

 

 

 

From the deployment node, create the FileStore OSD. To specify OSD file type, use --filestore and --fs-type.

 

Eg, to create a FileStore OSD with XFS filesystem:

 

$CEPH_CONFIG_DIR/ceph-deploy osd create --filestore --fs-type xfs --data /dev/sdc1 --journal /dev/sdc2 ceph-node2

 

 

What is BlueStore?

 

Any new OSDs (e.g., when the cluster is expanded) can be deployed using BlueStore. This is the default behavior so no specific change is needed.

 

There are two methods OSDs can use to manage the data they store.

 

The default is now BlueStore. Prior to Ceph version Luminous, the default (and only option available) was Filestore.

 

BlueStore is a new back-end object storage system for Ceph OSD daemons. The original object store used by Ceph, FileStore, required a file system placed on top of raw block devices. Objects were then written to the file system.

 

By contrast, BlueStore does not require a file system for itself, because BlueStore stores objects directly on the block device. This improves cluster performance as it removes file system overhead.

 

BlueStore can use different block devices for storing different data. As an example, Hard Disk Drive (HDD) storage for data, Solid-state Drive (SSD) storage for metadata, Non-volatile Memory (NVM) or persistent or Non-volatile RAM (NVRAM) for the RocksDB WAL (write-ahead log).

 

In the simplest implementation, BlueStore resides on a single storage device which is partitioned into two parts: one containing OSD metadata and one containing the actual data.

 

The OSD metadata partition is formatted with XFS and holds information about the OSD, such as its identifier, the cluster it belongs to, and its private keyring.

 

The data partition contains the actual OSD data and is managed by BlueStore. The primary partition is identified by a block symbolic link in the data directory.

 

Two additional devices can also be implemented:

 

A WAL (write-ahead-log) device: This contains the BlueStore internal journal or write-ahead log and is identified by the block.wal symbolic link in the data directory.

 

Best practice is to use an SSD to implement a WAL device in order to provide optimum performance.

 

 

A DB device: this stores BlueStore internal metadata. The embedded RocksDB database will then place as much metadata as possible on the DB device instead of on the primary device to optimize performance.

 

Only if the DB device becomes full will it then place metadata on the primary device. As for WAL, best practice for the Bluestore DB device is to deploy an SSD.

 

 

 

Starting and Stopping Ceph

 

To start all Ceph daemons:

 

[root@admin ~]# systemctl start ceph.target

 

To stop all Ceph daemons:

 

[root@admin ~]# systemctl stop ceph.target

 

To restart all Ceph daemons:

 

[root@admin ~]# systemctl restart ceph.target

 

To start, stop, and restart individual Ceph daemons:

 

 

On Ceph Monitor nodes:

 

systemctl start ceph-mon.target

 

systemctl stop ceph-mon.target

 

systemctl restart ceph-mon.target

 

On Ceph Manager nodes:

 

systemctl start ceph-mgr.target

 

systemctl stop ceph-mgr.target

 

systemctl restart ceph-mgr.target

 

On Ceph OSD nodes:

 

systemctl start ceph-osd.target

 

systemctl stop ceph-osd.target

 

systemctl restart ceph-osd.target

 

On Ceph Object Gateway nodes:

 

systemctl start ceph-radosgw.target

 

systemctl stop ceph-radosgw.target

 

systemctl restart ceph-radosgw.target

 

 

To perform stop, start, restart actions on specific Ceph monitor, manager, OSD or object gateway node instances:

 

On a Ceph Monitor node:

 

systemctl start ceph-mon@$MONITOR_HOST_NAME
systemctl stop ceph-mon@$MONITOR_HOST_NAME
systemctl restart ceph-mon@$MONITOR_HOST_NAME

 

On a Ceph Manager node:

systemctl start ceph-mgr@MANAGER_HOST_NAME
systemctl stop ceph-mgr@MANAGER_HOST_NAME
systemctl restart ceph-mgr@MANAGER_HOST_NAME

 

 

On a Ceph OSD node:

 

systemctl start ceph-osd@$OSD_NUMBER
systemctl stop ceph-osd@$OSD_NUMBER
systemctl restart ceph-osd@$OSD_NUMBER

 

substitute $OSD_NUMBER with the ID number of the Ceph OSD.

 

On a Ceph Object Gateway node:

 

systemctl start ceph-radosgw@rgw.$OBJ_GATEWAY_HOST_NAME
systemctl stop ceph-radosgw@rgw.$OBJ_GATEWAY_HOST_NAME
systemctl restart ceph-radosgw@rgw.$OBJ_GATEWAY_HOST_NAME

 

 

Placement Groups PG Information

 

To display the number of placement groups in a pool:

 

ceph osd pool get {pool-name} pg_num

 

 

To display statistics for the placement groups in the cluster:

 

ceph pg dump [--format {format}]

 

 

How To Check Status of the Ceph Cluster

 

 

To check the status and health of the cluster from the administration node, use:

 

ceph health
ceph status

 

Note it often can take up to several minutes for the cluster to stabilize before the cluster health will indicate HEALTH_OK.

 

You can also check the cluster quorum status of the cluster monitors:

 

ceph quorum_status --format json-pretty

 

 

For more Ceph admin commands, see https://sabaini.at/pages/ceph-cheatsheet.html#monit

 

 

The ceph.conf File

 

Each Ceph daemon looks for a ceph.conf file that contains its configuration settings.  For manual deployments, you need to create a ceph.conf file to define your cluster.

 

ceph.conf contains the following definitions:

 

Cluster membership
Host names
Host addresses
Paths to keyrings
Paths to journals
Paths to data
Other runtime options

 

The default ceph.conf locations in sequential order are as follows:

 

$CEPH_CONF (i.e., the path following the $CEPH_CONF environment variable)

 

-c path/path (i.e., the -c command line argument)

 

/etc/ceph/ceph.conf

 

~/.ceph/config

 

./ceph.conf (i.e., in the current working directory)

 

ceph-conf is a utility for getting information from a ceph configuration file.

 

As with most Ceph programs, you can specify which Ceph configuration file to use with the -c flag.

 

 

ceph-conf -L = lists all sections
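For example, to look up a single value from a specific configuration file (assuming the option is defined in that file):

ceph-conf -c /etc/ceph/ceph.conf --lookup fsid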

 

 

Ceph Journals 

 

Note that journals are used only on FileStore.

 

BlueStore does not use a FileStore-style journal; the WAL device described earlier takes on that role, so no journal is defined for BlueStore OSDs.

 

 

How To List Your Cluster Pools

 

To list your cluster pools, execute:

 

ceph osd lspools

 

Rename a Pool

 

To rename a pool, execute:

 

ceph osd pool rename <current-pool-name> <new-pool-name>
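For example, renaming a hypothetical pool testpool to rbdpool:

ceph osd pool rename testpool rbdpool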

 

 

Continue Reading

LPIC3 DIPLOMA Linux Clustering – LAB NOTES: Ceph on Centos8

Notes in preparation – not yet complete

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering.

They are in “rough format”, presented as they were written.

 

 

LAB on Ceph Clustering on Centos 8

 

 

The cluster comprises four nodes installed with Centos 8 and housed on a KVM virtual machine system on a Linux Ubuntu host.

 

centos4 is the admin node (ceph-deploy is not used in this Centos 8 lab)

 

centos1 is the MON (monitor) server

 

centos2 is OSD0 (Object Store Daemon server)

 

centos3 is OSD1 (Object Store Daemon server)

 

 

Ceph Installation

 

Instructions below are for installing on Centos 8.

 

NOTE: Ceph comes with an installation utility called ceph-deploy which was traditionally executed on the admin node to install Ceph onto the other nodes in the cluster. However, ceph-deploy is now an outdated tool and is no longer maintained. It is also not available for Centos8. You should therefore either install the Ceph packages directly from the repositories as shown below, or alternatively use the cephadm tool for installing ceph on cluster nodes.

 

However, in this lab we are installing Ceph directly onto each node without using cephadm.

 

 

Install the ceph packages and dependency package repos:

 

On centos4:

 

[root@centos4 yum.repos.d]# dnf -y install centos-release-ceph-octopus epel-release; dnf -y install ceph
Failed to set locale, defaulting to C.UTF-8
Last metadata expiration check: 1 day, 2:40:00 ago on Sun Apr 18 19:34:24 2021.
Dependencies resolved.

 

 

Having successfully checked that the package installs correctly with this command, I then executed it for the rest of the centos ceph cluster from the asus laptop host using:

 

root@asus:~# for NODE in centos1 centos2 centos3
> do
ssh $NODE "dnf -y install centos-release-ceph-octopus epel-release; dnf -y install ceph"
done
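To confirm the packages installed correctly, a quick version check can be run across the nodes in the same way (a sketch, assuming the same hostnames and ssh access as the install loop above):

for NODE in centos1 centos2 centos3 centos4
do
ssh $NODE "ceph --version"
done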

 

 

 

Configure Ceph-Monitor 

 

 

Next configure the monitor daemon on the admin node centos4:

 

[root@centos4 ~]# uuidgen
9b45c9d5-3055-4089-9a97-f488fffda1b4
[root@centos4 ~]#

 

# create new config
# file name ⇒ (any Cluster Name).conf

 

# the default cluster name [ceph] is used in this example ⇒ [ceph.conf]

 

configure /etc/ceph/ceph.conf

 

[root@centos4 ceph]# nano ceph.conf

 

[global]
# specify cluster network for monitoring
cluster network = 10.0.8.0/24
# specify public network
public network = 10.0.8.0/24
# specify the UUID generated above
fsid = 9b45c9d5-3055-4089-9a97-f488fffda1b4
# specify IP address of Monitor Daemon
mon host = 10.0.8.14
# specify Hostname of Monitor Daemon
mon initial members = centos4
osd pool default crush rule = -1

 

 

# mon.(Node name)
[mon.centos4]
# specify Hostname of Monitor Daemon
host = centos4
# specify IP address of Monitor Daemon
mon addr = 10.0.8.14
# allow to delete pools
mon allow pool delete = true

 

 

next generate the keys:

 

 

# generate secret key for Cluster monitoring

 

 

[root@centos4 ceph]# ceph-authtool --create-keyring /etc/ceph/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /etc/ceph/ceph.mon.keyring
[root@centos4 ceph]#

 

# generate secret key for Cluster admin

 

[root@centos4 ceph]# ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring --gen-key -n client.admin --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
creating /etc/ceph/ceph.client.admin.keyring
[root@centos4 ceph]#

 

# generate key for bootstrap

 

[root@centos4 ceph]# ceph-authtool --create-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring --gen-key -n client.bootstrap-osd --cap mon 'profile bootstrap-osd' --cap mgr 'allow r'
creating /var/lib/ceph/bootstrap-osd/ceph.keyring
[root@centos4 ceph]#

 

# import generated key

 

[root@centos4 ceph]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /etc/ceph/ceph.client.admin.keyring
importing contents of /etc/ceph/ceph.client.admin.keyring into /etc/ceph/ceph.mon.keyring
[root@centos4 ceph]#

 

 

[root@centos4 ceph]# ceph-authtool /etc/ceph/ceph.mon.keyring --import-keyring /var/lib/ceph/bootstrap-osd/ceph.keyring
importing contents of /var/lib/ceph/bootstrap-osd/ceph.keyring into /etc/ceph/ceph.mon.keyring
[root@centos4 ceph]#

 

# generate monitor map

 

use following commands:

 

FSID=$(grep "^fsid" /etc/ceph/ceph.conf | awk {'print $NF'})
NODENAME=$(grep "^mon initial" /etc/ceph/ceph.conf | awk {'print $NF'})
NODEIP=$(grep "^mon host" /etc/ceph/ceph.conf | awk {'print $NF'})

monmaptool --create --add $NODENAME $NODEIP --fsid $FSID /etc/ceph/monmap

 

[root@centos4 ceph]# FSID=$(grep "^fsid" /etc/ceph/ceph.conf | awk {'print $NF'})
[root@centos4 ceph]# NODENAME=$(grep "^mon initial" /etc/ceph/ceph.conf | awk {'print $NF'})
[root@centos4 ceph]# NODEIP=$(grep "^mon host" /etc/ceph/ceph.conf | awk {'print $NF'})
[root@centos4 ceph]# monmaptool --create --add $NODENAME $NODEIP --fsid $FSID /etc/ceph/monmap
monmaptool: monmap file /etc/ceph/monmap
monmaptool: set fsid to 9b45c9d5-3055-4089-9a97-f488fffda1b4
monmaptool: writing epoch 0 to /etc/ceph/monmap (1 monitors)
[root@centos4 ceph]#
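The contents of the generated monmap can be checked before it is handed to the monitor daemon; this should show the fsid set above and the single monitor entry for centos4:

monmaptool --print /etc/ceph/monmap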

 

next,

 

# create a directory for Monitor Daemon
# directory name ⇒ (Cluster Name)-(Node Name)

 

[root@centos4 ceph]# mkdir /var/lib/ceph/mon/ceph-centos4

 

# associate key and monmap with Monitor Daemon
# –cluster (Cluster Name)

 

[root@centos4 ceph]# ceph-mon --cluster ceph --mkfs -i $NODENAME --monmap /etc/ceph/monmap --keyring /etc/ceph/ceph.mon.keyring
[root@centos4 ceph]# chown ceph. /etc/ceph/ceph.*
[root@centos4 ceph]# chown -R ceph. /var/lib/ceph/mon/ceph-centos4 /var/lib/ceph/bootstrap-osd

 

 

Enable the ceph-mon service:

 

[root@centos4 ceph]# systemctl enable --now ceph-mon@$NODENAME
Created symlink /etc/systemd/system/ceph-mon.target.wants/ceph-mon@centos4.service → /usr/lib/systemd/system/ceph-mon@.service.
[root@centos4 ceph]#

 

# enable Messenger v2 Protocol

 

[root@centos4 ceph]# ceph mon enable-msgr2
[root@centos4 ceph]#

 

 

Configure Ceph-Manager

 

# enable Placement Groups auto scale module

 

[root@centos4 ceph]# ceph mgr module enable pg_autoscaler
module 'pg_autoscaler' is already enabled (always-on)
[root@centos4 ceph]#

 

# create a directory for Manager Daemon

 

# directory name ⇒ (Cluster Name)-(Node Name)

 

[root@centos4 ceph]# mkdir /var/lib/ceph/mgr/ceph-centos4
[root@centos4 ceph]#

 

# create auth key

 

[root@centos4 ceph]# ceph auth get-or-create mgr.$NODENAME mon 'allow profile mgr' osd 'allow *' mds 'allow *'
[mgr.centos4]
key = AQBv7H1gSiJSNxAAWBpbuZE00TN35YZoZudNeA==
[root@centos4 ceph]#

 

[root@centos4 ceph]# ceph auth get-or-create mgr.$NODENAME > /etc/ceph/ceph.mgr.admin.keyring

[root@centos4 ceph]# cp /etc/ceph/ceph.mgr.admin.keyring /var/lib/ceph/mgr/ceph-centos4/keyring
[root@centos4 ceph]#
[root@centos4 ceph]# chown ceph. /etc/ceph/ceph.mgr.admin.keyring

 

[root@centos4 ceph]# chown -R ceph. /var/lib/ceph/mgr/ceph-centos4

 

 

Enable the ceph-mgr service:

 

[root@centos4 ceph]# systemctl enable --now ceph-mgr@$NODENAME
Created symlink /etc/systemd/system/ceph-mgr.target.wants/ceph-mgr@centos4.service → /usr/lib/systemd/system/ceph-mgr@.service.
[root@centos4 ceph]#

 

 

Firewalling for Ceph

 

 

Configure or disable firewall:

 

 

[root@centos4 ceph]# systemctl stop firewalld
[root@centos4 ceph]# systemctl disable firewalld
Removed /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@centos4 ceph]#

 

otherwise you need to do:

 

firewall-cmd --add-service=ceph-mon --permanent
firewall-cmd --reload
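Note that the ceph-mon firewalld service only opens the monitor ports; on OSD, manager and MDS nodes the ceph firewalld service (which covers the 6800-7300 port range used by those daemons) would also need to be added, for example:

firewall-cmd --add-service=ceph --permanent
firewall-cmd --reload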

 

 

Ceph Status Check

 

 Confirm cluster status:

 

OSD (Object Storage Device) will be configured later.

 

[root@centos4 ceph]# ceph -s
cluster:
id: 9b45c9d5-3055-4089-9a97-f488fffda1b4
health: HEALTH_OK

services:
mon: 1 daemons, quorum centos4 (age 5m)
mgr: no daemons active
osd: 0 osds: 0 up, 0 in

 

data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:

[root@centos4 ceph]#

 

Adding An Extra OSD Node:

 

I then added a third OSD, centos1:

 

 

for NODE in centos1
do

scp /etc/ceph/ceph.conf ${NODE}:/etc/ceph/ceph.conf
scp /etc/ceph/ceph.client.admin.keyring ${NODE}:/etc/ceph
scp /var/lib/ceph/bootstrap-osd/ceph.keyring ${NODE}:/var/lib/ceph/bootstrap-osd

ssh $NODE "chown ceph. /etc/ceph/ceph.* /var/lib/ceph/bootstrap-osd/*;
parted --script /dev/sdb 'mklabel gpt';
parted --script /dev/sdb "mkpart primary 0% 100%";
ceph-volume lvm create --data /dev/sdb1"
done

 

 

[root@centos4 ~]# for NODE in centos1
> do
> scp /etc/ceph/ceph.conf ${NODE}:/etc/ceph/ceph.conf
> scp /etc/ceph/ceph.client.admin.keyring ${NODE}:/etc/ceph
> scp /var/lib/ceph/bootstrap-osd/ceph.keyring ${NODE}:/var/lib/ceph/bootstrap-osd
> ssh $NODE "chown ceph. /etc/ceph/ceph.* /var/lib/ceph/bootstrap-osd/*;
> parted --script /dev/sdb 'mklabel gpt';
> parted --script /dev/sdb "mkpart primary 0% 100%";
> ceph-volume lvm create --data /dev/sdb1"
> done
ceph.conf 100% 569 459.1KB/s 00:00
ceph.client.admin.keyring 100% 151 130.4KB/s 00:00
ceph.keyring 100% 129 46.6KB/s 00:00
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 88c09649-e489-410e-be29-333ddd29282d
Running command: /usr/sbin/vgcreate --force --yes ceph-6ac6963e-474a-4450-ab87-89d6881af0d7 /dev/sdb1
stdout: Physical volume "/dev/sdb1" successfully created.
stdout: Volume group "ceph-6ac6963e-474a-4450-ab87-89d6881af0d7" successfully created
Running command: /usr/sbin/lvcreate --yes -l 255 -n osd-block-88c09649-e489-410e-be29-333ddd29282d ceph-6ac6963e-474a-4450-ab87-89d6881af0d7
stdout: Logical volume "osd-block-88c09649-e489-410e-be29-333ddd29282d" created.
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Running command: /usr/sbin/restorecon /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/chown -h ceph:ceph /dev/ceph-6ac6963e-474a-4450-ab87-89d6881af0d7/osd-block-88c09649-e489-410e-be29-333ddd29282d
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Running command: /usr/bin/ln -s /dev/ceph-6ac6963e-474a-4450-ab87-89d6881af0d7/osd-block-88c09649-e489-410e-be29-333ddd29282d /var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
stderr: got monmap epoch 2
Running command: /usr/bin/ceph-authtool /var/lib/ceph/osd/ceph-2/keyring --create-keyring --name osd.2 --add-key AQAchH9gq4osHRAAFGD2AMQgQrD+UjjgciHJCw==
stdout: creating /var/lib/ceph/osd/ceph-2/keyring
added entity osd.2 auth(key=AQAchH9gq4osHRAAFGD2AMQgQrD+UjjgciHJCw==)
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 88c09649-e489-410e-be29-333ddd29282d --setuser ceph --setgroup ceph
stderr: 2021-04-21T03:47:09.890+0200 7f558dbd0f40 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
stderr: 2021-04-21T03:47:09.924+0200 7f558dbd0f40 -1 freelist read_size_meta_from_db missing size meta in DB
--> ceph-volume lvm prepare successful for: /dev/sdb1
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph-6ac6963e-474a-4450-ab87-89d6881af0d7/osd-block-88c09649-e489-410e-be29-333ddd29282d --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Running command: /usr/bin/ln -snf /dev/ceph-6ac6963e-474a-4450-ab87-89d6881af0d7/osd-block-88c09649-e489-410e-be29-333ddd29282d /var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Running command: /usr/bin/systemctl enable ceph-volume@lvm-2-88c09649-e489-410e-be29-333ddd29282d
stderr: Created symlink /etc/systemd/system/multi-user.target.wants/ceph-volume@lvm-2-88c09649-e489-410e-be29-333ddd29282d.service → /usr/lib/systemd/system/ceph-volume@.service.
Running command: /usr/bin/systemctl enable --runtime ceph-osd@2
stderr: Created symlink /run/systemd/system/ceph-osd.target.wants/ceph-osd@2.service → /usr/lib/systemd/system/ceph-osd@.service.
Running command: /usr/bin/systemctl start ceph-osd@2
--> ceph-volume lvm activate successful for osd ID: 2
--> ceph-volume lvm create successful for: /dev/sdb1
[root@centos4 ~]#

 

 

[root@centos4 ceph]# systemctl status --now ceph-mgr@$NODENAME
● ceph-mgr@centos4.service – Ceph cluster manager daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2021-04-20 17:08:39 CEST; 1min 26s ago
Main PID: 6028 (ceph-mgr)
Tasks: 70 (limit: 8165)
Memory: 336.1M
CGroup: /system.slice/system-ceph\x2dmgr.slice/ceph-mgr@centos4.service
└─6028 /usr/bin/ceph-mgr -f --cluster ceph --id centos4 --setuser ceph --setgroup ceph

 

 

Apr 20 17:08:39 centos4 systemd[1]: Started Ceph cluster manager daemon.
[root@centos4 ceph]# systemctl status --now ceph-mon@$NODENAME
● ceph-mon@centos4.service – Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2021-04-19 22:45:12 CEST; 18h ago
Main PID: 3510 (ceph-mon)
Tasks: 27
Memory: 55.7M
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@centos4.service
└─3510 /usr/bin/ceph-mon -f --cluster ceph --id centos4 --setuser ceph --setgroup ceph

 

 

Apr 19 22:45:12 centos4 systemd[1]: Started Ceph cluster monitor daemon.
Apr 19 22:45:13 centos4 ceph-mon[3510]: 2021-04-19T22:45:13.064+0200 7fded82af700 -1 WARNING: 'mon addr' config option [v2:10.0.8.14:3>
Apr 19 22:45:13 centos4 ceph-mon[3510]: continuing with monmap configuration
Apr 19 22:46:14 centos4 ceph-mon[3510]: 2021-04-19T22:46:14.945+0200 7fdebf1b1700 -1 mon.centos4@0(leader) e2 stashing newest monmap >
Apr 19 22:46:14 centos4 ceph-mon[3510]: ignoring --setuser ceph since I am not root
Apr 19 22:46:14 centos4 ceph-mon[3510]: ignoring --setgroup ceph since I am not root
Apr 20 16:40:31 centos4 ceph-mon[3510]: 2021-04-20T16:40:31.572+0200 7f10e0e99700 -1 log_channel(cluster) log [ERR] : Health check fai>
Apr 20 17:08:53 centos4 sudo[6162]: ceph : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/sbin/smartctl -a --json=o /dev/
[root@centos4 ceph]#

 

 

[root@centos4 ~]# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 0.00099 root default
-3 0.00099 host centos1
2 hdd 0.00099 osd.2 up 1.00000 1.00000
0 0 osd.0 down 0 1.00000
1 0 osd.1 down 0 1.00000

 

[root@centos4 ~]# ceph df
— RAW STORAGE —
CLASS SIZE AVAIL USED RAW USED %RAW USED
hdd 1020 MiB 1014 MiB 1.6 MiB 6.2 MiB 0.61
TOTAL 1020 MiB 1014 MiB 1.6 MiB 6.2 MiB 0.61

— POOLS —
POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
device_health_metrics 1 1 0 B 0 0 B 0 321 MiB

 

[root@centos4 ~]# ceph osd df
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS
2 hdd 0.00099 1.00000 1020 MiB 6.2 MiB 1.5 MiB 0 B 4.6 MiB 1014 MiB 0.61 1.00 1 up
0 0 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0 down
1 0 0 0 B 0 B 0 B 0 B 0 B 0 B 0 0 0 down
TOTAL 1020 MiB 6.2 MiB 1.5 MiB 0 B 4.6 MiB 1014 MiB 0.61
MIN/MAX VAR: 1.00/1.00 STDDEV: 0
[root@centos4 ~]#

 

 

[root@centos4 ~]# ceph -s
cluster:
id: 9b45c9d5-3055-4089-9a97-f488fffda1b4
health: HEALTH_WARN
Reduced data availability: 1 pg inactive
Degraded data redundancy: 1 pg undersized

services:
mon: 1 daemons, quorum centos4 (age 47h)
mgr: centos4(active, since 29h)
osd: 3 osds: 1 up (since 18h), 1 in (since 18h)

data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 6.2 MiB used, 1014 MiB / 1020 MiB avail
pgs: 100.000% pgs not active
1 undersized+peered

[root@centos4 ~]#
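This HEALTH_WARN is expected at this point: with only one OSD up and the default replica count of 3, the single placement group in the device_health_metrics pool cannot be fully replicated, so it remains undersized and inactive. The detail can be inspected with the following standard commands, and the warning should clear once the remaining OSDs are brought back up:

ceph health detail
ceph pg dump_stuck undersized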

 

 

notes to be completed

 

 

 

Continue Reading