
AWS Snow Family

 

The Snow Family provides physical devices for data migration in and out of AWS:

 

Snowcone

 

Snowball Edge

 

Snowmobile

 

and for edge computing and data migration at edge locations:

 

Snowcone

Snowball Edge

 

These are offline devices that perform data migrations which would take more than a week to complete over a network transfer.

 

Snowcone and Snowball Edge are sent by post using a physical route rather than digitally.

 

Snowball Edge is used for TB- or PB-scale data transfers.

 

pay per data transfer job

 

 

There are two versions:

 

Snowball Edge Storage Optimized:

 

up to 80 TB of HDD capacity for block volumes and S3-compatible object storage

 

Snowball Edge Compute Optimized:

 

42 TB of HDD capacity for block volumes and S3-compatible object storage

 

 

Use cases for Snowball Edge:

 

large-scale cloud data migration, disaster recovery preparation, and disaster recovery response

 

For Snowcone:

 

a much smaller device, can withstand harsh environments, very light and secure

 

used for smaller size transfers, up to 8TB storage

 

roughly one tenth the capacity of a Snowball Edge.

 

you must provide your own battery and cables

 

the device can be sent back to AWS by post, or the data can be sent over the internet using AWS DataSync.

 

 

Snowmobile is a truck – very high-scale transfers.

 

To use Snowball: request the Snowball devices on the AWS website

 

install a snowball client on servers

 

connect the Snowball to your servers, then copy the files,

 

then ship the device back
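As a side note not in the original workflow: once a job has been created, its progress can also be checked from the AWS CLI. A minimal sketch, assuming the CLI is configured and using a placeholder job ID:

aws snowball list-jobs
aws snowball describe-job --job-id JID123e4567-e89b-12d3-a456-426655440000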

 

Snowmobile can hold many EBs of data (1 EB = 1,000 PB = 1,000,000 TB)

 

very high security, staffed,

 

best for more than 10PB data transfer

 

Edge Computing:

 

an edge location is somewhere that creates or needs data but has no internet access, or only limited connectivity…

 

A Snowball Edge or Snowcone can be embedded into your edge site,

so it gives you a way to process and transfer data despite having no strong internet connection.

 

The Snowcone and Edge can run EC2 instances and Lambda functions!

good for preprocessing data, transcoding media streams, and machine learning at the edge

 

Snowcone: 2 CPUs, 4 GB RAM, wired or Wi-Fi networking, powered by USB-C or an optional battery

 

Snowball Edge:

many more vCPUs – up to 52 vCPUs and 208 GiB RAM
optional GPU
object storage clustering available

 

Discount pricing is available with 1- or 3-year commitments.

 

 

There is also AWS OpsHub – management software for the Snow Family.

 

You install OpsHub on your client PC or laptop, and it enables you to manage the device from there.

 

 

 

 

Continue Reading

LPIC3 DIPLOMA Linux Clustering – LAB NOTES: Lesson Ceph Centos7 – Ceph RGW Gateway

LAB on Ceph Clustering on Centos7

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

This lab uses the ceph-deploy tool to set up the ceph cluster.  However, note that ceph-deploy is now an outdated Ceph tool and is no longer being maintained by the Ceph project. It is also not available for Centos8. The notes below relate to Centos7.

 

For OS versions of Centos higher than 7 the Ceph project advise you to use the cephadm tool for installing ceph on cluster nodes. 

 

At the time of writing (2021) knowledge of ceph-deploy is a stipulated syllabus requirement of the LPIC3-306 Clustering Diploma Exam, hence this Centos7 Ceph lab refers to ceph-deploy.

 

 

As Ceph is a large and complex subject, these notes have been split into several different pages.

 

 

Overview of Cluster Environment 

 

 

The cluster comprises three nodes installed with Centos7 and housed on a KVM virtual machine system on a Linux Ubuntu host. We are installing with Centos7 rather than a more recent version because later versions are not compatible with the ceph-deploy tool.

 

 

 

RGW Rados Object Gateway

 

 

first, install the ceph rgw package:

 

[root@ceph-mon ~]# ceph-deploy install --rgw ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy install --rgw ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f33f0221320>

 

… long list of package install output

….

[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Dependency Installed:
[ceph-mon][DEBUG ] mailcap.noarch 0:2.1.41-2.el7
[ceph-mon][DEBUG ]
[ceph-mon][DEBUG ] Complete!
[ceph-mon][INFO ] Running command: ceph --version
[ceph-mon][DEBUG ] ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
[root@ceph-mon ~]#

 

 

check which package is installed with

 

[root@ceph-mon ~]# rpm -q ceph-radosgw
ceph-radosgw-13.2.10-0.el7.x86_64
[root@ceph-mon ~]#

 

next do:

 

[root@ceph-mon ~]# ceph-deploy rgw create ceph-mon
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (2.0.1): /usr/bin/ceph-deploy rgw create ceph-mon
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] rgw : [(‘ceph-mon’, ‘rgw.ceph-mon’)]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f3bc2dd9e18>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function rgw at 0x7f3bc38a62a8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts ceph-mon:rgw.ceph-mon
[ceph-mon][DEBUG ] connected to host: ceph-mon
[ceph-mon][DEBUG ] detect platform information from remote host
[ceph-mon][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to ceph-mon
[ceph-mon][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph-mon][DEBUG ] create path recursively if it doesn’t exist
[ceph-mon][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.ceph-mon osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.ceph-mon/keyring
[ceph-mon][INFO ] Running command: systemctl enable ceph-radosgw@rgw.ceph-mon
[ceph-mon][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/ceph-radosgw@rgw.ceph-mon.service to /usr/lib/systemd/system/ceph-radosgw@.service.
[ceph-mon][INFO ] Running command: systemctl start ceph-radosgw@rgw.ceph-mon
[ceph-mon][INFO ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host ceph-mon and default port 7480
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# systemctl status ceph-radosgw@rgw.ceph-mon
● ceph-radosgw@rgw.ceph-mon.service – Ceph rados gateway
Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw@.service; enabled; vendor preset: disabled)
Active: active (running) since Mi 2021-05-05 21:54:57 CEST; 531ms ago
Main PID: 7041 (radosgw)
CGroup: /system.slice/system-ceph\x2dradosgw.slice/ceph-radosgw@rgw.ceph-mon.service
└─7041 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-mon --setuser ceph --setgroup ceph

Mai 05 21:54:57 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service holdoff time over, scheduling restart.
Mai 05 21:54:57 ceph-mon systemd[1]: Stopped Ceph rados gateway.
Mai 05 21:54:57 ceph-mon systemd[1]: Started Ceph rados gateway.
[root@ceph-mon ~]#

 

but then stops:

 

[root@ceph-mon ~]# systemctl status ceph-radosgw@rgw.ceph-mon
● ceph-radosgw@rgw.ceph-mon.service – Ceph rados gateway
Loaded: loaded (/usr/lib/systemd/system/ceph-radosgw@.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Mi 2021-05-05 21:55:01 CEST; 16s ago
Process: 7143 ExecStart=/usr/bin/radosgw -f --cluster ${CLUSTER} --name client.%i --setuser ceph --setgroup ceph (code=exited, status=5)
Main PID: 7143 (code=exited, status=5)

 

Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service: main process exited, code=exited, status=5/NOTINSTALLED
Mai 05 21:55:01 ceph-mon systemd[1]: Unit ceph-radosgw@rgw.ceph-mon.service entered failed state.
Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service failed.
Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service holdoff time over, scheduling restart.
Mai 05 21:55:01 ceph-mon systemd[1]: Stopped Ceph rados gateway.
Mai 05 21:55:01 ceph-mon systemd[1]: start request repeated too quickly for ceph-radosgw@rgw.ceph-mon.service
Mai 05 21:55:01 ceph-mon systemd[1]: Failed to start Ceph rados gateway.
Mai 05 21:55:01 ceph-mon systemd[1]: Unit ceph-radosgw@rgw.ceph-mon.service entered failed state.
Mai 05 21:55:01 ceph-mon systemd[1]: ceph-radosgw@rgw.ceph-mon.service failed.
[root@ceph-mon ~]#

 

 

Why does it fail? Running the daemon manually shows the error:

 

[root@ceph-mon ~]# /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-mon --setuser ceph --setgroup ceph
2021-05-05 22:45:41.994 7fc9e6388440 -1 Couldn't init storage provider (RADOS)
[root@ceph-mon ~]#

 

[root@ceph-mon ceph]# radosgw-admin user create --uid=cephuser --key-type=s3 --access-key cephuser --secret-key cephuser --display-name="cephuser"
2021-05-05 22:13:54.255 7ff4152ec240 0 rgw_init_ioctx ERROR: librados::Rados::pool_create returned (34) Numerical result out of range (this can be due to a pool or placement group misconfiguration, e.g. pg_num < pgp_num or mon_max_pg_per_osd exceeded)
2021-05-05 22:13:54.255 7ff4152ec240 0 failed reading realm info: ret -34 (34) Numerical result out of range
couldn't init storage provider
[root@ceph-mon ceph]#
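The error means the gateway cannot create its default RGW pools because the per-OSD placement-group limit would be exceeded on this small cluster. A rough troubleshooting sketch (the limit of 300 is only an assumption for this three-OSD lab – free up PGs by deleting unused pools instead if possible):

ceph osd pool ls detail
ceph tell mon.* injectargs --mon_max_pg_per_osd=300
systemctl restart ceph-radosgw@rgw.ceph-mon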

 

 

Continue Reading

LPIC3 DIPLOMA Linux Clustering – LAB NOTES: Lesson Ceph Centos7 – Ceph RBD Block Devices

LAB on Ceph Clustering on Centos7

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

This lab uses the ceph-deploy tool to set up the ceph cluster.  However, note that ceph-deploy is now an outdated Ceph tool and is no longer being maintained by the Ceph project. It is also not available for Centos8. The notes below relate to Centos7.

 

For OS versions of Centos higher than 7 the Ceph project advise you to use the cephadm tool for installing ceph on cluster nodes. 

 

At the time of writing (2021) knowledge of ceph-deploy is a stipulated syllabus requirement of the LPIC3-306 Clustering Diploma Exam, hence this Centos7 Ceph lab refers to ceph-deploy.

 

 

As Ceph is a large and complex subject, these notes have been split into several different pages.

 

 

Overview of Cluster Environment 

 

 

The cluster comprises three nodes installed with Centos7 and housed on a KVM virtual machine system on a Linux Ubuntu host. We are installing with Centos7 rather than a more recent version because later versions are not compatible with the ceph-deploy tool.

 

 

Ceph RBD Block Devices

 

 

You must create a pool before you can create RBD images in it.

 

[root@ceph-mon ~]# ceph osd pool create rbdpool 128 128
Error ERANGE: pg_num 128 size 2 would mean 768 total pgs, which exceeds max 750 (mon_max_pg_per_osd 250 * num_in_osds 3)
[root@ceph-mon ~]# ceph osd pool create rbdpool 64 64
pool 'rbdpool' created
[root@ceph-mon ~]# ceph osd lspools
4 cephfs_data
5 cephfs_metadata
6 rbdpool
[root@ceph-mon ~]# rbd -p rbdpool create rbimage --size 5120
[root@ceph-mon ~]# rbd ls rbdpool
rbimage
[root@ceph-mon ~]# rbd feature disable rbdpool/rbdimage object-map fast-diff deep-flatten
rbd: error opening image rbdimage: (2) No such file or directory
[root@ceph-mon ~]#

[root@ceph-mon ~]#
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd feature disable rbdpool/rbimage object-map fast-diff deep-flatten
[root@ceph-mon ~]# rbd map rbdpool/rbimage --id admin
/dev/rbd0
[root@ceph-mon ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 9G 0 part
├─centos-root 253:0 0 8G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
rbd0 251:0 0 5G 0 disk
[root@ceph-mon ~]#

[root@ceph-mon ~]# rbd showmapped
id pool image snap device
0 rbdpool rbimage – /dev/rbd0
[root@ceph-mon ~]# rbd --image rbimage -p rbdpool info
rbd image 'rbimage':
size 5 GiB in 1280 objects
order 22 (4 MiB objects)
id: d3956b8b4567
block_name_prefix: rbd_data.d3956b8b4567
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Wed May 5 15:32:48 2021
[root@ceph-mon ~]#

 

 

 

to remove an image:

 

rbd rm {pool-name}/{image-name}

[root@ceph-mon ~]# rbd rm rbdpool/rbimage
Removing image: 100% complete…done.
[root@ceph-mon ~]# rbd rm rbdpool/image
Removing image: 100% complete…done.
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd ls rbdpool
[root@ceph-mon ~]#

 

 

To create an image

 

rbd create --size {megabytes} {pool-name}/{image-name}

 

[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd create --size 2048 rbdpool/rbdimage
[root@ceph-mon ~]# rbd ls rbdpool
rbdimage
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd ls rbdpool
rbdimage
[root@ceph-mon ~]#

[root@ceph-mon ~]# rbd feature disable rbdpool/rbdimage object-map fast-diff deep-flatten
[root@ceph-mon ~]# rbd map rbdpool/rbdimage --id admin
/dev/rbd0
[root@ceph-mon ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 10G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 9G 0 part
├─centos-root 253:0 0 8G 0 lvm /
└─centos-swap 253:1 0 1G 0 lvm [SWAP]
rbd0 251:0 0 2G 0 disk
[root@ceph-mon ~]# rbd showmapped
id pool image snap device
0 rbdpool rbdimage – /dev/rbd0
[root@ceph-mon ~]#

[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd --image rbdimage -p rbdpool info
rbd image 'rbdimage':
size 2 GiB in 512 objects
order 22 (4 MiB objects)
id: fab06b8b4567
block_name_prefix: rbd_data.fab06b8b4567
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Wed May 5 16:24:08 2021
[root@ceph-mon ~]#
[root@ceph-mon ~]#
[root@ceph-mon ~]# rbd --image rbdimage -p rbdpool info
rbd image 'rbdimage':
size 2 GiB in 512 objects
order 22 (4 MiB objects)
id: fab06b8b4567
block_name_prefix: rbd_data.fab06b8b4567
format: 2
features: layering, exclusive-lock
op_features:
flags:
create_timestamp: Wed May 5 16:24:08 2021
[root@ceph-mon ~]# rbd showmapped
id pool image snap device
0 rbdpool rbdimage – /dev/rbd0
[root@ceph-mon ~]# mkfs.xfs /dev/rbd0
Discarding blocks…Done.
meta-data=/dev/rbd0 isize=512 agcount=8, agsize=65536 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=524288, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@ceph-mon ~]#

 

[root@ceph-mon mnt]# mkdir /mnt/rbd
[root@ceph-mon mnt]# mount /dev/rbd0 /mnt/rbd
[root@ceph-mon mnt]# df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 753596 0 753596 0% /dev
tmpfs 765380 0 765380 0% /dev/shm
tmpfs 765380 8844 756536 2% /run
tmpfs 765380 0 765380 0% /sys/fs/cgroup
/dev/mapper/centos-root 8374272 2441472 5932800 30% /
/dev/vda1 1038336 175296 863040 17% /boot
tmpfs 153076 0 153076 0% /run/user/0
/dev/rbd0 2086912 33184 2053728 2% /mnt/rbd
[root@ceph-mon mnt]#

 

 

 

How to resize an rbd image

eg to 10GB.

rbd resize --size 10000 mypool/myimage

Resizing image: 100% complete…done.

Grow the file system to fill up the new size of the device.

xfs_growfs /mnt
[…]
data blocks changed from 2097152 to 2560000

 

Creating rbd snapshots

An RBD snapshot is a snapshot of a RADOS Block Device image. An rbd snapshot creates a history of the image’s state.

It is important to stop input and output operations and flush all pending writes before creating a snapshot of an rbd image.

If the image contains a file system, the file system must be in a consistent state before creating the snapshot.

rbd --pool pool-name snap create --snap snap-name image-name

rbd snap create pool-name/image-name@snap-name

eg

rbd --pool rbd snap create --snap snapshot1 image1
rbd snap create rbd/image1@snapshot1
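As a minimal sketch of quiescing a filesystem around a snapshot (using the /mnt/rbd mountpoint and the rbdpool/rbdimage image from above; the snapshot name snap1 is just an example):

fsfreeze --freeze /mnt/rbd
rbd snap create rbdpool/rbdimage@snap1
fsfreeze --unfreeze /mnt/rbd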

 

To list snapshots of an image, specify the pool name and the image name.

rbd --pool pool-name snap ls image-name
rbd snap ls pool-name/image-name

eg

rbd --pool rbd snap ls image1
rbd snap ls rbd/image1

 

How to rollback to a snapshot

To rollback to a snapshot with rbd, specify the snap rollback option, the pool name, the image name, and the snapshot name.

rbd --pool pool-name snap rollback --snap snap-name image-name
rbd snap rollback pool-name/image-name@snap-name

eg

rbd --pool pool1 snap rollback --snap snapshot1 image1
rbd snap rollback pool1/image1@snapshot1

IMPORTANT NOTE:

Note that it is faster to clone from a snapshot than to roll back an image to a snapshot; cloning is therefore the preferred method of returning to a pre-existing state.

 

To delete a snapshot

To delete a snapshot with rbd, specify the snap rm option, the pool name, the image name, and the snapshot name.

rbd --pool pool-name snap rm --snap snap-name image-name
rbd snap rm pool-name/image-name@snap-name

eg

rbd --pool pool1 snap rm --snap snapshot1 image1
rbd snap rm pool1/image1@snapshot1

Note also that Ceph OSDs delete data asynchronously, so deleting a snapshot will not free the disk space straight away.

To delete or purge all snapshots

To delete all snapshots for an image with rbd, specify the snap purge option and the image name.

rbd --pool pool-name snap purge image-name
rbd snap purge pool-name/image-name

eg

rbd --pool pool1 snap purge image1
rbd snap purge pool1/image1

 

Important when cloning!

Note that clones access the parent snapshots. This means all clones will break if a user deletes the parent snapshot. To prevent this happening, you must protect the snapshot before you can clone it.

 

do this by:

 

rbd --pool pool-name snap protect --image image-name --snap snapshot-name
rbd snap protect pool-name/image-name@snapshot-name

 

eg

 

rbd --pool pool1 snap protect --image image1 --snap snapshot1
rbd snap protect pool1/image1@snapshot1

 

Note that you cannot delete a protected snapshot.

How to clone a snapshot

To clone a snapshot, you must specify the parent pool, image, snapshot, the child pool, and the image name.

 

You must also protect the snapshot before you can clone it.

 

rbd clone --pool pool-name --image parent-image --snap snap-name --dest-pool pool-name --dest child-image

rbd clone pool-name/parent-image@snap-name pool-name/child-image-name

eg

 

rbd clone pool1/image1@snapshot1 pool1/image2

 

 

To delete a snapshot, you must unprotect it first.

 

However, you cannot delete snapshots that have references from clones unless you first “flatten” each clone of a snapshot.
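A brief sketch of flattening a clone (using the clone created in the example above) so that its parent snapshot can later be unprotected and deleted:

rbd flatten pool1/image2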

 

rbd --pool pool-name snap unprotect --image image-name --snap snapshot-name
rbd snap unprotect pool-name/image-name@snapshot-name

 

eg

rbd --pool pool1 snap unprotect --image image1 --snap snapshot1
rbd snap unprotect pool1/image1@snapshot1

 

 

To list the children of a snapshot

 

rbd --pool pool-name children --image image-name --snap snap-name

 

eg

 

rbd --pool pool1 children --image image1 --snap snapshot1
rbd children pool1/image1@snapshot1

 

 

Continue Reading

LPIC3 DIPLOMA Linux Clustering – LAB NOTES: Lesson Ceph Centos7 – Pools & Placement Groups

LAB on Ceph Clustering on Centos7

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

This lab uses the ceph-deploy tool to set up the ceph cluster.  However, note that ceph-deploy is now an outdated Ceph tool and is no longer being maintained by the Ceph project. It is also not available for Centos8. The notes below relate to Centos7.

 

For OS versions of Centos higher than 7 the Ceph project advise you to use the cephadm tool for installing ceph on cluster nodes. 

 

At the time of writing (2021) knowledge of ceph-deploy is a stipulated syllabus requirement of the LPIC3-306 Clustering Diploma Exam, hence this Centos7 Ceph lab refers to ceph-deploy.

 

 

As Ceph is a large and complex subject, these notes have been split into several different pages.

 

 

Overview of Cluster Environment 

 

 

The cluster comprises three nodes installed with Centos7 and housed on a KVM virtual machine system on a Linux Ubuntu host. We are installing with Centos7 rather than a more recent version because later versions are not compatible with the ceph-deploy tool.

 

Create a Storage Pool

 

 

To create a pool:

 

ceph osd pool create datapool 1

 

[root@ceph-mon ~]# ceph osd pool create datapool 1
pool ‘datapool’ created
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph osd pool create datapool 1
pool ‘datapool’ created
[root@ceph-mon ~]# ceph osd lspools
1 datapool
[root@ceph-mon ~]# ceph df
GLOBAL:
SIZE AVAIL RAW USED %RAW USED
6.0 GiB 3.0 GiB 3.0 GiB 50.30
POOLS:
NAME ID USED %USED MAX AVAIL OBJECTS
datapool 1 0 B 0 1.8 GiB 0
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# ceph health detail
HEALTH_WARN application not enabled on 1 pool(s)
POOL_APP_NOT_ENABLED application not enabled on 1 pool(s)
application not enabled on pool ‘datapool’
use ‘ceph osd pool application enable <pool-name> <app-name>’, where <app-name> is ‘cephfs’, ‘rbd’, ‘rgw’, or freeform for custom applications.
[root@ceph-mon ~]#

 

so we need to enable the pool:

 

[root@ceph-mon ~]# ceph osd pool application enable datapool rbd
enabled application ‘rbd’ on pool ‘datapool’
[root@ceph-mon ~]#

[root@ceph-mon ~]# ceph health detail
HEALTH_OK
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
osd: 4 osds: 3 up, 3 in

data:
pools: 1 pools, 1 pgs
objects: 1 objects, 10 B
usage: 3.0 GiB used, 3.0 GiB / 6.0 GiB avail
pgs: 1 active+clean

[root@ceph-mon ~]#

 

 

 

How To Check All Ceph Services Are Running

 

Use 

 

ceph -s 

 

 

 

 

 

or alternatively:

 

 

[root@ceph-mon ~]# systemctl status ceph\*.service
● ceph-mon@ceph-mon.service – Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: active (running) since Di 2021-04-27 11:47:36 CEST; 6h ago
Main PID: 989 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@ceph-mon.service
└─989 /usr/bin/ceph-mon -f –cluster ceph –id ceph-mon –setuser ceph –setgroup ceph

 

Apr 27 11:47:36 ceph-mon systemd[1]: Started Ceph cluster monitor daemon.

 

● ceph-mgr@ceph-mon.service – Ceph cluster manager daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mgr@.service; enabled; vendor preset: disabled)
Active: active (running) since Di 2021-04-27 11:47:36 CEST; 6h ago
Main PID: 992 (ceph-mgr)
CGroup: /system.slice/system-ceph\x2dmgr.slice/ceph-mgr@ceph-mon.service
└─992 /usr/bin/ceph-mgr -f –cluster ceph –id ceph-mon –setuser ceph –setgroup ceph

 

Apr 27 11:47:36 ceph-mon systemd[1]: Started Ceph cluster manager daemon.
Apr 27 11:47:41 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:41 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root
Apr 27 11:47:46 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:46 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root
Apr 27 11:47:51 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:51 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root
Apr 27 11:47:56 ceph-mon ceph-mgr[992]: ignoring –setuser ceph since I am not root
Apr 27 11:47:56 ceph-mon ceph-mgr[992]: ignoring –setgroup ceph since I am not root

 

● ceph-crash.service – Ceph crash dump collector
Loaded: loaded (/usr/lib/systemd/system/ceph-crash.service; enabled; vendor preset: enabled)
Active: active (running) since Di 2021-04-27 11:47:34 CEST; 6h ago
Main PID: 695 (ceph-crash)
CGroup: /system.slice/ceph-crash.service
└─695 /usr/bin/python2.7 /usr/bin/ceph-crash

 

Apr 27 11:47:34 ceph-mon systemd[1]: Started Ceph crash dump collector.
Apr 27 11:47:34 ceph-mon ceph-crash[695]: INFO:__main__:monitoring path /var/lib/ceph/crash, delay 600s
[root@ceph-mon ~]#

 

 

Object Manipulation

 

 

To create an object and upload a file into that object:

 

Example:

 

echo "test data" > testfile
rados put -p datapool testfile testfile
rados -p datapool ls
testfile

 

To set a key/value pair to that object:

 

rados -p datapool setomapval testfile mykey myvalue
rados -p datapool getomapval testfile mykey
(length 7) : 0000 : 6d 79 76 61 6c 75 65 : myvalue

 

To download the file:

 

rados get -p datapool testfile testfile2
md5sum testfile testfile2
39a870a194a787550b6b5d1f49629236 testfile
39a870a194a787550b6b5d1f49629236 testfile2

 

 

 

[root@ceph-mon ~]# echo "test data" > testfile
[root@ceph-mon ~]# rados put -p datapool testfile testfile
[root@ceph-mon ~]# rados -p datapool ls
testfile
[root@ceph-mon ~]# rados -p datapool setomapval testfile mykey myvalue
[root@ceph-mon ~]# rados -p datapool getomapval testfile mykey
value (7 bytes) :
00000000 6d 79 76 61 6c 75 65 |myvalue|
00000007

 

[root@ceph-mon ~]# rados get -p datapool testfile testfile2
[root@ceph-mon ~]# md5sum testfile testfile2
39a870a194a787550b6b5d1f49629236 testfile
39a870a194a787550b6b5d1f49629236 testfile2
[root@ceph-mon ~]#

 

 

How To Check If Your Datastore is BlueStore or FileStore

 

[root@ceph-mon ~]# ceph osd metadata 0 | grep -e id -e hostname -e osd_objectstore
"id": 0,
"hostname": "ceph-osd0",
"osd_objectstore": "bluestore",
[root@ceph-mon ~]# ceph osd metadata 1 | grep -e id -e hostname -e osd_objectstore
"id": 1,
"hostname": "ceph-osd1",
"osd_objectstore": "bluestore",
[root@ceph-mon ~]# ceph osd metadata 2 | grep -e id -e hostname -e osd_objectstore
"id": 2,
"hostname": "ceph-osd2",
"osd_objectstore": "bluestore",
[root@ceph-mon ~]#

 

 

You can also display a large amount of information with this command:

 

[root@ceph-mon ~]# ceph osd metadata 2
{
"id": 2,
"arch": "x86_64",
"back_addr": "10.0.9.12:6801/1138",
"back_iface": "eth1",
"bluefs": "1",
"bluefs_single_shared_device": "1",
"bluestore_bdev_access_mode": "blk",
"bluestore_bdev_block_size": "4096",
"bluestore_bdev_dev": "253:2",
"bluestore_bdev_dev_node": "dm-2",
"bluestore_bdev_driver": "KernelDevice",
"bluestore_bdev_model": "",
"bluestore_bdev_partition_path": "/dev/dm-2",
"bluestore_bdev_rotational": "1",
"bluestore_bdev_size": "2143289344",
"bluestore_bdev_type": "hdd",
"ceph_release": "mimic",
"ceph_version": "ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)",
"ceph_version_short": "13.2.10",
"cpu": "AMD EPYC-Rome Processor",
"default_device_class": "hdd",
"devices": "dm-2,sda",
"distro": "centos",
"distro_description": "CentOS Linux 7 (Core)",
"distro_version": "7",
"front_addr": "10.0.9.12:6800/1138",
"front_iface": "eth1",
"hb_back_addr": "10.0.9.12:6802/1138",
"hb_front_addr": "10.0.9.12:6803/1138",
"hostname": "ceph-osd2",
"journal_rotational": "1",
"kernel_description": "#1 SMP Thu Apr 8 19:51:47 UTC 2021",
"kernel_version": "3.10.0-1160.24.1.el7.x86_64",
"mem_swap_kb": "1048572",
"mem_total_kb": "1530760",
"os": "Linux",
"osd_data": "/var/lib/ceph/osd/ceph-2",
"osd_objectstore": "bluestore",
"rotational": "1"
}
[root@ceph-mon ~]#

 

or you can use:

 

[root@ceph-mon ~]# ceph osd metadata osd.0 | grep osd_objectstore
"osd_objectstore": "bluestore",
[root@ceph-mon ~]#

 

 

Which Version of Ceph Is Your Cluster Running?

 

[root@ceph-mon ~]# ceph -v
ceph version 13.2.10 (564bdc4ae87418a232fc901524470e1a0f76d641) mimic (stable)
[root@ceph-mon ~]#

 

 

How To List Your Cluster Pools

 

To list your cluster pools, execute:

 

ceph osd lspools

 

[root@ceph-mon ~]# ceph osd lspools
1 datapool
[root@ceph-mon ~]#

 

 

Placement Groups PG Information

 

To display the number of placement groups in a pool:

 

ceph osd pool get {pool-name} pg_num
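For example, for the datapool created above (a quick sketch – pgp_num can be read the same way):

ceph osd pool get datapool pg_num
ceph osd pool get datapool pgp_num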

 

 

To display statistics for the placement groups in the cluster:

 

ceph pg dump [–format {format}]

 

To display pool statistics:

 

[root@ceph-mon ~]# rados df
POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
datapool 10 B 1 0 2 0 0 0 2 2 KiB 2 2 KiB

 

total_objects 1
total_used 3.0 GiB
total_avail 3.0 GiB
total_space 6.0 GiB
[root@ceph-mon ~]#

 

 

How To Repair a Placement Group PG

 

Ascertain with ceph -s which PG has a problem

 

To identify stuck placement groups:

 

ceph pg dump_stuck [unclean|inactive|stale|undersized|degraded]

 

Then do:

 

ceph pg repair <PG ID>

For more info on troubleshooting PGs see https://documentation.suse.com/ses/7/html/ses-all/bp-troubleshooting-pgs.html
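Putting these steps together, a minimal repair workflow sketch (the PG ID 1.0 below is only a placeholder – use the ID reported by the health output):

ceph health detail
ceph pg dump_stuck unclean
ceph pg repair 1.0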

 

 

How To Activate Ceph Dashboard

 

The Ceph Dashboard runs without Apache or any other separate webserver being active; the functionality is provided by the Ceph manager itself.

 

All HTTP connections to the Ceph dashboard use SSL/TLS by default.

 

For testing lab purposes you can simply generate and install a self-signed certificate as follows:

 

ceph dashboard create-self-signed-cert

 

However, in production environments this is unsuitable, since web browsers will object to self-signed certificates and require explicit confirmation from the user before opening a connection to the Ceph dashboard. Using a certificate signed by a certificate authority (CA) avoids this warning.

 

You can use your own certificate authority to ensure the certificate warning does not appear.

 

For example by doing:

 

$ openssl req -new -nodes -x509 -subj "/O=IT/CN=ceph-mgr-dashboard" -days 3650 -keyout dashboard.key -out dashboard.crt -extensions v3_ca

 

The generated dashboard.crt file then needs to be signed by a CA. Once signed, it can then be enabled for all Ceph manager instances as follows:

 

ceph config-key set mgr mgr/dashboard/crt -i dashboard.crt

 

After changing the SSL certificate and key you must restart the Ceph manager processes manually. Either by:

 

ceph mgr fail mgr

 

or by disabling and re-enabling the dashboard module:

 

ceph mgr module disable dashboard
ceph mgr module enable dashboard

 

By default, the ceph-mgr daemon that runs the dashboard (i.e., the currently active manager) binds to TCP port 8443 (or 8080 if SSL is disabled).

 

You can change these ports by doing:

ceph config set mgr mgr/dashboard/server_addr $IP
ceph config set mgr mgr/dashboard/server_port $PORT

 

For the purposes of this lab I did:

 

[root@ceph-mon ~]# ceph mgr module enable dashboard
[root@ceph-mon ~]# ceph dashboard create-self-signed-cert
Self-signed certificate created
[root@ceph-mon ~]#

 

Dashboard enabling can be automated by adding the following to ceph.conf:

 

[mon]
mgr initial modules = dashboard

 

 

 

[root@ceph-mon ~]# ceph mgr module ls | grep -A 5 enabled_modules
"enabled_modules": [
"balancer",
"crash",
"dashboard",
"iostat",
"restful",
[root@ceph-mon ~]#

 

Check that SSL is installed correctly. You should see the key and certificate displayed in the output of these commands:

 

 

ceph config-key get mgr/dashboard/key
ceph config-key get mgr/dashboard/crt

 

The following command does not work on Centos7 with this Ceph Mimic version, as the full functionality was not implemented by the Ceph project for this release:

 

 

ceph dashboard ac-user-create admin password administrator

 

 

Use this command instead:

 

 

[root@ceph-mon etc]# ceph dashboard set-login-credentials cephuser <password not shown here>
Username and password updated
[root@ceph-mon etc]#

 

Also make sure you have the respective firewall ports open for the dashboard, i.e. 8443 for SSL/TLS https (or 8080 for http – the latter however is not advisable due to the insecure unencrypted connection and the risk of password interception).
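On the Centos7 lab node this can be done with firewalld, for example (a sketch assuming firewalld is active and the default SSL port 8443 is used):

firewall-cmd --permanent --add-port=8443/tcp
firewall-cmd --reload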

 

 

Logging in to the Ceph Dashboard

 

To log in, open the dashboard URL in your browser.

 

 

To display the current URL and port for the Ceph dashboard, do:

 

[root@ceph-mon ~]# ceph mgr services
{
“dashboard”: “https://ceph-mon:8443/”
}
[root@ceph-mon ~]#

 

and enter the user name and password you set as above.

 

 

Pools and Placement Groups In More Detail

 

Remember that pools are not PGs. PGs go inside pools.

 

To create a pool:

 

 

ceph osd pool create <pool name> <PG_NUM> <PGP_NUM>

 

PG_NUM
This holds the number of placement groups for the pool.

 

PGP_NUM
This is the effective number of placement groups to be used to calculate data placement. It must be equal to or less than PG_NUM.

 

Pools by default are replicated.

 

There are two kinds:

 

replicated

 

erasure coding EC

 

For replicated pools you set the number of data copies, or replicas, that each data object will have. The number of copies that can be lost will be one less than the number of replicas.

 

For EC it is more complicated.

 

you have

 

k : number of data chunks
m : number of coding chunks
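As a sketch of how k and m are used in practice (the profile and pool names are examples, and the small values suit this three-OSD lab): first create an erasure-code profile, then create an EC pool that uses it:

ceph osd erasure-code-profile set myecprofile k=2 m=1
ceph osd pool create ecpool 32 32 erasure myecprofile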

 

 

Pools have to be associated with an application. Pools to be used with CephFS, or pools automatically created by Object Gateway are automatically associated with cephfs or rgw respectively.

 

For CephFS the associated application name is cephfs,
for RADOS Block Device it is rbd,
and for Object Gateway it is rgw.

 

Otherwise, the format to associate a free-form application name with a pool is:

 

ceph osd pool application enable POOL_NAME APPLICATION_NAME

To see which applications a pool is associated with use:

 

ceph osd pool application get pool_name

 

 

To set pool quotas for the maximum number of bytes and/or the maximum number of objects permitted per pool:

 

ceph osd pool set-quota POOL_NAME [max_objects OBJ_COUNT] [max_bytes BYTES]

 

eg

 

ceph osd pool set-quota data max_objects 20000

 

To set the number of object replicas on a replicated pool use:

 

ceph osd pool set poolname size num-replicas

 

Important:
The num-replicas value includes the object itself. So if you want the object and two replica copies of the object, for a total of three instances, you need to specify 3. You should not set this value to anything less than 3! Also bear in mind that each additional replica increases raw storage consumption – four replicas use a third more raw space than three – while giving diminishing returns in reliability.
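For example, on the lab pool (a sketch – three replicas with a minimum of two writable copies is a common choice):

ceph osd pool set datapool size 3
ceph osd pool set datapool min_size 2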

 

To display the number of object replicas, use:

 

ceph osd dump | grep 'replicated size'

 

 

If you want to remove a quota, set this value to 0.

 

To set pool values, use:

 

ceph osd pool set POOL_NAME KEY VALUE

 

To display a pool’s stats use:

 

rados df

 

To list all values related to a specific pool use:

 

ceph osd pool get POOL_NAME all

 

You can also display specific pool values as follows:

 

ceph osd pool get POOL_NAME KEY

 

In particular:

 

PG_NUM
This holds the number of placement groups for the pool.

 

PGP_NUM
This is the effective number of placement groups to be used to calculate data placement. It must be equal to or less than PG_NUM.

 

Pool Created:

 

[root@ceph-mon ~]# ceph osd pool create datapool 128 128 replicated
pool ‘datapool’ created
[root@ceph-mon ~]# ceph -s
cluster:
id: 2e490f0d-41dc-4be2-b31f-c77627348d60
health: HEALTH_OK

services:
mon: 1 daemons, quorum ceph-mon
mgr: ceph-mon(active)
osd: 4 osds: 3 up, 3 in

data:
pools: 1 pools, 128 pgs
objects: 0 objects, 0 B
usage: 3.2 GiB used, 2.8 GiB / 6.0 GiB avail
pgs: 34.375% pgs unknown
84 active+clean
44 unknown

[root@ceph-mon ~]#

 

To Remove a Pool

 

two ways, ie two different commands can be used:

 

[root@ceph-mon ~]# rados rmpool datapool --yes-i-really-really-mean-it
WARNING:
This will PERMANENTLY DESTROY an entire pool of objects with no way back.
To confirm, pass the pool to remove twice, followed by
--yes-i-really-really-mean-it

 

[root@ceph-mon ~]# ceph osd pool delete datapool --yes-i-really-really-mean-it
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool datapool. If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.

[root@ceph-mon ~]# ceph osd pool delete datapool datapool --yes-i-really-really-mean-it
Error EPERM: pool deletion is disabled; you must first set the mon_allow_pool_delete config option to true before you can destroy a pool
[root@ceph-mon ~]#

 

 

You have to set the mon_allow_pool_delete option first to true

 

first get the value of

 

ceph osd pool get pool_name nodelete

 

[root@ceph-mon ~]# ceph osd pool get datapool nodelete
nodelete: false
[root@ceph-mon ~]#

 

Because inadvertent pool deletion is a real danger, Ceph implements two mechanisms that prevent pools from being deleted. Both mechanisms must be disabled before a pool can be deleted.

 

The first mechanism is the NODELETE flag. Each pool has this flag, and its default value is ‘false’. To find out the value of this flag on a pool, run the following command:

 

ceph osd pool get pool_name nodelete

If it outputs nodelete: true, it is not possible to delete the pool until you change the flag using the following command:

 

ceph osd pool set pool_name nodelete false

 

 

The second mechanism is the cluster-wide configuration parameter mon allow pool delete, which defaults to ‘false’. This means that, by default, it is not possible to delete a pool. The error message displayed is:

 

Error EPERM: pool deletion is disabled; you must first set the
mon_allow_pool_delete config option to true before you can destroy a pool

 

To delete the pool despite this safety setting, you can temporarily set the value of mon_allow_pool_delete to 'true', then delete the pool, and afterwards reset the value back to 'false':

 

ceph tell mon.* injectargs --mon-allow-pool-delete=true
ceph osd pool delete pool_name pool_name --yes-i-really-really-mean-it
ceph tell mon.* injectargs --mon-allow-pool-delete=false

 

 

[root@ceph-mon ~]# ceph tell mon.* injectargs --mon-allow-pool-delete=true
injectargs:
[root@ceph-mon ~]#

 

 

[root@ceph-mon ~]# ceph osd pool delete datapool --yes-i-really-really-mean-it
Error EPERM: WARNING: this will *PERMANENTLY DESTROY* all data stored in pool datapool. If you are *ABSOLUTELY CERTAIN* that is what you want, pass the pool name *twice*, followed by --yes-i-really-really-mean-it.
[root@ceph-mon ~]# ceph osd pool delete datapool datapool --yes-i-really-really-mean-it
pool 'datapool' removed
[root@ceph-mon ~]#

 

[root@ceph-mon ~]# ceph tell mon.* injectargs --mon-allow-pool-delete=false
injectargs:mon_allow_pool_delete = 'false'
[root@ceph-mon ~]#

 

NOTE: The injectargs command displays the following to confirm the change was applied; this is NOT an error:

 

injectargs:mon_allow_pool_delete = 'true' (not observed, change may require restart)

 

 

 

Continue Reading

LPIC3 DIPLOMA Linux Clustering – LAB NOTES: GlusterFS Configuration on Ubuntu

LAB for installing and configuring GlusterFS on Ubuntu

 

These are my notes made during my lab practical as part of my LPIC3 Diploma course in Linux Clustering. They are in “rough format”, presented as they were written.

 

 

Overview

 

The cluster comprises three nodes (ubuntu31, ubuntu32, ubuntu33) installed with Ubuntu Version 20 LTS and housed on a KVM virtual machine system on a Linux Ubuntu host.

 

each node has a 1 GB SCSI disk called /dev/sda (the root system disk is /dev/vda); these disks will provide
 
brick1
brick2
brick3
 
respectively (these are NOT host definitions, just gluster identities)
 

on each machine:
 

wget -O- https://download.gluster.org/pub/gluster/glusterfs/3.12/rsa.pub | apt-key add -
sudo add-apt-repository ppa:gluster/glusterfs-3.12
apt install glusterfs-server -y
systemctl start glusterd
systemctl enable glusterd

 
 
Create a trusted pool. This is done on ubuntu31 with the command:
 
gluster peer probe ubuntu32
 
You should immediately see peer probe: success.

 

root@ubuntu31:/home/kevin# gluster peer probe ubuntu32
 

You can check the status of peers with the command:
 
gluster peer status

 

We want the trusted pool to include all three nodes. So we do:

 
root@ubuntu31:/home/kevin# gluster peer probe ubuntu32
peer probe: success.
root@ubuntu31:/home/kevin# gluster peer probe ubuntu33
peer probe: success.
root@ubuntu31:/home/kevin# gluster peer status
Number of Peers: 2
 
Hostname: ubuntu32
Uuid: 6b4ca918-e77c-40d9-821c-e24fe7130afa
State: Peer in Cluster (Connected)
 
Hostname: ubuntu33
Uuid: e3b02490-9a14-45a3-ad0d-fcc66dd1c731
State: Peer in Cluster (Connected)
root@ubuntu31:/home/kevin#

 

Add the disk for the gluster storage on each machine:

 
Disk /dev/sda: 1 GiB, 1073741824 bytes, 2097152 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xffb101f9
 
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 2097151 2095104 1023M 83 Linux
 
NOTE: on these ubuntu cluster nodes the root system partition is on /dev/vda – hence the next free scsi disk is sda!

 
Format and mount the bricks
 
Perform this step on all the nodes
 
Note: We are going to use the XFS filesystem for the backend bricks.
 
But Gluster is designed to work on top of any filesystem that supports extended attributes.
 
The following examples assume that the brick will be residing on /dev/sda1.

 

mkfs.xfs -i size=512 /dev/sda1
mkdir -p /gluster
echo '/dev/sda1 /gluster xfs defaults 1 2' >> /etc/fstab ; mount -a && mount
 

You should now see sda1 mounted at /gluster

 

root@ubuntu31:/home/kevin# mkfs.xfs -i size=512 /dev/sda1
meta-data=/dev/sda1 isize=512 agcount=4, agsize=65472 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1
data = bsize=4096 blocks=261888, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=1566, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
root@ubuntu31:/home/kevin#

 

do the same on the other two nodes, using
 
the same /gluster mountpoint on each:
 
echo '/dev/sda1 /gluster xfs defaults 1 2' >> /etc/fstab ; mount -a && mount

 

/dev/sda1 on /gluster type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
root@ubuntu31:/home/kevin# df
 
/dev/sda1 1041288 40296 1000992 4% /gluster
root@ubuntu31:/home/kevin
 

root@ubuntu31:/home/kevin# gluster pool list
UUID Hostname State
6b4ca918-e77c-40d9-821c-e24fe7130afa ubuntu32 Connected
e3b02490-9a14-45a3-ad0d-fcc66dd1c731 ubuntu33 Connected
2eb4eca2-11e4-40ef-9b70-43bfa551121c localhost Connected
root@ubuntu31:/home/kevin#

 
on ubuntu31, ubuntu32, ubuntu33:
 
mkdir -p /gluster/brick

 

replica n is the number of replica bricks – here one per node.
 

A generic example of the create command (with different example brick paths) is:

gluster volume create glustervol1 replica 3 transport tcp ubuntu31:/glusterfs/distributed ubuntu32:/glusterfs/distributed ubuntu33:/glusterfs/distributed

 

For our lab bricks the command is:

gluster volume create glustervol1 replica 3 transport tcp ubuntu31:/gluster/brick ubuntu32:/gluster/brick ubuntu33:/gluster/brick

 
root@ubuntu31:/home/kevin# gluster volume create glustervol1 replica 3 transport tcp ubuntu31:/gluster/brick ubuntu32:/gluster/brick ubuntu33:/gluster/brick
volume create: glustervol1: success: please start the volume to access data
root@ubuntu31:/home/kevin#
 

Now that we've created the replicated volume 'glustervol1', start it and check the volume info.
 
gluster volume start glustervol1
gluster volume info glustervol1

 
root@ubuntu31:/home/kevin# gluster volume start glustervol1
volume start: glustervol1: success
root@ubuntu31:/home/kevin#

 
root@ubuntu31:/home/kevin# gluster volume info glustervol1
 
Volume Name: glustervol1
Type: Replicate
Volume ID: 9335962f-342e-423e-aefc-a87777a5b081
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ubuntu31:/gluster/brick
Brick2: ubuntu32:/gluster/brick
Brick3: ubuntu33:/gluster/brick
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
root@ubuntu31:/home/kevin#

 

on the client machines:

 

Install glusterfs-client to the Ubuntu system using the apt command.
 
sudo apt install glusterfs-client -y
 
Once the glusterfs-client installation is complete, create a new directory '/mnt/glusterfs'.
 
mkdir -p /mnt/glusterfs

And mount the glusterfs volume on the '/mnt/glusterfs' directory.

 

mount -t glusterfs ubuntu31:/glustervol1 /mnt/glusterfs

 

ubuntu31:/glustervol1 1041288 50808 990480 5% /mnt/glusterfs
root@yoga:/home/kevin#
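To make the client mount persistent across reboots, a minimal /etc/fstab sketch (the _netdev option delays mounting until the network is up; adjust the server name and mountpoint to your own setup):

echo 'ubuntu31:/glustervol1 /mnt/glusterfs glusterfs defaults,_netdev 0 0' >> /etc/fstab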

 

Continue Reading


How To Configure The ISCSI Client Initiator

The ISCSI Client machine is known as the initiator and connects to the ISCSI target server.

 

This page explains how to configure the ISCSI initiator.

 

Preliminaries

 

Two services are essential for the initiator:

 

iscsi
iscsid

 

 

Important SUSE NOTE: Initiator and Target may not run on the same SUSE server!

 

SUSE does not support the running of iSCSI target software and iSCSI initiator software on the same server in a production environment.

 

On the client machines, install the iSCSI initiator software:

 

centos:
yum install iscsi-initiator-utils

 

or

 

dnf -y install iscsi-initiator-utils

 

suse1:~ # zypper in yast2-iscsi-client
Loading repository data…
Reading installed packages…
‘yast2-iscsi-client’ is already installed.

 

ubuntu:

 

apt install open-iscsi

On CentOS, the iscsi-initiator-utils package installs several files including the following:

 

/etc/iscsi/iscsid.conf: The configuration file read by iscsid and iscsiadm. This file is heavily commented with descriptions for each configuration directive.
/sbin/iscsid: The Open-iSCSI daemon that implements the control path and management facilities
/sbin/iscsiadm: The Open-iSCSI administration utility used to discover and log in to iSCSI targets

 

[root@mars ~]# yum install iscsi-initiator-utils
Last metadata expiration check: 1:06:41 ago on Mon 01 Feb 2021 19:55:33 CET.
Package iscsi-initiator-utils-6.2.0.878-5.gitd791ce0.el8.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
[root@mars ~]#

 

 

Configuring the Initiator

 
Next enable and start the iscsid service:

 

systemctl enable iscsid ; systemctl start iscsid ; systemctl status iscsid
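A side note (a minimal sketch; the IQN value shown is only an example of the format): the initiator identifies itself to the target with its own IQN, stored in /etc/iscsi/initiatorname.iscsi. You can check it with:

cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:a1b2c3d4e5f6

If you edit this file, restart iscsid so the change takes effect:

systemctl restart iscsid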

 

then map the client initiator to the target server iscsi lun using the iscsi utility iscsiadm.

 

We "discover" the iSCSI target first:

 

note: -m = mode, -t = discovery type

 

[root@centos1 ~]# iscsiadm -m discovery -t sendtargets -p 10.0.8.10
10.0.8.10:3260,1 iqn.2003-01.org.linux-iscsi.centosstorage.x8664:sn.3ad620590c10
[root@centos1 ~]#

 

If you cannot connect, try

 

iscsiadm -m discovery -t st -d8 -p ipaddress

 

this sets debug mode

 

After discovery, the nodes table and the send_targets tables in the database are updated:

 

[root@mars ~]# ls /var/lib/iscsi/nodes
iqn.2003-01.org.linux-iscsi.clusterserver.x8664:sn.43a4217f336e
[root@mars ~]#

 

[root@mars ~]# ls /var/lib/iscsi/send_targets
10.0.2.10,3260
[root@mars ~]#

 

on the client node (ie initiator machine)

 

[root@centos1 ~]# ls /var/lib/iscsi/nodes
iqn.2003-01.org.linux-iscsi.centosstorage.x8664:sn.3ad620590c10
[root@centos1 ~]#

 

This tells you if the node has the right IQN for the target. In this case I had changed the target IQN through reinstallations, but the old IQN was still present in the nodes table.

 

I went into /var/lib/iscsi/send_targets/ and deleted the respective directories for this IQN (by IP address)

 

and also under /var/lib/iscsi/nodes

 

OR you can delete the record thus:

 

iscsiadm -m discoverydb -o delete -t sendtargets -p 10.0.5.10

 

Then, on the iSCSI client, we can attach ("map") the iSCSI disk

 

(port has to be opened on the iscsi target server for this to work).

 

Next, to attach the iSCSI disk we have to log in to the iSCSI target (make sure you have the CORRECT IQN for the target server, i.e. the one it is currently using – it can be changed!):

 

iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.centosstorage.x8664:sn.3ad620590c10 -p 10.0.8.10 -l

 

note: uppercase T, not lower case, -l = login

 

if you now do fdisk -l on the client, you will see the drive presented at the client level.

 

but how do you know whether it is a local disk or an iSCSI disk?

 

root@ubuntu1:~# fdisk -l

 

note that on kvm cluster, the guest hard drive is called vda

 

Device Start End Sectors Size Type
/dev/vda1 2048 4095 2048 1M BIOS boot
/dev/vda2 4096 2101247 2097152 1G Linux filesystem
/dev/vda3 2101248 41940991 39839744 19G Linux filesystem

 

Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 18.102 GiB, 20396900352 bytes, 39837696 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 

Disk /dev/sda: 976.58 MiB, 1024000000 bytes, 2000000 sectors
Disk model: lun0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1310720 bytes
root@ubuntu4:~#

 

cat /proc/scsi/scsi

 

NOTE that the path is scsi/scsi – NOT iscsi/iscsi

 

root@ubuntu4:~# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: QEMU Model: QEMU DVD-ROM Rev: 2.5+
Type: CD-ROM ANSI SCSI revision: 05
Host: scsi6 Channel: 00 Id: 00 Lun: 00
Vendor: LIO-ORG Model: lun0 Rev: 4.0
Type: Direct-Access ANSI SCSI revision: 05
root@ubuntu4:~#

 

Our iSCSI disk is the one with Host: scsi6 Channel: 00 Id: 00 Lun: 00, Vendor: LIO-ORG, Model: lun0, as this is the name of our target disk LUN (it does not have to be lunN – we just named it that; it could be anything: testlun, testdisk, iscsidisk1, iscsilun1, or whatever we choose).

 

and you can use

 

dmesg | grep -i "attached "

 

root@ubuntu4:~# dmesg | grep -i "attached "
[ 1.234130] sr 0:0:0:0: Attached scsi CD-ROM sr0
[ 1.234292] sr 0:0:0:0: Attached scsi generic sg0 type 5
[ 6289.190304] sd 6:0:0:0: Attached scsi generic sg1 type 0
[ 6289.200739] sd 6:0:0:0: [sda] Attached SCSI disk
root@ubuntu4:~#

 

If you cannot see the disk in fdisk -l

 

then try

 

service iscsi stop
service iscsid stop
service iscsid start
service iscsi start

 

At this point the disk is “attached” ie available to the client system, but it is not yet formatted for the client, nor is it mounted.

 

So the next step is to partition the disk as normal using fdisk, then check with:

 

root@ubuntu4:~# fdisk -l /dev/sda
Disk /dev/sda: 976.58 MiB, 1024000000 bytes, 2000000 sectors
Disk model: lun0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 1310720 bytes
Disklabel type: dos
Disk identifier: 0x66588ba2

 

Device Boot Start End Sectors Size Id Type
/dev/sda1 2560 1999999 1997440 975.3M 83 Linux
root@ubuntu4:~#

 

Then create a file system on the disk:

 

root@ubuntu4:~# mkfs.ext4 /dev/sda1
mke2fs 1.45.5 (07-Jan-2020)
Creating filesystem with 249680 4k blocks and 62464 inodes
Filesystem UUID: 883be9c3-ffc9-4e24-b359-c0d308fd8da3
Superblock backups stored on blocks:
32768, 98304, 163840, 229376

 

Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

 

root@ubuntu4:~#

 

Next, create a mountpoint:

 

root@ubuntu4:/# mkdir /media/ISCSIDISK

 

and then mount the disk:

 

mount -t ext4 /dev/sda1 /media/ISCSIDISK/

 

df

… … ..details omitted for brevity…
/dev/sda1 966616 2444 897852 1% /media/ISCSIDISK
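
If you want this mount to persist across reboots, an /etc/fstab entry for an iSCSI disk needs the _netdev option so that mounting is deferred until the network and the iSCSI login are available. A sketch, using the filesystem UUID printed by mkfs.ext4 above (safer than /dev/sda1, since iSCSI device names can change between boots):

UUID=883be9c3-ffc9-4e24-b359-c0d308fd8da3  /media/ISCSIDISK  ext4  _netdev  0  0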

 

Use the -u (or --logout) option to close a session. To view session information:

 

# iscsiadm -m session [-P [printlevel]]

The print levels are 1, 2, and 3; each higher level shows more detail.

 

There are three ways to disable or remove an iSCSI target connection on the client.

 

First, to disable an iSCSI target:

 

# iscsiadm -m node -T iqn.2006-01.com.openfiler:tsn.d625a0d9cb77 --portal 192.168.1.141:3260 -u

 

Second, it is possible to delete the target's record ID:

 

# iscsiadm -m node -o delete -T iqn.2006-01.com.openfiler:tsn.d625a0d9cb77 --portal 192.168.1.141:3260

 

Thirdly, stop the iSCSI service.

 

How can you tell whether an iSCSI disk is attached?

 

you can use: iscsiadm -m session -P 3 | grep 'Target\|disk'

 

root@ubuntu1:/# iscsiadm -m session -P 3 | grep 'Target\|disk'
Target: iqn.2003-01.org.linux-iscsi.storage.x8664:sn.e1034e062623 (non-flash)
Target Reset Timeout: 30
Attached scsi disk sdb State: running
root@ubuntu1:/#

 

You can display in-depth info about the devices with:

 

iscsiadm -m session -P 3

 

[root@mars ~]# ls /dev/disk/by-path/
ip-10.0.2.10:3260-iscsi-iqn.2003-01.org.linux-iscsi.pxeserver.x8664:sn.855935678496-lun-0 pci-0000:00:01.1-ata-1-part1 pci-0000:00:01.1-ata-2
pci-0000:00:01.1-ata-1 pci-0000:00:01.1-ata-1-part2
[root@mars ~]#

 

[root@mars ~]# dmesg | grep -i "attached "
[ 2.756694] scsi 0:0:0:0: Attached scsi generic sg0 type 0
[ 2.756785] scsi 1:0:0:0: Attached scsi generic sg1 type 5
[ 2.762075] sd 0:0:0:0: [sda] Attached SCSI disk
[ 2.795943] sr 1:0:0:0: Attached scsi CD-ROM sr0
[55977.808879] sd 2:0:0:0: Attached scsi generic sg2 type 0
[55977.837483] sd 2:0:0:0: [sdb] Attached SCSI disk
[root@mars ~]#

 

To get the WWID of a LUN you can look in the /dev/disk/by-id/ directory:

 

[root@mars ~]# ls -la /dev/disk/by-id/
total 0
drwxr-xr-x. 2 root root 480 Feb 2 12:48 .
drwxr-xr-x. 6 root root 120 Feb 1 21:01 ..
lrwxrwxrwx. 1 root root 9 Feb 1 21:01 ata-VBOX_CD-ROM_VB2-01700376 -> ../../sr0
lrwxrwxrwx. 1 root root 9 Feb 1 21:01 ata-VBOX_HARDDISK_VBf3c88a6a-cf0578ec -> ../../sda
lrwxrwxrwx. 1 root root 10 Feb 1 21:01 ata-VBOX_HARDDISK_VBf3c88a6a-cf0578ec-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Feb 1 21:01 ata-VBOX_HARDDISK_VBf3c88a6a-cf0578ec-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 10 Feb 1 21:01 dm-name-cl-root -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Feb 1 21:01 dm-name-cl-swap -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Feb 1 21:01 dm-uuid-LVM-FuVL5Fn8Dp2pso0dIHewV8iE9N1knbQH9DpPYinWb5ODH4UFhKvVQAq50g5Qs17F -> ../../dm-0
lrwxrwxrwx. 1 root root 10 Feb 1 21:01 dm-uuid-LVM-FuVL5Fn8Dp2pso0dIHewV8iE9N1knbQHGcKKvFjrcod5uBLityueqeQP9KzqgESq -> ../../dm-1
lrwxrwxrwx. 1 root root 10 Feb 1 21:01 lvm-pv-uuid-DeJN3u-fLwO-z5PJ-1Lnn-FW73-w1Mv-2sCf0U -> ../../sda2
lrwxrwxrwx. 1 root root 9 Feb 1 21:01 scsi-0ATA_VBOX_HARDDISK_VBf3c88a6a-cf0578ec -> ../../sda
lrwxrwxrwx. 1 root root 10 Feb 1 21:01 scsi-0ATA_VBOX_HARDDISK_VBf3c88a6a-cf0578ec-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Feb 1 21:01 scsi-0ATA_VBOX_HARDDISK_VBf3c88a6a-cf0578ec-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 9 Feb 1 21:01 scsi-1ATA_VBOX_HARDDISK_VBf3c88a6a-cf0578ec -> ../../sda
lrwxrwxrwx. 1 root root 10 Feb 1 21:01 scsi-1ATA_VBOX_HARDDISK_VBf3c88a6a-cf0578ec-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Feb 1 21:01 scsi-1ATA_VBOX_HARDDISK_VBf3c88a6a-cf0578ec-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 9 Feb 2 12:48 scsi-1LIO-ORG_lun0:8cde0c81-987d-43cf-a43c-6258cfadad32 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Feb 2 12:48 scsi-360014058cde0c81987d43cfa43c6258c -> ../../sdb
lrwxrwxrwx. 1 root root 9 Feb 1 21:01 scsi-SATA_VBOX_HARDDISK_VBf3c88a6a-cf0578ec -> ../../sda
lrwxrwxrwx. 1 root root 10 Feb 1 21:01 scsi-SATA_VBOX_HARDDISK_VBf3c88a6a-cf0578ec-part1 -> ../../sda1
lrwxrwxrwx. 1 root root 10 Feb 1 21:01 scsi-SATA_VBOX_HARDDISK_VBf3c88a6a-cf0578ec-part2 -> ../../sda2
lrwxrwxrwx. 1 root root 9 Feb 2 12:48 scsi-SLIO-ORG_lun0_8cde0c81-987d-43cf-a43c-6258cfadad32 -> ../../sdb
lrwxrwxrwx. 1 root root 9 Feb 2 12:48 wwn-0x60014058cde0c81987d43cfa43c6258c -> ../../sdb
[root@mars ~]#

 

 

Before using the iscsiadm command to connect to an iSCSI target, you have to make sure that the supporting
modules are loaded.

 

 

Typically, you do that by starting the iSCSI client-support script. The names of these scripts
differ among the various distributions.

 

Assuming that the name of the service script is

 

iscsi.service,

 

systemctl start iscsi.service; systemctl enable iscsi.service

 

(on a System V init server: service iscsi start; chkconfig iscsi on).

 

You can check that the supporting modules are loaded:

 

[root@mars init.d]# lsmod | grep iscsi

iscsi_tcp 24576 2
libiscsi_tcp 28672 1 iscsi_tcp
libiscsi 61440 2 libiscsi_tcp,iscsi_tcp
scsi_transport_iscsi 122880 4 libiscsi_tcp,iscsi_tcp,libiscsi
[root@mars init.d]#

 

or with the systemctl status command

 

To discover what targets are available on a specific server:

 

iscsiadm --mode discovery --type sendtargets --portal 10.0.2.10:3260 --discover

 

[root@mars init.d]# iscsiadm --mode discovery --type sendtargets --portal 10.0.2.10:3260 --discover
10.0.2.10:3260,1 iqn.2003-01.org.linux-iscsi.pxeserver.x8664:sn.855935678496
[root@mars init.d]#

 

OR

 

[root@mars init.d]# iscsiadm --mode discoverydb -P 1
SENDTARGETS:
DiscoveryAddress: 10.0.2.10,3260
Target: iqn.2003-01.org.linux-iscsi.pxeserver.x8664:sn.855935678496
Portal: 10.0.2.10:3260,1
Iface Name: default
iSNS:
No targets found.
STATIC:
No targets found.
FIRMWARE:
No targets found.
[root@mars init.d]#

 

The -P option sets a "print level", i.e. the verbosity of the information shown.

 

[root@mars iscsi]# iscsiadm --mode node -P 0
10.0.2.10:3260,1 iqn.2003-01.org.linux-iscsi.pxeserver.x8664:sn.855935678496
[root@mars iscsi]# iscsiadm --mode node -P 1
Target: iqn.2003-01.org.linux-iscsi.pxeserver.x8664:sn.855935678496
Portal: 10.0.2.10:3260,1
Iface Name: default
[root@mars iscsi]#

 

iscsiadm node mode:

 

To log in, you’ll use the node mode. Node in iSCSI terminology means the actual connection that is established between an iSCSI target
and a specific portal.

 

The portal is the IP address and the port number that have to be used to make a connection to the iSCSI target.
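
For example, a node-mode login against the target discovered above would look like this (a sketch using the IQN and portal from the discovery output; -T, -p and -l are the short option forms):

iscsiadm --mode node --targetname iqn.2003-01.org.linux-iscsi.pxeserver.x8664:sn.855935678496 --portal 10.0.2.10:3260 --login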

 

lsscsi shows the existing connections:

 

[root@mars init.d]# lsscsi
[0:0:0:0] disk ATA VBOX HARDDISK 1.0 /dev/sda
[1:0:0:0] cd/dvd VBOX CD-ROM 1.0 /dev/sr0
[2:0:0:0] disk LIO-ORG lun0 4.0 /dev/sdb
[root@mars init.d]#

 

IMPORTANT: rebooting and persistence:

 

iSCSI does not usually require manual modification of configuration files.

 

To establish a connection, you just log in to the iSCSI target. This automatically creates the configuration files, and these config files are persistent, which means that after a reboot the server will automatically remember its last iSCSI connections.
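
If you want to control explicitly whether a particular node record reconnects at boot, you can update its node.startup setting (a sketch; the default behaviour comes from iscsid.conf on your distribution):

iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.pxeserver.x8664:sn.855935678496 -p 10.0.2.10:3260 -o update -n node.startup -v automatic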

 

Disconnecting iSCSI

 

iSCSI is set up to reestablish all sessions on reboot of the server. If your configuration changes, you might have to remove the configuration.

 

To do this you must disconnect, which also means that the connection disappears from the iSCSI target server's perspective.

 

Use

 

iscsiadm --mode node --logout

 

This disconnects you from all iSCSI disks, which allows you to do maintenance on the iSCSI storage area network.

 

If, after a reboot, you also want the iSCSI sessions not to be reestablished automatically, the easiest approach is to remove the entire contents of the $ISCSI_ROOT/node directory.
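
A sketch of that clean-up, assuming the open-iscsi database lives under /etc/iscsi (on RHEL/CentOS it is usually /var/lib/iscsi):

rm -rf /etc/iscsi/nodes/*
rm -rf /etc/iscsi/send_targets/*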

 

On a reboot after this, the iSCSI service won't find any configuration, so you will have to re-establish the connections manually.

 

NOTE: to restore the config, use:

/> restoreconfig /etc/target/saveconfig.json
Configuration restored from /etc/target/saveconfig.json
/>

 

Troubleshooting

 

IQN: these can change, and it is possible for more than one IQN to exist on the target!

 

You may need to delete superfluous ones; e.g. if the hostname of the server changes, a new IQN will need to be generated on the target.

 

Also, if the drives backing the backstore are unavailable for whatever reason, iSCSI clients will not be able to connect to them.

 

Client problems: try logging out of all connections, rediscovering, and then logging in again.

 

Check that the sections in targetcli are all correct.

 

make sure the configuration is saved (saveconfig)

 

make sure any loop devices on the server are available via the losetup command

 

This is essential, otherwise iscsid and other commands will not work!

 

Use the systemctl command to enable and start the iscsid service.

CLIENT:

 

systemctl enable iscsid
systemctl start iscsid

 

SERVER:
systemctl status target.service

Continue Reading

How To Configure ISCSI Target Server

 (Intro from Wikipedia)

 

iSCSI, an acronym for Internet Small Computer Systems Interface, is an Internet Protocol (IP)-based storage networking standard for linking data storage facilities. It provides block-level access to storage devices by carrying SCSI commands over a TCP/IP network.

 

iSCSI is used to facilitate data transfers over intranets and to manage storage over long distances. It can be used to transmit data over local area networks (LANs), wide area networks (WANs), or the Internet and can enable location-independent data storage and retrieval.

 

The protocol allows clients (called initiators) to send SCSI commands (CDBs) to storage devices (targets) on remote servers. It is a storage area network (SAN) protocol, allowing organizations to consolidate storage into storage arrays while providing clients (such as database and web servers) with the illusion of locally attached SCSI disks.

 

iSCSI mainly competes with Fibre Channel, but unlike traditional Fibre Channel, which usually requires dedicated cabling, iSCSI can be run over long distances using existing network infrastructure.

 

 

POINTS TO WATCH WHEN CONFIGURING ISCSI

 

[root@centosstorage iscsi]# cat initiatorname.iscsi
InitiatorName=iqn.1994-05.com.redhat:7db3edc45
[root@centosstorage iscsi]#

 

Make sure you set

 

systemctl enable --now target

 

otherwise the iscsi configuration gets lost on reboots!

 

systemctl status target.service

 

 

To disconnect initiator client from target server

 

on the client do:

 

iscsiadm --mode node --logout

 

 

On Ubuntu, to disable the firewall:

 

root@storageserver:/# systemctl disable ufw
Synchronizing state of ufw.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install disable ufw
Removed /etc/systemd/system/multi-user.target.wants/ufw.service.
root@storageserver:/#
root@storageserver:/#

 

 

Installing Target ISCSI server on Ubuntu:

 

Preliminaries

 

note:
The Linux SCSI target framework, also known as TGT, can be used to create an iSCSI target server. TGT simplifies the creation and maintenance of iSCSI targets. TGT is supported by various Linux flavors (SUSE Linux, Fedora, RHEL, Debian, and Ubuntu). Execute the following command to install the TGT package.

 

root@LinuxSQL01:/# sudo apt-get install tgt

 

(I have NOT used TGT here; iSCSI was already available in the Ubuntu server installation.)

 

Be sure to start and enable both iscsid and iscsi. Note that you will likely need to restart these if you edit the IQN of the initiator later.

 

 

systemctl enable iscsid iscsi
systemctl start iscsid iscsi
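
If you later edit /etc/iscsi/initiatorname.iscsi, restart both services so that the new initiator IQN takes effect (a quick sketch):

systemctl restart iscsid iscsi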

 

create the storage file:

 

root@storageserver:~# dd if=/dev/zero of=/storage/VHD1.img bs=10k count=100000
100000+0 records in
100000+0 records out
1024000000 bytes (1.0 GB, 977 MiB) copied, 1.44653 s, 708 MB/s
root@storageserver:~#

 

this creates a 1GB virtual disk

 

Next, use losetup to make the device available. This is NOT a file system mount; the file must NOT be mounted for iSCSI to be able to access it.

 

 

Check exactly which loop device is free; loop1 might already be taken on the machine!

 

use df to find out

 

on my VM loop5 is the next one free:

 

root@storageserver:~# losetup /dev/loop5 /storage/VHD1.img

 

NOTE: if you reboot the storage server, you first have to re-run losetup for your storage device before the iSCSI target can be used!
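
A quick sketch of the post-reboot check and re-attach (same image path and loop device as above):

losetup -a                               # list the loop devices currently attached
losetup /dev/loop5 /storage/VHD1.img     # re-attach the backing file after a reboot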

 

 

The iSCSI port (3260/tcp) must be open, otherwise clients cannot connect to the target!

 

on the server:

 

firewall-cmd --permanent --add-port=3260/tcp

 

then do a

 

firewall-cmd --reload
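
Note that firewall-cmd applies to a firewalld-based (CentOS) target server; on an Ubuntu target still running ufw, the equivalent would be (a sketch):

ufw allow 3260/tcp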

 

then call up

 

[root@clusterserver ~]# targetcli

 

root@storageserver:~# apt install targetcli-fb

 

ISCSI TARGET CONFIG using targetcli

 

The ISCSI server which manages connections to the storage is known as the target.

 

NOTE the iSCSI target drive must be UNMOUNTED before configuring for iSCSI

 

Configuration of the target is carried out using a command-line tool called targetcli. It can be used either non-interactively, as one-shot shell commands, or in interactive mode; the description below uses the interactive mode (see the sketch after this paragraph for the non-interactive form).
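
For example, the backstore creation shown in the interactive session below could also be run as one-shot shell commands (a sketch; exact quoting may vary with your shell):

targetcli /backstores/block create lun0 /dev/loop1
targetcli saveconfig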

 

[root@centosstorage iscsi]# targetcli
Warning: Could not load preferences file /root/.targetcli/prefs.bin.
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/>

 

First, create the backstore(s). These are the data storage devices; they can be physical or virtual drives.

 

The types of backstores are described as follows:

 

block: Linux block devices such as /dev/sda
fileio: Any file on a mounted file system such as /tmp/disk1.img
pscsi: Any storage object that supports pass-through SCSI commands
ramdisk: Memory copy RAM disks

 

/> /backstores/block create lun0 /dev/loop1
Created block storage object lun0 using /dev/loop1.
/>

 

NOTE: if you don't have a physical storage device available, then for testing purposes you can simply create a backstore from a disk file using the fileio backstore type, for example as follows:

 

/backstores/fileio create lun0 /media/storage/VHD1.img 100M

 

This creates a 100MB virtual drive file.

 

Then create the iSCSI target and its IQN for the server. The IQN (iSCSI Qualified Name) is a globally unique identifier for the target.

 

/> /iscsi create
Created target iqn.2003-01.org.linux-iscsi.centosstorage.x8664:sn.3ad620590c10.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.
/>

 

Then add the LUN to the target, under the IQN we have just created:

 

/iscsi/iqn.20…c10/tpg1/luns> create /backstores/block/lun0
Created LUN 0.
/iscsi/iqn.20…c10/tpg1/luns>

/iscsi/iqn.20…c10/tpg1/luns> ls
o- luns …………………………………………………………………………………………………… [LUNs: 1]
o- lun0 ………………………………………………………………….. [block/lun0 (/dev/loop1) (default_tg_pt_gp)]
/iscsi/iqn.20…c10/tpg1/luns>

 

 

ADD THE ACL PERMISSION:

 

Note that we use tpg1 together with the IQNs of the server and the client; tpg1 means Target Portal Group 1.

 

You need the client initiator IQN names for this. Look them up on the client machines from the shell command line:

 

cat /etc/iscsi/initiatorname.iscsi

 

[root@centos1 ~]# cat /etc/iscsi/initiatorname.iscsi

InitiatorName=iqn.1994-05.com.redhat:d1136c5524b9

 

NOTE: if you clone machines, a new hostname, IP, and MAC address also mean a new iSCSI IQN is needed!

 

If the IQN is identical to that of the other nodes (due to Oracle VM machine cloning), then you need to do the following to generate a new unique IQN for the node:

 

mv /etc/iscsi/initiatorname.iscsi /var/tmp/initiatorname.iscsi.backup
echo "InitiatorName=`/sbin/iscsi-iname`" > /etc/iscsi/initiatorname.iscsi
cat /etc/iscsi/initiatorname.iscsi

 

root@ubuntu2:~# mv /etc/iscsi/initiatorname.iscsi /var/tmp/initiatorname.iscsi.backup
root@ubuntu2:~# echo "InitiatorName=`/sbin/iscsi-iname`" > /etc/iscsi/initiatorname.iscsi
root@ubuntu2:~# cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2005-03.org.open-iscsi:48e32ccf636f
root@ubuntu2:~#

 


 

Then, in targetcli on the target iSCSI server (centosstorage), you need to do this for each client that wishes to connect to the target.

 

 

/> /iscsi/iqn.2003-01.org.linux-iscsi.centosstorage.x8664:sn.3ad620590c10/tpg1/acls create iqn.1994-05.com.redhat:d1136c5524b9
Created Node ACL for iqn.1994-05.com.redhat:d1136c5524b9
Created mapped LUN 0.

/>

/> ls
o- / …………………………………………………………………………………………………………. […]
o- backstores ……………………………………………………………………………………………….. […]
| o- block …………………………………………………………………………………….. [Storage Objects: 1]
| | o- lun0 ………………………………………………………………… [/dev/loop1 (4.7GiB) write-thru activated]
| | o- alua ……………………………………………………………………………………… [ALUA Groups: 1]
| | o- default_tg_pt_gp …………………………………………………………….. [ALUA state: Active/optimized]
| o- fileio ……………………………………………………………………………………. [Storage Objects: 0]
| o- pscsi …………………………………………………………………………………….. [Storage Objects: 0]
| o- ramdisk …………………………………………………………………………………… [Storage Objects: 0]
o- iscsi ……………………………………………………………………………………………… [Targets: 1]
| o- iqn.2003-01.org.linux-iscsi.centosstorage.x8664:sn.3ad620590c10 …………………………………………… [TPGs: 1]
| o- tpg1 ………………………………………………………………………………….. [no-gen-acls, no-auth]
| o- acls ……………………………………………………………………………………………. [ACLs: 3]
| | o- iqn.1994-05.com.redhat:ce78a43c3f55 ………………………………………………………… [Mapped LUNs: 1]
| | | o- mapped_lun0 ………………………………………………………………………. [lun0 block/lun0 (rw)]
| | o- iqn.1994-05.com.redhat:d1136c5524b9 ………………………………………………………… [Mapped LUNs: 1]
| | | o- mapped_lun0 ………………………………………………………………………. [lun0 block/lun0 (rw)]
| | o- iqn.1994-05.com.redhat:e6a379479d2 …………………………………………………………. [Mapped LUNs: 1]
| | o- mapped_lun0 ………………………………………………………………………. [lun0 block/lun0 (rw)]
| o- luns ……………………………………………………………………………………………. [LUNs: 1]
| | o- lun0 …………………………………………………………… [block/lun0 (/dev/loop1) (default_tg_pt_gp)]
| o- portals ………………………………………………………………………………………. [Portals: 1]
| o- 0.0.0.0:3260 ……………………………………………………………………………………….. [OK]
o- loopback …………………………………………………………………………………………… [Targets: 0]
/>

 

 

next do a saveconfig:

 

/> saveconfig

Configuration saved to /etc/target/saveconfig.json

/>
/> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup/.
Configuration saved to /etc/target/saveconfig.json
[root@centosstorage iscsi]#

 

NOTE: to restore the config, use:

 

/> restoreconfig /etc/target/saveconfig.json

 

 

IMPORTANT

 

make sure you set

 

systemctl enable --now target

 

otherwise the iscsi configuration gets lost on reboots!

 

 

Continue Reading