RAID50 Lab: Creating a RAID50 Using MDADM


Creating a RAID50

 

on virtual machine centos1vm

 

This is two RAID5 arrays combined in a RAID1 (mirror) configuration.

 

so we use 8 partitions in total, 4 for each RAID5 (the create commands below make all 4 active members of each array; no spares are defined for the new arrays)

 

md1: sdb9, sdb10, sdb11, sdb12

 

md2: sdb13, sdb14, sdb15, sdb16

 

each disk is 500MB

 

we will have 3 x 500MB = 1.5GB total net disk storage available
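
to spell the arithmetic out: each RAID5 gives up one partition's worth of space to parity, and the RAID1 mirror then halves the combined total:

each RAID5: 4 x 500MB - 500MB (parity) = 3 x 500MB = 1.5GB usable
RAID1 of the two RAID5s: (1.5GB + 1.5GB) / 2 = 1.5GB net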

 

so we need the following commands:

 

IMPORTANT: NOTE OUR RAID DEVICE DESIGNATIONS:

 

we will use md1, md2 and md3, as md0 is already in use for our existing RAID10 on this machine.

 

 

DON'T MIX UP YOUR RAID DEVICES!

The existing RAID10 (md0) already uses the following partitions (this is the device table from mdadm --detail for md0):

Number Major Minor RaidDevice State
4 8 22 0 active sync set-A /dev/sdb6
6 8 18 1 active sync set-B /dev/sdb2
5 8 19 2 active sync set-A /dev/sdb3
7 8 21 3 active sync set-B /dev/sdb5

8 8 23 - spare /dev/sdb7
9 8 24 - spare /dev/sdb8

 

 

since 2 spares are also defined for md0, we have to use partitions starting from sdb9 (see above).

 

we create 2 RAID5 arrays, md1 and md2, and then combine them into a RAID1.

 

the combined array will be md3, created with --name RAID50

 

 

the new raid device names are:

 

md1: centos1vm:RAID5a

md2: centos1vm:RAID5b

 

currently we have:

 

 [root@centos1vm ~]# cat /proc/mdstat

md0 : active raid10 sdb8[9](S) sdb7[8](S) sdb6[4] sdb5[7] sdb3[5] sdb2[6]
1019904 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
[root@centos1vm ~]#

 

so our designations are:

 

md1: centos1vm:RAID5a

md2: centos1vm:RAID5b

md3: centos1vm:RAID50

 

create the directories /RAID5a, /RAID5b and /RAID50

(in practice the /RAID5a and /RAID5b directories are not needed, since we will only mount a filesystem on /RAID50)
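
they can be created in one go, for example:

mkdir /RAID5a /RAID5b /RAID50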

 

Disklabel type: dos
Disk identifier: 0xf513fbff

Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 1026047 1024000 500M fd Linux raid autodetect
/dev/sdb2 1026048 2050047 1024000 500M fd Linux raid autodetect
/dev/sdb3 2050048 3074047 1024000 500M fd Linux raid autodetect
/dev/sdb4 3074048 20971519 17897472 8.5G 5 Extended
/dev/sdb5 3076096 4100095 1024000 500M fd Linux raid autodetect
/dev/sdb6 4102144 5126143 1024000 500M fd Linux raid autodetect
/dev/sdb7 5128192 6152191 1024000 500M fd Linux raid autodetect
/dev/sdb8 6154240 7178239 1024000 500M fd Linux raid autodetect
/dev/sdb9 7180288 8204287 1024000 500M fd Linux raid autodetect
/dev/sdb10 8206336 9230335 1024000 500M fd Linux raid autodetect
/dev/sdb11 9232384 10256383 1024000 500M fd Linux raid autodetect
/dev/sdb12 10258432 11282431 1024000 500M fd Linux raid autodetect
/dev/sdb13 11284480 12308479 1024000 500M fd Linux raid autodetect
/dev/sdb14 12310528 13334527 1024000 500M fd Linux raid autodetect
/dev/sdb15 13336576 14360575 1024000 500M fd Linux raid autodetect
/dev/sdb16 14362624 15386623 1024000 500M fd Linux raid autodetect
/dev/sdb17 15388672 16412671 1024000 500M fd Linux raid autodetect
/dev/sdb18 16414720 17438719 1024000 500M fd Linux raid autodetect
/dev/sdb19 17440768 18464767 1024000 500M fd Linux raid autodetect

 

[root@centos1vm ~]# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdb8[9](S) sdb7[8](S) sdb3[5] sdb2[6] sdb5[7] sdb6[4]
1019904 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
[root@centos1vm ~]#

 

 

so we can use partitions from sdb9 onwards: 4 for each RAID5 array (the create commands below use all 4 as active members):

 

mdadm --create /dev/md1 --level raid5 --name RAID5a --raid-disks 4 /dev/sdb9 /dev/sdb10 /dev/sdb11 /dev/sdb12

 

mdadm --create /dev/md2 --level raid5 --name RAID5b --raid-disks 4 /dev/sdb13 /dev/sdb14 /dev/sdb15 /dev/sdb16

 

 

[root@centos1vm ~]# cat /proc/mdstat
Personalities : [raid10]
md0 : active raid10 sdb8[9](S) sdb7[8](S) sdb3[5] sdb2[6] sdb5[7] sdb6[4]
1019904 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
[root@centos1vm ~]# mdadm --create /dev/md1 --level raid5 --name RAID5a --raid-disks 4 /dev/sdb9 /dev/sdb10 /dev/sdb11 /dev/sdb12
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
[root@centos1vm ~]# mdadm --create /dev/md2 --level raid5 --name RAID5b --raid-disks 4 /dev/sdb13 /dev/sdb14 /dev/sdb15 /dev/sdb16
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
[root@centos1vm ~]# cd ..

 

[root@centos1vm /]# ls
bin boot dev disk etc home lib lib64 media mnt opt proc RAID10 RAID50 RAID5a RAID5b root run sbin srv sys tmp usr var
[root@centos1vm /]#

 

next:

 

[root@centos1vm /]# cat /proc/mdstat
Personalities : [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 sdb16[4] sdb15[2] sdb14[1] sdb13[0]
1529856 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

md127 : inactive md1[0](S)
1527808 blocks super 1.2

md1 : active raid5 sdb12[4] sdb11[2] sdb10[1] sdb9[0]
1529856 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid10 sdb8[9](S) sdb7[8](S) sdb3[5] sdb2[6] sdb5[7] sdb6[4]
1019904 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
[root@centos1vm /]#

 

 

mdadm --detail --scan >> /etc/mdadm.conf

 

[root@centos1vm /]# cat /etc/mdadm.conf
MAILADDR root@localhost
ARRAY /dev/md0 metadata=1.2 name=centos1vm:RAID10 UUID=3dd59b4a:6f3cdf67:f89659d6:5e0f1c0d
ARRAY /dev/md0 metadata=1.2 spares=2 name=centos1vm:RAID10 UUID=3dd59b4a:6f3cdf67:f89659d6:5e0f1c0d
ARRAY /dev/md1 metadata=1.2 name=centos1vm:RAID5a UUID=57784eb2:7d894249:e368bbd6:4fef82a1
INACTIVE-ARRAY /dev/md127 metadata=1.2 name=centos1vm:RAID50 UUID=0673461d:7c532d23:00f07114:99cdf7f3
ARRAY /dev/md2 metadata=1.2 name=centos1vm:RAID5b UUID=cb67bc06:d32061ea:eda8873f:c15b9fbf
[root@centos1vm /]#
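
note the INACTIVE-ARRAY line that --detail --scan appended for the stale md127: INACTIVE-ARRAY is not a valid mdadm.conf keyword, so mdadm keeps printing "mdadm: Unknown keyword INACTIVE-ARRAY" warnings (as seen in several outputs below) until that line is deleted or commented out, for example with:

sed -i 's/^INACTIVE-ARRAY/#INACTIVE-ARRAY/' /etc/mdadm.conf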

 

 

mkfs.ext4 /dev/disk/by-id/md-name-centos1vm:RAID5a

 

 

[root@centos1vm /]# mdadm --stop /dev/md127
mdadm: Unknown keyword INACTIVE-ARRAY
mdadm: stopped /dev/md127
[root@centos1vm /]#

[root@centos1vm /]# mkfs.ext4 /dev/disk/by-id/md-name-centos1vm:RAID5a
mke2fs 1.45.6 (20-Mar-2020)
/dev/disk/by-id/md-name-centos1vm:RAID5a contains a linux_raid_member file system labelled 'centos1vm:RAID50'
Proceed anyway? (y,N) Y
Creating filesystem with 382464 4k blocks and 95616 inodes
Filesystem UUID: ea9122bc-acec-465e-ad76-687b977bd7ff
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

[root@centos1vm /]# mkfs.ext4 /dev/disk/by-id/md-name-centos1vm:RAID5b
mke2fs 1.45.6 (20-Mar-2020)
/dev/disk/by-id/md-name-centos1vm:RAID5b contains a ext4 file system
last mounted on Sat Apr 9 00:20:10 2022
Proceed anyway? (y,N) Y
Creating filesystem with 382464 4k blocks and 95616 inodes
Filesystem UUID: a54bfabb-22f2-466e-82d6-52c06c72d065
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

 

 

then create the RAID50 (md3) as a RAID1 mirror of md1 and md2:

 

[root@centos1vm /]# mdadm --create /dev/md3 --level raid1 --name RAID50 --raid-disks 2 /dev/md1 /dev/md2
mdadm: Unknown keyword INACTIVE-ARRAY
mdadm: /dev/md1 appears to contain an ext2fs file system
size=1529856K mtime=Thu Jan 1 01:00:00 1970
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/md2 appears to contain an ext2fs file system
size=1529856K mtime=Thu Jan 1 01:00:00 1970
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md3 started.
[root@centos1vm /]#
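
since md3 is a new array, it should also end up in /etc/mdadm.conf so that it is assembled under the same name at boot. One way to append just its entry (a sketch; review the file afterwards for duplicate or stale lines):

mdadm --detail --scan | grep RAID50 >> /etc/mdadm.conf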

 

 

 

 

check with:

 

 

[root@centos1vm /]# cat /proc/mdstat
Personalities : [raid10] [raid6] [raid5] [raid4] [raid1]
md3 : active raid1 md2[1] md1[0]
1527808 blocks super 1.2 [2/2] [UU]

md2 : active raid5 sdb16[4] sdb15[2] sdb14[1] sdb13[0]
1529856 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid5 sdb12[4] sdb11[2] sdb10[1] sdb9[0]
1529856 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid10 sdb8[9](S) sdb7[8](S) sdb3[5] sdb2[6] sdb5[7] sdb6[4]
1019904 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
[root@centos1vm /]#

 

 

[root@centos1vm /]# mkfs.ext4 /dev/md3
mke2fs 1.45.6 (20-Mar-2020)
/dev/md3 contains a ext4 file system
last mounted on Fri Apr 8 19:23:05 2022
Proceed anyway? (y,N) Y
Creating filesystem with 381952 4k blocks and 95616 inodes
Filesystem UUID: e9bfd7c1-8788-4050-a7ba-5aa8674815ac
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912

 

Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

 

[root@centos1vm /]#

 

 

[root@centos1vm /]# mount /dev/md3 /RAID50
[root@centos1vm /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 653280 0 653280 0% /dev
tmpfs 672308 0 672308 0% /dev/shm
tmpfs 672308 8892 663416 2% /run
tmpfs 672308 0 672308 0% /sys/fs/cgroup
/dev/mapper/cs_centos--base-root 8374272 3195516 5178756 39% /
/dev/sda1 1038336 356816 681520 35% /boot
/dev/md0 987480 879072 41032 96% /RAID10
tmpfs 134460 0 134460 0% /run/user/0
/dev/md3 1470992 4488 1373732 1% /RAID50
[root@centos1vm /]#

 

Success!

 

The total net disk capacity is also correct: md1 and md2 provide 1.5GB each, and mirroring them gives 3GB / 2 = 1.5GB for md3.

 

[root@centos1vm ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 638M 0 638M 0% /dev
tmpfs 657M 0 657M 0% /dev/shm
tmpfs 657M 8.7M 648M 2% /run
tmpfs 657M 0 657M 0% /sys/fs/cgroup
/dev/mapper/cs_centos--base-root 8.0G 3.1G 5.0G 39% /
/dev/sda1 1014M 349M 666M 35% /boot
/dev/md0 965M 859M 41M 96% /RAID10
tmpfs 132M 0 132M 0% /run/user/0
/dev/md3 1.5G 471M 876M 35% /RAID50
[root@centos1vm ~]#

 

[root@centos1vm ~]# cat /proc/mdstat
Personalities : [raid10] [raid6] [raid5] [raid4] [raid1]
md3 : active raid1 md2[1] md1[0]
1527808 blocks super 1.2 [2/2] [UU]

md2 : active raid5 sdb16[4] sdb15[2] sdb14[1] sdb13[0]
1529856 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid5 sdb12[4] sdb11[2] sdb10[1] sdb9[0]
1529856 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid10 sdb8[9](S) sdb7[8](S) sdb3[5] sdb2[6] sdb5[7] sdb6[4]
1019904 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
[root@centos1vm ~]#

 

[root@centos1vm ~]# cat /etc/mdadm.conf
MAILADDR root@localhost
ARRAY /dev/md0 metadata=1.2 name=centos1vm:RAID10 UUID=3dd59b4a:6f3cdf67:f89659d6:5e0f1c0d
ARRAY /dev/md0 metadata=1.2 spares=2 name=centos1vm:RAID10 UUID=3dd59b4a:6f3cdf67:f89659d6:5e0f1c0d
ARRAY /dev/md1 metadata=1.2 name=centos1vm:RAID5a UUID=57784eb2:7d894249:e368bbd6:4fef82a1
INACTIVE-ARRAY /dev/md127 metadata=1.2 name=centos1vm:RAID50 UUID=0673461d:7c532d23:00f07114:99cdf7f3
ARRAY /dev/md2 metadata=1.2 name=centos1vm:RAID5b UUID=cb67bc06:d32061ea:eda8873f:c15b9fbf
[root@centos1vm ~]#

 

[root@centos1vm ~]# mdadm --detail --scan
mdadm: Unknown keyword INACTIVE-ARRAY
ARRAY /dev/md0 metadata=1.2 spares=2 name=centos1vm:RAID10 UUID=3dd59b4a:6f3cdf67:f89659d6:5e0f1c0d
ARRAY /dev/md1 metadata=1.2 name=centos1vm:RAID5a UUID=57784eb2:7d894249:e368bbd6:4fef82a1
ARRAY /dev/md2 metadata=1.2 name=centos1vm:RAID5b UUID=cb67bc06:d32061ea:eda8873f:c15b9fbf
ARRAY /dev/md3 metadata=1.2 name=centos1vm:RAID50 UUID=66470135:4f0bd96a:357fe54e:2baa3a7b
[root@centos1vm ~]#

 

 

This is how the RAID50 array now looks (note that it appears here as /dev/md127):

 

 

[root@centos1vm ~]# mdadm -D /dev/md127
/dev/md127:
Version : 1.2
Creation Time : Sat Apr 9 00:48:11 2022
Raid Level : raid1
Array Size : 1527808 (1492.00 MiB 1564.48 MB)
Used Dev Size : 1527808 (1492.00 MiB 1564.48 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

 

Update Time : Sat Apr 9 11:53:15 2022
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

 

Consistency Policy : resync

 

Name : centos1vm:RAID50 (local to host centos1vm)
UUID : 66470135:4f0bd96a:357fe54e:2baa3a7b
Events : 17

 

Number Major Minor RaidDevice State
0 9 1 0 active sync /dev/md1
1 9 2 1 active sync /dev/md2
[root@centos1vm ~]#

 

 

the above is our RAID50: a RAID1 mirror consisting of two RAID5s (md1 and md2 respectively)
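
one way to visualise the nesting (the md device numbers may differ on your system) is lsblk, which shows the sdb partitions, the two RAID5 arrays built on them, and the mirror stacked on top:

lsblk /dev/sdb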

 

see below:

 

 

[root@centos1vm ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Sat Apr 9 00:41:10 2022
Raid Level : raid5
Array Size : 1529856 (1494.00 MiB 1566.57 MB)
Used Dev Size : 509952 (498.00 MiB 522.19 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent

 

 

Update Time : Sat Apr 9 11:53:15 2022
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

 

Layout : left-symmetric
Chunk Size : 512K

 

 

Consistency Policy : resync

 

Name : centos1vm:RAID5a (local to host centos1vm)
UUID : 57784eb2:7d894249:e368bbd6:4fef82a1
Events : 20

 

Number Major Minor RaidDevice State
0 8 25 0 active sync /dev/sdb9
1 8 26 1 active sync /dev/sdb10
2 8 27 2 active sync /dev/sdb11
4 8 28 3 active sync /dev/sdb12

 

[root@centos1vm ~]# mdadm -D /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Sat Apr 9 00:41:25 2022
Raid Level : raid5
Array Size : 1529856 (1494.00 MiB 1566.57 MB)
Used Dev Size : 509952 (498.00 MiB 522.19 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent

 

 

Update Time : Sat Apr 9 11:53:15 2022
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

 

 

Layout : left-symmetric
Chunk Size : 512K

 

Consistency Policy : resync

 

 

Name : centos1vm:RAID5b (local to host centos1vm)
UUID : cb67bc06:d32061ea:eda8873f:c15b9fbf

Events : 20

 

 

Number Major Minor RaidDevice State
0 8 29 0 active sync /dev/sdb13
1 8 30 1 active sync /dev/sdb14
2 8 31 2 active sync /dev/sdb15
4 259 0 3 active sync /dev/sdb16
[root@centos1vm ~]#

 

 

Our /etc/fstab:

 

 

 

# /etc/fstab
# Created by anaconda on Sun Apr 18 15:57:25 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cs_centos--base-root / xfs defaults 0 0
UUID=8a1e22d6-888d-416f-b186-e24993c450ae /boot xfs defaults 0 0
/dev/mapper/cs_centos--base-swap none swap defaults 0 0

#/dev/disk/by-id/md-name-centos1vm:RAID10 /RAID10 ext4 defaults 1 1

 

#/dev/disk/by-id/md-name-centos1vm:RAID5a /RAID5a ext4 defaults 1 1
#/dev/disk/by-id/md-name-centos1vm:RAID5b /RAID5b ext4 defaults 1 1
#/dev/disk/by-id/md-name-centos1vm:RAID50 /RAID50 ext4 defaults 1 1

UUID=e9bfd7c1-8788-4050-a7ba-5aa8674815ac /RAID50 ext4 defaults 0 0
[root@centos1vm ~]#
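
the filesystem UUID used in the /RAID50 line above is the one reported by blkid against the md device, for example:

blkid /dev/md3     (or whatever device number the RAID50 is currently assembled as)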

 

 

 

 

and our /etc/mdadm.conf: 

 

 

 

[root@centos1vm ~]# cat /etc/mdadm.conf
MAILADDR root@localhost
ARRAY /dev/md0 metadata=1.2 name=centos1vm:RAID10 UUID=3dd59b4a:6f3cdf67:f89659d6:5e0f1c0d
ARRAY /dev/md0 metadata=1.2 spares=2 name=centos1vm:RAID10 UUID=3dd59b4a:6f3cdf67:f89659d6:5e0f1c0d
ARRAY /dev/md1 metadata=1.2 name=centos1vm:RAID5a UUID=57784eb2:7d894249:e368bbd6:4fef82a1
#ARRAY /dev/md127 metadata=1.2 name=centos1vm:RAID50 UUID=0673461d:7c532d23:00f07114:99cdf7f3
ARRAY /dev/md2 metadata=1.2 name=centos1vm:RAID5b UUID=cb67bc06:d32061ea:eda8873f:c15b9fbf

[root@centos1vm ~]#

 

 

 

 

We now have the following config after adding the filesystem UUIDs for the RAID arrays to /etc/fstab:

 

note that mdadm may assemble the arrays under different md device numbers each time the system boots (here /dev/md125 and /dev/md127), so the device names shown by df do not necessarily match the md numbers we used at creation time.
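
because of this it is safer to refer to the arrays via their stable symlinks, which always point at whichever md device the array was assembled as, for example:

ls -l /dev/md/
ls -l /dev/disk/by-id/md-name-*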

 

 

UUID=8a1e22d6-888d-416f-b186-e24993c450ae /boot xfs defaults 0 0
/dev/mapper/cs_centos--base-swap none swap defaults 0 0

 

 

#/dev/disk/by-id/md-name-centos1vm:RAID10 /RAID10 ext4 defaults 1 1

#/dev/disk/by-id/md-name-centos1vm:RAID5a /RAID5a ext4 defaults 1 1
#/dev/disk/by-id/md-name-centos1vm:RAID5b /RAID5b ext4 defaults 1 1
#/dev/disk/by-id/md-name-centos1vm:RAID50 /RAID50 ext4 defaults 1 1

 

UUID=7e92383a-e2fb-48a1-8602-722d5c394158 /RAID10 ext4 defaults 0 0
UUID=e9bfd7c1-8788-4050-a7ba-5aa8674815ac /RAID50 ext4 defaults 0 0

 

[root@centos1vm ~]# cat /etc/mdadm.conf
ARRAY /dev/md/RAID10 metadata=1.2 name=centos1vm:RAID10 UUID=60ae2e29:c1621782:65c1b59c:67c9d0b4
ARRAY /dev/md1 metadata=1.2 name=centos1vm:RAID5a UUID=57784eb2:7d894249:e368bbd6:4fef82a1
INACTIVE-ARRAY /dev/md126 metadata=1.2 name=centos1vm:RAID10 UUID=3dd59b4a:6f3cdf67:f89659d6:5e0f1c0d
ARRAY /dev/md2 metadata=1.2 name=centos1vm:RAID5b UUID=afc89f2b:03016129:218a6a68:2dd2e6e5
ARRAY /dev/md/RAID50 metadata=1.2 name=centos1vm:RAID50 UUID=66470135:4f0bd96a:357fe54e:2baa3a7b

 

[root@centos1vm ~]# mdadm --detail --scan
mdadm: Unknown keyword INACTIVE-ARRAY
ARRAY /dev/md1 metadata=1.2 name=centos1vm:RAID5a UUID=57784eb2:7d894249:e368bbd6:4fef82a1
ARRAY /dev/md2 metadata=1.2 name=centos1vm:RAID5b UUID=afc89f2b:03016129:218a6a68:2dd2e6e5
ARRAY /dev/md/RAID10 metadata=1.2 name=centos1vm:RAID10 UUID=60ae2e29:c1621782:65c1b59c:67c9d0b4
INACTIVE-ARRAY /dev/md126 metadata=1.2 name=centos1vm:RAID10 UUID=3dd59b4a:6f3cdf67:f89659d6:5e0f1c0d
ARRAY /dev/md/RAID50 metadata=1.2 name=centos1vm:RAID50 UUID=66470135:4f0bd96a:357fe54e:2baa3a7b

 

 

[root@centos1vm ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
devtmpfs 653280 0 653280 0% /dev
tmpfs 672308 0 672308 0% /dev/shm
tmpfs 672308 8892 663416 2% /run
tmpfs 672308 0 672308 0% /sys/fs/cgroup
/dev/mapper/cs_centos--base-root 8374272 3196760 5177512 39% /
/dev/sda1 1038336 356816 681520 35% /boot
/dev/md127 987480 440788 479316 48% /RAID10
/dev/md125 1470992 881312 496908 64% /RAID50
tmpfs 134460 0 134460 0% /run/user/0
[root@centos1vm ~]#

 

 

 

 

 

NOTE how we use different UUIDs for fstab and for mdadm.conf: fstab uses the filesystem UUID (as reported by blkid), while mdadm.conf uses the md array UUID.

 

This is important in order for the system to boot up without hanging
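
to see the md array UUID that belongs in mdadm.conf (as opposed to the filesystem UUID reported by blkid further below), query the array itself, for example using the RAID50's current device number on this system:

mdadm --detail /dev/md125 | grep UUID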

 

 

/etc/fstab:

 

UUID=7e92383a-e2fb-48a1-8602-722d5c394158 /RAID10 ext4 defaults 0 0
UUID=e9bfd7c1-8788-4050-a7ba-5aa8674815ac /RAID50 ext4 defaults 0 0

 

[root@centos1vm by-id]# ll md*
lrwxrwxrwx. 1 root root 11 Apr 9 13:18 md-name-centos1vm:RAID10 -> ../../md127
lrwxrwxrwx. 1 root root 11 Apr 9 13:18 md-name-centos1vm:RAID50 -> ../../md125
lrwxrwxrwx. 1 root root 9 Apr 9 13:18 md-name-centos1vm:RAID5a -> ../../md1
lrwxrwxrwx. 1 root root 9 Apr 9 13:18 md-name-centos1vm:RAID5b -> ../../md2
lrwxrwxrwx. 1 root root 9 Apr 9 13:18 md-uuid-57784eb2:7d894249:e368bbd6:4fef82a1 -> ../../md1
lrwxrwxrwx. 1 root root 11 Apr 9 13:18 md-uuid-60ae2e29:c1621782:65c1b59c:67c9d0b4 -> ../../md127
lrwxrwxrwx. 1 root root 11 Apr 9 13:18 md-uuid-66470135:4f0bd96a:357fe54e:2baa3a7b -> ../../md125
lrwxrwxrwx. 1 root root 9 Apr 9 13:18 md-uuid-afc89f2b:03016129:218a6a68:2dd2e6e5 -> ../../md2
[root@centos1vm by-id]# blkid /dev/md127
/dev/md127: UUID="7e92383a-e2fb-48a1-8602-722d5c394158" BLOCK_SIZE="4096" TYPE="ext4"
[root@centos1vm by-id]# blkid /dev/md125
/dev/md125: UUID="e9bfd7c1-8788-4050-a7ba-5aa8674815ac" BLOCK_SIZE="4096" TYPE="ext4"

 

 

[root@centos1vm by-id]# cat /etc/mdadm.conf
ARRAY /dev/md/RAID10 metadata=1.2 name=centos1vm:RAID10 UUID=60ae2e29:c1621782:65c1b59c:67c9d0b4
ARRAY /dev/md1 metadata=1.2 name=centos1vm:RAID5a UUID=57784eb2:7d894249:e368bbd6:4fef82a1
INACTIVE-ARRAY /dev/md126 metadata=1.2 name=centos1vm:RAID10 UUID=3dd59b4a:6f3cdf67:f89659d6:5e0f1c0d
ARRAY /dev/md2 metadata=1.2 name=centos1vm:RAID5b UUID=afc89f2b:03016129:218a6a68:2dd2e6e5
ARRAY /dev/md/RAID50 metadata=1.2 name=centos1vm:RAID50 UUID=66470135:4f0bd96a:357fe54e:2baa3a7b

 

 

[root@centos1vm by-id]# ll /dev/md/RAID10
lrwxrwxrwx. 1 root root 8 Apr 9 13:18 /dev/md/RAID10 -> ../md127
[root@centos1vm by-id]#

 

 

 

 

Now we test failure handling on the RAID50: mark one of the RAID5 legs (md1) as failed in the mirror:

[root@centos1vm ~]# mdadm --manage /dev/md125 --fail /dev/md1
mdadm: set /dev/md1 faulty in /dev/md125
[root@centos1vm ~]#
[root@centos1vm ~]#

 

mdadm --manage /dev/md125 --remove /dev/md1

 

[root@centos1vm ~]# mdadm --manage /dev/md125 --remove /dev/md1
mdadm: hot removed /dev/md1 from /dev/md125
[root@centos1vm ~]#

 

mdadm --detail /dev/disk/by-id/md-name-centos1vm:RAID50

 

 

[root@centos1vm etc]# mdadm --detail /dev/disk/by-id/md-name-centos1vm:RAID50
/dev/disk/by-id/md-name-centos1vm:RAID50:
Version : 1.2
Creation Time : Sat Apr 9 00:48:11 2022
Raid Level : raid1
Array Size : 1527808 (1492.00 MiB 1564.48 MB)
Used Dev Size : 1527808 (1492.00 MiB 1564.48 MB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Update Time : Sat Apr 9 13:38:41 2022
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

 

Consistency Policy : resync

 

Name : centos1vm:RAID50 (local to host centos1vm)
UUID : 66470135:4f0bd96a:357fe54e:2baa3a7b
Events : 22

 

Number Major Minor RaidDevice State
- 0 0 0 removed
1 9 2 1 active sync /dev/md2
[root@centos1vm etc]#

 

 

mdadm --manage /dev/md125 --add /dev/md1

 

[root@centos1vm etc]# mdadm --manage /dev/md125 --add /dev/md1
mdadm: added /dev/md1
[root@centos1vm etc]#

 

then when we run cat /proc/mdstat we can see that the (F) fail marker is gone and md1 is back in the mirror, so we are back to normal:

 

 

[root@centos1vm etc]# cat /proc/mdstat
Personalities : [raid10] [raid6] [raid5] [raid4] [raid1]
md125 : active raid1 md1[2] md2[1]
1527808 blocks super 1.2 [2/2] [UU]

md126 : inactive sdb8[9](S) sdb6[4](S) sdb7[8](S)
1529856 blocks super 1.2

md2 : active raid5 sdb16[4] sdb15[2] sdb13[0] sdb14[1]
1529856 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

md127 : active raid10 sdb5[3] sdb3[2] sdb2[1] sdb1[0]
1019904 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

md1 : active raid5 sdb9[0] sdb12[4] sdb11[2] sdb10[1]
1529856 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>
[root@centos1vm etc]#
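
if the re-added leg still needs to resynchronise, the rebuild progress can be followed with, for example:

watch cat /proc/mdstat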

 

 

check the detail once more, and we see all is once again ok:

 

[root@centos1vm etc]# mdadm --detail /dev/disk/by-id/md-name-centos1vm:RAID50
/dev/disk/by-id/md-name-centos1vm:RAID50:
Version : 1.2
Creation Time : Sat Apr 9 00:48:11 2022
Raid Level : raid1
Array Size : 1527808 (1492.00 MiB 1564.48 MB)
Used Dev Size : 1527808 (1492.00 MiB 1564.48 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

 

Update Time : Sat Apr 9 13:41:36 2022
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

 

Consistency Policy : resync

 

Name : centos1vm:RAID50 (local to host centos1vm)
UUID : 66470135:4f0bd96a:357fe54e:2baa3a7b
Events : 41

 

Number Major Minor RaidDevice State
2 9 1 0 active sync /dev/md1
1 9 2 1 active sync /dev/md2
[root@centos1vm etc]#

 

 
