Creating Software RAID on Linux with mdadm
Environment
Item | Value |
---|---|
Operating system | Rocky Linux 9.4 |
System disk | /dev/nvme0n1 |
RAID disks | /dev/nvme0n2, /dev/nvme0n3 |
Building the RAID
Check the disks
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sr0 11:0 1 2.1G 0 rom
nvme0n1 259:0 0 20G 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /boot
└─nvme0n1p2 259:2 0 19G 0 part
├─rl-root 253:0 0 17G 0 lvm /
└─rl-swap 253:1 0 2G 0 lvm [SWAP]
nvme0n2 259:3 0 20G 0 disk
nvme0n3 259:4 0 20G 0 disk
Install mdadm
[root@localhost ~]# dnf -y install mdadm
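If the two member disks were used before, it can help to clear any leftover filesystem or RAID signatures before building the array. This is an optional precaution that is not part of the original steps; wipefs comes from util-linux and the device names are the ones from the environment table above.
# Optional: clear old signatures from the member disks (this destroys any data on them)
wipefs -a /dev/nvme0n2 /dev/nvme0n3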
Create the RAID
Command syntax
mdadm --create <RAID device> --level=1 --raid-devices=2 <disk1> <disk2> ...
# Create a RAID 1 array
[root@localhost ~]# mdadm -C /dev/md0 -l raid1 -n 2 /dev/nvme0n2 /dev/nvme0n3
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array [y/N]? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Parameters
Parameter | Description |
---|---|
-C, --create | Create a new array |
/dev/md0 | Name of the RAID device |
-l raid1, --level=raid1 | RAID level 1 (mirroring) |
-n 2, --raid-devices=2 | Use 2 disks |
/dev/nvme0n2 /dev/nvme0n3 | Disks used to build the array |
Check the RAID status
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 nvme0n3[1] nvme0n2[0]
20954112 blocks super 1.2 [2/2] [UU]
[=================>...] resync = 89.7% (18803712/20954112) finish=0.1min speed=206103K/sec
unused devices: <none>
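The initial resync runs in the background, so /proc/mdstat can be re-read to follow its progress. A convenient way to watch it until it reaches 100% (a small optional addition using the standard watch utility):
# Refresh the sync progress every 2 seconds; press Ctrl+C to stop
watch -n 2 cat /proc/mdstat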
Inspect the RAID array
[root@localhost ~]# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Oct 1 20:29:27 2025
Raid Level : raid1
Array Size : 20954112 (19.98 GiB 21.46 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Oct 1 20:31:11 2025
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : 869f9638:b936d498:23a8454a:9c04927c
Events : 17
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/nvme0n2
1 259 4 1 active sync /dev/nvme0n3
Check the current disk layout
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sr0 11:0 1 2.1G 0 rom
nvme0n1 259:0 0 20G 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /boot
└─nvme0n1p2 259:2 0 19G 0 part
├─rl-root 253:0 0 17G 0 lvm /
└─rl-swap 253:1 0 2G 0 lvm [SWAP]
nvme0n2 259:3 0 20G 0 disk
└─md0 9:0 0 20G 0 raid1
nvme0n3 259:4 0 20G 0 disk
└─md0 9:0 0 20G 0 raid1
Mounting the array
Create an xfs filesystem on the RAID device /dev/md0 and mount it at /data.
Format the array
[root@localhost ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0 isize=512 agcount=4, agsize=1309632 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=1, sparse=1, rmapbt=0
= reflink=1 bigtime=1 inobtcount=1 nrext64=0
data = bsize=4096 blocks=5238528, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0, ftype=1
log =internal log bsize=4096 blocks=16384, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Mount
[root@localhost ~]# mkdir /data
[root@localhost ~]# mount /dev/md0 /data/
[root@localhost ~]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 725M 9.1M 716M 2% /run
/dev/mapper/rl-root 17G 1.6G 16G 9% /
/dev/nvme0n1p1 960M 302M 659M 32% /boot
tmpfs 363M 0 363M 0% /run/user/0
/dev/md0 20G 175M 20G 1% /data
Mount automatically at boot
[root@localhost ~]# blkid /dev/md0
/dev/md0: UUID="4f3e275e-783b-4c25-86e8-ff6c447d6acf" TYPE="xfs"
[root@localhost ~]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Jun 13 14:35:32 2025
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rl-root / xfs defaults 0 0
UUID=7b63505f-d653-4399-b1ea-647207286dcd /boot xfs defaults 0 0
/dev/mapper/rl-swap none swap defaults 0 0
UUID="4f3e275e-783b-4c25-86e8-ff6c447d6acf" /data xfs defaults 0 0
Save the RAID configuration
# Equivalent to mdadm --examine --scan --verbose >> /etc/mdadm.conf
[root@localhost ~]# mdadm -E -s -v >> /etc/mdadm.conf
[root@localhost ~]# cat /etc/mdadm.conf
ARRAY /dev/md/0 level=raid1 metadata=1.2 num-devices=2 UUID=869f9638:b936d498:23a8454a:9c04927c
devices=/dev/nvme0n3,/dev/nvme0n2
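On Rocky Linux the initramfs keeps its own copy of mdadm.conf, so after updating the file it is common to regenerate the initramfs so the array is assembled consistently during early boot. This extra step is optional and not shown in the original output:
# Rebuild the initramfs for the running kernel so it includes the updated /etc/mdadm.conf
dracut -f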
Failure recovery
Check the array state
[root@localhost ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Oct 1 20:29:27 2025
Raid Level : raid1
Array Size : 20954112 (19.98 GiB 21.46 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Oct 1 20:34:39 2025
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : 869f9638:b936d498:23a8454a:9c04927c
Events : 17
Number Major Minor RaidDevice State
0 259 3 0 active sync /dev/nvme0n2
1 259 4 1 active sync /dev/nvme0n3
Remove the failed disk
When a member disk fails, md normally flags it as faulty on its own; to replace it, you still remove it from the array manually with the commands below.
# Simulate a failure of nvme0n2
[root@localhost data]# mdadm /dev/md0 -f /dev/nvme0n2
[root@localhost data]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Oct 1 20:29:27 2025
Raid Level : raid1
Array Size : 20954112 (19.98 GiB 21.46 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Oct 1 20:48:18 2025
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1
Spare Devices : 0
Consistency Policy : resync
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : 869f9638:b936d498:23a8454a:9c04927c
Events : 21
Number Major Minor RaidDevice State
- 0 0 0 removed
1 259 4 1 active sync /dev/nvme0n3
0 259 3 - faulty /dev/nvme0n2
# Manually remove the failed disk
[root@localhost data]# mdadm --manage /dev/md0 --remove /dev/nvme0n2
mdadm: hot removed /dev/nvme0n2 from /dev/md0
[root@localhost data]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Oct 1 20:29:27 2025
Raid Level : raid1
Array Size : 20954112 (19.98 GiB 21.46 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent
Update Time : Wed Oct 1 20:48:50 2025
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : 869f9638:b936d498:23a8454a:9c04927c
Events : 22
Number Major Minor RaidDevice State
- 0 0 0 removed
1 259 4 1 active sync /dev/nvme0n3
Add a new disk
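If the replacement disk was previously part of another array, clearing its old md superblock first avoids stale metadata being picked up when it is added. This is optional here, since the walkthrough re-adds the very disk that was just removed; --zero-superblock is a standard mdadm option.
# Optional: wipe stale RAID metadata from the replacement disk before adding it
mdadm --zero-superblock /dev/nvme0n2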
[root@localhost data]# mdadm --manage /dev/md0 -a /dev/nvme0n2
mdadm: added /dev/nvme0n2
[root@localhost data]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Wed Oct 1 20:29:27 2025
Raid Level : raid1
Array Size : 20954112 (19.98 GiB 21.46 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Wed Oct 1 20:50:37 2025
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Consistency Policy : resync
Rebuild Status : 3% complete
Name : localhost.localdomain:0 (local to host localhost.localdomain)
UUID : 869f9638:b936d498:23a8454a:9c04927c
Events : 24
Number Major Minor RaidDevice State
2 259 3 0 spare rebuilding /dev/nvme0n2
1 259 4 1 active sync /dev/nvme0n3
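The rebuild continues in the background and the array returns to a clean state once it finishes. To block until recovery is complete (for example in a script) and then confirm the result, mdadm's wait mode can be used; this is a small optional addition:
# Wait for the resync/recovery of /dev/md0 to finish, then re-check the array
mdadm --wait /dev/md0
mdadm -D /dev/md0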